IT and HR: Not such an odd couple

How businesses perform has always depended on how well their employees perform. Yet never before has the relationship between how well employees work and the digital technology they use been so complex.

At the same time, companies are grappling with the transition to increasingly data-driven and automated processes. What’s more, the top skills needed to support strategic business agility, at all levels, are increasingly hard to find and hold onto.

As a result, business leaders must enhance and optimize today’s employee experience so that they in turn can optimize the customer experience and — by extension — better support the success of the overall business.

Stay with us as BriefingsDirect explores how those writing the next chapters of human resources (HR) and information technology (IT) interactions are finding common ground to significantly improve the modern employee experience.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

We’re now joined by two leaders in this area who will share their thoughts on how intelligent workspace solutions are transforming work — and heightening worker satisfaction. Please welcome Art Mazor, Principal and Global Human Resources Transformation Practice Leader at Deloitte, and Tim Minahan, Executive Vice President of Strategy and Chief Marketing Officer at Citrix. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Art, is there more of a direct connection now between employee experience and overall business success?

Mazor: Leaders have long had an intuitive sense that there must be a link. For a long time people have said, “Happy employees equal happy customers.” It’s been understood.

But now, what’s really powerful is we have true evidence that demonstrates the linkage. For example, our Deloitte Global Human Capital Trends Report 2019, now in its ninth year, surfaced a very important finding in this regard: Purpose-focused companies outperformed their S&P 500 peers by a factor of eight. And when you ask, “Well, how do you get to purpose for people working in an organization?” it comes down to creating that strong experience.

What’s more, I was really intrigued when MIT recently published a study that demonstrated the direct linkage between positive employee experience and business performance. They showed that those with really strong employee experiences have twice the innovation, double the satisfaction of customers, and 25 percent greater profitability.

So those kinds of statistics tell me pretty clearly that it matters — and it’s driving business results.

Gardner: It seems like common sense, and an inevitable outcome, that employees and their positive experiences impact the business. But reflecting on my own experiences, some companies will nonetheless talk the talk, but not walk the walk, on building better employee experiences unless they are forced to.

Do you sense, Art, that there are some pressures on companies now that hadn’t been there before?

Purposeful employees produce profits 

Mazor: Yes, I think there are. Some of those pressures, appropriately, are coming from the market. Customers set a very high bar when they measure their experience with an organization. We know that if the employee or workforce experience is not up to par, the customers feel it.

That demand, that pressure, is coming from customers who have louder voices now than ever before. They have the power of social media, the ability to make their voices known, and their perspectives heard.

There is also a tremendous amount of competition for customers. As a result, leaders recognize that they have to get this right. They have to get their workers to a place where those workers feel they can be highly productive in the service of customer outcomes.

Minahan: Yes, I totally agree with Art. In addition, there is an added pressure in the market today, and that is the fact that there is a huge talent crunch. Globally, McKinsey estimates a shortage of 95 million medium- to high-skilled workers.

We are beginning to see even forward-thinking digital companies like Amazon saying, “Hey, look, we can’t go out and hire everyone we need; certainly not in one location.” So that’s why you have the HQ2 competition, and the like.

Just in July, Amazon committed to investing more than $700 million to retrain a third of their workforce with the skills that they need to continue to advance. This is part of that pressure companies are feeling: “Hey, we need to drive growth. We need to digitize our businesses. We need to provide a greater customer experience. But we need these new skills to do it, and there just is not enough talent in the market.”

So companies are rethinking that whole employee engagement model to advance.

Gardner: Tim, the concept of employee experience was largely in the domain of people like Art and those that he supports in the marketplace — the human resources and human capital management (HCM) people.

How does IT now have more of a role? Why do IT and HR leaders need to be more attached at the hip?

Minahan: Much of what chief human resources officers (CHROs) and chief people officers (CPOs) have done to advance the culture and physical environment needed to attract and retain the right talent has gone extremely far. That includes improving benefits, ensuring there is a purpose, and ensuring that the work environment is very pleasurable.

However, we just conducted a global study together with The Economist into employee experience and how companies are prioritizing it. And one of the things that we found is that organizations have neglected to take a look at the tools and the access to information that they give their employees to get their jobs done. That seems to be a big gap.

This gap was reaffirmed by a recent global Gallup study, in which, right behind the manager, the top indicator of employee engagement was whether employees feel they have the right access to the information and tools they need to do their best work.

So technology — the digital workspace, if you will — plays an increasingly important role, particularly in how we work today. We don’t always work at a desk or in a physical environment. In fact, most of us work in multiple locations throughout the day. And so our digital workspace needs to travel with us, and it needs to simplify our day — not make it more complex.

Gardner: Art, as part of The Economist study that Tim cited, “ease of access to information required to get work done” was one of the top things those surveyed identified as being part of a world-class employee experience.

That doesn’t surprise me because we are asking people to be more data-driven. But to do so we have to give them that data in a way they can use it.

Are you seeing people thinking more about the technology and the experience of using and accessing technology when it comes to HR challenges and improvement?

HR plus IT gets the job done 

Mazor: Yes, for sure. And in the HR function, technology has been front and center for many years. In fact, HR executives, their teams, and the workers they serve have been at an advantage in that technology investments have been quite rich. The HR space was one of the first to move to the cloud. That’s created lots of opportunities beyond those that may have been available even just a few short years ago.

To your point, though, and building on Tim’s comments, [employee experience requirements] go well beyond the traditional HR technologies. They focus on areas like collaboration, knowledge sharing, and interaction, and they extend into the toolsets that foster those kinds of necessities. They are at the heart of being able to drive work in the way that work needs to get done today.

The days of traditional hierarchies — where your manager tells you what to do and you go do it — are quickly dwindling. Now, we still have leaders, and they tell us to do things, and that’s important; I don’t mean to take away from that. Yet we are moving to a world where, in order to act with speed, teams are forming in a more agile way. Networked groups are operating together cross-functionally, across businesses and geographies — and it’s all demanding, to your point, new toolsets.

Fortunately, there are a lot of tools that are out there for that. Like with any new area of innovation, though, it can be overwhelming because there are just so many technologies coming into the marketplace to take advantage of.

The trick we are finding is for organizations to be able to separate the noise from the impactful technologies and create a suite of tools that are easy to navigate and remove that kind of friction from the workplace.

Gardner: Tim, a fire hose of technology is certainly not the way to go. From The Economist survey we heard that making applications simple — with a consumer-like user experience — and with the ability to work from anywhere are all important. How do you get the right balance between the use of technology, but in a simplified and increasingly automated way?

A workspace to unify work

Minahan: Art hit on exactly the right word. All this choice of and access to technology that we use to get our jobs done has actually created a lot more complexity. The typical employee now uses a dozen or more apps throughout the day, and oftentimes needs to navigate four or more applications just to complete a single task or find a bit of information. As a result, they need to navigate a whole bunch of different environments and remember a whole bunch of different usernames and passwords, and it’s creating a lot of noise in their day.

To Art’s point, there is an emergence of a new category of technology, a digital workspace that unifies everything for an employee, gives them single sign-on access to everything they need to be productive, and one unified experience, so they don’t need to have as much noise in their day.

Certainly, it also provides an added layer of security around things. And then the third component that gets very, very exciting is that forward-thinking companies are beginning to infuse things like machine learning (ML) and simplified workflows or micro apps that connect some of these technologies together so that the employee can be guided through their day — very much like they are in their personal lives, where Facebook might guide you and curate your day for the news and social interactions you want.

Netflix, for example, makes recommendations based on your historical behaviors and preferences. And that approach is beginning to work its way into the workplace. The study we just did with The Economist clearly points to bringing that consumer-like experience into the workplace as a priority among IT and HR leaders.
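
[To make that analogy concrete, here is a minimal Python sketch of the kind of relevance scoring a workspace feed might apply. Everything here, from the field names to the frequency-and-recency weighting, is a hypothetical illustration rather than any particular product’s behavior; real systems would blend far richer signals such as role, calendar, and app telemetry.]

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class FeedItem:
    title: str           # e.g., a pending task or a change notification
    app: str             # the source application
    last_used: datetime  # when the user last touched that app
    uses_last_30d: int   # how often the user touched it this month

def relevance(item: FeedItem, now: datetime) -> float:
    """Rank items by how often and how recently their app is used."""
    days_since = max((now - item.last_used).days, 1)
    return item.uses_last_30d / days_since

now = datetime.now()
feed = [
    FeedItem("Approve expense report", "expenses", now - timedelta(days=10), 3),
    FeedItem("Pipeline status changed", "crm", now - timedelta(days=1), 40),
]
# Frequently and recently used apps float to the top of the feed.
for item in sorted(feed, key=lambda i: relevance(i, now), reverse=True):
    print(f"{item.title} ({item.app})")
```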

Gardner: Art, you have said that a positive employee experience requires removing friction from work. What do you mean by friction and is that related to this technology issue, or is it something even bigger?

Remove friction, maximize productivity 

Mazor: I love that you are asking that, Dana. I think it is something bigger than technology — yet technology plays a massively important role.

When we think about friction, what I love about that word in this context is that it’s a plain English word. We know what friction means. It’s what causes something to slow down.

And so it’s bigger than just technology in the sense that to create that positive worker experience we need to think about a broader construct, which is the human experience overall. And elevating that human experience is about, first and foremost, recognizing that everyone wakes up every morning as a human. We might play the role of a worker, we might play the role of a customer, or some other role. But in our day-to-day lives, anything that slows us down from being as productive as possible is, in my view, that friction.

So that could be process-oriented; it could be policy and bureaucracy that gets in the way. It could be managers who may be struggling with empowering their teams. It might even be technology, to your point, that makes it more difficult to navigate, as Tim was rightly saying, through all the different apps or tools.

And so this idea of friction, and removing it, is really about enabling that workforce to focus single-mindedly on delivering results for customers, the business, and the other workers in the enterprise. Whatever it may be, anything that stands in the way should be evaluated as a potential cause of friction.

Sometimes that friction is good in the sense of slowing things down for purposes like compliance or risk management. In other cases, it’s bad friction that just gets in the way of good results.

Minahan: I love what Art’s talking about. That is the next wave we will see in technology. When we talk about these digital workspaces, we are moving beyond traditional enterprise applications built around given functions, and beyond modern collaboration tools focused on team-based collaboration. Still, individuals need to navigate all of these environments — and oftentimes work in different ways.

And so this idea of people-centric computing, in which you put the person at the center, makes it easy for them to interact with all of these different channels and removes some of the noise from their day. They can do much more meaningful work — or in some cases, as one person put it to me, “Get the job done that I was hired to do.” I really believe this is where we are now going.

And you have seen it in consumer technologies. The web came about to organize the world’s information, and apps came about to organize the web. Now you have this idea of the workspace coming about to organize all of those apps so that we can finally get all the utility that had been promised.

Gardner: If we return to our North Star concept, the guiding principle, that this is all about the customer experience, how do we make a connection between solidifying that employee experience as Tim just described but to the benefit of the customer experience?

Art, who in the organization needs to make sure that there isn’t a disconnect or dissonance between that guiding principle of the customer experience and buttressing it through the employee experience?

Leaders emphasize end-customer experience 

Mazor: We are finding this is one of the biggest challenges, because there isn’t a clear-cut owner for the workforce experience. That’s probably a good thing in the long run, because there are way too many individual groups, teams, and leaders who must be involved to have only one accountable leader.

That said, we are finding a number of organizations achieving great success by at least appointing an existing function — and in many cases we find that happens to be HR — or, in some organizations, finding a different way of assigning accountability for orchestrating the experience. At its best, that means bringing together a variety of groups — HR, IT, real estate, marketing, finance, and certainly the business leaders — to all play their roles inside of that experience.

Delivering on that end-customer experience as the brass ring, or the North Star to mix metaphors, becomes a way of thinking. It requires a different mindset that enterprises are shaping for themselves — and their leaders can model that behavior.

I will share with you one great example of this. In the typical world of an airline, you would expect that flight attendants are there — as you hear on the announcements — for your safety first, and then to provide services. But one particular major airline recognized that those flight attendants are also the ones who can create the greatest stickiness to customer relationships because they see their top customers in flight, where it matters the most.

And they have equipped that group of flight attendants with data in the form of a mobile app that they use to see who is on board and how important those passengers are as customers, in terms of revenue and other key factors. That provides triggers to those flight attendants, and others on the flight staff, to help recognize those customers and ensure that they are having a great experience. And when things don’t go as well as possible, perhaps due to Mother Nature, those flight attendants are there to keep watch over their most important customers.

That’s a very new kind of construct in a world where the typical job was not focused on customers. Now, in an unwitting way, those flight attendants are playing a critical role in fostering and advancing those relationships with key customers.

There are many, many examples like that that are the outcome of leaders across functions coming together to orchestrate an experience that ultimately is centered around creating a rich customer experience where it matters the most.

Minahan: Two points. One, what Art said is absolutely consistent with the findings of the study we conducted jointly with The Economist. There is no clear-cut leader on employee experience today. In fact, both CHROs and CIOs equally indicated that they were on-point as the lead for driving that experience.

We are beginning to see a digital employee experience officer emerge at some organizations to help drive the coordination that Art is talking about.

But the second point to your question, Dana, around how we keep employees focused on the customer experience: it goes back to your opening question around purpose. Increasingly, as Art indicated, there is clear evidence that companies with a clear purpose perform better — and that’s because that purpose tends to center on some business outcome. It drives a greater experience, innovation, or result for their customers.

If we can ensure that employees have the right tools, information, skills, and training to deliver that customer experience, then they are clearly aligned. I think it all ties very well together.

Gardner: Tim, when I heard Art talking about the flight attendants, it occurred to me that there is a whole class of such employees that are in that direct-interaction-with-the-customer role. It could be retail, the person on the floor of a clothing seller; or it could be a help desk operator. These are the power users that need to get more data, help, and inference knowledge delivered to them. They might be the perfect early types of users that you provide a digital workspace to.

Let’s focus on that workspace. What sort of qualities does that workspace need to have? Why are we in a better position, when it comes to automation and intelligence, than ever before to empower those employees, the ones on the front lines interacting with the customers?

Effective digital workspace requirements

Minahan: Excellent question. There are three capabilities required for an effective digital workspace, plus an emerging fourth. The first is that it needs to be unified. We talked about all of the complexity and noise that bogs down an employee’s day, and all of the applications they need to navigate. Well, the digital workspace must unify that by giving a single sign-on experience to access all the apps and content an employee needs to be productive and do engaging work, whether they are at the office on the corporate network, on their tablet at home, or on their smartphone on a train or a plane.

The second part is obviously — in this day and age, considering especially those front-line employees that are touching customer information — it all needs to be secure. The apps and content need to be more secure within the workspace than when accessed natively. That means dynamically applying security policies and perhaps asking for a second layer of authentication, based on that employee’s behavior.

The third part is around intelligence: bringing things like machine learning and simplified workflows into the workspace to create a consumer-like experience, where the employee is presented with the right information and the right tasks within the workspace so they can quickly act on them — rather than needing to log in to multiple applications and go four layers deep.

The fourth capability that’s emerging, and that we hear a lot about, is the assurance that those applications — especially for front-line employees who are engaged with customers — are performing at their very best within the workspace. [Such high-level performance needs to be delivered] whether that employee is at a corporate office or, more likely, at a remote retail branch.

Bringing some of the historical infrastructure like networking technology to bear in order to ensure those applications are always on and reliable is the fourth pillar of what’s making new digital workspace strategies emerge in the enterprise.
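
[As a rough illustration of the second pillar, dynamically applying security policies and stepping up authentication based on behavior, here is a hedged Python sketch. The risk signals, weights, and thresholds are invented for illustration; real digital workspaces implement this with their own policy engines.]

```python
from dataclasses import dataclass

@dataclass
class AccessContext:
    on_corporate_network: bool
    known_device: bool
    geo_matches_recent_logins: bool
    handles_customer_data: bool   # front-line apps warrant stricter treatment

def required_auth(ctx: AccessContext) -> str:
    """Pick an authentication level for a workspace session (illustrative only)."""
    risk = 0
    risk += 0 if ctx.on_corporate_network else 1
    risk += 0 if ctx.known_device else 2
    risk += 0 if ctx.geo_matches_recent_logins else 2
    if ctx.handles_customer_data:
        risk += 1                 # customer data raises the bar
    if risk >= 4:
        return "deny"             # too many anomalies: block and alert
    if risk >= 1:
        return "mfa"              # step up to a second factor
    return "sso"                  # single sign-on is enough

# A known device, off the corporate network, touching customer data -> "mfa".
print(required_auth(AccessContext(False, True, True, True)))
```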

Gardner: Art, for folks like Tim and me, we live in this IT world and we sometimes get lost in the weeds and start talking in acronyms and techy-talk. At Deloitte, you are widely regarded as the world’s number-one HR transformation consultancy.

First, tell us about the HR consultancy practice at Deloitte. And then, is explaining what technology does and is capable of a big part of what you do? Are you trying to explain the tech to the HR people, and then perhaps HR to the tech people?

Transforming HR with technology 

Mazor: First, thanks for the recognition. We are truly humbled, and yet proud, to be the world’s leading HR transformation firm. Having the opportunity, as we do, to partner with the world’s leading enterprises to shape and influence the future of HR gives us a really interesting window into exactly what you are describing.

At a lot of the organizations we work with, the HR leaders and their teams are increasingly well-versed in the various technologies out there. The biggest challenge we find is being able to harness the value of those technologies, to find the ones that are going to produce impact at a pace and at a cost and return that really is valued by the enterprise overall.

For sure, the technology elements are critical enablers. We recently published a piece on the future of HR-as-a-function that’s based on a combination of our research and field experience. What we identified is that the future of HR requires a shift in four big areas:

  • The mindset, meaning the culture and the behaviors of the HR function.

  • The focus, meaning focusing in on the customers themselves.

  • The lens through which the HR function operates, meaning the operating model and the shift toward a more agile-network kind of enterprise HR function.

  • The enablers, meaning the wide array of technologies from core HR platform technologies to collaboration tools to automation, ML, artificial intelligence (AI), and so on.

The combination of these four areas enables HR-as-a-function to shift into what we’re referring to as a world that is exponential. I will give you one quick example though where all this comes together.

There is a solution set that we are finding incredibly powerful in driving employee experience, which we refer to as a unified engagement platform — a blend of all these technologies in a simple-to-navigate experience that empowers workers across an enterprise.

We, Deloitte, have actually created one of those platforms in the market that leads the space, called ConnectMe, and there are certainly others. And in that, what we are essentially finding is that HR leaders are looking for that simple-to-navigate, frictionless kind of environment where people can get their jobs done and enjoy doing them at the same time using technology to empower them.

The premise that you described is spot-on. HR leaders are navigating this complex set of technologies out there that are terrific because they’re providing advantages for the business functions. A lot of the technology firms are investing heavily in worker-facing technology platforms, for exactly the reason we have been chatting about here.

Gardner: Tim, when it comes to the skills gap, it is an employee’s market. Unemployment rates are very low, and the types of skills in demand are hard to find. And so the satisfaction of that top-tier worker is essential.

It seems to me that the better tools you can give them, the more they want to work. If I were a top-skilled employee, I would want to go with the place that has the best information that empowers me in the best way and brings contextual information with security to my fingertips.

But that’s really difficult to do. How do businesses then best enhance and entice employees by giving them the best intelligence tools?

Intelligent tools support smart workers 

Minahan: If you think about your top-performing employees, they want to do their most meaningful work and to perform at their best. As a result, they want to eliminate a lot of the noise from their day, and, as Art mentioned before, that friction.

And that friction is not solely technological; it’s often manifested through technology, due to tasks or requirements that may not pertain to our core jobs.

So, last time I checked, I don’t think either Art or I was hired to review and approve expense reports, or to spend a good chunk of our time approving vacations or doing full-scale performance reviews. Yet those types of applications, which may not be pertinent to our core jobs or processes, tend to take up a good part of our time.

What digital workspaces or digital work platforms do in the first phase is remove that noise from your day so that your best-performing employees can do their best work. The second phase uses those same platforms to help employees do better work through making sure that information is pushed to them as they need it.

That’s information that is pertinent to their jobs. In a salesperson’s environment that might be a change in pipeline status, or a change in a prospect or customer activity. Not only do they get information at their fingertips, they can take action.

And what gets very exciting about that is you now have the opportunity to elevate the skills of every employee. We talk about the skills gap, and going out and retraining everybody is but one way to close it.

Another way is to make sure that you’re giving them an unfair advantage within the work platforms you are using to guide them through the right process. So a great example is sales force productivity. A typical company takes 9-12 months to get a salesperson up to full productivity. Average tenure of a salesperson is somewhere around 36 months. So a company is getting a year-and-a-half of productivity out of a salesperson.

What if by eliminating all that noise, and by using this digital work platform to help push the right information, tasks, right actions, and the right customer sales pitches to them at the right time, you can cut that time to full productivity in half?
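
[A rough sketch of that arithmetic, taking the figures above at face value. It treats the ramp period as fully unproductive, which is a simplification, but it shows the shape of the gain.]

```python
tenure_months = 36   # average salesperson tenure cited above
ramp_months = 12     # roughly 9-12 months to reach full productivity

def productive_months(ramp: float) -> float:
    return tenure_months - ramp

before = productive_months(ramp_months)      # 24 months at full productivity
after = productive_months(ramp_months / 2)   # 30 months if the ramp is halved
gain = (after - before) / before
print(f"Halving the ramp adds {after - before:.0f} months (+{gain:.0%})")
# -> Halving the ramp adds 6 months (+25%)
```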

Think about the real business value that comes from using technology to actually elevate the skill set of the entire workforce, rather than bog it down.

Gardner: Tim, do you have any examples that illustrate what you just described? Any named examples or use cases that show how what you’re doing at Citrix has been a big contributor?

Minahan: One example that’s top of mind not only improves the employee experience to elevate the experience for customers, but also allows companies to rethink work models in ways they probably haven’t since the days of Henry Ford. That example is eBay.

We are all familiar with eBay, one of the world’s largest online digital marketplaces. Like many other companies, they have a large customer call center where buyers and sellers ask questions. These call center employees have to have the right information at their fingertips to get things done.

Well, the challenge they faced was the talent gap and labor shortage. Traditionally they would build a big call center, hire a bunch of employees, and train them at the call center. But now it’s harder to do that; they are competing with the likes of Amazon, Google, and others who are all trying to do the same thing.

And so they used technology to break the traditional mold and create a new work model. Now they go to where the talent is — the stay-at-home parent in Montana, the retiree in Florida, the gig worker in Boston or New York. They can arm them with a digital workspace and push the right information and toolsets to them. By doing so, they ensure the job gets done; if you or I call in, we can’t tell that the agent isn’t sitting in a centralized call center.

This is just one example of how, as we begin to harness and unify this technology, we can change work models. We can create not just a better employee experience, but entirely new ways to work.

Gardner: Art, it’s been historically difficult to measure productivity, and especially to find out what contributes to that productivity. The same unfortunately is the case with technology. It’s very difficult to measure quantitatively and qualitatively what technology directly does for both employee productivity and overall organizational productivity.

Are there ways for us to try to measure how new workspaces and good HR contribute to good employee satisfaction — and ultimately customer satisfaction? How do we know when we are doing this all right?

Success, measured 

Mazor: This is the holy grail in many ways, right? You get what you measure, and this whole space of workforce experience is in many ways a newer discipline. Customer experience has been around for a while and gained great traction and measurement. We can measure customer feedback. We can measure net promoter scores and a variety of other indicators, not the least of which may be revenue, for example, or even profitability relative to the customer base. We are now equally starting to see measurements emerge in the workforce experience arena.

And at the top line we can see measurements like workforce engagement. As that rises, there is likely a connection to positive worker experience. We can measure productivity. We can even measure the growth of capabilities within the workforce that are being gained as a result of — as we like to say — learning in the flow of work.

That path is really important to chart out because it has similarities to the tools, methods, and approaches used in the customer space. We think about it in very simple terms: we need to first look, listen, and understand, to sense what’s happening with the workforce.

We need to generate and prioritize different ideas for ways the workforce experience can be improved. Then we need to iterate, test, refine, and plan the kinds of changes you might prototype; that provides the foundation to measure. And in the workforce experience space, we are starting to see a variety of measures at the granular levels below those top-line measures that I mentioned.

What comes to mind for me are things like measuring the user experience for all of the workers. How effective is the product or service they are being asked to use? How quickly can they deliver their work? And what feedback do we get from workers — a worker feedback category, if you will.

And then there is a set of operational measures that can track inputs and outputs from the various processes and portions of the experience. That categorization into three buckets really seems to be working well for many of our clients to measure that notion of workforce experience — and to answer your question of, “Did we get it right?”
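
[One way to picture those three buckets is as a simple roll-up, sketched below in Python. The indicators and values are hypothetical placeholders; in practice each organization would derive its own metrics from its listening strategy.]

```python
# Hypothetical, already-normalized (0-1) indicators, grouped into the
# three buckets described above.
workforce_experience = {
    "user_experience": {"task_completion": 0.92, "tool_satisfaction": 0.81},
    "worker_feedback": {"engagement_survey": 0.78, "manager_feedback": 0.84},
    "operational":     {"on_time_delivery": 0.88, "first_pass_yield": 0.90},
}

def bucket_score(metrics: dict) -> float:
    """Average the indicators in one bucket into a single health value."""
    return sum(metrics.values()) / len(metrics)

for bucket, metrics in workforce_experience.items():
    print(f"{bucket}: {bucket_score(metrics):.2f}")
```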

But in the end, as I shared at the beginning, I think it’s really critical that organizations measure that workforce experience through the ultimate lens, which is, “How are we dealing with our customers?” When that’s performing well, chances are pretty good, based on the research that we have seen, that the connection is there to the employee or workforce experience.

Minahan: When we are talking about the employee experience, we should be careful — it’s not synonymous with just productivity. It’s a balance of productivity and employee engagement that together ultimately drives greater business results, customer satisfaction, and improved profitability. Employee experience has been treated as synonymous with productivity; productivity is certainly a key input into it, but it’s not the only one.

Gardner: Tim, how should IT people be thinking differently when it comes to how they view their own success? It was not that long ago that simply the performance of the systems — when all the green lights were on and the networks were not down — was the gauge of success. Should IT be elevating how it perceives itself, and therefore how it rates its own success, within these larger digital transformation endeavors?

Information, technology, and better business

Minahan: Yes, absolutely. I think this could be the revitalization of IT as it moves beyond the items that you mentioned: keeping the networks up, keeping the applications performing well. IT can now drive better business outcomes and results.

Those forward-thinking companies looking to digitize their business realize that it’s very hard to ask an employee base to drive a greater digital customer experience without arming them with the right tools, information, and experience in their own right in order to get that done. IT plays a very major role here, locking arms in unison with the CHRO, to move the needle and turn employee experience into a competitive edge — not just for attracting and retaining talent, but ultimately for driving better business results.

Gardner: I hope, if anything, this conversation prompts more opportunity for the human resources leadership and the IT leadership to spend time together and brainstorm and find commonality.

Before we sign off, just a quick look to the future. Art, for you, what might be changing soon that will help remove even more friction for employees? What is it that’s down the pike over the next three to five years — technologies, processes, market forces — that might be an accelerant to removing friction? Are there bright spots in your thinking about the future?

Bright symphony ahead

Mazor: I think the future is really bright. We are optimistic by nature, and we see enterprises making terrific, bold moves to embrace their future, as challenging as that future is.

One of the biggest opportunities is the recognition of the imperative for executives and their teams to operate in a more symphonic way. By that I mean working together to achieve a common set of results, moving away from the historical silos that emerged from a zeal for efficiency — a zeal that led to organizations having these various departments, with each department working within itself and struggling to create integration.

We are seeing a huge unlocking of that, in the spirit of creating more cross-functional teams and more agile ways of working — truly operating in the digital age. As we talked about in one of our recent Human Capital Trends reports, the idea driving this is a more symphonic C-suite, which then has a cascading effect for teams across the board inside enterprises, all working better together.

And then, secondly, there is a big recognition by enterprises now around the imperative to create meaning in the work that workers are doing. Increasingly, we are seeing this as a demand. And it is not a single-generational demand — it’s not that only the younger generation needs meaning, or any stereotype like that.

Rather, it’s a recognition that when we create purpose and meaning for the workers in an enterprise, they are more committed. They are more focused on outcomes, as opposed to activities. They begin to recognize the outcomes’ linkage to their own personal purpose, meaning for the enterprise, and for the work itself.

And so, I think those two things will continue to emerge on a fairly rapid basis: embracing that need for symphonic operations and symphonic collaboration, as well as the imperative to create meaning and purpose for the workers of an enterprise. This will unlock and unleash capabilities focused on the customer through creating terrific employee and workforce experiences.

Gardner: Tim, last word to you. How do you foresee over the next several years technology evolving to support and engender the symphonic culture that Art just described?

Minahan: We have gotten to the point where employees are asking for a simplification of their environment, a unified access to everything, and to remove noise from their days so they can do that meaningful, purposeful work.

But what’s exciting is that same platform can be enabled to elevate the skill sets of all employees, giving them the right information, and the right task at the right time so they can perform at their very best.

But what gets me very excited about the future is the technology and a lot of the new thinking that’s going on. In the next few years, we’re going to see work models similar to the example I shared about eBay. We will see change in the ways we work that we haven’t seen in the past 100 years, where the lines between different functions and different organizations begin to evaporate.

Instead, we will have work models where companies begin to organize around pools of talent, where they know who has the right skills and the right knowledge, regardless of whether they are full-time employees or contractors. Technology will pull them together into workgroups no matter where they are in the world, to solve a given problem or produce a given outcome, and then dissolve those groups very quickly again. So I am very excited about what we are going to see in just the next five years.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Citrix.


How rapid machine learning at the racing edge accelerates Venturi Formula E Team to top-efficiency wins

The next BriefingsDirect Voice of the Customer advanced motorsports efficiency innovation discussion explores some of the edge computing and deep analytics technology behind the Formula E auto racing sport.

Our interview focuses on how data-driven technology and innovation make high-performance electric racing cars an example for all endeavors where limits are continuously tested and bested.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To learn more about the latest in Formula E efficiency strategy and refinement, please welcome Susie Wolff, Team Principal at Venturi Formula E Team in Monaco. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Aside from providing a great viewing and fan experience, what are the primary motivators for Formula E racing? Why did it start at all?

Wolff: It’s a really interesting story, because Formula E is like a startup. We are only in our fifth season, and Formula E and its management disrupted the world of motorsport because they brought to the table a new concept of going racing.

We race in city centers. That means that the tracks are built up just for one-day events, right in the heart of some of the most iconic capitals throughout the world. Because it’s built up within a city center and it’s usually only a one-day event, you get very limited track time, which is quite unusual in motorsport. In the morning we get up, we test, we go straight into qualifying, and then we race.

Yet, it’s attracting a new audience because people don’t need to travel to a race circuit. They don’t need to buy an expensive ticket. The race comes to the people, as opposed to the people going out to see racing.

Obviously, the technology is something completely new for people. There is very little noise; mostly you hear the whooshing of the cars going past. It’s a showcase for new technologies, which we are all going to see appearing on the road in the next three to five years.

Race down to Electric Avenue 

The automotive industry is going through a massive change with electric mobility, and motorsport is following suit with Formula E.

We already see some of the applications on the roads, and I think that will increase year on year. What motorsport is so good at is testing and showcasing the very latest technology.

Gardner: I was going to ask you about the noise because I had the privilege and joy of watching a Formula One event in Monaco years ago, and the noise was a big part of it. Aside from these cars being so quiet, what is also different in terms of an electric Formula E race compared to traditional Formula One?

Wolff: The noise is the biggest factor, and that takes a bit of getting used to. It’s the roaring of the engines that creates emotion and passion. Obviously, in the Formula E cars you are missing any kind of noise.

Even the cars we are starting to drive on the roads now have a little electric start and every time I switch it on I think, “Oh, the car is not working, I have a problem.” I forget that there is no noise when you switch an electric car on.

Also, in Formula E, the way that technology is developing and how quickly it’s developing is very clear through the racing. Last season, the drivers had two cars and they had to switch cars in the middle of the race because the battery wouldn’t last long enough for a complete race distance. Now, because the battery technology has advanced so quickly, we are doing one race with one car and one battery. So I think that’s really the beauty of what Formula E is. It’s showcasing this new technology and electric mobility. Add to this the incredible racing and the excitement that brings, and you have a really enticing offering.

Gardner: Please tell us about Venturi, as a startup, and how you became Team Principal. You have been involved with racing for quite some time.

A new way to manage a driving career

Wolff: Yes, my background is predominately in racing. I started racing cars when I was only eight years old, and I made it through the ranks as a racing driver, all the way to becoming a test driver in Formula One.

Then I stepped away and decided to do something completely different and started a second career. I was pretty sure it wouldn’t be in motorsport, because my husband, Toto Wolff, works in motorsport. I didn’t want to work for him and didn’t want to work against him, so I was very much looking for a different challenge and then Venturi came along.

The President of Venturi, a great gentleman, Gildo Pastor, is a pioneer in electric mobility. He was one of the first to see the possibility of using batteries in cars, and he set a number of land speed records — all electric. He joined Formula E from the very beginning, realizing the potential it had.

The team is based in Monaco, which is a very small principality, but one with a very rich history in racing because of the Grand Prix. Gildo had approached me previously, when I was still racing, to drive for his team in Formula E. I was one of the cynics, not sure Formula E was going to be for the long-term. So I said, “Thank you, but no thank you.”

But then he contacted me last year and said, “Look, I think we should work together. I think you will be fantastic running the team.” We very quickly found a great way to work together, and for me, it was just the perfect challenge. It’s a new form of racing, it’s breaking new ground and it’s at such an exciting stage of development. So, it was the perfect step for me into the business and management side of motorsports.

Gardner: For me, the noise difference is not much of an issue because the geek factor gets me jazzed about automobiles, and I don’t think I am alone in that. I love the technology. I love the idea of the tiny refinements that improve things, and that interaction between the best of what people can do and what machines can do.

Tell us about your geek factor. What is new and fascinating for you about Formula E cars? What’s different from the refinement process that goes on with traditional motorsport and the new electric version?

The software challenge 

Wolff: It’s a massively different challenge from what we are used to within traditional forms of motorsport.

The new concept behind Formula E has functioned really well. Just this season, for example, we had eight races with eight different winners. In other categories, for example in Formula One, you just don’t get that. There is only the possibility for three teams to win a race, whereas in Formula E, the competition is very different.

Also, as a team, we don’t build the cars from scratch. A Formula One team would be responsible for the design and build of their whole car. In Formula E, 80 percent of the car is standardized. So every team receives the same car up to that 80 percent. The last part is the power train, the rear suspension, and some of the rear-end design of the car.

The big challenge within Formula E, then, is in the software. It’s ultimately a software race: who can develop, upgrade, and react quickly enough on the software side. And obviously, as soon as you deal with software, you are dealing with a lot of data.

That’s one of the biggest challenges in Formula E — it’s predominantly a software race as opposed to a hardware race. If it’s hardware, it’s set at the beginning of the season, it’s homologated, and it can’t be changed.

In Formula E, the performance differentiators are the software and how quickly you can analyze, use, and redevelop your data to enable you to find the weak points and correct them quickly enough to bring to the on-track performance.
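
[As a toy illustration of that analyze-and-correct loop, here is a hedged Python sketch that compares the latest lap’s sector times against the session’s best to flag where time is being lost. Real Formula E analysis works on far richer telemetry, such as throttle, brake, regeneration, and battery temperature, but the pattern of aggregate, compare, and flag the weak point is the same.]

```python
# Hypothetical sector times in seconds for several laps of a session.
laps = [
    {"s1": 24.1, "s2": 31.8, "s3": 27.5},
    {"s1": 23.9, "s2": 31.2, "s3": 27.9},
    {"s1": 24.3, "s2": 31.5, "s3": 27.2},
]

# Session-best time per sector, across all laps.
best = {s: min(lap[s] for lap in laps) for s in ("s1", "s2", "s3")}
latest = laps[-1]

# Flag the sector where the latest lap loses the most time to the best.
losses = {s: latest[s] - best[s] for s in best}
weakest = max(losses, key=losses.get)
print(f"Largest loss in {weakest}: +{losses[weakest]:.2f}s vs session best")
```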

Gardner: It’s fascinating to me that this could be the ultimate software development challenge, because the 80/20 rule applies to a lot of other software development, too. The first 80 percent can be fairly straightforward and modular; it’s the last 20 percent that can make or break an endeavor.

Tell us about the real-time aspects. Are you refining the software during the race day? How does that possibly happen?

Winning: When preparation meets data 

Wolff: Well, the preparation work is a big part of a race performance. We have a simulator based back at our factory in Monaco. That’s where the bulk of the preparation work is done. Because we are dealing with only a one-day event, it means we have to get everything crammed into an eight-hour window, which leaves us very little time between sessions to analyze and use the data.

The bulk of the preparation work is done in the simulator back at the factory. Each driver does between four and six days in the simulator preparing for a race. That’s where we do all of the coding and try to find the most efficient ways to get from the start to the finish of the race. That’s where we do the bulk of the analytical work.

When we arrive at the actual race, we are just doing the very fine tweaks because the race day is so compact. It means that you need to be really efficient. You need to minimize the errors and maximize the opportunities, and that’s something that is hugely challenging.

If you had a team of 200 engineers, it would be doable. But in Formula E, the regulations limit you to 20 people on your technical team on a race day. So that means that efficiency is of the utmost importance to get the best performance.

Gardner: I’m sure in the simulation and modeling phase you leverage high-performance computing (HPC) and other data technologies. But I’m particularly curious about that real-time aspect, with a limit of 20 people and the ability to still make some tweaks. How did you solve the data issues in a real-time, intensive, human-factor-limited environment like that?

Wolff: First of all, it’s about getting the right people on-board and being able to work with the right people to make sure that we have the knowhow on the team. The data is real-time, so in a race situation we are aware if there is a problem starting to arise in the car. It’s very much up to the driver to control that themselves, from within the car, because they have a lot of the controls. The very important data numbers are on their steering wheel.

They have the ability to change settings within the car — and that’s also what makes it massively challenging for the driver. This is not just about how fast you can go, it’s also how much extra capacity you have to manage in your car and your battery — to make sure that you are being efficient.
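
[To give a flavor of the in-car energy management being described, here is a simplified, hypothetical Python sketch. Real Formula E strategy models many more variables, such as regeneration, attack mode, temperature, and traffic, and updates them lap by lap.]

```python
def energy_per_lap_target(usable_kwh: float, laps_remaining: int,
                          reserve_kwh: float = 1.0) -> float:
    """Average energy the driver can spend per remaining lap (illustrative)."""
    return (usable_kwh - reserve_kwh) / laps_remaining

battery_kwh = 24.0    # hypothetical usable energy left in the pack
laps_left = 12
target = energy_per_lap_target(battery_kwh, laps_left)

last_lap_used = 2.1   # hypothetical telemetry for the previous lap
print(f"Target {target:.2f} kWh/lap; last lap used {last_lap_used:.2f} kWh")
if last_lap_used > target:
    print("Over budget: lift-and-coast earlier, or harvest more under braking")
```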

The data is of utmost importance — how it’s created, and then how quickly it can be analyzed and used to help performance. That’s where Hewlett Packard Enterprise (HPE) has been a huge benefit to us. First of all, HPE has been able to increase the speed at which we can send data from the factory to the race track, between engineers. That technology has also increased the level of our simulator and what it’s able to crunch through in the preparation work.

And that was just the start. We are now looking at all the different areas where we can apply that ability to crunch the numbers more quickly. It allows us to look at every different aspect, and it will all come down to those marginal gains in the end.

Gardner: Given that this is a team sport on many levels, you are therefore working with a number of technology partners. What do you look for in a technology partner?

Partner for performance 

Wolff: In motorsport, you very quickly realize if you are doing a good job or not. Every second weekend you go racing, and the results are shown on the track. It’s brutal because if you are at the top step of the podium, you have done a great job. If you are at the end, you need to do a better job. That’s a reality check we get every time we go racing.

For us to be the best, we need to work with the best. We’re obviously very keen to always work with the best in the field, but also with companies able to identify the exact needs we have and build a product or a package that helps us. Within motorsports, it’s very specific. It’s not like a normal IT company or a normal business where you can plug and play. We need to define what we can do, and what will bring added performance.

We need to work with companies that are agile. Ideally they have experience within motorsports. They know what you need, and they are able to deliver. They know what’s not needed in motorsports because everything is very time sensitive. We need to make sure we are working on the areas that bring performance — and not wasting resources and time in areas that ultimately are not going to help our track performance.

Gardner: A lot of times with motorsports it’s about eking out the most performance and the highest numbers when it comes to variables like skidpad and the amounts of friction versus acceleration. But I can see that Formula E is more about the interplay between the driver, the performance, and the electrical systems efficiency.

Is there something we can learn from Formula E and apply back to the more general electric automobile industry? It seems to me they are also fighting the battle to make the batteries last longest and make the performance so efficient that every electron is used properly.

Wolff: Absolutely. That’s why we have so many manufacturers in Formula E … the biggest names in the industry, like BMW, Audi, Jaguar, and now Mercedes and Porsche. They are all in Formula E because they are all using it as a platform to develop and showcase their technology. And there are huge sums of money being spent within the automotive industry now because there is such a race on to get the right technology into the next generation of electric cars. The technology is advancing so quickly. The beauty of Formula E is that we are at the very pinnacle of that.

We are purely performance-based and it means that those race cars and power trains need to be the most efficient, and the quickest. All of the technology and everything that’s learned from the manufacturers doing Formula E eventually filters back into the organizations. It helps them to understand where they can improve and what the main challenges are for their electrification and electric mobility in the end.

Gardner: There is also an auspicious timing element here. You are pursuing the refinement and optimization of electric motorsports at the same time that artificial intelligence (AI) and machine learning (ML) technologies are becoming more pervasive, more accessible, and brought right to the very edge … such as on a steering wheel.

Is there an opportunity for you to also highlight the use of such intelligence technologies? Will data analytics start to infer what should be happening next, rather than just people analyzing data? Is there a new chapter, if you will, in how AI can come to bear on your quest for the Formula E best?

AI accelerates data 

Wolff: A new chapter is just beginning. Certainly, in some of the conversations we’ve had with our partners — and particularly with HPE — it’s like opening up a treasure chest, because the one thing we are very good at in motorsports is generating lots of data.

The one thing that we are constrained on — and it’s purely down to manpower, time, and resources — is the analysis of that data. There is only so much that we have capacity for. And with AI, there are a couple of examples that I wouldn’t even want to share, because I wouldn’t want my competitors to know what’s possible.

There are a couple of examples where we have seen that AI can crunch the numbers in a matter of seconds and spit out the results. I can’t even comprehend how long it would take us to get to those numbers otherwise. It’s a clear example of how much AI is going to accelerate our learning on the data side — particularly because this is a software race, where so much analysis of the data is needed to bring new levels of performance. For us it’s going to be a game changer, and we are only at the start.

It’s incredibly exciting but also so important to make sure that we are getting it right. There is so much possibility that if we don’t get it right, there could be big areas that we could end up losing on.

Gardner: Perhaps soon, race spectators will not only be watching the cars and how fast they are going. Perhaps there will be a dashboard that provides views of the AI environment’s performance, too. It could be a whole new type of viewer experience — when you’re looking at what the AI can do as well as the car. Whoever thought that AI would be a spectator sport?

Wolff: It’s true and it’s not far away. It’s very exciting to think that that could be coming.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


The budding storage relationship between HPE and Cohesity brings the best of startup innovation to global enterprise reach

The next BriefingsDirect enterprise storage partnership innovation discussion explores how the best of startup culture and innovation can be married to the global reach, maturity, and solutions breadth of a major IT provider.

Stay with us to unpack the budding relationship between an upstart in the data management space, Cohesity, and venerable global IT provider Hewlett Packard Enterprise (HPE).

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about the latest in total storage efficiency strategies and HPE’s Pathfinder program we welcome Rob Salmon, President and Chief Operating Officer at Cohesity in San Jose, California, and Paul Glaser, Vice President and Head of the Pathfinder Program at HPE. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Paul, how have technology innovation, the nature of startups, and the pace of business today made a deep relationship between HPE and Cohesity the right fit for your mutual customers?

Paul Glaser

Glaser

Glaser: That’s an interesting question, Dana. To start, the technology and startup ecosystems in Silicon Valley, California — as well as in other tech centers globally — fuel the fire of innovation. The ample funding that’s available to startups, and the research coming out of top-tier universities such as Stanford, Carnegie Mellon, and MIT on the East Coast, fuel a lot of interesting, disruptive ideas that find their way into small startups.

The challenge for HPE as a large, global technology player is to figure out how to tap into the ecosystem of startups and the new disruptive technologies coming out of the universities, as well as serial entrepreneurs, foster and embrace that, and deliver those solutions and technologies to our customers.

Gardner: Paul, please describe the Pathfinder thesis and approach. What does it aim to do?

Insight, investment, and solutions

Glaser: Pathfinder, at the top level, is the venture capital (VC) program of HPE, and it can be subdivided into three core functions: insight, investments, and solutions. The insight component acts like a center of excellence; it keeps a finger on the pulse, if you will, of disruptive innovation in the startup community. It helps HPE as a whole interact with the startup and VC communities, and it identifies and curates leading technology innovations that we can ultimately deliver to our customers.

The second component is investments. It’s fairly straightforward: we act like a VC firm, taking small equity stakes in some of these startup companies.

And third, solutions. For the companies that are in our portfolio, we work with them to make introductions to product and technical organizations inside of HPE, fostering dialogue from a product evolution perspective and a solution perspective. We intertwine HPE’s products and technologies with the startup technology to create one-plus-one-equals-three. And we deliver that solution to customers and solve their challenges from a digital transformation perspective.

Gardner: How many startup companies are we talking about? How many in a year have typically been included in Pathfinder?

Glaser: We are a very focused program, so we align around the strategies for HPE. Because of that close collaboration with our portfolio companies and the business units, we are limited to about eight investments or new portfolio companies on an annual basis.

Today, the four-and-a-half-year-old program has about two dozen companies in the portfolio. We expect to add another eight over the next 12 months.

Gardner: Rob, tell us about Cohesity and why it’s such a good candidate, partner, and success story when it comes to the Pathfinder program.

Rob Salmon

Salmon

Salmon: Cohesity is a five-year-old company focused on data management for about 70 to 80 percent of all the data in an enterprise today. This is for large enterprises trying to figure out the next great thing to make them more operationally efficient, and to give them better access to data.

Companies like HPE are doing exactly the same thing, looking to figure out how to bring new conversations to their customers and partners. We are a software-defined platform. The company was founded by Dr. Mohit Aron, who has spent his entire 20-plus-year career working on distributed file systems. He is one of the key architects of the Google File System and co-founder of Nutanix. The hyperconverged infrastructure (HCI) movement, really, was his brainchild.

He started Cohesity five years ago because he realized there was a new, better way to manage large sets of data: not only in the data protection space, but for file services, test/dev, and analytics. The company has been selling the product for more than two and a half years, and we’ve been partners with Paul and the HPE Pathfinder team for more than three years. It’s been quite a successful partnership between the two companies.

Gardner: As I mentioned in my set-up, Rob, speed-to-value is the name of the game for businesses today. How have HPE and Cohesity together been able to help each other be faster to market for your customers?

One plus one equals three

Salmon: The partnership is complementary. What HPE brings to Cohesity is experience and reach. We get a lot of value from working with Paul, his team, and the entire executive team at HPE to bring our products and solutions to market.

When we think about the combination between the products from HPE and Cohesity, one-plus-one-equals-three-plus. That’s what customers are seeing as well. The largest customers we have in the world running Cohesity solutions run on HPE’s platform.

HPE brings credibility to a company of our size, in all areas of the world, and with large customers. We just could not do that on our own.

Gardner: And how does working with HPE specifically get you into these markets faster?

Salmon: In fact, we just announced an original equipment manufacturer (OEM) relationship with HPE whereby they are selling our solutions. We’re very excited about it.

Simplify Secondary Storage with HPE and Cohesity

I can give you a great example. I met with one of the largest healthcare providers in the world a year ago. They loved hearing about the solution. The question they had was, “Rob, how are you going to handle us? How will you support us?” And they said, “You are going to let us know, I’m sure.”

They immediately introduced me to the general manager of their account at HPE. We took that support question right off the table. Everything has been done through HPE. It’s our solution, wrapped around the broad support services and hardware capabilities of HPE. That made for a total solution for our customers, because that’s ultimately what these kinds of customers are looking for.

They are not just looking for great, new innovative solutions. They are looking for how they can roll that out at scale in their environments and be assured it’s going to work all the time.

Gardner: Paul, HPE has had huge market success in storage over the past several years, being on the forefront of flash and of bringing intelligence to how storage is managed on a holistic basis. How does the rest of storage, the so-called secondary level, fit into that? Where do you see this secondary storage market’s potential?

Glaser: HPE’s internal product strategy has centered on primary storage. You mentioned flash, with brands such as 3PAR and Nimble Storage; that’s where HPE has a lot of its own intellectual property today.

On the secondary storage side, we’ve looked to partners to round out our portfolio, and we will continue to do so going forward. Cohesity has become an important part of that partner portfolio for us.

But we think about more than just secondary storage from Cohesity. It’s really about data management. What does the data management lifecycle of the future look like? How do you get more insights on where your data is? How do you better utilize that?

Cohesity and that ecosystem will be an important part of how we think about rounding out our portfolio and addressing what is a tens of billions of dollars market opportunity for both companies.

Gardner: Rob, let’s dig into that total data management and lifecycle value. What are the drivers in the market making a holistic total approach to data necessary?

Cohesity makes data searchable, usable 

Salmon: When you look at the sheer size of the datasets that enterprises are dealing with today, there is an enormous data management copy problem. You have islands of infrastructures set up for different use cases for secondary data and storage. Oftentimes the end users don’t know where to look, and it may be in the wrong place. After a time, the data has to be moved.

The Cohesity platform indexes the data on ingest. We therefore have Google-like search capabilities across the entire platform, regardless of the use-case and how you want to use the data.

When we think about the legacy storage solutions out there for data protection, for example, all you can do is protect the data. You can’t do anything else; you can’t glean any insights from that data. Because we index on ingest, we are able to provide insights into the data and metadata in ways customers and enterprises have never seen before. And the larger the datasets that run on the Cohesity platform, the more insight customers can gain into their data.
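
To make the index-on-ingest idea concrete, here is a minimal, purely illustrative sketch in Python: a toy inverted index that is populated at write time, so that search becomes a set lookup rather than a scan of the stored data. This is not Cohesity’s actual architecture or API.

```python
# Toy illustration of "index on ingest": every record is tokenized and added
# to an inverted index the moment it lands, so later searches are set
# lookups, not scans. Purely illustrative -- not Cohesity's implementation.
from collections import defaultdict

class IngestIndexer:
    def __init__(self):
        self.store = {}                # doc_id -> raw record
        self.index = defaultdict(set)  # token -> doc_ids containing it

    def ingest(self, doc_id, text):
        """Store the record and index it in the same step."""
        self.store[doc_id] = text
        for token in text.lower().split():
            self.index[token].add(doc_id)

    def search(self, query):
        """Return doc_ids containing every query token (AND semantics)."""
        tokens = query.lower().split()
        if not tokens:
            return set()
        result = set(self.index.get(tokens[0], set()))
        for token in tokens[1:]:
            result &= self.index.get(token, set())
        return result

indexer = IngestIndexer()
indexer.ingest("backup-001", "quarterly sales report EMEA")
indexer.ingest("backup-002", "sales forecast APAC")
print(indexer.search("sales report"))  # {'backup-001'}
```

Because every record is indexed as it arrives, a platform-wide, Google-like search never needs a separate pass over the stored data, which is what makes search across all use cases tractable.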

And it’s not just about our own applications. We recently introduced a marketplace where applications such as Splunk reside and can sit on top and access the data in the Cohesity platform. It’s about bringing compute, storage, networking, and the applications all together to where the data is, versus moving data to the compute and to the applications.

Gardner: It sounds like a solution tailor-made for many of the new requirements we’re seeing at the edge. That means massive amounts of data generated from the Internet of things (IoT) and the industrial Internet of things (IIoT). What are you doing with secondary storage and data management that aligns with the many things HPE is doing at the edge?

Seamless at the edge

Salmon: When you think about both the edge and the public cloud, the beauty of a next-generation solution like Cohesity is we are not redesigning something to take advantage of the edge or the public clouds. We can run a virtual edition of our software at the edge, and in public cloud. We have a multiple-cloud offering today.

So, from the edge all the way to on-premises and into public clouds it’s a seamless look at all of your data. You have access and visibility to all of the data without moving the data around.

Gardner: Paul, it sounds like there’s another level of alignment here, and it’s around HPE’s product strategies. With HPE InfoSight and HPE OneView — managing core-to-edge issues across multiple clouds as well as a hybrid cloud — this all sounds quite well-positioned. Tell us more about the product strategy synergy between HPE and Cohesity.

Glaser: Dana, I think you hit it spot-on. HPE CEO Antonio Neri talks about a strategy for HPE that’s edge-centric, cloud-enabled, and data-driven. As we think about building our infrastructure capabilities — both for on-premise data centers and extending out to the edge — we are looking for partners that can help provide that software layer, in this case the data management capability, that extends our product portfolio across that hybrid cloud experience for our customers.

As you think about a product strategy for HPE, you really step up to the macro strategy, which is, how do we provide a solution for our customers that allows us to span from the edge all the way to the core data center? We look at partners that have similar capabilities and similar visions. We work through the OEMs and other types of partnership arrangements to embed that down into the product portfolio.

Gardner: Rob, anything to offer additionally on the alignment between Cohesity and HPE, particularly when it comes to the data lifecycle management?

Salmon: The partnership started with Pathfinder, and we are absolutely thrilled with the partnership we have with HPE’s Pathfinder group. But when we did the recent OEM partnership with HPE, it was actually with HPE’s storage business unit. That’s really interesting because as you think about competing or not, we are working directly with HPE’s storage group. This is very complementary to what they are doing.

We understand our swim lane, and they understand theirs. And yet this gives HPE a far broader portfolio in environments where they are looking at what competitors are doing. They are saying, “We now have a better solution for what we are up to in this particular area by working with Cohesity.”

We are excited not just to work with the Pathfinder group but by the opportunity we have with Antonio Neri’s entire team. We have been welcomed into the HPE family quite well over the last three years, and we are just getting started with the opportunity as we see it.

Gardner: Another area that is top-of-mind for businesses is not just the technology strategy, but the economics of IT and how it’s shifted given the cloud, Software as a Service (SaaS), and pay-on-demand models. Is there something about what HPE is doing with its GreenLake Flex Capacity approach that is attractive to Cohesity? Do you see the reception in your global market improved because of the opportunity to finance, acquire, and consume IT in a variety of different ways?

Flexibility increases startups’ strength 

Salmon: Without question! Large enterprises want to buy it the way they want to buy it, whether that be perpetual licenses or a subscription model. They want to dictate how it will be used in their environments. By working with HPE and GreenLake, we are able to offer the flexible options required to win in this market today.

Gardner: Paul, any thoughts about the economics of consuming IT and how Pathfinder might be attractive to more startups because of that?

Glaser: There are two points Rob touched on that are important. One, working with HPE as a large company, it’s a journey. As a startup you are looking for that introduction or that leg up that gives you visibility across the global HPE organization. That’s what Pathfinder provides. So, you start working directly with the Pathfinder organization, but then you have the ability to spread out across HPE.

For Cohesity, it’s led to the OEM agreement with the storage business unit. It is the ability to leverage different consumption models utilizing GreenLake, and some of our flexible pricing and flexible consumption offers.

The second point is that Amazon Web Services has conditioned customers to think about pay-per-use. Customers are asking for that, and they are looking for flexibility. As a startup, it is sometimes hard to figure out how to provide that capability economically. Being able to partner with HPE and Pathfinder, and to utilize GreenLake or some of our other tools, really provides a leg up in conversations with customers. It helps them trust that the solution will be there and that somebody will stand behind it over the coming years.

Gardner: Before we close out, I would like to peek in the crystal ball for the future. When you think about the alignment between Cohesity and HPE, and when we look at what we can anticipate — an explosion of activity at the edge and rapidly growing public cloud market — there is a gorilla in the room. It’s the new role for inference and artificial intelligence (AI), to bring more data-driven analytics to more places more rapidly.

Any thoughts about where the relationship between HPE and Cohesity will go on an AI tangent product strategy?

AI enhances data partnership

Salmon: You touched earlier, Dana, on HPE InfoSight, and we are really excited about the opportunity to partner even closer with HPE on it. That’s an incredibly successful product in its own right. The opportunity for us to work closer and do some things together around InfoSight is exciting.

On the Cohesity side, we talk a lot about not just AI but machine learning (ML) and where we can go proactively to give customers insights into not only the data, but also the environment itself. It can be very predictive. We are working incredibly hard on that right now. And again, I think this is an area that is really just getting started in terms of what we are going to be able to do over a long period of time.

Gardner: Paul, anything to offer on the AI future?

Glaser: Rob touched on the immediate opportunity for the two companies to work together, which is around HPE InfoSight and marrying our capabilities in terms of predictability and ML around IT infrastructure and creative solutions around that.

As you extend the vision to being edge-centric, as you look into the future where applications become more edge-centric and compute is going to move toward the data at the edge, the lifecycle of what that data looks like from a data management perspective at the edge — and where it ultimately resides — is going to become an interesting opportunity. Some of the AI capabilities can provide insight on where the best place is for that computation, and for that data, to live. I think that will be interesting down the road.

Gardner: Rob, for other startups that might be interested in working with a big vendor like HPE through a program like Pathfinder, any advice that you can offer?

Salmon: As a startup, you know you are good at something, and it’s typically around the technology itself. You may have a founder like Mohit Aron, who is absolutely brilliant in his own right in terms of what he has already done in the industry and what we are going to continue to do. But you have got to do all the building around that brilliance and that technology and turn it into a true solution.

And again, back to this notion of a solution: the solution needs global scale and the support customers expect, not just one experience with you, but the experience they expect from the enterprises that support them. You can learn a lot from working with large enterprises. They may not be the ones to tell you exactly how to code your product; we have that figured out with the brilliance of Mohit and the engineering team around him. But as we think about getting to scale, and scaling our operation, leaning on a group like Pathfinder at HPE has helped us an awful lot.

The other great thing about working with the Pathfinder group is, as Paul touched on earlier, that they work with other portfolio companies. Those companies may be in a slightly different space than we are, but they are seeing similar challenges.

How do you grow? How do you open up a market? How do you look at bringing the product to market in different ways? We talked about consumption pricing and the new consumption models. Since they are experiencing that with others, and what they have already done at HPE, we can benefit from that experience. So leveraging a large enterprise like an HPE and the Pathfinder group, for what they know and what they are good at, has been invaluable to Cohesity.

Gardner: Paul, for those organizations that might want to get involved with Pathfinder, where should they go and what would you guide them to in terms of becoming a potential fit?

Glaser: I’d just point them to hewlettpackardpathfinder.com. You can find information on the program there, contact information, portfolio companies, and that type of thing.

We also put out a set of perspectives that talk about some of our investment theses and you can see our areas of interest. So at a high level, we look for companies that are aligned to HPE’s core strategies, which is going to be around building up the hybrid IT business as well as the intelligent edge.

So we have those specific swim lanes from a strategic perspective. And second, we look for companies that have demonstrated success from a product perspective, say a couple of initial customer wins, and that need help scaling the business. Those are the types of opportunities we are looking for.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


HPE’s Erik Vogel on what’s driving success in hybrid cloud adoption and optimization

The next BriefingsDirect Voice of the Innovator discussion explores the latest insights into hybrid cloud success strategies.

Just as the adoption of public cloud services across an enterprise has often been ad hoc, the right mix and operational coordination required of true hybrid cloud cannot succeed if it is not well managed. While many businesses recognize there’s a hybrid cloud future, far fewer are adopting a hybrid cloud approach with due diligence, governance, and cost optimization.

Stay with us as we examine the innovation maturing around hybrid cloud models and operations and learn how proper common management of hybrid cloud can make or break the realization of its promised returns.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

Here to explain how to safeguard successful hybrid cloud deployments and operations is Erik Vogel, Global Vice President of Hybrid IT and Cloud at Hewlett Packard Enterprise (HPE). The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: The cloud model was very attractive, people jumped into it, but like with many things, there are unintended consequences. What’s driving cloud and hybrid cloud adoption, and what’s holding people back?

Vogel: All enterprises are hybrid at this point, and whether they have accepted that realization depends on the client. But pretty much all of them are hybrid. They are all using a combination of on-premises, public cloud, and software-as-a-service (SaaS) solutions. They have brought all of that into the enterprise. There are very few enterprises we talk to that don’t have some hybrid mix already in place.

Hybrid is here, but needs rationalization

But when we ask them how they got there, most have done it in an ad hoc fashion. Most have had developers who went out to one or multiple hyperscale cloud providers, or business units that went out and started to consume SaaS solutions, or IT organizations that built their own on-premises solutions, whether that’s an open private cloud or a Microsoft Azure Stack environment.

They have done all of this in pockets within the organization. Now, they are seeing the challenge of how to start managing and operating it all in a consistent, common fashion. There are a lot of different solutions and technologies, yet each has its own operating model, its own consoles, and its own rules to work within.

And that is where we see our clients struggling. They don’t have a holistic strategy or approach to hybrid, but rather they’ve done it in this bespoke or ad hoc fashion. Now they realize they are going to have to take a step back to think this through and decide what is the right approach to enforce common governance and gain common management and operating principles, so that they’re not running 5, 6, 8 or even 10 different operating models. Rather, they need to ask, “How do we get back to where we started?” And that is a common operating model across the entire IT estate.

Gardner: IT traditionally over the years has had waves of adoption that led to heterogeneity that created complexity. Then that had to be managed. When we deal with multicloud and hybrid cloud, how is that different from the UNIX wars, or distributed computing, and N-tier computing? Why is cloud a more difficult heterogeneity problem to solve than the previous ones?

Vogel: It’s more challenging. It’s funny: we typically refer to what we used to see in the data center as the Noah’s Ark data center. You would walk into a data center and see two of everything, two of every vendor, just about everything within the data center.

How to Better Manage Multicloud Sprawl

And it was about 15 years ago when we started to consolidate all of that into common infrastructures, common platforms to reduce the operational complexity. It was an effort to reduce total cost of ownership (TCO) within the data center and to reduce that Noah’s Ark data center into common, standardized elements.

Now that pendulum is starting to swing back. It’s becoming more of a challenge because it’s now so easy to consume non-standard and heterogeneous solutions. Before there was still that gatekeeper to everything within the data center. Somebody had to make a decision that a certain piece of infrastructure or component would be deployed within the data center.

Now, we have developers who go to a cloud and consume, with just the swipe of a credit card, any of the three or four hyperscale solutions and literally thousands of SaaS solutions. Just look at the Salesforce.com platform and all of the different options that surround it.

All of a sudden, we lost the gatekeeper. Now we are seeing sprawl toward more heterogeneous solutions occurring even much faster than what we saw 10 or 15 years ago with the Noah’s Ark data center.

The pendulum is definitely shifting back toward consuming lots of different solutions with lots of different capabilities and services. And we are seeing it moving much faster than it did before because of that loss of a gatekeeper.

Gardner: Another difference is that we’re talking mostly about services. By consuming things as services, we’re acquiring them not as a capital expenditure that has a three- to five-year cycle of renewal, this is on-demand consumption, as you use it.

That makes it more complicated, but it also makes it a problem that can be solved more easily. Is there something about the nature of an all-services hybrid and multicloud environment, on an operations budget, that makes it more solvable?

Services become the norm 

Vogel: Yes, absolutely. The economics definitely play into this. I have this vision that within the next five years, we will no longer call things “as a service” because it will be the norm, the standard. We will only refer to things that are not as a service, because as an industry we are seeing a push toward everything being consumed as a service.

From an operating standpoint, the idea of consuming and paying for only what we use is very, very attractive. Again, if you look back 10 or 15 years, typically within a data center, we’d be buying for a three- or four-year lifespan. That forced us to make predictions as to what type of demand we would be placing on capital expenditures.

And what would happen? We would always overestimate. If you looked at utilization of CPU, of disk, of memory, they were always 20 to 25 percent; very low utilization, especially pre-virtualization. We would end up overbuying, pay the full load, and still pay for full maintenance and support, too.

There was very little ability to dial that up or down. The economic capability of being able to consume everything as a service is definitely changing the game, even for things you wouldn’t think of as a service, such as buying a server. Our enterprise customers are really taking notice of that because it gives them the ability to flex the expenditures as their business cycles go up and down.

Rarely do we see enterprises with constant demand for compute capacity. So, it’s very nice for them to be able to flex that up and down, adjust the normal seasonal effects within a business, and be able to flex that operating expense as their business fluctuates.

That is a key driver of moving everything to an as-a-service model, giving flexibility that just a few years ago we did not have.
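
For a rough sense of those economics, here is a back-of-the-envelope comparison in Python, using the 20 to 25 percent utilization figure mentioned above. All prices are invented placeholders, not actual HPE GreenLake rates.

```python
# Compare buying fixed capacity sized for peak demand versus paying only for
# what is consumed. Figures are invented for illustration only.
peak_units = 100          # capacity sized for peak demand
avg_utilization = 0.25    # typical utilization cited in the discussion
unit_month_capex = 80     # effective monthly cost per unit, bought outright
unit_month_paygo = 100    # per-unit premium when paying only for usage

capex_bill = peak_units * unit_month_capex                    # pay for all of it
paygo_bill = peak_units * avg_utilization * unit_month_paygo  # pay for usage

print(f"Fixed capacity: {capex_bill:.0f}/month")  # 8000/month
print(f"Pay-per-use:    {paygo_bill:.0f}/month")  # 2500/month
```

Even at a per-unit premium, paying only for consumed capacity comes out well ahead when utilization sits near 25 percent, which is the intuition behind flexing expenditure with the business cycle.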

Gardner: The good news is that these are services — and we can manage them as services. The bad news is these are services coming from different providers with different economic and consumption models. There are different application programming interfaces (APIs), stock keeping unit (SKU) definitions, and management definitions that are unique to their own cloud organization. So how do we take advantage of the fact that it’s all services but conquer the fact that it’s from different organizations speaking, in effect, different languages?

Vogel: You’re getting to the heart of the challenge in terms of managing a hybrid environment. If you think about how applications are becoming more and more composed now, they are built with various different pieces, different services, that may or may not be on-premises solutions.

One of our clients, for example, has built an application for their sales teams that provides real-time client data and client analytics before a seller goes in and talks to a customer. And when you look at the complexity of that application, they are using Salesforce.com, they have an on-premises customer database, and they get point of sales solutions from another SaaS provider.

Why You Need Everything As a Service

They also have analytics engines they get from one of the cloud hyperscalers. And all of this comes together to drive a mobile app that presents all of this information seamlessly to their end-user seller in real-time. They become better armed and have more information when they go meet with their end customer.

When we look at these new applications, or services (I don’t even call them applications, because they are services composed from multiple applications), they cross multiple service providers, multiple SaaS providers, and multiple hyperscalers.

And as you look at how we interface and connect with those, how we pass data, exchange information across these different service providers, you are absolutely right, the taxonomies are different, the APIs are different, the interfaces and operations challenges are different.

When that seller goes to make that call, and they bring up their iPad app and all of a sudden, there is no data or it hasn’t been refreshed in three months, who do you call? How do you start to troubleshoot that? How do you start to determine if it’s a Salesforce problem, a database problem, a third-party service provider problem? Maybe it’s my encrypted connection I had to install between Salesforce and my on-premises solution. Maybe it’s the mobile app. Maybe it’s a setting on the iPad itself.

Adding up all of that complexity is what’s building the problem. We don’t have consistent APIs, consistent taxonomies, or even the way we look at billing and the underlying components for billing. And when we break that out, it varies greatly between service providers.

This is where we see the complexity of hybrid IT. We have all of these different service providers working and operating independently, yet we’re trying to bring them together to provide end-customer services. Composing those different services creates one of the biggest challenges we have today within hybrid cloud environments.
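
To make the “who do you call?” problem above concrete, here is a simplified Python sketch that probes each layer of a composed service in turn and reports the first failing dependency. The endpoints and service names are hypothetical stand-ins, not real provider APIs.

```python
# Probe each dependency of a composed service in order and report the first
# failing layer. Endpoints are hypothetical placeholders.
import urllib.request

DEPENDENCIES = [
    ("salesforce",          "https://crm.example.invalid/health"),
    ("on-prem customer db", "https://db.example.invalid/health"),
    ("point-of-sale saas",  "https://pos.example.invalid/health"),
    ("analytics engine",    "https://analytics.example.invalid/health"),
]

def probe(url, timeout=3):
    """Return True if the endpoint answers HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except Exception:
        return False

def triage():
    for name, url in DEPENDENCIES:
        if not probe(url):
            return f"First failing layer: {name}"
    return "All dependencies healthy; suspect the app or the device itself"

print(triage())
```

Even this toy version shows why a composed service needs a single, cross-provider view: without one, each provider can only report on its own layer.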

Gardner: Even if we solve the challenge on the functional level — of getting the apps and services to behave as we want — it seems as much or more a nightmare for the chief financial officer (CFO) who has to determine whether you’re getting a good deal or buying redundancy across different cloud providers. A lot of times in procurement you cut a deal on volume. But how you do that if you don’t know what you’re buying from whom?

How do we pay for these aggregate cloud services in some coordinated framework with the least amount of waste?

How to pay the bills

Vogel: That is probably one of the most difficult jobs within IT today, the finance side of it. There are a lot of challenges of putting that bill together. What does that bill really look like? And not just at an individual component level. I may be able to see what I’m paying from Amazon Web Services (AWS) or what Azure Stack is costing me. But how do we aggregate that? What is the cost to provide a service? And this has been a challenge for IT forever. It’s always been difficult to slice it by service.

We knew what compute costs, what network costs, and what the storage costs were. But it was always difficult to make that vertical slice across the budget. And now we have made that problem worse because we have all these different bills coming in from all of these different service providers.

The procurement challenge is even more acute because now we have these different service providers. How do we know what we are really paying? Developers swipe credit cards, and IT doesn’t even see the bill or get a true accounting of what’s being spent across the public clouds. It comes through as a credit card expense, not directed to IT.

We need to get our hands around these different expenses, where we are spending money, and think differently about our procurement models for these services.
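
As a sketch of what getting your hands around those expenses can involve, the following Python snippet normalizes differently shaped billing records into a common form and then slices cost by business service. The providers, field names, and amounts are hypothetical; real billing exports differ.

```python
# Normalize provider-specific billing records into (service, cost) pairs,
# then aggregate cost per business service. All record shapes are invented.
from collections import defaultdict

def normalize(provider, record):
    """Map one provider-specific record onto a common (service, cost) shape."""
    if provider == "cloud_a":
        return record["tags"].get("service", "untagged"), record["unblended_cost"]
    if provider == "cloud_b":
        return record["labels"].get("svc", "untagged"), record["amount"]
    if provider == "saas_x":
        return record["department"], record["monthly_fee"]
    raise ValueError(f"unknown provider: {provider}")

raw_bills = [
    ("cloud_a", {"tags": {"service": "sales-app"}, "unblended_cost": 410.0}),
    ("cloud_b", {"labels": {"svc": "sales-app"}, "amount": 135.5}),
    ("saas_x",  {"department": "sales-app", "monthly_fee": 99.0}),
    ("cloud_a", {"tags": {}, "unblended_cost": 62.0}),
]

by_service = defaultdict(float)
for provider, record in raw_bills:
    service, cost = normalize(provider, record)
    by_service[service] += cost

for service, total in sorted(by_service.items()):
    print(f"{service}: {total:.2f}")  # sales-app: 644.50, untagged: 62.00
```

The “untagged” bucket is the telling part: without consistent tagging discipline across providers, the vertical slice by service described above simply cannot be computed.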

In the past, we talked about this as a brokerage but it’s a lot more than that. It’s more about strategic sourcing procurement models for cloud and hybrid cloud-related services.

Our IT procurement models have to change to address the problem of how we really know what we are paying for. Are we getting the strategic value out of the expenses within hybrid that we had expected?

It’s less about brokerage and looking for that lowest-cost provider and trying to reduce the spend. It’s more about, are we getting the service-level agreements (SLAs) we are paying for? Are we getting the services we are paying for? Are we getting the uptime we are paying for?

Gardner: In business over the years, when you have a challenge, you can try to solve it yourself and employ intelligence technologies to tackle complexity. Another way is to find a third-party that knows the business better than you do, especially for small- to medium-sized businesses (SMBs).

Are we starting to see an ecosystem develop where the consumption model for cloud services is managed more centrally, and then those services are repurposed and resold to the actual consumer business?

Third-parties help hybrid manage costs 

Vogel: Yes, I am definitely starting to see that. There’s a lot being developed to help customers consume and buy these services, and to be smarter about it. I always joke that the cheapest thing you can buy is somebody else’s experience, and that is absolutely the case when it comes to hybrid cloud services providers.

The reality is no enterprise can have expertise in all three of the hyperscalers, in all of the hundreds of SaaS providers, for all of the on-premises solutions that are out there. It just doesn’t exist. You just can’t do it all.

It really becomes important to look for people who can aggregate this capability and bring that collective experience back to you. You have to reduce overspend and make smarter purchasing decisions. Via these third-party services you can prevent things like lock-in and reduce buying risk. There is tremendous value being created by the firms jumping into that model and helping clients address these challenges.

The third-parties have people who have actually gone out and consumed and purchased within the hyperscalers, who have run workloads within those environments, and who can help predict what the true cost should be — and, more importantly, maintain that optimization going forward.

How to Remove Complexity From Multicloud and Hybrid IT

It’s not just about going in and buying anymore. There is ongoing optimization that has to occur, ongoing cost optimization where we’re continuously evaluating whether we are making the right decisions. And we are finding that the calculus changes over time.

So, while it might have made a lot of sense to put a workload, for example, on-premises today, based on the demand for that application and on pricing changes, it may make more sense to move that same workload off-premises tomorrow. And then in the future it may also make sense to bring it back on-premises for a variety of reasons.

You have to constantly be evaluating that. That’s where a lot of the firms playing in this space can add value now, helping with ongoing optimization and making sure we are always making the smart decision. It’s a very dynamic ecosystem; the calculus, the metrics, are constantly changing. We have the ability to constantly reevaluate. That’s the beauty of cloud: the ability to flex between these different providers.

Gardner: Erik, for those organizations interested in getting a better handle on this, are there any practical approaches available now?

The right mix of data and advice 

Vogel: We have a tool called HPE Right Mix Advisor, which gives us the ability to go in and assess very large application portfolios. The nice thing is, it scales up and down very nicely. It is delivered in a service model, so we are able to assess a set of applications against the variables I mentioned, weight the factors, and come up with a concrete list of recommendations on what our clients should do right now.

In fact, we like to talk not about the thousand things they could do, but about the 10 or 20 things they should start on tomorrow morning: the ones that are most impactful for their business.

The Right Mix Advisor tool helps identify those things that matter the most for the business right now, and provides a tactical plan to say, “This is what we should start on.”

And it’s not just the tool; we also bring our expertise, whether from our Cloud Technology Partners (CTP) acquisition, RedPixie, or the existing HPE business where we have done this for years. Experts look at the data, help refine it, and come up with a smart list that makes sense for our clients to get started on right now.

And of course, once they have accomplished those things, we can come back and look at it again and say, “Here is your next list, the next 10 or 20 things.” And that’s really how Right Mix Advisor was designed to work.

Gardner: It seems to me there would be a huge advantage if you were able to get enough data about what’s going on at the market level, that is to aggregate how the cloud providers are selling, charging, and the consumption patterns.

If you were in a position to gather all of the data about enterprise consumption among and between the cloud providers, you would have a much better idea of how to procure properly, manage properly, and optimize. Is such a data well developing? Is there anyone in the right position to be able to gather the data and start applying machine learning (ML) technologies to develop predictions about the best course of action for a hybrid cloud or hybrid IT environment?

Vogel: Yes. In fact, we have started down that path. HPE has started to tackle this by developing an expert system, a set of logic rules that helps make those decisions. We did it by combining a couple of fairly large datasets that we have developed over the last 15 years, primarily with HPE’s history of doing a lot of application migration work. We really understand on the on-premises side where applications should reside based on how they are architected and what the requirements are, and what type of performance needs to be derived from that application.

We have combined that with other datasets from some of our recent cloud acquisitions, CTP and RedPixie, for example. That has brought us a huge wealth of information based on a tremendous number of application migrations to the public clouds. And we are able to combine those datasets and develop this expert system that allows us to make those decisions pretty quickly as to where applications should reside based on a number of factors. Right now, we look at about 60 different variables.

But what’s really important when we do that is to understand from a client’s perspective what matters. This is why I go back to that strategic sourcing discussion. It’s easy to go in and assume that every client wants to reduce cost. And while every client wants to do that — no one would ever say no to that — usually that’s not the most important thing. Clients are worried about performance. They also want to drive agility, and faster time to market. To them that is more important than the amount they will save from a cost-reduction perspective.

The first thing we do when we run our expert system is go in and weight the variables based on what’s important to that specific client, aligned to their strategy. This is where it gets challenging for any enterprise trying to make smart decisions. In order to make strategic sourcing decisions, you have to understand strategically what’s important to your business. You have to make intelligent decisions about where workloads should go across the hybrid IT options that you have. So we run an expert system to help make those decisions.
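
To illustrate that weighting step, here is a minimal Python sketch of client-weighted placement scoring. It is only a toy: the criteria, ratings, and weights are invented for illustration and stand in for the roughly 60 variables the real expert system evaluates.

```python
# Score each placement option against client-weighted criteria and rank the
# results. Criteria, ratings, and weights are invented for illustration.
CRITERIA = ["cost", "performance", "agility", "compliance"]

# How well each placement satisfies each criterion, on a 0-10 scale.
PLACEMENTS = {
    "on-premises":  {"cost": 6, "performance": 9, "agility": 4, "compliance": 9},
    "public-cloud": {"cost": 7, "performance": 7, "agility": 9, "compliance": 6},
    "saas":         {"cost": 8, "performance": 6, "agility": 8, "compliance": 5},
}

def recommend(client_weights):
    """Rank placements by the client's weighted priorities."""
    scores = {
        name: sum(client_weights[c] * ratings[c] for c in CRITERIA)
        for name, ratings in PLACEMENTS.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# A client that values agility and time-to-market over raw cost savings.
weights = {"cost": 0.15, "performance": 0.25, "agility": 0.40, "compliance": 0.20}
for name, score in recommend(weights):
    print(f"{name}: {score:.2f}")
```

Changing the weights, say for a compliance-driven client, reorders the ranking, which is exactly why weighting against the client’s strategy has to come first.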

Now, as we collect more data, this will move toward more artificial intelligence (AI). As I am sure everybody is aware, AI requires a lot of data, and since we are still in the very early stages of true hybrid cloud and hybrid IT, we don’t yet have a massive enough dataset to make these decisions in a truly automated, learning-type model.

We started with an expert system to help us do that, to move down that path. But very quickly we are learning, and we are building those learnings into our models that we use to make decisions.

So, yes, there is a lot of value in people who have been there and done that. Bringing that data together in a unified fashion is exactly what we have done to help our clients. On their own, these decisions can take a year to figure out, yet you have to be able to make them quickly because it’s a very dynamic model. A lot of things are constantly changing. You have to keep loading the models with the latest and greatest data so you are always making the best, smartest decision, and always optimizing the environment.

Innovation, across the enterprise 

Gardner: Not that long ago, innovation in a data center was about speeds and feeds. You would innovate on technology and pass along those fruits to your consumers. But now we have innovated on economics, management, and understanding indirect and direct procurement models. We have had to innovate around intelligence technologies and AI. We have had to innovate around making the right choices — not just on cost but on operations benefits like speed and agility.

How has innovation changed such that it used to be a technology innovation but now cuts across so many different dynamic principles of business?

Vogel: It’s a really interesting observation. That’s exactly what’s happening. You are right; even as recently as five years ago we talked about speeds and feeds, trying to squeeze a little more out of every processor, trying to enhance the speed of the memory or the storage devices.

But now, as we have pivoted toward a services mentality, nobody asks when you buy from a hyperscaler — Google Cloud, for example — what central processing unit (CPU) chips they are running or what the chip speeds are. That’s not really relevant in an as-a-service world. So, the innovation then is around the service sets, the economic models, the pricing models, that’s really where innovation is being driven.

At HPE, we have moved in that direction as well. We provide our HPE GreenLake model and offer a flex-capacity approach where clients can buy capacity on-demand. And it becomes about buying compute capacity. How we provide that, what speeds and feeds we are providing becomes less and less important. It’s the innovation around the economic model that our clients are looking for.

We are only going to continue to see that type of innovation going forward, where it’s less about the underlying components. In reality, if you are buying the service, you don’t care what sort of chips and speeds and feeds are being provided on the back end as long as you are getting the service you have asked for, with the SLA, the uptime, the reliability, and the capabilities you need. All of what sits behind that becomes less and less important.

Think about how you buy electricity. You just expect 110 volts at 60 hertz coming out of the wall, and you expect it to be on all the time. You expect it to be consistent, reliable, and safely delivered to you. How it gets generated, where it gets generated — whether it’s a wind turbine, a coal-burning plant, a nuclear plant — that’s not important to you. If it’s produced in one state and transferred to another over the grid, or if it’s produced in your local state, that all becomes less important. What really matters is that you are getting consistent and reliable services you can count on.

How to Leverage Cloud, IoT, Big Data, and Other Disruptive Technologies

And we are seeing the same thing within IT as we move to that service model. The speeds and feeds, the infrastructure, become less important. All of the innovation is now being driven around the as-a-service model and what it takes to provide that service. We innovate at the service level, whether that’s for flex capacity or management services, in a true as-a-service capability.

Gardner: What do your consumer organizations need to think about to be innovative on their side? How can they be in a better position to consume these services such as hybrid IT management-as-a-service, hybrid cloud decision making, and the right mixture of decisions-as-a-service?

What comes next when it comes to how the enterprise IT organization needs to shift?

Business cycles speed IT up 

Vogel: At a business level, within almost every market and industry, we are moving from what used to be slow-cycle businesses to standard-cycle businesses, and in a lot of cases from standard-cycle to fast-cycle. Even businesses that were traditionally slow- or standard-cycle are accelerating. The underlying technology is creating that.

So every company is a technology company. That is becoming more and more true every day. As a result, it’s driving business cycles faster and faster. So, IT, in order to support those business cycles, has to move at that same speed.

And we see enterprises moving away from a traditional IT model when those enterprises’ IT cannot move at the speed the business is demanding. We will still see IT, for example, take six months to provide a platform when the business says, “I need it in 20 minutes.”

We will see a split between traditional IT and a digital innovation group within the enterprise. This group will be owned by the business unit as opposed to core IT.

So, businesses are responding when IT is not able to move fast enough, or to provide the responsiveness and level of service they need, by looking outside and consuming services externally.

As we move forward, how can clients start to move in this direction? At HPE, the services we have announced, and will be rolling out over the next six to 12 months, are designed to help our clients move faster. They provide operational support and management for hybrid, taking that burden away from IT, especially where IT may not have the skill sets or capabilities, and delivering a seamless operating experience to our IT customers. Those customers need to focus on the things that accelerate their business; that is what the business units are demanding.

To stay relevant, IT is going to have to do that, too. They are going to have to look for help and support so that they can move at the same speed and pace that businesses are demanding today. And I don’t see that slowing down. I don’t think anybody sees that slowing down; if anything, we see the pace continuing to accelerate.

When I talk about fast-cycle: services or solutions we put into the market that may once have had a shelf life of two to three years are now compressed to six months. It’s amazing how fast competition comes in, even when we are delivering innovative solutions. So, IT has to accelerate at that speed as well.

The HPE GreenLake hybrid cloud offering, for example, gives our clients the ability to operate at that speed by providing managed services capabilities across the hybrid estate. It provides a consistent platform, and then allows them to innovate on top of it. It takes away the management operation from their focus and lets them focus on what matters to the business today, which is innovation.

Gardner: For you personally, Erik, where do you get inspiration for innovation? How do you think out of the box when we can now see that that’s a necessary requirement?

Inspired by others

Vogel: One of the best parts about my job is the time I get to spend with our customers and to really understand what their challenges are and what they are doing. One of the things we look at are adjacent businesses.

We try to learn what is working well in retail, for example. What innovation is there, and what lessons learned can we apply elsewhere? A lot of times the industry shifts so quickly that we don’t have all of the answers. We can’t take a product-out approach any longer; we really have to start from the customer’s perspective and work back. Having that broad view and looking outside is really helping us. It’s where we are getting a lot of our inspiration.

For example, we are really focused on the overall experience that our clients have with HPE, and trying to drive a very consistent, standardized, easy-to-choose type of experience with us as a company. And it’s interesting as an engineering company, with a lot of good development and engineering capabilities, that we tend to look at it from a product-out view. We build a portal that they can work within, we create better products, and we get that out in front of the customer.

But by looking outside, we are saying, “Wait a minute, what is it, for example, about Uber that everybody likes?” It’s not necessarily that the app is good. It’s about the clean car, about not having to pay when you get out of the car or fumble for a credit card. It’s about seeing a map and knowing where the driver is. It’s about a predictable cost, where you know what it is going to cost. That overall experience is what makes Uber, Uber. It’s not just creating an app and saying, “Well, the app is the experience.”

We are learning a lot from adjacent businesses, adjacent industries, and incorporating that into what we are doing. It’s just part of that as-a-service mentality where we have to think about the experience our customers are asking for and how do we start building solutions that meet that experience requirement — not just the technical requirement. We are very good at that, but how do we start to meet that experience requirement?

How to Develop Hybrid Cloud Strategies With Confidence

And this has been a real eye-opener for me personally. It has been a really fun part of the job, to look at the experience we are trying to create. How do we think differently? Rather than producing products and putting them out into the market, how do we think about creating that experience first and then designing and creating the solutions that sit underneath it?

When you talk about where we get inspiration, it’s really about looking at those adjacencies. It’s understanding what’s happening in the broader as-a-service market and taking the best of what’s happening and saying, “How can we employ those types of techniques, those tricks, those lessons learned into what we are doing?” And that’s really driving a lot of our development and inspiration in terms of how we are innovating as a company within HPE.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


How total deployment intelligence overcomes the growing complexity of multicloud management

The next BriefingsDirect Voice of the Innovator discussion focuses on the growing complexity around multicloud management and how greater accountability is needed to improve business impacts from all-too-common haphazard cloud adoption.

Stay with us to learn how new tools, processes, and methods are bringing insights and actionable analysis that help regain control over the increasing challenges from hybrid cloud and multicloud sprawl.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to explore a more pragmatic path to modern IT deployment management is Harsh Singh, Director of Product Management for Hybrid Cloud Products and Solutions at Hewlett Packard Enterprise (HPE). The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What is driving the need for multicloud at all? Why are people choosing multiple clouds and deployments?

Harsh Singh

Singh

Singh: That’s a very interesting question, especially today. You have to step back and think about why people went to the cloud in the first place, and what the drivers were, to understand how sprawl expanded into a multicloud environment.

Initially, when people began moving to public cloud services, the idea was speed, agility, and quick access to resources. IT was in the way of getting on-premises resources quickly. People said, “Let me get the work going and let me deploy things faster.”

And they were able to quickly launch applications, which increased their velocity and time-to-market. Cloud helped them get there very fast. However, we now have choices across multicloud environments: various public clouds, plus private cloud environments where people can do similar things on-premises. There came a time when people realized, “Oh, certain applications fit in certain places better than others.”

From cloud sprawl to cloud smart

For example, if I want to run a serverless environment, I might want to run in one cloud provider versus another. But if I want to run more machine learning (ML), artificial intelligence (AI) kinds of functionality, I might want to run that somewhere else. And if I have a big data requirement, with a lot of data to crunch, I might want to run that on-premises.

So you now have more choices to make. People are thinking about where’s the best place to run their applications. And that’s where multicloud comes in. However, this doesn’t come for free, right?

How to Determine Ideal Workload Placement

As you add more cloud environments and different tools, it leads to what we call tool sprawl. You now have people tying all of these tools together trying to figure out the cost of these different environments. Are they in compliance with the various norms we have within our organization? Now it becomes very complex very fast. It becomes a management problem in terms of, “How do I manage all of these environments together?”

Gardner: It’s become too much of a good thing. There are very good reasons to do cloud, hybrid cloud, and multicloud. But there hasn’t been a rationalization about how to go about it in an organizational way that’s in the best interest of the overall business. It seems like a rethinking of how we go about deploying IT in general needs to be part of it.

Singh: Absolutely right. I see three pillars that need to be addressed in terms of looking at this complexity and managing it well. Those are people, process, and technology. The technology exists, but unfortunately, unless you have the right skill set in the people — and you have the right processes in place — it’s going to be the Wild West. Everything is just going to be crazy. In the end you falter, not achieving what you really want to achieve.

I look at people, process, and technology as the three pillars for addressing this tool sprawl — and addressing it is absolutely necessary for any company as it traverses its multicloud journey.

Gardner: This is a long-term, thorny problem. And it’s probably going to get worse before it gets better.

Singh: I do see it getting worse, but I also see a lot of people beginning to address these problems. Vendors, including we at HPE, are looking at this problem. We are trying to get ahead of it before a lot of enterprises crash and burn. We have experience with our customers, and we have engaged with them to help them on this journey.

It is going to get worse and people are going to realize that they need professional help. It requires that we work with these customers very closely and take them along based on what we have experienced together.

Gardner: Are you taking the approach that the solution for hybrid cloud management and multicloud management can be done in the same way? Or are they fundamentally different?

Singh: Fundamentally, it’s the same problem set. You must deploy the applications to the right places that are right for your business — whether it’s multicloud or hybrid cloud. Sometimes the terminology blurs. But at the end of the day, you have to manage multiple environments.

You may be connecting private or off-premises hybrid clouds, and maybe they are different clouds. The problem will be the same — you have multiple tools and multiple environments, and the people need training and the processes need to be in place for them to operate properly.

Gardner: What makes me optimistic about the solution is there might be a fourth leg on that stool. People, process, and technology, yes, but I think there is also economics. One of the things that really motivates a business to change is when money is being lost and the business people think there is a way to resolve that.

The economics issue — about cost overruns and a lack of discipline around procurement — is both a part of the problem and the solution.

Economics elevates visibility 

Singh: I am laughing right now because I have talked to so many customers about this.  A CIO from an entertainment media company, for example, recently told me she had a problem. They had a cloud-first strategy, but they didn’t look at the economics piece of it. She didn’t realize, she told me, where their virtual machines (VMs) and workloads were running.

“At the end of the month, I’m seeing hundreds of thousands of dollars in bills. I am being surprised by all of this stuff,” she said. “I don’t even know whether they are in compliance. The overhead of these costs — I don’t know how to get a handle on it.”

So this is a real problem that customers are facing. I have heard this again and again: They don’t have visibility into the environment. They don’t know what’s being utilized. Sometimes resources are underutilized, sometimes they are overutilized. And they don’t know what they are going to end up paying at the end of the day.

A common example is, in a public cloud, people will launch a very large number of VMs because that’s what they are used to doing. But they consume maybe 10 to 20 percent of that capacity. What they don’t realize is that they are still paying the whole bill. More visibility is going to be key to getting a handle on the economics of these things.
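[To make the utilization point concrete, here is a minimal sketch in Python of the kind of check that such visibility enables. The inventory records, cost figures, and the 20 percent threshold are all invented for illustration; real data would come from each provider’s billing and monitoring APIs.]

```python
# Hypothetical sketch: flag VMs whose average utilization suggests
# they are oversized for what they actually do. All numbers invented.

from dataclasses import dataclass

@dataclass
class VMRecord:
    name: str
    cloud: str            # e.g., "aws", "gcp", "on-prem"
    monthly_cost: float   # monthly cost in dollars
    avg_cpu_util: float   # average CPU utilization, 0.0 to 1.0

def flag_underutilized(vms, threshold=0.20):
    """Return (vm, idle_spend) pairs for VMs running below the
    threshold, sorted so the biggest savings opportunities come first."""
    findings = [
        (vm, vm.monthly_cost * (1.0 - vm.avg_cpu_util))
        for vm in vms
        if vm.avg_cpu_util < threshold
    ]
    return sorted(findings, key=lambda f: f[1], reverse=True)

# Example inventory (made-up numbers)
inventory = [
    VMRecord("web-01", "aws", 450.0, 0.12),
    VMRecord("batch-07", "gcp", 900.0, 0.65),
    VMRecord("db-03", "on-prem", 300.0, 0.08),
]

for vm, idle in flag_underutilized(inventory):
    print(f"{vm.name} ({vm.cloud}): ~${idle:,.0f}/month spent on idle capacity")
```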

Gardner: We have seen these kinds of problems before in general business procurement. Many times it’s the Wild West, but then they bring it under control. Then they can negotiate better rates as they combine services and look for redundancies. But you can’t do that until you know what you’re using and what it costs.

So, is the first step getting an inventory of where your cloud deployments are, what the true costs are, and then start to rationalize them?

Guardrails reduce risk, increase innovation

Singh: Absolutely right. That’s where you start, and at HPE we have services to do that. The first thing is to understand where you are. Get a baseline of what is on-premises, what is off-premises, and which applications are required to run where. What’s the footprint I require in these different places? What is the overall cost I’m incurring, and where do I want to be? Answering those questions is the first step to getting a mixed environment you can control — and getting away from the Wild West.

Put in the compliance guardrails so that IT can get ahead of the problems we are seeing today.

Gardner: As a counterpoint, I don’t think that IT wants to be perceived as the big bad killjoy that comes to the data scientists and says, “You can’t get those clusters to support the data environment that you want.” So how do you balance that need for governance, security, and cost control with not stifling innovation and allowing creative freedom?

How to Transform the Traditional Datacenter

Singh: That’s a very good question. When we started building out our managed cloud solutions, a key criterion was to provide the guardrails yet not stifle innovation for the line of business managers and developers. The way you do that is that you don’t become the man in the middle. The idea is you allow the line of businesses and developers to access the resources they need. However, you put guardrails around which resources they can access, how much they can access, and you provide visibility into the budgets. You still let them access the direct APIs of the different multicloud environments.

You don’t say, “Hey, you have to put in a request to us to do these things.” You have to be more behind-the-scenes, hidden from view. At the same time, you need to provide those budgets and those controls. Then they can perform their tasks at the speed they want and access the resources that they need — but within the guardrails, compliance, and the business requirements that IT has.
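[A minimal sketch in Python of the guardrail idea Singh describes: the check runs before a request is forwarded to the cloud’s own APIs, so developers keep direct access while budgets and policies are enforced. The policy values, team names, and instance sizes are invented for illustration.]

```python
# Hypothetical guardrail check applied before a provisioning request
# passes through to the native cloud API. Policy values are invented.

POLICY = {
    "dev-team": {
        "monthly_budget": 10_000,            # dollars
        "allowed_clouds": {"aws", "gcp"},
        "max_instance_size": "xlarge",
    },
}

SIZE_ORDER = ["small", "medium", "large", "xlarge", "2xlarge"]

def check_request(team, cloud, instance_size, spend_to_date, est_monthly_cost):
    """Approve or reject a request against the team's guardrails.
    On approval, the caller forwards the request unchanged to the
    cloud provider, so IT never becomes the man in the middle."""
    policy = POLICY.get(team)
    if policy is None:
        return False, "no policy defined for this team"
    if cloud not in policy["allowed_clouds"]:
        return False, f"{cloud} is not an approved cloud for {team}"
    if SIZE_ORDER.index(instance_size) > SIZE_ORDER.index(policy["max_instance_size"]):
        return False, f"{instance_size} exceeds the allowed instance size"
    if spend_to_date + est_monthly_cost > policy["monthly_budget"]:
        return False, "request would exceed the monthly budget"
    return True, "approved"

print(check_request("dev-team", "gcp", "large",
                    spend_to_date=8_500, est_monthly_cost=1_200))
# -> (True, 'approved')
```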

Gardner: Now that HPE has been on the vanguard of creating the tools and methods to get the necessary insights, make the measurements, recognize the need for balance between control and innovation — have you noticed changes in organizational patterns? Are there now centers of cloud excellence or cloud-management bureaus? Does there need to be a counterpart to the tools, of management structure changes as well?

Automate, yet hold hands, too

Singh: This is the process and the people parts that you want to address. How do you align your organizations, and what are the things that you need to do there? Some of our customers are beginning to make those changes, but organizations are difficult to change as they get on this journey. Some of them are early; some are at a much later stage. A lot of the customers, frankly, are still in the early phases of multicloud and hybrid cloud. We are working with them to make sure they understand the changes they’ll need to make in order to function properly in this new world.

Gardner: Unfortunately, these new requirements come at a time when cloud management skills — of understanding data and ops, IT and ops, and cloud and ops — are hard to find and harder to keep. So one of the things I’m seeing is the adoption of automation around guidance, strategy, and analysis. The systems start to do more for you. Tell me how automation is coming to bear on some of these problems, and perhaps mitigate the skill shortage issues.

Singh: The tools can only do so much. So you automate. You make sure the infrastructure is automated. You make sure your access to public cloud — or any other cloud environment — is automated.

That can mitigate some of the problems, but I still see a need for hand-holding from time to time in terms of the process and people. That will still be required. Automation will help tie in storage, network, and compute, and you can put all of that together. This [composability] reduces the need and dependency on some of the process and people. Automation mitigates the physical labor and the need for someone to take days to do it. However, you need that expertise to understand what needs to be done. And this is where HPE is helping.

You might have heard about our HPE GreenLake managed cloud services offerings. We are moving toward an as-a-service model for a lot of our software and tooling. We are using the automation to help customers fill the expertise gap. We can offer more of a managed service by using automation tools underneath it to make our tasks easier. At the end of the day, the customer only sees an outcome or an experience — versus worrying about the details of how these things work.

Gardner: Let’s get back to the problem of multicloud management. Why can’t you just use the tools that the cloud providers themselves provide? Maybe you might have deployments across multiple clouds, but why can’t you use the tools from one to manage more? Why do we need a neutral third-party position for this?

Singh: Take a hypothetical case: I have deployments in Amazon Web Services (AWS) and I have deployments in Google Cloud Platform (GCP). And to make things more complicated, I have some workloads on premises as well. How would I go about tying these things together?

Now, if I go to AWS, they are very, very opinionated about AWS services. They have no interest in looking at bills coming out of GCP or Microsoft Azure. They are focused on their services and what they are delivering. The reality is, however, that customers are using these different environments for different things.

The multiple public cloud providers don’t have an interest in managing other clouds or to look at other environments. So third parties come in to tie everything together, and no one customer is locked into one environment.

If they go to AWS, for example, they can only look at billing, services, and performance metrics of that one service. And they do a very good job. Each one of these cloud guys does a very good job of exposing their own services and providing you visibility into their own services. But they don’t tie it across multiple environments. And especially if you throw the on-premises piece into the mix, it’s very difficult to look at and compare costs across these multiple environments.

Gardner: When we talk about on-premises, we are not just talking about the difference between your data center and a cloud provider’s data center. We are also talking about the difference between a traditional IT environment and the IT management tools that came out of that. How has HPE crossed the chasm between traditional IT management automation and composability types of benefits and the higher-level multicloud management?

Tying worlds together

Singh: It’s a struggle to tie these worlds together from my experience, and I have been doing this for some time. I have seen customers spend months and sometimes years putting together a solution from various vendors, tying them together, and deploying something on-premises while also trying to tie that to an off-premises environment.

At HPE, we fundamentally changed how on-premises and off-premises environments are managed by introducing our own SaaS management environment, which customers do not have to manage. Such a Software as a Service (SaaS) environment, a portal, connects on-premises environments. Since we have a native, programmable, API-driven infrastructure, we were able to connect that. And being able to drive it from the cloud itself made it very easy to hook up to other cloud providers like AWS, Azure, and GCP. This capability ties the two worlds together. As you build out the tools, the key is understanding automation on the infrastructure piece, and how you can connect and manage this from a centralized portal that ties all these things together with a click.

Through this common portal, people can onboard their multicloud environments, get visibility into their costs, get visibility into compliance — look at whether they are HIPAA compliant or not, PCI compliant or not — and get access to resources that allow them to begin to manage these environments.
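[The portal pattern Singh describes can be sketched as a thin adapter layer: every environment, public or on-premises, exposes the same minimal interface, so cost and compliance roll-ups work the same way everywhere. This is a hypothetical illustration; the class names, figures, and findings are invented and do not reflect HPE’s actual implementation.]

```python
# Hypothetical adapter sketch: each environment answers the same two
# questions, so one portal view can cover all of them. Values invented.

from abc import ABC, abstractmethod

class Environment(ABC):
    @abstractmethod
    def monthly_cost(self) -> float: ...

    @abstractmethod
    def compliance_findings(self) -> list: ...

class PublicCloudAccount(Environment):
    def monthly_cost(self):
        return 42_000.0  # would come from the provider's billing API
    def compliance_findings(self):
        return ["storage bucket without encryption"]

class OnPremCluster(Environment):
    def monthly_cost(self):
        return 18_500.0  # would come from consumption metering
    def compliance_findings(self):
        return []

def portal_summary(envs):
    """Print a one-line roll-up per environment, dashboard-style."""
    for name, env in envs.items():
        findings = env.compliance_findings()
        status = "OK" if not findings else f"{len(findings)} finding(s)"
        print(f"{name}: ${env.monthly_cost():,.0f}/month, compliance: {status}")

portal_summary({"cloud-prod": PublicCloudAccount(), "datacenter-1": OnPremCluster()})
```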

How to Better Manage Hybrid and Multicloud Economics

For example, onboarding into any public cloud is very, very complex. Setting up a private cloud is very complex. But today, with the software that we are building, and some of our customers are using, we can set up a private cloud environment for people within hours. All you have to do is connect with our tools like HPE OneView and other things that we have built for the infrastructure and automation pieces. You then tie that together to a public cloud-facing tenant portal and onboard that with a few clicks. We can connect with their public cloud accounts and give them visibility into their complete environment.

And then we can bring in cost analytics. We have consumption analytics as part of our HPE GreenLake offering, which allows us to look at cost for on-premises as well as off-premises resources. You can get a dashboard that shows you what you are consuming and where.

Gardner: That level of management and the capability to be distributed across all these different deployment models strikes me as a gift that could keep on giving. Once you have accomplished this and get control over your costs, you are next able to rationalize what cloud providers to use for which types of workloads. It strikes me that you can then also use that same management and insight to start to actually move things around based on a dynamic or even algorithmic basis. You can get cost optimization on the fly. You can react to market forces and dynamics in terms of demand on your servers or on your virtual machines anywhere.

Are you going to be able to accelerate the capability for people to move their fungible workloads across different clouds, both hybrid and multicloud?

Optimizing for the future

Singh: Yes, absolutely right. There is more complexity in terms of moving workloads here and there, because there are data proximity requirements and various other requirements. But the optimization piece is absolutely something we can do on the fly, especially if you start throwing AI into the mix.

You will be learning over time what needs to be deployed where, and where your data gravity might be, and where you need applications closer to the data. Sometimes it’s here, sometimes it’s there. You might have edge environments that you might want to manage from this common portal, too. All that can be brought together.

And then with those insights, you can make optimization decisions: “Hey, this application is best deployed in this location for these reasons.” You can even automate that. You can make that policy-driven.

Think about it this way — you are a person who wants to deploy something. You request a resource, and that gets deployed for you based on the algorithm that has already decided where the optimal place to put it is. All of that works behind the scenes without you having to really think about it. That’s the world we are headed to.
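[A minimal sketch in Python of the policy-driven placement Singh outlines: treat compliance as a hard constraint, then score each venue on cost and data gravity, and deploy to the best fit. The venue attributes, weights, and scoring are invented for illustration; a real engine would also weigh latency, capacity, and egress fees.]

```python
# Hypothetical placement policy: compliance is a hard constraint,
# then cheaper venues and data gravity win. All values invented.

VENUES = {
    "public-cloud-a": {"unit_cost": 1.00, "near_data": False, "hipaa_ready": True},
    "public-cloud-b": {"unit_cost": 0.95, "near_data": False, "hipaa_ready": False},
    "on-prem":        {"unit_cost": 0.80, "near_data": True,  "hipaa_ready": True},
}

def place_workload(needs_hipaa: bool, data_heavy: bool):
    """Return the best venue for a workload under this toy policy."""
    best, best_score = None, float("-inf")
    for venue, attrs in VENUES.items():
        if needs_hipaa and not attrs["hipaa_ready"]:
            continue  # hard constraint: compliance comes first
        score = -attrs["unit_cost"]        # cheaper is better
        if data_heavy and attrs["near_data"]:
            score += 0.5                   # reward data gravity
        if score > best_score:
            best, best_score = venue, score
    return best

print(place_workload(needs_hipaa=True, data_heavy=True))   # -> on-prem
print(place_workload(needs_hipaa=False, data_heavy=False)) # -> on-prem (cheapest)
```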

Gardner: We have talked about some really interesting subjects at a high level, even some thought leadership involved. But are there any concrete examples that illustrate how companies are already starting to do this? What kinds of benefits do they get?

Singh: I won’t name the company, but there was a business in the UK that was able to deploy VMs within minutes on their on-premises environment, as well as gain cost benefits out of their AWS deployments.

We were able to go in, connect to their VMware environment, in this case, and allow them to deploy VMs. We were up and running in two hours. Then they could optimize for their developers to deploy VMs and request resources in that environment. They saved 40 percent in operational efficiency. So now they were mostly cost optimized, their IT team was less pressured to go and launch VMs for their developers, and they gained direct self-service access through which they could go and deploy VMs and other resources on-premises.

At the same time, IT had the visibility into what was being deployed in the public cloud environments. They could then optimize those environments for the size of the VMs and assets they were running there and gain some cost advantages there as well.

How to Solve Cost and Utilization Challenges of Hybrid Cloud

Gardner: For organizations that recognize they have a sprawl problem when it comes to cloud, that their costs are not being optimized, but that they are still needing to go about this at sort of a crawl, walk, run level — what should they be doing to put themselves in an advantageous position to be able to take advantage of these tools?

Are there any precursor activities that companies should be thinking about to get control over their clouds, and then be able to better leverage these tools when the time comes?

Watch your clouds

Singh: Start with visibility. You need an inventory of what you are doing. And then you need to ask the question, “Why?” What benefit are you getting from these different environments? Ask that question, and then begin to optimize. I am sure there are very good reasons for using multicloud environments, and many customers do. I have seen many customers use it, and for the right reasons.

However, there are other people who have struggled because there was no governance and guardrails around this. There were no processes in place. They truly got into a sprawled environment, and they didn’t know what they didn’t know.

So first and foremost, get an idea of what you want to do and where you are today — get a baseline. And then, understand the impact and what are the levers to the cost. What are the drivers to the efficiencies? Make sure you understand the people and process — more than the technology, because the technology does exist, but you need to make sure that your people and process are aligned.

And then lastly, call me. My phone is open. I am happy to talk with any customer who wants to.

How to Achieve Composability Across Your Datacenter

Gardner: On that note of the personal approach, people who are passionate in an organization around things like efficiency and cost control are looking for innovation. Where do you see the innovation taking place for cloud management? Is it the IT Ops people, the finance people, maybe procurement? Where is the innovative thinking around cloud sprawl manifesting itself?

Singh: All three are good places for innovation. I see IT Ops at the center of the innovation. They are the ones who will be effecting change.

Finance and procurement could benefit from these changes, and they could be drivers of the requirements. They are going to be saying, “I need to do this differently because it doesn’t work for me.” And the innovation also comes from developers and line-of-business managers who have been doing this for a while and who understand what they really need.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

How an agile focus for Enterprise Architects builds competitive advantage for digital transformation

The next BriefingsDirect business trends discussion explores the reinforcing nature of Enterprise Architecture (EA) and agile methods.

We’ll now learn how Enterprise Architects can embrace agile approaches to build competitive advantages for their companies.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about retraining and rethinking for EA in the Digital Transformation (DT) era, we are joined by Ryan Schmierer, Director of Operations at Sparx Services North America, and Chris Armstrong, President at Sparx Services North America. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Ryan, what’s happening in business now that’s forcing a new emphasis for Enterprise Architects? Why should Enterprise Architects do things any differently than they have in the past?

Ryan Schmierer

Schmierer

Schmierer: The biggest thing happening in the industry right now is around DT. We’ve been hearing about DT for the last couple of years, and most companies have embarked on some sort of a DT initiative, modernizing their business processes.

But now companies are looking beyond the initial transformation and asking, “What’s next?” We are seeing them focus on real-time, data-driven decision-making, with the ultimate goal of enterprise business agility — the capability for the enterprise to be aware of its environments, respond to changes, and adapt quickly.

For Enterprise Architects, that means learning how to be agile both in the work they do as individuals and how they approach architecture for their organizations. It’s not about making architectures that will last forever, but architectures that are nimble, agile, and adapt to change.

Gardner: Ryan, we have heard the word agile used in a structured way when it comes to software development — Agile methodologies, for example. Are we talking about the same thing? How are they related?

Agile, adaptive enterprise advances 

Schmierer: It’s the same concept. The idea is that you want to deliver results quickly, learn from what works, adapt, change, and evolve. It’s the same approach used in software development over the last few years. Look at how you develop software that delivers value quickly. We are now applying those same concepts in other contexts.

First is at the enterprise level. We look at how the business evolves quickly, learns from mistakes, and adapts the changes back into the environment.

Second, in the architecture domain, instead of waiting months or quarters to develop an architecture, vision, and roadmap, how do we start small, iterate, deliver quickly, accelerate time-to-value, and refine it as we go?

Gardner: Many businesses want DT, but far fewer of them seem to know how to get there. How does the role of the Enterprise Architect fit into helping companies attain DT?

Schmierer: The core job responsibility for Enterprise Architects is to be an extension of company leadership and its executives. They need to look at where a company is trying to go, all the different pieces that need to be addressed to get there, establish a future-state vision, and then develop a roadmap on how to get there.

This is what company leadership is trying to do. The EA is there to help them figure out how to do that. As the executives look outward and forward, the Enterprise Architect figures out how to deliver on the vision.

Gardner: Chris, tools and frameworks are only part of the solution. It’s also about the people and the process. There’s the need for training and best practices. How should people attain this emphasis for EA in that holistic definition?

Change is good 

Chris Armstrong

Armstrong

Armstrong: We want to take a step back and look at how Ryan was describing the elevation of value propositions and best practices that seem to be working for agile solution delivery. How might that work for delivering continual, regular value? In our experience, one of the major attributes of the goodness of any architecture is how well it responds to change.

In some ways, agile and EA are synonyms. If you’re doing good Enterprise Architecture, you must be agile, because responding to change is one of those quality attributes. That’s part of the traditional approach of architecture — being concerned with interoperability and integration.

As it relates to the techniques, tools, and frameworks we want to exploit — the experiences that we have had in the past — we try to push those forward into more of an operating model for Enterprise Architects and how they engage with the rest of the organization.

Learn About Agile Architecture at The Open Group July Denver Event

So not starting from scratch, but trying to embrace the concept of reuse, particularly reuse of knowledge and information. It’s a good best practice, obviously. That’s why in 2019 you certainly don’t want to be inventing your own architecture method or your own architecture framework, even though there may be various reasons to adapt them to your environment.

Starting with things like the TOGAF® Framework, particularly its Architecture Development Method (ADM) and reference models — those are there for individuals or vertical industries to accelerate the adding of value.

The challenge I’ve seen for a lot of architecture teams is they get sucked into the methodology and the framework, the semantics and concepts, and spend a lot of time trying to figure out how to do things with the tools. What we want to think about is how to enable the architecture profession in the same way we enable other people to do their jobs — with instant-on service offerings, using modern common platforms, and the industry frameworks that are already out there.

We are seeing people more focused on not just what the framework is but helping to apply it to close that feedback loop. The TOGAF standard, a standard of The Open Group, makes perfect sense, but people often struggle with, “Well, how do I make this real in my organization?”

Partnering with organizations that have had that kind of experience helps close that gap and accelerates the use in a valuable fashion. It’s pretty important.

Gardner: It’s ironic that I’ve heard of recent instances where Enterprise Architects are being laid off. But it sounds increasingly like the role is a keystone to DT. What’s the mismatch there, Chris? Why do we see in some cases the EA position being undervalued, even though it seems critical?

EA here to stay 

Armstrong: You have identified something that has happened multiple times. Pendulum swings happen in our industry, particularly when there is a lot of change going on. People are getting a little conservative. We’ve seen this before in the context of economic downturns.

But to me, it really points to the irony of what we perceive in the architecture profession based on successes that we have had. Enterprise Architecture is an essential part of running your business. But if executives don’t believe that and have not experienced that then it’s not surprising when there’s an opportunity to make changes in investment priorities that Enterprise Architecture might not be at the top of the list.

We need to be mindful of where we are in time with the architecture profession. A lot of organizations struggle with the glass ceiling of Enterprise Architecture. It’s something we have encountered pretty regularly, where executives say, “I really don’t get what this EA thing is, and what’s in it for me? Why should I give you my support and resources?”

Learn About Agile Architecture at The Open Group July Denver Event

But what’s interesting about that, of course, is if you take a step back you don’t see executives saying the same thing about human resources or accounting. Not to suggest that they aren’t thinking about ways to optimize those as a core competency or as strategic. We still do have an issue with acceptance of enterprise architecture based on the educational and developmental experiences a lot of executives have had.

We’re very hopeful that that trend is going to be moving in a different direction, particularly as it relates to new master’s programs and doctorate programs, for example, in the Enterprise Architecture field. Those elevate and legitimize Enterprise Architecture as a profession. When people are going through an MBA program, they will have heard of Enterprise Architecture as an essential part of delivering upon strategy.

Gardner: Ryan, looking at what prevents companies from attaining DT, what are the major challenges? What’s holding up enterprises from getting used to real-time data, gaining agility, and using intelligence about how they do things?

Schmierer: There are a couple of things going on. One of them ties back to what Chris was just talking about — the role of Enterprise Architects, and the role of architects in general. DT requires a shift in the relationship between business and IT. With DT, business functions and IT functions become entirely and holistically integrated and inseparable.

There are no separate IT processes and no separate business processes — there are just processes, because the two are intertwined. As we use more real-time data and as we leverage Enterprise Architecture, how do we move beyond the traditional relationship between business and IT? How do we look at such functions as data management and data architecture? How do we bring them into an integrated conversation with the folks who were part of the business and IT teams of the past?

A good example of how companies can do this comes in a recent release from The Open Group, the Digital Practitioner Body of Knowledge™ (DPBoK™). It says that there’s a core skill set that is general and describes what it means to be such a practitioner in the digital era, regardless of your job role or focus. It says we need to classify job roles more holistically and that everyone needs to have both a business mindset and a set of technical skills. We need to bring those together, and that’s really important.

As we look at what’s holding up DT — taking the next step to real-time data, broadening the scope of DT — we need to take functions that were once considered centralized assets, like EA and data management, and bring them into the forefront, and say, “You know what? You’re part of the digital transformation story as well. You’re key to bringing us along to the next stage of this journey, which is looking at how to optimize, bring in the data, and use it more effectively. How do we leverage technology in new ways?”

The second thing we need to improve is the mindset. It’s particularly an issue with Enterprise Architects right now. And it is that Enterprise Architects — and everyone in digital professions — need to be living in the present.

You asked why some EAs are getting laid off. Why is that? Think about how they approach their job in terms of the questions that would be asked in a performance review.

Those might be, “What have you done for me over the years?” If your answer focuses on what you did in the past, you are probably going to get laid off. What you did in the past is great, but the company is operating in the present.

What’s your grand idea for the future? Some ideal situation? Well, that’s probably going to get you shoved in a corner some place and probably eventually laid off because companies don’t know what the future is going to bring. They may have some idea of where they want to get to, but they can’t articulate a 5- to 10-year vision because the environment changes so quickly.

What have you done for me lately? That’s a favorite thing to ask in performance-review discussions. You got your paycheck because you did your job over the last six months. That’s what companies care about, and yet that’s not what Enterprise Architects should be supporting.

Instead, the EA emphasis should be what can you do for the business over the next few months? Focus on the present and the near-term future.

That’s what gets Enterprise Architects a seat at the table. That’s what gets the entire organization, and all the job functions, contributing to DT. It helps them become aligned to delivering near-term value. If you are entirely focused on delivering near-term value, you’ve achieved business agility.

Gardner: Chris, because nothing stays the same for very long, we are seeing a lot more use of cloud services. We’re seeing composability and automation. It seems like we are shifting from building to assembly. Doesn’t that fit in well with what EAs do, focusing on the assembly and the structure around automation? That’s an abstraction above putting in IT systems and configuring them.

Reuse to remain competitive 

Armstrong: It’s ironic that the profession that’s often been coming up with the concepts and thought-leadership around reuse struggles with how to internalize that within their own organizations. EAs have been pretty successful at the implementation of reuse on an operating level, with code libraries, open-source, cloud, and SaaS.

There is no reason to invent a new method or framework. There are plenty of them out there. Better to figure out how to exploit those to competitive advantage and focus on understanding the business organization, strategy, culture, and vision — and deliver value in the context of those.

For example, one of the common best practices in Enterprise Architecture is to create things called reference architectures, basically patterns that represent best practices, many of which can be created from existing content. If you are doing cloud or microservices, elevate that up to different types of business models. There’s a lot of good content out there from standards organizations that give organizations a good place to start.

Learn About Agile Architecture at The Open Group July Denver Event

But one of the things that we’ve observed is a lot of architecture communities tend to focus on building — as you were saying — those reference architectures, and don’t focus as much on making sure the organization knows that content exists, has been used, and has made a difference.

We have a great opportunity to connect the dots among different communities that are often not working together. We can provide that architectural leadership to pull it together and deliver great results and positive behaviors.

Gardner: Chris, tell us about Sparx Services North America. What do you all do, and how you are related to and work in conjunction with The Open Group?

Armstrong: Sparx Services is focused on helping end-user organizations be successful with Enterprise Architecture and related professions such as solution architecture, solution delivery, and systems engineering. We do that by taking advantage of the frameworks and best practices that standards organizations like The Open Group create, helping make those standards real, practical, and pragmatic for end-user organizations. We provide guidance on how to adapt and tailor them and provide support while they use those frameworks for doing real work.

And we provide a feedback loop to The Open Group to help understand what kinds of questions end-user organizations are asking. We look for opportunities for improving existing standards, areas where we might want to invest in new standards, and to accelerate the use of Enterprise Architecture best practices.

Gardner: Ryan, moving onto what’s working and what’s helping foster better DT, tell us what’s working. In a practical sense, how is EA making those shorter-term business benefits happen?

One day at a time 

Schmierer: That’s a great question. We have talked about some of the challenges. It’s important to focus on the right path as well. So, what can an Enterprise Architect do today to foster DT?

Number one, embrace agile approaches and an agile mindset in both architecture development (how you do your job) and the solutions you develop for your organization. A good test of whether you are approaching architecture in an agile way is the first iteration: can you go through the entire process of the Architecture Development Method (ADM) on a cocktail napkin in the time it takes to have a drink with your boss? If so, great. It means you are focused on that first simple iteration and then able to build from there.

Number two, solve problems today with the components you have today. Don’t just look to the future. Look at what you have now and how you can create the most value possible out of it. Tomorrow the environment is going to change, and you can focus on tomorrow’s problems and tomorrow’s challenges tomorrow. So solve today’s problems today.

Third, look beyond your current DT initiative and what’s going on today, and talk to your leaders. Talk to your business clients about where they need to go in the future. The goal is enterprise business agility, which is helping the company become more nimble. DT is the first step; then start looking at steps two and three.

Fourth, architects need to better understand fast-moving, emerging technology: new cloud services, Internet of Things (IoT), edge computing, machine learning (ML), and artificial intelligence (AI). These are more than just buzzwords and initiatives. They are real technology advancements. They are going to have disruptive effects on your businesses and the solutions that support those businesses. You need to understand the technologies, and you need to start playing with them, so you can truly be a trusted advisor to your organization about how to apply those technologies in a business context.

Gardner: Chris, we hear a lot about AI and ML these days. How do you expect Enterprise Architects to help organizations leverage AI and ML to get to that DT? It seems really essential to me to become more data-driven and analytics-driven, and then to repurpose and reuse those analytics over and over again to attain an ongoing journey of efficiency and automation.

Better business outcomes 

Armstrong: We are now working with our partners to figure out how to best use AI and ML to help run the business, to do better product development, to gain a 360-degree view of the customer, and so forth.

It’s one of those weird things where we see the shoemaker’s children not having any shoes because they are so busy making shoes for everybody else. There is a real opportunity, when we look at some of the infrastructure that’s required to support the agile enterprise, to exploit those same technologies to help us do our jobs in enterprise architecture.

It is an emerging part of the profession. We and others are beginning to do some research on that, but when I think of how much time we and our clients have spent on the nuts-and-bolts collection and normalization of data, it sure seems like there is a real opportunity to leverage these emerging technologies for the benefit of the architecture practice. Then the architects can be more focused on building relationships with people, understanding the strategy in less time, and figuring out where the data is and what the data means.

Obviously humans still need to be involved, but I think there is a great opportunity to eat your own dog food, as it were, and see if we can exploit those learning tools for the benefit of the architecture community and its consumers.

Gardner: Chris, do we have concrete examples of this at work, where EAs have elevated themselves and exposed their value for business outcomes? What’s possible when you do this right?

Armstrong: A lot of organizations are working things from the bottom up, and that often starts in IT operations and then moves to solution delivery. That’s where there has been a lot of good progress, in improved methods and techniques such as scaled agile and DevOps.

But a lot of organizations struggle to elevate it higher. The DPBoK™  from The Open Group provides a lot of guidance to help organizations navigate that journey, particularly getting to the fourth level of the learning progression, which is at the enterprise level. That’s where Enterprise Architecture becomes essential. It’s great to develop software fast, but that’s not the whole point of agile solution delivery. It should be about building the right software the right way to meet the right kind of requirements — and do that as rapidly as possible.

We need an umbrella over different release trains, for example, to make sure the organization as a whole is marching forward. We have been working with a number of Fortune 100 companies that have made good progress at the operational and implementation levels. They nonetheless are finding it particularly trying to connect that to business architecture.

There have been some great advancements from the Business Architecture Guild, and those have been influencing the TOGAF framework, to connect the dots across those agile communities so that the learnings of a particular release train, or the strategy of the enterprise, are clearly understood and delivered to all of those different communities.

Gardner: Ryan, looking to the future, what should organizations be doing with the Enterprise Architect role and function?

EA evolution across environments 

Schmierer: The next steps don’t just apply to Enterprise Architects but really to all types of architects. So look at your job role and how it needs to evolve over the next few years. How do you need to approach it differently than you have in the past?

For example, we are seeing Enterprise Architects increasingly focus on issues like security, risk, reuse, and integration with partner ecosystems. How do you integrate with other companies and work in the broader environments?

We are seeing Business Architects who have been deeply engaged in DT discussions over the last couple of years start looking forward and shifting the role to focus on how we light up real-time decision-making capabilities. Solution Architects are shifting from building and designing components to designing assembly and designing the end systems that are often built out of third-party components instead of things that were built in-house.

Look at the job role and understand that the core need hasn’t changed. Companies need Enterprise Architects, Business Architects, and Solution Architects more than ever right now to get them where they need to be. But the people serving those roles need to do so in a new way — one focused on the future and on what the business needs over the next 6 to 18 months, and that’s different than what they have done in the past.

Gardner: Where can organizations and individuals go to learn more about Agile Architecture as well as what The Open Group and Sparx Services are offering?

Schmierer: The Open Group has some great resources available. We have a July event in Denver focused on Agile Architecture, where they will discuss some of the latest thoughts coming out of The Open Group Architecture Forum, the Digital Practitioners Work Group, and more. It’s a great opportunity to learn about those things, network with others, and discuss how other companies are approaching these problems. I definitely point them there.

Learn About Agile Architecture at The Open Group July Denver Event

I mentioned the DPBoK™. This is a recent release from The Open Group, looking at the future of IT and the roles for architects. There’s some great, forward-looking thinking in there. I encourage folks to take a look at that, provide feedback, and get involved in that discussion.

And then Sparx Services North America, we are here to help architects be more effective and add value to their organizations, be it through tools, training, consulting, best practices, and standards. We are here to help, so feel free to reach out at our website. We are happy to talk with you and see how we might be able to help.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.

For a UK borough, solving security issues leads to operational improvements and cost-savings across its IT infrastructure

The next BriefingsDirect enterprise IT productivity discussion focuses on solving tactical challenges around security to unlock strategic operational benefits in the public sector.

For a large metropolitan borough council in South Yorkshire, England, an initial move to thwart recurring ransomware attacks ended up a catalyst for wider IT infrastructure performance, cost, operations, and management benefits.

This security innovations discussion then examines how the Barnsley Metropolitan Borough Council information and communications technology (ICT) team rapidly deployed malware protection across 3,500 physical and virtual workstations and servers.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to share the story of how that one change in security software led to far higher levels of user satisfaction — and a heightened appreciation for the role and impact of small IT teams — is Stephen Furniss, ICT Technical Specialist for Infrastructure at Barnsley Borough Council. The interview was conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Stephen, tell us about the Barnsley Metropolitan Borough. You are one of 36 metropolitan boroughs in England, and you have a population of about 240,000. But tell us more about what your government agencies provide to those citizens.

Stephen Furniss

Furniss

Furniss: As a Council, we provide wide-ranging services to all the citizens here, from refuse collection on a weekly basis, to maintaining roads and fixing potholes, to making sure that we look after the vulnerable in society. There is a big raft of things that we have to deliver, and every year we are challenged to deliver those same services with less money from central government.

So it does make our job harder, because then there is not just a squeeze across a specific department in the Council when we have these pressures, there is a squeeze across everything, including IT. And I guess one of our challenges has always been how we deliver more or the same standard of service to our end users, with less budget.

So we turn to products that provide single-pane-of-glass interfaces, to make the actual management and configuration of things a lot easier. And [we turn to] things that are more intuitive, that have automation. We try to make everything that we do easier and simpler for us as an IT service.

Gardner: So that boils down to working smarter, not harder. But you need to have the right tools and technology to do that. And you have a fairly small team, 115 or so, supporting 2,800-plus users. And you have to be responsible for all aspects of ICT — the servers, networks, storage, and, of course, security. How does being a small team impact how you approach security?

Furniss: We are even smaller than that. In IT, we have around 115 people, and that’s the whole of IT. But just in our infrastructure team, we are only 13 people. And our security team is only three or four people.

It can become a hindrance when you get overwhelmed with security incidents or issues that need resolving. Yet sometimes it’s great to have that small team of people. You can bond together and come up with really good solutions to resolve your issues.

Gardner: Clearly with such a small group you have to be automation-minded to solve problems quickly or your end users will be awfully disappointed. Tell us about your security journey over the past year-and-a-half. What’s changed?

Furniss: A year-and-a-half ago, we were stuck in a different mindset. With our existing security product, every year we went through a process of saying, “Okay, we are up for renewal. Can we get the same product for a cheaper price, or the best price?”

We didn’t think about what security issues we were getting the most, or what were the new technologies coming out, or if there were any new products that mitigate all of these issues and make our jobs — especially being a smaller team — a lot easier.

But we had a mindset change about 18 months back. We said, “You know what? We want to make our lives easier. Let’s think about what’s important to us from a security product. What issues have we been having that potentially the new products that are out there can actually mitigate and make our jobs easier, especially with us being a smaller team?”

Gardner: Were reoccurring ransomware attacks the last straw that broke the camel’s back?

Staying a step ahead of security breaches

Furniss: We had been suffering with ransomware attacks. Every couple of years, some user would be duped into clicking on a file, email, or something that would cause chaos and mayhem across the network, infecting file-shares, and not just that individual user’s file-share, but potentially the files across 700 to 800 users all at once. Suddenly they found their files had all been encrypted.

From an IT perspective, we had to restore from the previous backups, which obviously takes time, especially when you start talking about terabytes of data.

That was certainly one of the major issues we had. And the previous security vendor would come to us and say, “All right, you have this particular version of ransomware. Here are some settings to configure and then you won’t get it again.” And that’s great for that particular variant, but it doesn’t help us when the next version or something slightly different shows up, and the security product doesn’t detect it.

That was one of our real worries and a pain that we suffered: every so often we were just going to get hit with ransomware. So we had to change our mindset to want something that’s actually going to be able to do things like machine learning (ML) and have ransomware protection built-in so that we are not in that position. We could actually get on with our day-to-day jobs and be more proactive — rather than reactive — in the environment. That was a big thing for us.

Also, we need to have a lot of certifications and accreditations, being a government authority, in order to connect back to the central government of the UK for such things as pensions. So there were a lot of security things that would get picked up. The testers would do a penetration test on our network and tell us we needed to think about changing stuff.

Gardner: It sounds like you went from a tactical approach to security to more of an enterprise-wide security mindset. So let’s go back to your thought process. You had recurring malware and ransomware issues, you had an audit problem, and you needed to do more with less. Tell us how you went from that point to get to a much better place.

Safe at home, and at work 

Furniss: As a local authority, with any large purchase, usually over 2,500 pounds (US$3,125), we have to go through a tender process. We write in our requirements, what we want from the products, and that goes on a tender website. Companies then bid for the work.

It’s a process I’m not involved in. I am purely involved in the techie side of things, the deployment, and managing and looking after the kit. That tender process is all done separately by our procurement team.

So we pushed out this tender for a new security product that we wanted, and obviously we got responses from various different companies, including Bitdefender. When we do the scoring, we work on the features and functionality required. Some 70 percent of the scoring is based on the features and functionality, with 30 percent based on the cost.
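[For illustration, the 70/30 weighting works out like this; the vendor names and scores below are made up and are not the Council’s actual tender figures.]

```python
# Toy example of the 70/30 weighted tender scoring described above.

def weighted_score(feature_score, cost_score,
                   feature_weight=0.70, cost_weight=0.30):
    """Both inputs normalized to 0-100; returns the blended score."""
    return feature_score * feature_weight + cost_score * cost_weight

bids = {
    "Vendor A": weighted_score(feature_score=95, cost_score=90),   # 93.5
    "Vendor B": weighted_score(feature_score=80, cost_score=100),  # 86.0
}

for vendor, score in sorted(bids.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{vendor}: {score:.1f}")
```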

What was really interesting was that Bitdefender scored the highest on all the features and functionalities — everything that we had put down as a must-have. And when we looked at the actual costs involved — what they were going to charge us to procure their software and also provide us with deployment with their consultants — it came out at half of what we were paying for our previous product.

So you suddenly step back and you think, “I wish that we had done this a long time ago, because we could have saved money as well as gotten a better product.”

Gardner: Had you been familiar with Bitdefender?

Furniss: Yes, a couple of years ago my wife had some malware on her phone, and we started to look at what we were running on our personal devices at home. And I came up with Bitdefender as one of the best products after I had a really good look around at different options.

I went and bought a family pack, so effectively I deployed Bitdefender at home on my own personal mobile, my wife’s, my kids’, on the tablets, on the computers in the house, and what they used for doing schoolwork. And it’s been great at protecting us from anything. We have never had any issues with an infection or malware or anything like that at home.

It was quite interesting to find out, once we went through the tender process, that it was Bitdefender. I didn’t even know at that stage who was in the running. When the guys told me we are going to be deploying Bitdefender, I was thinking, “Oh, yeah, I use that at home and they are really good.”

Monday, Monday, IT’s here to stay 

Gardner: Stephen, what was the attitude of your end users around their experiences with their workstations, with performance, at that time?

Furniss: We had had big problems with end users’ service desk calls to us. Our previous security product required a weekly scan that would run on the devices. We would scan their entire hard drives every Friday around lunchtime.

You try to identify when the quiet periods are, when you can run an end-user scan on their machines, and we had come up with Friday lunchtime. In the Council we can take our lunch between noon and 2 p.m., so we would kick the scan off at 12 and hope it finished before users came back and did some work on the devices.

And with the previous product — no matter what we did, trying to change dates, trying to change times — we couldn’t get anything that would complete the scans quickly enough. It could run for two to three hours, consuming a lot of resources on their devices. A lot of that was down to the spec of the end-user devices not being very good. But, again, when you are constrained by budgets, you can only put so many resources into buying kit for your users.

So, we would end up with service desk calls, with people complaining, saying, “Is there any chance you can change the date and time of the scan? My device is running slow. Can I have a new device?” And so, we received a lot of complaints.

And we noticed that we would usually have issues Monday mornings, too. The weekend was when we ran our server scans and our full backup, so the two things would clash, causing issues. Monday morning, we would come in expecting those backups to have completed, but because the backup was fighting with the scanning, neither had fully completed. We worried whether we would be able to recover back to the previous week.

Our backups ended up running longer and longer as the scans took longer. So, yes, it was a bit painful for us in the past.

Gardner: What happened next?

Smooth deployer 

Furniss: Deployment was a really, really good experience. In the past, suppliers would come along and provide us a deployment document, some description, and it would be their standard document; nothing was customized. They wouldn’t speak with us to find out what was actually deployed and how their product fit in. It was just, “We are going to deploy it like this.” And we would then have issues trying to get things working properly, and we’d have to go backward and forward with the third party to get things resolved.

In this instance, we had Bitdefender’s consultants. They came on-site to see us, and we had a really good meeting. They were asking us questions: “Can you tell us about your environment? Where are your DMZs? What applications have you got deployed? What systems are you using? What hypervisor platforms have you got?” And all of that information was taken into account in the design document, which they customized completely to fit both their best practices and what we had in place.

We ended up with something we could deploy ourselves, if we wanted to. We didn’t do that. We took their consultancy as a part of the deployment process. We had the Bitdefender guys on-site for a couple of days working with us to build the proper infrastructure services to run GravityZone.

And it went really well. Nothing was missed from the design. They gave us all the ports and firewall rules needed, and it went really, really smoothly.

We initially thought we were going to have a problem with deploying out to the clients, but we worked with the consultants to come up with a way to avoid impacting our end users during the deployment.

One of our big worries was that when you deploy Bitdefender, the first thing it does is see if there is a competitive vendor’s product on the machine. If it finds that, it will remove it, and then restart the user’s device to continue the installation. Now, that was going to be a concern to us.

So we came up with a scripted solution that we pushed out through Microsoft System Center Configuration Manager. We were able to run the uninstall command for the third-party product and then trigger the Bitdefender install straightaway. The devices didn’t need rebooting, and it didn’t impact any of our end users at all. They didn’t even know anything was happening. The only thing they would see is the little icon in the taskbar changing from the previous vendor’s icon to Bitdefender.

It was really smooth. We got the automation to run and push out the client to our end users, and they just didn’t know about it.
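In outline, the swap Furniss describes could look something like the sketch below. The uninstall string, installer path, and silent switches here are hypothetical placeholders, not the actual commands used at Barnsley; real values would come from each vendor's documentation and be pushed out through an endpoint manager such as Configuration Manager.

```python
import subprocess

# Hypothetical placeholders -- the real silent uninstall and install
# commands come from each vendor's documentation.
OLD_AV_UNINSTALL = r'"C:\Program Files\OldAV\uninstall.exe" /silent /noreboot'
NEW_AV_INSTALL = r'"\\fileserver\deploy\bitdefender_agent.exe" /quiet'

def swap_security_agent() -> None:
    """Remove the old agent, then install the new one, with no reboot."""
    # Run the old product's own silent uninstaller first, so the new
    # agent never has to remove a competitor itself (and force a restart).
    subprocess.run(OLD_AV_UNINSTALL, shell=True, check=True)
    # Kick off the new agent install straight away; to the user, only
    # the taskbar icon changes.
    subprocess.run(NEW_AV_INSTALL, shell=True, check=True)

if __name__ == "__main__":
    swap_security_agent()
```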

Gardner: What was the impact on the servers?

Environmental change for the better 

Furniss: Our server impact has completely changed. The full scanning that Bitdefender does, which might take 15 minutes, is far less time than the two to three hours before on some of the bigger file servers.

And then once it’s done with that full scan, we have it set up to do more frequent quick scans that take about three minutes. The resource utilization of this new scan setup has just totally changed the environment.

Because we use virtualization predominantly across our server infrastructure, we have even deployed the Bitdefender scan servers, which allow us to do separate scans on each of our virtualized server hosts. It does all of the offloading of the scanning of files and malware and that kind of stuff.

It’s a lightweight agent, it takes less memory, less footprint, and less resources. And the scan is offloaded to the scan server that we run.

The impact from a server perspective is that you no longer see spikes in CPU or memory utilization with backups. We don’t have any issues with that kind of thing anymore. It’s really great to see a vendor come up with a solution to issues that people seem to have across the board.

Gardner: Has that impacted your utilization and your ability to get the most virtual machines (VMs) per CPU? How has your total cost equation been affected?

Furniss: The fact that we are not getting all these spikes across the virtualization platform means we can squeeze in more VMs per host without an issue. It means we can get more bang for the buck, if you like.

Gardner: When you have a mixed environment — and I understand you have Nutanix hyperconverged infrastructure (HCI), Hyper-V and vSphere VMs, some Citrix XenServer, and a mix of desktops — how does managing such heterogeneity with a common security approach work? It sounds like that could be kind of a mess.

Furniss: You would think it would be a mess. But from my perspective, Bitdefender GravityZone is really good because I have this all on a single pane of glass. It hooks into Microsoft Active Directory, so it pulls back everything in there. I can see all the devices at once. It hooks into our Nutanix HCI environment. I can deploy small scan servers into the environment directly from GravityZone.

If I decide I need an additional scan server, GravityZone automatically builds that scan server in the virtual environment for me, and that’s another box we’ve got for scanning everything on the virtual servers.

Bitdefender GravityZone is really good because I have this all on a single pane of glass. I can see all the devices at once. I can deploy small scan servers into the environment directly from GravityZone.

It’s nice that it hooks into all these various things. We currently have some legacy VMware. Bitdefender lets me see what’s in that environment. We don’t use the VMware NSX platform, but it gives me visibility across an older platform even as I’m moving to get everything to the Nutanix HCI.

So it makes our jobs easier. The additional patch management module we have in there is one of the big things for us.

For example, we have always been really good at keeping our Windows updates on devices and servers up to the latest level. But we tended to have problems keeping updates current for all of our third-party apps, such as Adobe Reader, Flash, and Java, across all of the devices.

You can get lost as to what is out there unless you do some kind of active scanning across your entire infrastructure, and the Bitdefender patch management allows us to see where we have different versions of apps and updates on client devices. It allows us to patch them up to the latest level and install the latest versions.

From that perspective, I am again using just one pane of glass, but I am getting so much more benefit, and extra features and functionality, than I did previously from the many other products that we used.
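As a toy illustration of the patch-visibility idea Furniss describes (this is not Bitdefender's implementation; the app names, versions, and device list are invented), the core logic is simply a per-device inventory compared against known-latest versions:

```python
# Toy illustration of patch visibility -- not Bitdefender's implementation.
# In practice the per-device inventory would come from an agent scan.
latest = {"Adobe Reader": "23.8", "Java": "8u381"}

inventory = {
    "PC-001": {"Adobe Reader": "23.8", "Java": "8u301"},
    "PC-002": {"Adobe Reader": "21.1", "Java": "8u381"},
}

for device, apps in inventory.items():
    for app, version in apps.items():
        if version != latest.get(app, version):
            print(f"{device}: {app} {version} -> patch to {latest[app]}")
```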

Gardner: Stephen, you mentioned a total cost of ownership (TCO) benefit when it comes to server utilization and the increased VMs. Is there another economic metric when it comes to administration? You have a small number of people. Do you see a payback in terms of this administration and integration value?

Furniss: I do. We only have 13 people on the infrastructure team, but only two or three of us actively go into the Bitdefender GravityZone platform. And on a day-to-day basis, we don’t have to do that much. If we deploy a new system, we might have to monitor and see if there is anything that’s needed as an exception if it’s some funky application.

But once our applications are deployed and our servers are up and running, we don’t have to make any real changes. We only have to look at third-party patch levels, or check whether there are any issues on our endpoints that need our attention.

The actual amount of time we need to spend in the Bitdefender console is quite small, so it’s really useful to us.

Gardner: What’s been the result this last year that you have had Bitdefender running in terms of the main goal — which is to be free of security concerns?

Proactive infection protection 

Furniss: That’s just been the crux of it. We haven’t had any malware or ransomware attacks on our network. We have not had to spend hours, days, or weeks restoring files — or rebuilding hundreds of machines because they have something on them. So that’s been a good thing.

Another interesting thing for us: we began looking at the Bitdefender reports from day one, and it actually found malware and viruses that had been sitting in our systems for five, six, or seven years.

And the weird thing is, our previous security product had never even seen this stuff. It had obviously let it through to start with. It got through all our filtering and everything, and it was sitting in somebody’s mailbox ready — if they clicked on it — to launch and infect the entire network.

Straightaway from day one, we were detecting stuff that sat for years in people’s mailboxes. We just didn’t even know about it.

So, from that perspective, it’s been fantastic. We’ve not had any security outbreaks that we had to deal with, or anything like that.

And just recently, we had our security audit from our penetration testers. One of the things they try to do is actually put some malware on to a test device. They came back and said they had not been able to do that. They have been unable to infect any of our devices. So that’s been a really, really good thing from our perspective.

Gardner: How is that translated into the perception from your end users and your overseers, those people managing your budgets? Has there been a sense of getting more value? What’s the satisfaction quotient, if you will, from your end users?

Furniss: A really good, positive thing has been that they have not come back to say that anything has been lost. There are no complaints about machines being slow.

We even had one of our applications guys say that their machine was running faster than it normally does on Fridays. When we explained that we had swapped out the old version of the security product for Bitdefender, it was like, “Oh, that’s great, keep it up.”

There are no complaints about machines being slow. One of our apps guys said that their machine was running faster than normal. From IT, we are really pleased.

For the people higher up, at the minute, I don’t think they appreciate what we’ve done. That will come in the next month as we start presenting our security reports and the audit reports showing the testers were unable to infect an end-user device.

From our side, from IT, we are really, really pleased with it. We understand what it does and how much it’s saving us from the pains of having to restore files. We are not being seen as one of these councils or entities that’s suddenly plastered across the newspapers, its reputation tarnished, because it has suddenly lost all its systems or been infected.

Gardner: Having a smoothly running organization is the payoff.

Before we close out, what about the future? Where would you like to see your security products go in terms of more intelligence, using data, and getting more of a proactive benefit?

Cloud on the horizon 

Furniss: We are doing a lot more now with virtualization. We have only about 50 physical servers left. We are also thinking about the cloud journey. So we want the security products working with all of that stuff up in the cloud. It’s going to be the next big thing for us. We want to secure that area of our environment if we start moving infrastructure servers up there.

Can we protect stuff up in the cloud as well as what we have here?

Gardner: Yes, and you mentioned, Stephen, that at home you are using Bitdefender down on your mobile devices. Is that also the case with your users in the council? Or is there a bring-your-own-device benefit, some way you are looking to allow people to use more of their own devices in the context of work? How does that mobile edge work in the future?

Furniss: Well, I don’t know. Mobile devices are quite costly for councils to deploy, but we have taken the approach that if you need it for work, then you get one. We currently have a project to look at deploying the mobile version of Bitdefender to our existing Android users.

Gardner: Now that you have 20/20 hindsight with using this type of security environment over the course of a year, any advice for folks in a similar situation?

Furniss: Don’t be scared of change. One of the things that always used to worry me was that we knew where we stood with a particular vendor. We knew what our difficulties were. Would we be able to remove it from all the devices?

Don’t worry about that. If you are getting the right product, it’s going to take care of a lot of the issues that you currently have. We found that deploying the new product was relatively easy and didn’t cause any pain to our end users. It was seamless. They didn’t even know we had done it.

Some people might be thinking that they have a massive estate and it’s going to be a real headache. But with automation, and a bit of thinking about how and what you are going to do, it’s fairly straightforward to deploy a new antivirus product to your end users. Don’t be afraid of change and of moving to something new. Get the best use of the new products out there.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Bitdefender


Financial stability, a critical factor for choosing a business partner, is now much easier to assess

The next BriefingsDirect digital business risk remediation discussion explores new ways companies can gain improved visibility, analytics, and predictive indicators to better assess the financial viability of partners and global supply chains.

Businesses are now heavily relying upon their trading partners across their supply chains — and no business can afford to be dependent on suppliers that pose risks due to poor financial health.

We will now examine new tools and methods that create a financial health rating system to determine the probability of bankruptcy, default, or disruption for both public and private companies — as much as 36 months in advance.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To learn more about the exploding sophistication around gaining insights into supply-chain risk of a financial nature, please welcome Eric Evans, Managing Director of Business Development at RapidRatings in New York, and Kristen Jordeth, Go-to-Market Director for Supplier Management Solutions, North America at SAP Ariba. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Eric, how do the technologies and processes available now provide a step-change in managing supplier risk, particularly financial risk?

Eric Evans

Evans

Evans: Platform-to-platform integrations enabled by application programming interfaces (APIs), which we have launched over the past few years, allow us to partner with SAP Ariba Supplier Risk. It has become a nice way for our clients to combine actionable data with their workflow in procurement processes to better manage suppliers end to end — from sourcing to onboarding to continuous monitoring.

Gardner: The old adage of “garbage in, garbage out” still applies to the quality and availability of the data. What’s new about access to better data, even in the private sector?

Dig deep into risk factors

Evans: We go directly to the source, the suppliers our customers work with. They introduce us to those suppliers and we get the private company financial data, right from those companies. It’s a quantitative input, and then we do a deeper “CAT scan,” if you will, on the financials, using that data together with our predictive scoring.

Gardner: Kristen, procurement and supply chain integrity trends have been maturing over the past 10 years. How are you able to focus now on more types of risk? It seems we are getting better and deeper at preventing unknown unknowns.

Jordeth: Exactly, and what we are seeing is customers managing risk from all aspects of the business. The most important thing is to bring it all together through technology.

Within our platform, we enable a Controls Framework that identifies key areas of risk that need to be addressed for a specific type of engagement. For example, do they need to pull a financial rating? Do they need to do a background check? We use the technology to manage the controls across all of the different aspects of risk in one system.

Gardner: And because many companies are reliant on real-time logistics and supplier services, any disruption can be catastrophic.

Kristen Jordeth

Jordeth

Jordeth: Absolutely. We need to make sure that the information gets to the system as quickly as it’s available, which is why the API connection to RapidRatings is extremely important to our customers. On top of that, we also have proactive incident tracking, which complements the scores.

If you see a medium-risk business from a financial perspective, you can look into that incident to see if they are under investigation, or if there are things going on where they might be laying off departments.

It’s fantastic to have it all in one place with one view. You can then slice and dice the data and roll it up into scores. It’s very helpful for our customers.

Gardner: And this is a team sport, with an ecosystem of partners, because there is such industry specialization. Eric, how important is it being in an ecosystem with other specialists examining other kinds of risk?

Evans: It’s really important. We listen to our customers and prospects. It’s about the larger picture of bringing data into an end-to-end procurement and supplier risk management process.


We feel really good about being part of SAP PartnerEdge and an app extension partner to SAP Ariba. It’s exciting to see our data and the integration for clients.

Gardner: Rapid Ratings International, Inc. is the creator of the proprietary Financial Health Rating (FHR), also known as RapidRatings. What led up to the solution? Why didn’t it exist 30 years ago?

Rate the risk over time

Evans: The company was founded by someone with a background in econometrics and modeling. We have 24 industry models that drive the analysis. It’s that kind of deep, precise, and accurate modeling — plus the historical database of more than 30 years of data that we have. When you combine those, it’s much more accurate and predictive; it’s really forward-looking data.

Gardner: You provide a 0 to 100 score. Is that like a credit rating for an individual? How does that score work in being mindful of potential risk?

Evans: The FHR is a short-term score, from 0 to 100, that looks at the probability of default over the next 12 months. Then a Core Health Score, which looks around 24 to 36 months out, covers operating efficiency and other indicators of how well a company is managing and operationalizing the business.

We can identify companies that are maybe weak short-term, but look fine long-term, or vice versa. Having industry depth — and the historical data behind it — that’s what drives the go-forward assessments.

When you combine the two, or look at them individually, you can identify companies that are maybe weak short-term but look fine long-term, or vice versa. And even if they don’t look good long-term, in the short term they may still carry less risk because they have cash on hand. That’s happening out in the marketplace these days with a lot of the initial public offerings (IPOs), such as Pinterest or Lyft. They have a medium-risk FHR because they have cash, but their long-term operating efficiency needs to improve because they are not yet profitable.
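RapidRatings' actual models are proprietary, but the two-horizon logic Evans outlines can be sketched in a few lines. The thresholds and labels below are invented purely for illustration:

```python
# Illustrative only -- RapidRatings' real models are proprietary.
# Two horizons: FHR (0-100, next 12 months) and a Core Health Score
# (0-100, roughly 24-36 months out). Thresholds here are invented.
def band(score: int) -> str:
    return "weak" if score < 40 else "medium" if score < 70 else "strong"

def assess(fhr: int, core_health: int) -> str:
    short, long_ = band(fhr), band(core_health)
    if short != "weak" and long_ == "weak":
        # e.g. a cash-rich, pre-profit IPO: fine now, questionable later
        return "medium risk now, but operating efficiency needs improvement"
    if short == "weak" and long_ != "weak":
        return "short-term stress, fundamentally sound -- watch liquidity"
    return f"short-term {short}, long-term {long_}"

print(assess(fhr=65, core_health=30))
```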

Gardner: How are you able to determine risk going 36 months out when you’re dealing mostly with short-term data?

Evans: It’s because of the historical nature and the discrete modeling underneath; that’s what makes it precise about the industry each company is in. Having 24 unique industry models is very different from taking all of the companies out there and stuffing them into a plain-vanilla industry template. A software company is very different from pharmaceuticals, which is very different from manufacturing.

Having that industry depth — and the historical data behind it — is what drives the go-forward assessments.

Gardner: And this is global in nature?

Evans: Absolutely. We have gone out to more than 130 countries to get data from those sources, those suppliers. It is a global data set that we have built on a one-to-one basis for our clients.

Gardner: Kristen, how does somebody in the Ariba orbit take advantage of this? How is this consumed?

Jordeth: As with everything at SAP Ariba, we want to simplify how our customers get access to information. The PartnerEdge program works with our third parties and partners to create an API whereby all our customers need to do is get a license key from RapidRatings and apply it to the system.

The infrastructure and connection are already there. Our deployment teams don’t have to do anything, just add that user license and the key within the system. So, it’s less touch, and easy to access the data.
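Neither vendor's actual endpoint or payload is spelled out in this discussion, so the URL and fields in the sketch below are hypothetical stand-ins. The point is simply that, once the pre-built connector is in place, a license key is all a client supplies:

```python
import requests  # assumes the requests package is installed

# Hypothetical endpoint and response fields -- stand-ins for whatever the
# pre-built SAP Ariba / RapidRatings connector actually calls.
API_KEY = "YOUR-RAPIDRATINGS-LICENSE-KEY"

resp = requests.get(
    "https://api.example-ratings.com/v1/suppliers/ACME/fhr",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # e.g. {"fhr": 65, "core_health": 30}
```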

Gardner: For those suppliers that want to be considered good partners with low financial risk, do they have access to this information? Can they work to boost up their scores?

To reduce risk, discuss data details 

Evans: Our clients actually own the subscription and the license, and they can share the data with their suppliers. The suppliers can also foster a dialogue with our tool, called the Financial Dialogue, and ask questions around areas of concern. That can be used to foster a better relationship and build transparency — the conversation doesn’t have to be a negative one; it can be a positive one.


They may want to invest in that supplier, extend payment terms or credit, work with them on service-level agreements (SLAs), or send in people to help manage. So it can be a good way to build a deeper relationship with the supplier and use it as a better foundation.

Gardner: Kristen, when I put myself in the position of a buyer, I need to factor lots of other issues, such as around sustainability, compliance, and availability. So how do you see the future unfolding for the holistic approach to risk mitigation, of not only taking advantage of financial risk assessments, but the whole compendium of other risks? It’s not a simple, easy task.

Jordeth: When you look at financial data, you need to understand the whole story behind it. Why does that financial data look the way it does today? What I love about RapidRatings is they have financial scores, and it’s more about the health of the company in the future.

But in our SAP Ariba solution, we provide insights on other factors such as sustainability, information security, and are they funding things such as women’s rights in Third World countries? Once you start looking at the proactive awareness of what’s going on — and all the good and the bad together — you can weigh the suppliers in a total sense.

Their financials may not be up to par, but they are not high risk because they are funding women’s rights or doing a lot of things with the youth in America. To me, that may be more important. So I might put them on a tracker to address their financials more often, but I am not going to stop doing business with them because one of my goals is sustainability. That holistic picture helps tell the true story, a story that connects to our customers, and not just the story we want them to have. So, it creates and crafts that full picture for them.

Gardner: Empirical data that can then lead to a good judgment that takes into full account all the other variables. How does this now get to the SAP Ariba installed base? When is the general availability?

Customize categories, increase confidence 

Jordeth: It’s available now. Our supplier risk module is the entryway for all of these APIs, and within that module we connect to the companies that provide financial data, compliance screening, and information on forced labor, among others. We are heavily expanding in this area for categories of risk with our partners, so it’s a fantastic approach.

Within the supplier risk module, customers have the capability to not only access the information but also create their own custom scores on that data. Because we are a technology organization, we give them the keys so an administrator can go in and alter that the way they want. It is very customizable.

It’s all in our SAP Ariba Supplier Risk solution, and we recently released the connection to RapidRatings.

Evans: Our logo is right in there — built in, under the hood, and visible. In terms of getting it enabled, there are no professional services or implementation wait times. Once the data set is built out on our end — for a new client, that’s through our implementation team — we just give the API key credentials to our client. They enable it in SAP Ariba Supplier Risk and can instantly pull up the scores. So there is no wait time and no future development needed to get at the data.

Once the data set is built on our end, we just give the API key to our client. They take it and enable it in SAP Ariba Supplier Risk and they can instantly pull up the scores. There is no wait time.

Jordeth: That helps us with security, too, because everybody wants to ensure that any data going in and out of a system is secure, with all of the compliance concerns we have. So our partner team also ensures the secure connection back and forth with their data system and our technology. So, that’s very important for customers.

Gardner: Are there any concrete examples? Maybe you can name them, maybe you can’t, instances where your rating system has proven auspicious? How does this work in the real world?

Evans: GE Healthcare did a joint webinar with our CEO last year, explained their program, and showed how they were able to de-risk their supply base using RapidRatings. They were able to reduce the number of companies that were unhealthy financially, and to put mitigation plans and corrective actions in place. So it was an across-the-board win-win.


Oftentimes, it’s not about the return on investment (ROI) on the platform, but the fact that companies were thwarting a disruption. An event did not happen because we were able to address it before it happened.

On the flip side, you can see how resilient companies are regardless of all the disruptions out there. They can use the financial health scores to observe the capability of a company to be resilient and bounce back from a cyber breach, a regulatory issue, or maybe a sustainability issue.

By looking at all of these risks inside of SAP Ariba Supplier Risk, they may want to order or review an FHR for a new company they hadn’t thought of while examining other, operational risks. So that’s another way to tie it in.

Another interesting example is a large international retailer. A supplier got flagged as high risk — it had just filed for bankruptcy — which alerted the buyer. The buyer had signed a contract and already had the product on the shelf, so the product had to be re-sourced and they had to find a new supplier. They mitigated the risk, but they had to take quick action, get another product, and do some scrambling. By doing so, though, they headed off some brand reputation damage. They hadn’t looked at that company before — it was new to them — and they were alerted. So that’s another use: not just running it at the time of contract, but also running it when you’re going to market.

Identify related risks 

Gardner: It also seems logical that if a company is suffering on the financial aspects of doing business, then it might be an indicator that they’re not well-managed in general. It may not just be a cause, but an effect. Are there other areas, you could call them adjacencies, where risks to quality, delivery times, logistics are learned from financial indicators?

Evans: It’s a really good point. What’s interesting is that we took some data our clients had around timeliness, quality, performance, and delivery, and overlaid it with the financial data on those suppliers. The companies that were weak financially were more than two times more likely to ship a defective product, and more than 2.5 times more likely to ship wrong or late.

The whole just-in-time shipping or delivery value went out the window. To your point, it can be construed that when companies are stressed financially, they may be cutting corners, with things getting a little shoddy. They may not have replaced someone. Maybe there were infrastructure investments that should have been made but weren’t. All of those things have a reverberating effect in other operational risk areas.

Gardner: Kristen, now that we know that more data is good, and that you have more services like at RapidRatings, how will a big platform and network like SAP Ariba be able to use machine learning (ML) and artificial intelligence (AI) to further improve risk mitigation?

Jordeth: The opportunity exists for this to not only impact the assessment of a supplier, but throughout the full source-to-pay process, because it is embedded into the full SAP Ariba suite. So, even though you’re accessing it through risk, it’s visible when you’re sourcing, when you’re contracting, when you’re paying. So that direct connect is very important.

We want our customers to have it all. So I don’t cringe at the fact that they ask for it all because they should have it all. It’s just visualizing it in a manner that makes sense and it’s clear to them.

Gardner: And specifically on your set of solutions, Eric, where do you see things going in the next couple years? How can the technology get even better? How can the risk be reduced more?

Evans: We will be innovating products so our clients can bring more scope around their supply base — not just the critical vendors but the longer tail of the supply base — and look at scores across different segments of suppliers. That can include sub-tiers, traversing third- and fourth-party suppliers, particularly in the banking and manufacturing industries.

Coupled with that, more intelligence, enhanced APIs, and data visualization are things we are looking into, as well as additional scoring capabilities.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: SAP Ariba.


Using AI to solve data and IT complexity — and thereby better enable AI

The next BriefingsDirect data disruption discussion focuses on why the rising tidal wave of data must be better managed, and how new tools are emerging to bring artificial intelligence (AI) to the rescue.

Stay with us to explore how the latest AI innovations improve both data and services management across a cloud deployment continuum — and in doing so set up an even more powerful way for businesses to exploit AI.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn how AI will help conquer complexity to allow for higher abstractions of benefits from across all sorts of analysis, we welcome Rebecca Lewington, Senior Manager of Innovation Marketing at Hewlett Packard Enterprise (HPE). The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: We have been talking about massive amounts of data for quite some time. What’s new about data buildup that requires us to look to AI for help?

Lewington: Partly it is the sheer amount of data. IDC’s Data Age Study predicts the global data sphere will be 175 zettabytes by 2025, which is a rather large number. That’s what, a 1 with 21 zeros? But we have always been in an era of exploding data.

 
Rebecca Lewington
Lewington

Yet, things are different. One, it’s not just the amount of data; it’s the number of sources the data comes from. We are adding in things like mobile devices, and we are connecting factories’ operational technologies to information technology (IT). There are more and more sources.

Also, the time we have to do something with that data is shrinking to the point where we expect everything to be real-time or you are going to make a bad decision. An autonomous car, for example, might do something bad. Or we are going to miss a market or competitive intelligence opportunity.

So it’s not just the amount of data — but what you need to do with it that is challenging.

Gardner: We are also at a time when Al and machine learning (ML) technologies have matured. We can begin to turn them toward the data issue to better exploit the data. What is new and interesting about AI and ML that make them more applicable for this data complexity issue?

Data gets smarter with AI

Lewington: A lot of the key algorithms for AI were actually invented long ago in the 1950s, but at that time, the computers were hopeless relative to what we have today; so it wasn’t possible to harness them.

For example, you can train a deep-learning neural net to recognize pictures of kittens. To do that, you need to run millions of images to train a working model you can deploy. That’s a huge, computationally intensive task that only became practical a few years ago. But now that we have hit that inflection point, things are just taking off.
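As a minimal sketch of the kind of training run Lewington describes, the snippet below trains a tiny Keras image classifier. Random arrays stand in for the millions of labeled kitten photos a real model would need, so this is a shape-check of the workflow rather than a working detector:

```python
# Minimal sketch of training an image classifier. Random arrays stand in
# for the millions of labeled photos a real kitten detector would need.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

x_train = np.random.rand(256, 64, 64, 3)          # stand-in images
y_train = np.random.randint(0, 2, size=(256, 1))  # 1 = kitten, 0 = not

model = keras.Sequential([
    layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),         # probability of "kitten"
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=2, batch_size=32)
```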

Gardner: We can begin to use machines to better manage data that we can then apply to machines. Does that change the definition of AI?

Lewington: The definition of AI is tricky. It’s malleable, depending on who you talk to. For some people, it’s anything that a human can do. To others, it means sophisticated techniques, like reinforcement learning and deep learning.


One useful definition is that AI is what you use when you know what the answer looks like, but not how to get there.

Traditional analytics effectively does at scale what you could do with pencil and paper. You could write the equations to decide where your data should live, depending on how quickly you need to access it.

But with AI, it’s like the kittens example. You know what the answer looks like — it’s trivial for you to look at the photograph and say, “That is a cat in the picture” — but it’s really, really difficult to write the equations to do it. Now, though, it has become relatively easy to train a black-box model to do that job for you.

Gardner: Now that we are able to train the black box, how can we apply that in a practical way to the business problem that we discussed at the outset? What is it about AI now that helps better manage data? What’s changed that gives us better data because we are using AI?

The heart of what makes AI work is good data; the right data, in the right place, with the right properties you can use to train a model, which you can then feed new data into to get results that you couldn’t get otherwise.

Lewington: It’s a circular thing. The heart of what makes AI work is good data; the right data, in the right place, with the right properties you can use to train a model, which you can then feed new data into to get results that you couldn’t get otherwise.

Now, there are many ways you can apply that. You can apply it to the trivial case of the cat we just talked about. You can apply it to helping a surgeon review many more MRIs, for example, by allowing him to focus on the few that are borderline, and to do the mundane stuff for him.

But, one of the other things you can do with it is use it to manipulate the data itself. So we are using AI to make the data better — to make AI better.

Gardner: Not only is it circular, and potentially highly reinforcing, but when we apply this to operations in IT — particularly complexity in hybrid cloud, multicloud, and hybrid IT — we get an additional benefit. You can make the IT systems more powerful when it comes to the application of that circular capability — of making better AI and better data management.

AI scales data upward and outward

Lewington: Oh, absolutely. I think the key word here is scale. When you think about data — and all of the places it can be, all the formats it can be in — you could do it yourself. If you want to do a particular task, you could do what has traditionally been done. You can say, “Well, I need to import the data from here to here and to spin up these clusters and install these applications.” Those are all things you could do manually, and you can do them for one-off things.

But once you get to a certain scale, you need to do them hundreds of times, thousands of times, even millions of times. And you don’t have the humans to do it. It’s ridiculous. So AI gives you a way to augment the humans you do have, to take the mundane stuff away, so they can get straight to what they want to do, which is coming up with an answer instead of spending weeks and months preparing to start to work out the answer.


Gardner: So AI directed at IT — what some people call AIOps — could be an accelerant to this circular, advantageous relationship between AI and data? And is that part of what you are doing within the innovation and research work at HPE?

Lewington: That’s true, absolutely. The mission of Hewlett Packard Labs in this space is to assist the rest of the company to create more powerful, more flexible, more secure, and more efficient computing and data architectures. And for us in Labs, this tends to be a fairly specific series of research projects that feed into the bigger picture.

For example, we are now doing the Deep Learning Cookbook, which allows customers to find out ahead of time exactly what kind of hardware and software they are going to need to get to a desired outcome. We are automating the experimenting process, if you will.

And, as we talked about earlier, there is the shift to the edge. As we make more and more decisions — and gain more insights — at the edge, where the data is created, there is a growing need to deploy AI there. That means you need a data strategy that gets the data into the right place, together with the AI algorithm, at the edge. That’s because there often isn’t time to move that data into the cloud before making a decision and waiting for the required action to return.

Once you begin doing that, once you start moving from a few clouds to thousands and millions of endpoints, how do you handle multiple deployments? How do you maintain security and data integrity across all of those devices? As researchers, we aim to answer exactly those questions.

And, further out, we are looking to move the learning phase itself to the edge, to do what we call swarm learning, where devices learn from their environment and from each other, using a distributed model that doesn’t use a central cloud at all.
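HPE's swarm learning protocol is not detailed here, but the underlying idea, peers training locally and averaging model parameters among themselves instead of shipping raw data to a central cloud, can be roughly sketched as follows (the model, data, and merge rule are simplified stand-ins):

```python
import numpy as np

# Rough sketch of decentralized peer averaging, the core idea behind
# approaches like swarm learning. Real protocols also handle peer
# membership, security, and weighting; those details are omitted here.
def local_update(w, X, y, lr=0.1):
    """One logistic-regression gradient step on a device's private data."""
    preds = 1 / (1 + np.exp(-X @ w))
    grad = X.T @ (preds - y) / len(y)
    return w - lr * grad

rng = np.random.default_rng(0)
# Three edge devices; each one's data never leaves the device.
devices = [(rng.normal(size=(50, 4)), rng.integers(0, 2, 50)) for _ in range(3)]
weights = [np.zeros(4) for _ in devices]

for _ in range(20):                      # training rounds
    weights = [local_update(w, X, y) for w, (X, y) in zip(weights, devices)]
    merged = np.mean(weights, axis=0)    # peers exchange and average weights
    weights = [merged.copy() for _ in devices]

print(merged)  # the shared model, learned without pooling any raw data
```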

Gardner: Rebecca, given your title is Innovation Marketing Lead, is there something about the very nature of innovation that you have come to learn personally that’s different than what you expected? How has innovation itself changed in the past several years?

Innovation takes time and space 

Lewington: I began my career as a mechanical engineer. For many years, I was offended by the term innovation process, because that’s not how innovation works. You give people the space and the time, and ideas appear organically. You can’t have a process for having ideas. You can have a process to put those ideas into reality, to weed out the ones that aren’t going to succeed, and to promote the ones that work.


But the term innovation process to me is an oxymoron. And that’s the beautiful thing about Hewlett Packard Labs. It was set up to give people the space where they can work on things that just seem like a good idea when they pop up in their heads. They can work on these and figure out which ones will be of use to the broader organization — and then it’s full steam ahead.

Gardner: It seems to me that the relationship between infrastructure and AI has changed. It wasn’t that long ago when we thought of business intelligence (BI) as an application — above the infrastructure. But the way you are describing the requirements of management in an edge environment — of being able to harness complexity across multiple clouds and the edge — this is much more of a function of the capability of the infrastructure, too. Is that how you are seeing it, that only a supplier that’s deep in its infrastructure roots can solve these problems? This is not a bolt-on benefit.

Lewington: I wouldn’t say it’s impossible as a bolt-on; it’s impossible to do efficiently and securely as a bolt-on. One of the problems with AI is that we are using black boxes; you don’t know how they work. There were a number of news stories recently about AIs becoming corrupted, biased, and even racist, for example. Those kinds of problems are going to become more common.

And so you need to know that your systems maintain their integrity and cannot be breached by bad actors. If you are only working at the very top layers of the software, it’s going to be very difficult to attest that the integrity of what’s underneath hasn’t been violated.

If you are someone like HPE, which has its fingers in lots of pies, either directly or through our partners, it’s easier to make a more efficient solution.

You need to know that your systems maintain their integrity and cannot be breached by bad actors. If you are only working at the very top layers of the software, it’s going to be very difficult to attest that the integrity of what’s underneath hasn’t been violated.

Gardner: Is it fair to say that AI should be a new core competency, for not only data scientists and IT operators, but pretty much anybody in business? It seems to me this is an essential core competency across the board.

Lewington: I think that’s true. Think of AI as another layer of tools that, as we go forward, becomes increasingly sophisticated. We will add more and more tools to our AI toolbox. And this is one set of tools that you just cannot afford not to have.

Gardner: Rebecca, it seems to me that there is virtually nothing within an enterprise that won’t be impacted in one way or another by AI.

Lewington: I think that’s true. Anywhere in our lives where there is an equation, there could be AI. There is so much data coming from so many sources. Many things are now overwhelmed by the amount of data, even if it’s just as mundane as deciding what to read in the morning or what route to take to work, let alone how to manage my enterprise IT infrastructure. All things that are rule-based can be made more powerful, more flexible, and more responsive using AI.

Gardner: Returning to the circular nature of using AI to make more data available for AI — and recognizing that the IT infrastructure is a big part of that — what are you doing in your research and development to make data services available and secure? Is there a relationship between things like HPE OneView and HPE OneSphere and AI when it comes to efficiency and security at scale?

Let the system deal with IT 

Lewington: Those tools historically have been rules-based. We know that if a storage disk gets to a certain percentage full, we need to spin up another disk — those kinds of things. But to scale flexibly, at some point that rules-based approach becomes unworkable. You want to have the system look after itself, to identify its own problems and deal with them.

Including AI techniques in things like HPE InfoSight, Aruba ClearPass, and network user-behavior software on the HPE Aruba side allows the AI algorithms to make those tools more powerful and more efficient.

You can think of AI here as another class of analytics tools. It’s not magic, it’s just a different and better way of doing IT analytics. The AI lets you harness more difficult datasets, more complicated datasets, and more distributed datasets.
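The contrast Lewington draws, a fixed rule versus a model that learns a system's own baseline, can be illustrated in a few lines. The numbers and thresholds below are invented purely for illustration:

```python
import numpy as np

# Invented numbers, purely to contrast the two approaches.
history = np.random.default_rng(1).normal(60, 5, size=500)  # disk usage, %
current = 78.0

# Rules-based: a fixed threshold someone wrote down once.
if current > 90:
    print("rule: provision another disk")

# Learned baseline: flag anything unusual for *this* system's own history.
mean, std = history.mean(), history.std()
if abs(current - mean) > 3 * std:
    print(f"model: anomaly -- {current}% vs baseline {mean:.0f}% +/- {std:.0f}%")
```

Here the fixed rule stays silent at 78 percent, while the learned baseline flags it as unusual for this particular system.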

Gardner: If I’m an IT operator in a global 2000 enterprise, and I’m using analytics to help run my IT systems, what should I be thinking about differently to begin using AI — rather than just analytics alone — to do my job better?

Lewington: If you are that person, you don’t really want to think about the AI. You don’t want the AI to intrude upon your consciousness. You just want the tools to do your job.

For example, I may have 1,000 people starting a factory in Azerbaijan, or somewhere, and I need to provision for all of that. I want to be able to put on my headset and say, “Hey, computer, set up all the stuff I need in Azerbaijan.” You don’t want to think about what’s under the hood. Our job is to make those tools invisible and powerful.

Composable, invisible, and insightful 

Gardner: That sounds a lot like composability. Is that another tangent that HPE is working on that aligns well with AI?

Lewington: It would be difficult for AI to be part of the fabric of an enterprise without composability, and without extending composability into more dimensions. It’s not just about being able to define the amount of storage, compute, and networking with a line of code; it’s about being able to define the amount of memory, where the data is, where the data should be, and what format the data should be in. All of those things — from the edge to the cloud — need to be dimensions in composability.


You want everything to work behind the scenes for you in the best way with the quickest results, with the least energy, and in the most cost-effective way possible. That’s what we want to achieve — invisible infrastructure.

Gardner: We have been speaking at a fairly abstract level, but let’s look to some examples to illustrate what we’re getting at when we think about such composability sophistication.

Do you have any concrete examples or use cases within HPE that illustrate the business practicality of what we’ve been talking about?

Lewington: Yes, we have helped a tremendous number of customers either get started with AI in their operations or move from pilot to volume use. A couple of examples stand out. One particular manufacturing company makes electronic components. They needed to improve the yields on their production lines, and they didn’t know how to attack the problem. We were able to partner with them to use such things as vision systems and photographs from their production tools to identify defects that could only be picked up by a human if they had a whole lot of humans watching everything all of the time.

This gets back to the notion of augmenting human capabilities. Their machines produce terabytes of data every day, and it just gets thrown away. They don’t know what to do with it.


We began running some research projects with them using some very sophisticated techniques — visual autoencoders — that allow you, without a labeled training set, to characterize a production line that is performing well versus one that is on the verge of moving away from the sweet spot. Those techniques can fingerprint a good line and also identify when a line goes just slightly bad, in cases where a human looking at the line would think it was working perfectly.

This takes the idea of predictive maintenance further, into what we call prescriptive maintenance, where we have a much more sophisticated view into what represents a good line and what represents a bad line. Those are a couple of examples from manufacturing that I think are relevant.

Gardner: If I am an IT strategist, a Chief Information Officer (CIO) or a Chief Technology Officer (CTO), for example, and I’m looking at what HPE is doing — perhaps at the HPE Discover conference — where should I focus my attention if I want to become better at using AI, even if it’s invisible? How can I become more capable as an organization to enable AI to become a bigger part of what we do as a company?

The new company man is AI

Lewington: For CIOs, their most important customers these days may be developers and increasingly data scientists, who are basically developers working with training models as opposed to programs and code. They don’t want to have to think about where that data is coming from and what it’s running on. They just want to be able to experiment, to put together frameworks that turn data into insights.

It’s very much like the programming world, where we’ve gradually abstracted things from bare-metal, to virtual machines, to containers, and now to the emerging paradigm of serverless in some of the walled-garden public clouds. Now, you want to do the same thing for that data scientist, in an analogous way.

Today, it’s a lot of heavy lifting, getting these things ready. It’s very difficult for a data scientist to experiment. They know what they want. They ask for it, but it takes weeks and months to set up a system so they can do that one experiment. Then they find it doesn’t work and move on to do something different. And that requires a complete re-spin of what’s under the hood.

Now, using things like software from the recent HPE BlueData acquisition, we can make all of that go away. And so the CIO’s job becomes much simpler because they can provide their customers the tools they need to get their work done without them calling up every 10 seconds and saying, “I need a cluster, I need a cluster, I need a cluster.”

That’s what a CIO should be looking for, a partner that can help them abstract complexity away, get it done at scale, and in a way that they can both afford and that takes the risk out. This is complicated, it’s daunting, and the field is changing so fast.

Gardner: So, in a nutshell, they need to look to the innovation that organizations like HPE are doing in order to then promulgate more innovation themselves within their own organization. It’s an interesting time.

Containers contend for the future 

Lewington: Yes, that’s very well put. Because it’s changing so fast they don’t just want a partner who has the stuff they need today, even if they don’t necessarily know what they need today. They want to know that the partner they are working with is working on what they are going to need five to 10 years down the line — and thinking even further out. So I think that’s one of the things that we bring to the table that others can’t.

Gardner: Can you give us a hint as to what some of those innovations four or five years out might be? How should we avoid limiting our thinking when it comes to that circular relationship between AI, data, and innovation?

Lewington: It was worth coming to HPE Discover in June, because we talked about some exciting new things across many different areas. The move toward increasing automation and abstraction is just going to accelerate.

We are going to get to the point where dealing with containers directly seems as complicated as dealing with bare metal does today, and that’s really going to help simplify whole data pipelines.

For example, the use of containers still has fairly small penetration across enterprises — about 10 percent adoption today — because they are not the simplest thing in the world. But we are going to get to the point where dealing with containers directly seems as complicated as dealing with bare metal does today, and that’s really going to help simplify whole data pipelines.

Beyond that, the elephant in the room for AI is that model complexity is growing incredibly fast. Compute requirements are going up something like 10 times faster than Moore’s Law — even as Moore’s Law itself is slowing down.

We are already seeing an AI compute gap between what we can achieve and what we need to achieve — and it’s not just compute, it’s also energy. The world’s energy supply can only go up slowly, so if we need exponentially more data, exponentially more compute, and exponentially more energy, that’s just not going to be sustainable.
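To put rough numbers on that gap (the growth rates below are illustrative assumptions, not a forecast): if hardware capability doubles every two years while demand grows ten times faster, the shortfall compounds very quickly.

```python
# Illustrative arithmetic only -- not a forecast. Assume Moore's Law
# doubles capability every two years, and that demand's doubling rate is
# ten times faster, per the figure Lewington cites.
years = 6
supply = 2 ** (years / 2)        # capability growth over the period
demand = 2 ** (10 * years / 2)   # demand growing ~10x faster
print(f"after {years} years: supply x{supply:.0f}, demand x{demand:.0f}, "
      f"gap x{demand / supply:.0e}")
```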

So we are also working on something called Emergent Computing, a super-energy-efficient architecture that moves data to wherever it needs to be — or, better yet, doesn’t move the data at all and instead brings the compute to the data. That will help us close that gap.


And that includes some very exciting new accelerator technologies: special-purpose compute engines designed specifically for certain AI algorithms. Not only are we using regular transistor logic, we are using analog computing, and even optical computing, to do some of these tasks hundreds of times more efficiently, using hundreds of times less energy. This is all very exciting stuff, for a little further out in the future.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


How IT can fix the broken employee experience

The next BriefingsDirect intelligent workspaces discussion explores how businesses are looking to the latest digital technologies to transform how employees work.

There is a tremendous amount of noise, clutter, and distraction in the scattershot, multi-cloud workplace of today — and it’s creating confusion and frustration that often pollute processes and hinder innovative and impactful work. 

We’ll now examine how IT can elevate the game of sorting through apps, services, and data, and deliver simpler, more intelligent experiences that enable people — in any context — to work on what’s relevant and consistently operate at their informed best. 

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To illustrate new paths to the next generation of higher productivity work, please welcome Marco Stalder, Team Leader of Citrix Workspace Services at Bechtle AG, one of Europe’s leading IT providers, and Tim Minahan, Executive Vice President of Strategy and Chief Marketing Officer at Citrix. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tim, improving the employee experience has become a hot topic, with billions of productivity dollars at stake. Why has how workers do or don’t do their jobs well become such a prominent issue?

Minahan: The simple answer is the talent crunch. Just about everywhere you look, workforce management and talent acquisition have become a C-suite-level, if not board-level, priority.

And this really boils down to three things. Number one, demographically there is not enough talent. You have heard the numbers from McKinsey that within the next year there will be a shortage of 95 million medium- to high-skilled workers around the globe. And that’s being exacerbated by the fact that our traditional work models — where we build a big office building or a call center and try to hire people around it — are fundamentally broken.

The second key reason is a skills gap. Many companies are reengineering their business to drive digital transformation and new digital business or engagement models with their customers. But oftentimes their employee base doesn’t have the right skills and they need to work on developing them. 

The third issue exacerbating the talent crunch is the fact that if you are fortunate enough to have the talent, it’s highly likely they are disengaged at work. Gallup just did its global Future of Work Study and found that 85 percent of employees are either disengaged or highly disengaged at work. A chief reason is they don’t feel they have access to the information and the tools they need to get their jobs done effectively.

Gardner: We have dissatisfaction, we have a hard time finding people, and we have a hard time keeping the right people. What can we bring to the table to help solve that? Is there some combination of what human resources (HR) used to do and IT maybe didn’t think about doing but has to do?

Enhance the employee experience 

Minahan: The concept of employee experience is working its way into the corporate parlance. The chief reason is that you want to be able to ensure employees have the right combination of physical space and an environment conducive to interacting and partnering with their project teams — and to getting work done.

Digital spaces, right? That is not just access to technology, but a digital space that is simplified and curated to ensure workers get the right information and insights to do their jobs. And then, obviously, there are cultural considerations, such as, “Who is my manager? What is my career development path? Am I continuing to move forward?”

Those three things are combining when we talk about employee experience.

Gardner: And you talked about the where, the physical environment. A lot of companies have experimented with at-home workers, remote workers, and branch offices. But many have not gotten the formula right. At the same time, we are seeing cities become very congested and very expensive. 

Do we need to give people even more choice? And if we do, how can we securely support that? 

Minahan: The traditional work models of old just aren’t working, especially in light of the talent crunch and skills gap we are seeing. The high-profile example is Amazon, right? So over the past year in the US there was a big deal over Amazon selecting their second and third headquarters. Years ago Amazon realized they couldn’t hire all the talent they needed in Seattle or Silicon Valley or Austin. Now they have 17-odd tech centers around the US, with anywhere from 400 to several thousand people at each one. So you need to go where the talent is. 

When we think about traditional work models — where we would build a call center and hire a lot of people around that call center — they’re fundamentally broken. As evidence of this, we did a study recently where we surveyed 5,000 professional knowledge workers in the US. These were folks who moved to cities because they had opportunities and got paid more. Yet 70 percent of them said they would move out of the city if they could have more flexible work schedules and reliable connectivity.

Gardner: It’s pretty attractive when you can get twice the house for half the money, still make city wages, and have higher productivity. It’s a tough equation to beat. 

Minahan: Yes, there is that higher-productivity thing, this whole concept of mindfulness that’s working its way into the lingo. People should be hired to do a core job, not spend their days on things like expense report approvals, performance reviews, or purchase requisitions. Yet those are a big part of everyone’s job when they are in an office.

You compound that with two-hour commutes and the many distractions in the office. We often need to navigate multiple different applications just to get a bit of the information we need, or to get a single business process done, and that means dealing not just with all the different interfaces but with all the different authentications, and so on. All of that noise in your day really frustrates workers. They feel they were hired to do a job based on core skills they are really passionate about, but they spend all their time doing task work.

Gardner: I feel like I spend way too much time in email. I think everybody knows and feels that problem. Now, how do we start to solve this? What can the technology side bring to the table, and how can that start to move into the culture, the methods, and the rethinking of how work gets done?

De-clutter intelligently

Minahan: The simple answer is you need to clear away the clutter. And you need to bring intelligence to bear. We believe that artificial intelligence (AI) and machine learning (ML) play a key role. And so Citrix has delivered a digital workspace that has three primary attributes.

First, it’s unified. Users and employees gain everything they need to be productive in one unified experience. Via single sign-on they gain access to all of their Software as a Service (SaaS) apps, web apps, mobile apps, virtualized apps, and all of their content in one place. That all travels consistently with them wherever they are — across their laptop, tablet, and smartphone, or even when they need to log on from a distinct terminal.

The second component, in addition to being unified, is being secure. When things are within the workspace, we can apply contextual security policies based on who you are. We know, for example, that Dana logs in every day from a specific network, using his device. If you were to act abnormally or outside of that pattern, we could apply an additional level of authentication, or some other rules like shutting off certain functionalities such as downloading. So your applications and content are far more secure inside of the workspace than outside. 
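
As a sketch of how such a contextual rule might look in code, consider the toy policy below. It is purely illustrative, not Citrix’s actual policy engine; the user record, fields, and decision logic are all hypothetical.

```python
# A toy contextual-access policy: step up authentication and restrict
# downloads when a login deviates from the user's known pattern.
# Hypothetical names and fields; not Citrix's implementation.
from dataclasses import dataclass

@dataclass
class LoginContext:
    user: str
    network: str
    device_id: str

# Baseline behavior previously observed for each user (assumed store).
KNOWN_PATTERNS = {
    "dana": {"network": "corp-office-net", "device_id": "dana-laptop-01"},
}

def session_policy(ctx: LoginContext) -> dict:
    known = KNOWN_PATTERNS.get(ctx.user)
    anomalous = known is None or (
        ctx.network != known["network"] or ctx.device_id != known["device_id"]
    )
    return {
        "require_extra_auth": anomalous,   # additional authentication level
        "allow_downloads": not anomalous,  # shut off risky functionality
    }

print(session_policy(LoginContext("dana", "coffee-shop-wifi", "tablet-99")))
# -> {'require_extra_auth': True, 'allow_downloads': False}
```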

The third component, intelligence, gets to the frustration part for the employees. Infusing ML and simplified workflows — what we call micro apps — within the workspace brings in a lot of those consumer-like experiences, such as curating your information and news streams, like Facebook. Or, like Netflix, it provides recommendations on the content you would like to see.

We can bring that into the workspace so that when you show up you are presented, in a very personalized way, with the insights and tasks you need, when you need them. That removes the noise from your day so you can focus on your core job.
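
To make the curation idea concrete, here is a toy ranking sketch. The items, fields, and scoring weights are invented for illustration; an actual micro-app feed would rely on far richer ML signals.

```python
# A toy feed ranker: score workspace items by role fit and urgency, then
# surface only the top few. Weights and fields are invented for illustration.
items = [
    {"title": "Expense report awaiting your approval", "role_fit": 1.0, "due_hours": 4},
    {"title": "Quarterly all-hands recording",         "role_fit": 0.2, "due_hours": 200},
    {"title": "PTO request from a direct report",      "role_fit": 0.9, "due_hours": 24},
]

def relevance(item: dict) -> float:
    # Hand-tuned blend of role fit and urgency (assumed weights).
    return 0.7 * item["role_fit"] + 0.3 / max(item["due_hours"], 1)

for entry in sorted(items, key=relevance, reverse=True)[:2]:
    print(entry["title"])
```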

Gardner: Getting that triage based on context, with relevance to other team processes, sounds super important.

When it comes to IT, they may have been part of the problem. They have just layered on more apps. But IT is clearly going to be part of the solution, too. Who else needs to play a role here? How else can we re-architect work other than just using more technology?

To get the job done, ask employees how 

Minahan: If you are going to deliver improved employee experiences, one of the big mistakes a lot of companies make is they leave out the employee. They go off and craft the great employee experience and then present it to them. So definitely bring employees in. 

When we do research and engage with customers who prioritize the employee experience, it’s usually a union between IT and human resources to best understand the work an employee needs to get done. What’s the preferred environment? How do they want to work? With that understanding, you can ensure you are adapting the digital workspaces — and the physical workplaces — to support that.

Gardner: It certainly makes sense in theory. Let’s learn how this works in practice. 

Marco, tell us about Bechtle, what you have been doing, and why you made solving employee productivity issues a priority.

Stalder: Bechtle AG is one of Europe’s leading IT providers. We currently have about 70 systems integrators (SIs) across Germany, Switzerland, and Austria, as well as e-commerce businesses in 14 different European countries. 

We were founded in 1983 and our company headquarters is in Neckarsulm, a small town in the southern part of Germany. We currently have 10,300 employees spread across all of Europe.

As an IT company, one of our key priorities is to make IT as easy as possible for the end users. In the past, that wasn’t always the case because the priorities had been set in the wrong place. 

Gardner: And when you say the priorities were set in the wrong place, when you tried to create the right requirements and the right priorities, how did you go about that, what were the top issues you wanted to solve?

Stalder: The hard part is striking the balance between security and user experience. In the past, priorities were more focused on the security part. We have tried to shift this through our Corporate Workspace Project to give users the right kind of experience back again, letting them focus on the work they actually have to do.

Gardner: And just to be clear, are we talking only about users within your corporation, or does this extend to some of your clients and how you interact with them?

Stalder: The primary focus was our internal user base, but of course we also have contractors who have to access our data and our applications externally.

Gardner: Tim, this is yet another issue companies are dealing with: contingent workforces, contractors that come and go, and creative people that are often on another continent. We have to think about supporting that mix of workers, too.

Synchronizing the talent pool 

Minahan: Absolutely. We are seeing a major shift in how companies think of the workforce, spanning full-time employees, part-time contractors, and the like. Leading companies are looking around for pools of talent. They are asking, “How do I organize the right skills and resources I need? How do I bring them together in an environment, whether it’s physical or digital, to collaborate around a project and then dissolve them when that project is complete?”

And these new work models excite me when we talk about the workspace opportunity that technology can enable. A great example is a customer of ours, eBay, which people are familiar with. A long time ago, eBay recognized that they could not get ahead of the older call center model. They kept training people, but the turnover was too fast. So they began using the Citrix Workspace together with some of our networking technologies to go to where the employees are. 

Now they can go to the stay-at-home parent in Montana, the retiree in Florida, or the gig worker in New York. In this way, they can Uberfy the call center model by giving workers, regardless of location, the applications, knowledge base, and reliable connectivity they need. So when you or I call in, it sounds like we are calling into a call center, and we get the answers we need to solve our problems.

Gardner: Marco, your largely distributed IT organization has permeable boundaries. There isn’t a hard wall between you and where your customers start and end. The Citrix Workspace helped you solve that. What were some of the other problems, and what was the outcome?

Stalder: One of the main criteria for Bechtle is agility. We have been growing constantly for the last 36 years. Bechtle started as a small company with only 100 employees, but organic and inorganic growth continues, and we are still growing quite rapidly. We just acquired another four companies at the end of last year, for example, with 400 to 500 employees. We need to on-board them quickly. 

And our teams are spread around different office locations; even my team, for example. I am based in Switzerland with four people. Another part of our group is in Germany, and I have one colleague in Budapest. Giving all of these people the correct and secure access to all of their applications and data is definitely key.

As an IT company, we also have to adapt to new technologies rapidly, probably faster than other companies, because we want to be ahead of the technology for our employees. We are selling these same solutions to our customers, along with the same experience — and it has to be a good experience.

Gardner: We often call that drinking your own champagne. Tell us about the process through which you evaluated the Citrix Workspace solution and why that’s proven so powerful.

One platform to rule them all 

Stalder: In early 2016, we began with a high-level design for a basic corporate workspace. We began with an on-premises design, like a lot of companies. Then we were introduced to something called Citrix Cloud services by our partner manager in Germany.

In January 2017, we started to think about the Citrix Cloud solution as an interesting addition to what we were already planning. And we quickly realized that the team I am leading — we are six to eight people with limited resources — could only deliver all those services out to our end users with help. The Citrix Cloud services were a perfect fit for the project we wanted to do.

There are different reasons. One is standardization, to build and use one platform to access all of our applications, data, and services. Another is flexibility. While most of our workloads are currently in our own data centers in Germany, we are also thinking about bringing workloads and data out to the cloud. It doesn’t matter if it’s Microsoft Azure, Amazon Web Services (AWS), or you name it.

Another benefit, of course, is scalability. As I said, we have been growing a lot and we are going to grow a lot more in the future. We really need to be able to scale out, and it doesn’t matter where the workload is going to be or where the data is going to be at the end. 

And, as an IT company, we are facing another issue. We are selling different kinds of IT products to our customers, and people tend to like to use the product they are selling to their customers. So we have to explore and use different kinds of applications for different tasks. 

For example, we use Microsoft Teams, Cisco WebEx Teams, as well as Skype for Business. We are using many other kinds of applications, too. That perfectly fits into what we have seen [from Citrix at their recent Synergy conference keynote]. It brings it all together in the Citrix Workspace using micro apps and microservices.

Another important attribute is efficiency. As I said before, with seven or eight IT support people, we cannot build very complex and large things. We have to focus on doing things very efficiently.

Another really important thing for us as we set up the workspaces project was engaging with the executive board of Bechtle. If those people do not stand behind the idea and understand what we are trying to do, then the project is definitely going to fail.

It was not that easy, just telling the board what we would like to do. We had to build a proof-of-concept system to let them see, touch, and feel it themselves. Only in that way can one really understand it.

Gardner: Of course, such solutions are a team sport. You don’t just buy this out of the box. Digital transformation doesn’t come with a pretty ribbon on it. How did you go about creating this workspace?

There is IT in team 

Stalder: It was via teamwork spread across different groups. We have been working very closely with Citrix Consulting Services in Germany, for example. And we have been working together with the engineers within our business units who sell and implement those solutions with our customers.

And another very important part, in my opinion, was not just engaging the Citrix people, but also engaging with the application owners. It doesn’t really help if I give users a very nice virtual desktop where they can log on fast but have no applications on it. Or the application doesn’t work very well. Or they have to log on again, for example, or configure it before using it. We tried to provide an end-to-end solution by engaging with all of the different people — from the front-end client, to the networking, and through to the applications’ back end.

And we have been quite successful. For example, for our main business applications, such as SAP or Microsoft, we told people what we wanted to do in order to get those application teams on board. They understand what it means for them. In the past, we had been rolling out version updates across 70 different locations.

They were sending out emails saying, “Can you please go to the next version? Can you please update to this or that?” That, of course, requires a lot of time and is very hard to troubleshoot and configure.

But now, by standardizing those things together [as a workspace], we can deploy once and configure once, and it doesn’t matter who is going to use it. It has made those rollouts much easier. For example, our virtual apps and desktops have just reached about 30 percent of our employees. It’s being done on a highly standardized project basis across every business unit.

We also realized the importance of informing and guiding people in how to use the new solutions, because things are changing and some people react a bit slowly to change. At first they say, “I don’t want to try it. I don’t need it.” It was a learning process to see what kind of documentation and guidance people needed.

The changes are simple things [that deliver big paybacks]. In the past, people would take a PC home and use a VPN to connect to their company resources; now they may no longer need that PC at all. They can simply use any device to access their work from home or on the road. Those are very simple things, but people have to understand that they can do that now.

Gardner: As I like to say, we used to force people to conform to the apps and now we can get the apps and services to conform to what the people want and need. 

But we have talked about this in terms of the productivity of the employee. How about your IT department? How have your IT people reacted to this?

Stalder: I also needed a lot of time to convince the IT people, especially some security guys. They said, “You are going to go to Citrix Cloud? What does it mean for security?”

We have been working very closely with Citrix to explain to the security officer what kind of data goes to the cloud, how it’s stored, and how it’s processed. It took quite a while to get approval, but in the end it went through, definitely.

The IT guys have to understand and use the solution. They sometimes think it’s just for the end users. But IT is also an end user. They have to get on board and use the solutions. Only in this way does everyone know what the others are talking about.

Gardner: Now that you have been through this process and the workspace is in place, what have you found? What are the metrics of success? When you do it well, what do you get back?

Positive feedback 

Stalder: Unfortunately, measuring productivity is very hard to do. I don’t have any numbers on that yet. I just get feedback from employees who talk about different things as they try the system.

And I have quite an interesting story. For example, one guy in our application consulting group was a bit skeptical. One day his notebook PC was broken so he had to use the new Citrix Workspace. He had no choice but to try it.

He wrote back some very interesting facts and figures, saying it was faster. It was faster to log on and the applications started faster. And it was easy to use. Because he does a lot of presentations and training, he realized he could start work on one device and then switch to another device, maybe in the meeting room or the training room, and just continue the work.

We also get feedback that people can work from everywhere and access everything they need, especially when they go out to a customer, and that they only have to remember one place to log on to. They just log on once and they have all the data and all the applications they are going to need.

Gardner: Tim, when you hear about such feedback from Marco, what jumps out at you? 

Minahan: What stands out is the universal challenge we are all experiencing now. The employee experience is less than adequate in most organizations. It is impacting not only the ability to develop and retain great talent, but it’s also impacting your overall business.

What also stands out is that when technology is harnessed in a way that puts the employee first — and drives superior experience to allow them to have access to the information and the tools they need to get their jobs done — not only does employee retention go up, but you also drive better customer experiences, and better business end results.

The third thing that stands out is the recognition that traditionally we in the IT sector put security in the way of the experience. Now, if you put the employee at the center, we are beginning to attain a better balance between experience and security. It’s not an either-or equation anymore. This story at Bechtle is a great example of that in reality.

Gardner: What was interesting for me, too, was that employees get used to the way things are. You hit inertia. But when a necessity crops up, and somebody was forced to try something new, they found that there are better ways to do things.

Minahan: Right, it’s the old saw: if you only asked folks what they wanted, they would want a faster horse — and we never would have had the car.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Citrix.

Architectural firm attains security and performance confidence across virtualized and distributed desktops environment

Better security over data and applications remains a foremost reason IT organizations embrace and extend the use of client virtualization. Yet performance requirements for graphics-intensive applications and large files remain one of the top reasons the use of thin clients and virtualized desktops trails the deployment of full PC clients.

For a large architectural firm in Illinois, gaining better overall security, management, and data center consolidation had to go hand in hand with preserving the highest workspace performance — even across multiple distributed offices.

The next BriefingsDirect security innovations discussion examines how BLDD Architects, Inc. developed an IT protection solution that fully supports all of its servers and mix of clients in a way that’s invisible to its end users. 

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

Here to share the story of how to gain the best cloud workload security, regardless of the apps and the data, is Dan Reynolds, Director of IT at BLDD Architects in Decatur, Illinois. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Dan, tell us about BLDD Architects. How old is the firm? Where you are located? And what do you have running in your now-centralized data center?

Reynolds: We are actually 90 years old this year, founded in 1929. It has obviously changed names over the years, but the same core group of individuals have been involved the entire time. We used to have five offices: three in central Illinois, one in Chicago, and one in Davenport, Iowa. Two years ago, we consolidated all of the Central Illinois offices into just the Decatur office.

When we did that, part of the initiative was to allow people to work from home. Because we are virtualized, that was quite easy. Their location doesn’t matter. The desktops are still here, in the central office, but the users can be wherever they need to be.

On the back-end, we are a 100 percent Microsoft shop, except for VMware, of course. I run the desktops from a three-node Hewlett Packard Enterprise (HPE) DL380 cluster. I am using a Storage Area Network (SAN) product called the StarWind Virtual SAN, which has worked out very well. We are all VMware for the server and client virtualization, so VMware ESXi 6.5 and VMware Horizon 7.

Gardner: Please describe the breadth of architectural, design, and planning work you do and the types of clients your organization supports.

Architect the future, securely 

Reynolds: We are wholly commercial. We don’t do any residential designs, or only very, very rarely. Our biggest customers are K-12 educational facilities. We also design buildings for religious institutions, colleges, and some healthcare clinics. 

Recently we have begun designing senior living facilities. That’s an area of growth that we have pursued. Our reason for opening the office in Davenport was to begin working with more school districts in that state. 

A long time ago, I worked as a computer-aided design (CAD) draftsman. The way the architecture industry has changed since then is amazing. Architects now work with clients from cradle to grave. With school districts, for example, they need help at the early funding level. We go in and help them with campaigns, to put projects on the ballot, and figure out ways to help them — from raising money all the way to long-term planning. There are several school districts for which we are the architect-of-record. We help them plan for the future. It’s amazing. It really surprises me.

Gardner: Now that we know what you do and your data center platforms, let’s learn more about your overall security posture. How do you approach security knowing that it’s not from one vendor, it’s not one product? You don’t just get security out of a box. You have to architect it. What’s your philosophy, and what do you have in place as a result?

Reynolds: I like to have a multilayered approach. I think you have to. It can’t just be antivirus, and it can’t just be firewall. You have to allow the users freedom to do what they need to do, but you also have to figure out where they are going to screw up — and try to catch that. 

And it’s always a moving target. I don’t pretend to know this perfectly at all. I use OpenDNS as a content filter. Since it’s at the DNS level, and OpenDNS is so good at whitelisting, we pick up on some of the content choices and that keeps our people from accidentally making mistakes. 

In addition, last year I moved us to Cisco Meraki Security Appliances, and their network-based malware protection. I have a site-to-site virtual private network (VPN) for our Davenport office. All of our connections are Fiber Ethernet. In Illinois, it’s all Comcast Metro E. I have another broadband provider for the Davenport office. 

And then, on top of all of that, I have Bitdefender GravityZone Enterprise Security for the endpoints that are not thin clients. And then, of course, for the VMware environment I also use GravityZone; that works perfectly with VMware NSX virtual networking on the back-end and the scanning engine that comes with that.

Gardner: Just to be clear Dan, you have a mix of clients; you have got some zero clients, fat clients, both Mac and Windows, is that right?

Diversity protects mixed clients

Reynolds: That’s correct. For some of the really high-end rendering, you need the video hardware. You just can’t do everything with virtualization, but you can knock out probably 90 to 95 percent of all that we do with it. 

And, of course, on those traditional PC machines I have to have conventional protection, and we also have laptops and Microsoft Surfaces. The marketing department has macOS machines. There are just times you can’t completely do everything with a virtual machine.

Gardner: Given such a diverse and distributed environment to protect, is it fair to say that being “paranoid about security” has paid off? 

Reynolds: I am confident, but I am not cocky. The minute you get cocky, you are setting yourself up. But I am definitely confident because I have multi-layers of protection. I build my confidence by making sure these layers overlap. It gives me a little bit of cushion so I am not constantly afraid.

And, of course, another factor many of us in the IT security world are embracing is better educating the end users. We try to make them aware; we share our paranoia with them to help them understand. That is really important.

On the flip side, I also use a product called StorageCraft and I encrypt all my backups. Like I said, I am not cocky. I am not going to put a target on my back and say, “Hit me.” 

Gardner: Designers, like architects, are often perfectionists. It’s essential for them to get apps, renderings, and larger 3D files the way they want them. They don’t want to compromise.

As an IT director, you need to make sure they have 100 percent availability — but you also have to make sure everything is secure. How have you been able to attain the combined requirements of performance and security? How did you manage to tackle both of them at the same time?

Reynolds: It was an evolving process. In my past life I had experience with VMware and I knew of virtual desktops, but I wasn’t really aware of how they would work under [performance] pressure. We did some preliminary testing using VMware ESXi on high-end workstations. At that point we weren’t even using VMware View. We were just using remote desktops. And it was amazing. It worked, and that pushed me to then look into VMware View.

Of course, when you embrace virtualization, you can’t go without security. You have to have antivirus (AV); you just have to. The way the world is now, you can’t live without protecting your users — and you can’t depend on them to protect themselves because they won’t do it.

The way that VMware had approached antivirus solutions — knowing that native agents and the old-fashioned types of antivirus solutions would impact performance — was to build it into the network. It completely insulated the user from any interaction with the antivirus software. I didn’t want anything running on the virtual desktop. It was completely invisible to them, and it worked.

Gardner: When you go to fully virtualized clients, you solve a lot of problems. You can centralize to better control your data and apps. That in itself is a big security benefit. Tell me your philosophy about security and why going virtualized was the right way to go.

Centralization controls chaos, corruption 

Reynolds: Well, you hit the nail on the head. By centralizing, I can have one image or only a few images. I know how the machines are built. I don’t have desktops out there that users customize and add all of their crap to. I can control the image. I can lock the image down. I can protect it with Bitdefender. If the image gets bad, it’s just an image. I throw it away and I replace it.

I tend to use full clones and non-persistent desktops simply for that reason. It’s so easy. If somebody begins having a problem with their machine or their Revit software gets corrupted or something else happens, I just throw away the old virtual machine (VM) and roll a new one in. It’s easy-peasy. It’s just done.

Gardner: And, of course, you have gained centralized data. You don’t have to worry about different versions out there. And if corruption happens, you don’t lose that latest version. So there’s a data persistence benefit as well.

Reynolds: Yes, very much so. That was the problem when I first arrived here. They had five different silos [one for each branch office location]. There were even different versions of the same project in different places. They were never able to bring all of the data into one place.

I saw that as the biggest challenge, and that drove me to virtualization in the first place. We were finally able to put all the data in one place and back it up in one place.

Gardner: How long have you been using Bitdefender GravityZone Enterprise Security, and why do you keep renewing? 

Reynolds: It’s been about nine years. I keep renewing because it works, and I like their support. Whenever I have a problem, or whenever I need to move — like from different versions of VMware or going to NSX and I change the actual VMware parts — the Bitdefender technology is just there, and the instructions are there, too. 

It’s all about relationships with me. I stick with people because of relationships — well, the performance as well, but that’s part of the relationship. I mean, if your friend kept letting you down, they wouldn’t be your friend anymore.

Gardner: Let’s talk about that performance. You have some really large 2-D and 3-D graphics files at work constantly. You’re using Autodesk Revit, as you mentioned, plus Bluebeam Revu, Microsoft Office, and Adobe, so quite a large portfolio.

These are some heavy-lifting apps. How does their performance hold up? How do you keep the virtualized delivery invisible across your physical and virtualized workstations? 

High performance keeps users happy 

Reynolds: Number one, I must keep the users happy. If the users aren’t happy and if they don’t think the performance is there, then you are not going to last long.

I have a good example, Dana. I told you I have Macs in the marketing department, and the reason they kept Macs is because they want their performance with the Adobe apps. Now, they use the Macs as thin clients and connect to a virtual desktop to do their work. It’s only when they are doing big video editing that they resume using their Macs natively. Most of the time, they are just using them as a thin client. For me, that’s a real vote of confidence that this environment works.

Gardner: Do you have a virtualization density target? How are you able to make this as efficient as possible, to get full centralized data center efficiency benefits?

Reynolds: I have some guidelines that I’ve come up with over the years. I try to limit my hosts to about 30 active VMs at a time. We are actually now at the point where I am going to have to add another node to the cluster. It’s going to be compute only, it won’t be involved in the storage part. I want to keep the ratio of CPUs and RAM about the same. But generally speaking, we have about 30 active virtual desktops per host.

Gardner: How does Bitdefender’s approach factor into that virtualization density?

Reynolds: The way that Bitdefender does it — and I really like this — is they license by the socket. So whether I have 10 or 100 on there, it’s always by the socket. And these are HPE DL380s, so they are two sockets, even though I have 40 cores.

I like the way they license their coverage. It gives me a lot of flexibility, and it helps me plan out my environment. Now, I’m looking at adding another host, so I will have to add a couple more sockets. But that still gives me a lot of growth room, because I could have 120 active desktops running and I’m not paying by the core, and I’m not paying by the individual virtual desktop. I am paying for Bitdefender by the socket, and I really like it that way.
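
As a quick worked example of that math, the sketch below uses the figures Dan mentions (two-socket DL380 hosts, roughly 30 active desktops per host, four hosts after the planned expansion) together with purely hypothetical license prices.

```python
# Hypothetical prices, purely illustrative; host figures follow the interview.
hosts = 4                 # three existing nodes plus the planned compute node
sockets_per_host = 2      # HPE DL380, two sockets each
desktops_per_host = 30    # Dan's density guideline

price_per_socket = 400.0   # assumed annual price
price_per_desktop = 30.0   # assumed annual price

desktops = hosts * desktops_per_host                     # 120 active desktops
by_socket = hosts * sockets_per_host * price_per_socket
by_desktop = desktops * price_per_desktop

print(f"{desktops} desktops: per-socket ${by_socket:,.0f} vs per-desktop ${by_desktop:,.0f}")
# Per-socket cost stays flat as desktop density grows on the same hosts.
```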

Gardner: You don’t have to be factoring the VMs along the way as they spin up and spin down. It can be a nightmare trying to keep track of them all.

Reynolds: Yes, I am glad I don’t have to do that. As long as I have the VMware agent installed and NSX on the VMware side, then it just shows up in GravityZone, and it’s protected.

Prevent, rather than react, to problems

Gardner: Dan, we have been focusing on performance from the end-user perspective. But let’s talk about how this impacts your administration, your team, and your IT organization. 

How has your security posture, centralization, and reliance on virtualization allowed your team to be the most productive?

Reynolds: I use GravityZone’s reporting features. I have it tell me weekly the posture of my physical machines and my virtual machines. I use the GravityZone interface. I look at it quite regularly, maybe two or three times a week. I just get in and look around and see what’s going on.

I like that it keeps itself up to date or lets me know it needs to be updated. I like the way that the virus definitions get updated automatically and pushed out automatically, and that’s across all environments. I really like that. That helps me, because it’s something that I don’t have to constantly do.

I would rather watch than do. I would rather have it tell me or e-mail me than I find out from my users that their machines aren’t working properly. I like everything about it. I like the way it works. It works with me.

Gardner: It sounds like Bitdefender had people like you, a jack of all trades, in mind when it was architected, and that wasn’t always the case with security. Previously, security products would play catch-up to the threats rather than anticipate the needs of those in the trenches fighting the security battle.

Reynolds: Yes, very much so. At other places I have worked and with other products, that was an absolute true statement, yes.

Gardner: Let’s look at some of the metrics of success. Tell us how you measure that. I know security is measured best when there are no problems.

But in terms of people, process, and technology, how do we evaluate the costs and man-hours of being proactive? How do we measure success when it comes to a good security posture for an organization like yours?

Security supports steady growth

Reynolds: I will be the first to admit I am a little weak in describing that. But I do have some metrics that work. For example, we didn’t need to replace our desktops often. We had been using our desktops for eight years, which is horrible in one sense, but in another sense, it says we didn’t have to. And then when those desktops were about as dead as dead could be, we replaced them with less expensive thin clients, which are almost disposable devices.

I envision a day when we’re using Raspberry Pi as our thin clients and we don’t spend any big money. That’s the way to sum it up. All my money is spent on maintenance for applications and platform software, and you are not going to get rid of that.

Another big payoff is around employee happiness. A little over two years ago, when we had to consolidate the offices, more people could work from home. It kept a lot of people who probably would have walked out. That happened because of the groundwork and foundation I had put in. Since that time, we have had two of the best years the company has ever had, even after that consolidation.

And so, for me personally, it felt like I had something to do with that, and I can take some pride in it.

Gardner: Dan, when I hear your story, the metrics of success that I think about are that you’re able to accommodate growth, you can scale up, and if you had to — heaven forbid — you could scale down. You’re also in a future-proofing position because you’ve gone software-defined, you have centralized and consolidated, you’ve gone highly virtualized across the board, and you can accommodate at-home users and bring-your-own-device (BYOD) practices.

Perhaps you have a merger and acquisition in the works, who knows? But you can accommodate that and that means business agility. These are some of the top business outcome metrics of success that I know companies large and small look for. So hats off to you on that.

Reynolds: Thank you very much. I hate to use the word “pride” but I’m proud of what I’ve been able to accomplish the last few years. All the work I have done in the prior years is paying off.

Gardner: One of my favorite sayings is, “Architecture is destiny.” If you do the blocking and tackling, and you think strategically — even while you are acting tactically — it will pay off in spades later.

Okay, let’s look to the future before we end. There are always new things coming out for modernizing data centers. On the hardware side, we’re hearing about hyper-converged infrastructure (HCI), for example. We’re also seeing use of automated IT ops and using artificial intelligence (AI) and machine learning (ML) to help optimize systems.

Where does your future direction lead, and how does your recent software and security posture work enable you to modernize when you want?

Future solutions, scaled to succeed 

Reynolds: Obviously, hyper-converged infrastructure is upon us and many have embraced it. I think the small- to medium-sized business (SMB) has been a little reluctant because the cost is very high for an SMB.

I think that cost of entry is going to come down. I think we are going to have a solution that offers all the benefits but is scaled down for a smaller firm. When that happens, everything I have done is going to transfer right over.

I have software-based storage. I have some software-based networking, but I would love to embrace that even more. That would be the icing on the cake and take some of the physical load off of me. The work that I have to do with switches and cabling and network adapters — if I could move that into the hyper-converged arena, I would love that.

Gardner: Also, more companies are looking to use cloud, multicloud, and hybrid cloud. Because you’re already highly virtualized, and because your security is optimized for that, whatever choices your company wants to make vis-à-vis cloud and Software-as-a-Service (SaaS), you’re able to support them.

Reynolds: Yes, we have a business application that manages our projects, does our timekeeping, and handles all the accounting. It is a SaaS app. And, gosh, I was glad when it went SaaS. That was just one thing I could get off of my plate — and I don’t mean that in a bad way. I wanted it to be handled even better by moving to SaaS, where you get economies of scale that you can’t provide as an individual IT shop.

Gardner: Any last words of advice for organizations — particularly those wanting to recognize all the architectural and economic benefits, but might be concerned about security and performance?

Reynolds: Research, research, research — and then more research. When I started, everybody said there was no way we could virtualize Revit and Autodesk. Of course, we did, and it worked fine. I ignored the naysayers; you have to be willing to experiment and take some chances sometimes. By researching, testing, and moving forward gently, you’ll find it’s a long road, but it’s worth it. It will pay off.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Bitdefender.

Qlik’s CTO on why the cloud data diaspora forces businesses to rethink their analytics strategies

The next BriefingsDirect business intelligence (BI) trends discussion explores the impact of dispersed data in a multicloud world. 

Gaining control over far-flung and disparate data has been a decades-old struggle, but now, as hybrid and public clouds join the mix of legacy and distributed digital architectures, new ways of thinking are demanded if comprehensive analysis of relevant data is to become practical.

Stay with us now as we examine the latest strategies for making the best use of data integration, data catalogs and indices, as well as highly portable data analytics platforms.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about closing the analysis gap between data and multiple — and probably changeable — cloud models, we are joined by Mike Potter, Chief Technology Officer (CTO) at Qlik. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Mike, businesses are adopting cloud computing for very good reasons. The growth over the past decade has been strong and accelerating. What have been some of the complicating factors, unintended or not, for gaining a comprehensive data analysis strategy amid this cloud computing complexity?

Potter: The biggest thing is recognizing that it’s all about where data lives and where it’s being created. Obviously, historically most data have been generated on-premises. So, there is a strong pull there, but you are seeing more and more cases now where data is born in the cloud and spends its whole lifetime in the cloud. 

And so now the use cases are different because you have a combination of those two worlds, on-premises and cloud. To add further complexity, data is now being born in different cloud providers. Not only are you dealing with having some data and legacy systems on-premises, but you may have to reconcile that you have data in Amazon, Google, or Microsoft.

Our whole strategy around multicloud and hybrid cloud architectures is being able to deploy Qlik where the data lives. It allows you to leave the data where it is, but gives you options so that if you need to move the data, we can support the use cases on-premises to cloud or across cloud providers. 

Gardner: And you haven’t just put on the patina of cloud-first or Software-as-a-Service (SaaS)-first. You have rearchitected and repositioned a lot of what your products and technologies do. Tell us about being “SaaS-first” as a strategy.

Scaling the clouds

Potter: We began our journey about 2.5 years ago, when we started converting our monolith architecture into a microservices-based architecture. That journey struck at the core of the whole product.

Qlik’s heritage was a Windows Server architecture. We had to rethink a lot of things. As part of that we made a big bet 1.5 years ago on containerization, using Docker and Kubernetes. And that’s really paid off for us. It has put us ahead of the technology curve in many respects. When we did our initial release of our multicloud product in June 2018, I had conversations with customers who didn’t know what Kubernetes was. 

One enterprise customer had an infrastructure team who had set up an environment to provision Kubernetes cluster environments, but we were only the second vendor that required one, so we were ahead of the game quite a bit. 

Gardner: How does using a managed container platform like Kubernetes help you in a multicloud world?

Potter: The single biggest thing is it allows you to scale and manage workloads at a much finer grain of detail through auto-scaling capabilities provided by orchestration environments such as Kubernetes. 

More importantly it allows you to manage your costs. One of the biggest advantages of a microservice-based architecture is that you can scale up and scale down to a much finer grain. For most on-premises, server-based, monolith architectures, customers have to buy infrastructure for peak levels of workload. We can scale up and scale down those workloads — basically on the fly — and give them a lot more control over their infrastructure budget. It allows them to meet the needs of their customers when they need it. 
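
A simple model shows why that matters for the infrastructure budget. The load curve and unit price below are invented for illustration; the point is only the difference between sizing for peak and scaling with demand.

```python
# Hypothetical daily load (replicas needed per two-hour slot) and unit price.
load = [2, 2, 2, 4, 8, 12, 14, 14, 12, 8, 4, 2]
price_per_replica_hour = 0.10  # assumed

peak = max(load)
# Monolith-style sizing: pay for peak capacity around the clock.
sized_for_peak = peak * len(load) * 2 * price_per_replica_hour
# Fine-grained auto-scaling: pay only for replicas actually running.
auto_scaled = sum(n * 2 * price_per_replica_hour for n in load)

print(f"Sized for peak: ${sized_for_peak:.2f}/day")
print(f"Auto-scaled:    ${auto_scaled:.2f}/day")
```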

Gardner: Another aspect of the cloud evolution over the past decade is that no one enterprise is like any other. They have usually adopted cloud in different ways. 

Has Qlik’s multicloud analytics approach come with the advantage of being able to deal with any of those different topologies, enterprise by enterprise, to help them each uniquely attain more of a total data strategy?

Potter: Yes, I think so. Rather than dictate the cloud strategy — often the choice of our competitors — we want to support your cloud strategy as you need it. We recognize that a customer may not want to be on just one cloud provider. They don’t want to lock themselves in. And so we need to accommodate that.

There may be very valid reasons why they are regionalized, from a data sovereignty perspective, and we want to accommodate that. There will always be on-premises requirements, and we want to accommodate that.

The reality is that, for quite a while, you are not going to see as much convergence around cloud providers as you are going to see around microservices architectures, containers, and the way they are managed and orchestrated.

Gardner: And there is another variable in the mix over the next years — and that’s the edge. We have an uncharted, immature environment at the edge. But already we are hearing that a private cloud at the edge is entirely feasible. Perhaps containers will be working there.

At Qlik, how are you anticipating edge computing, and how will that jibe with the multicloud approach?

Running at the edge

Potter: One of the key features of our platform architecture is not only can we run on-premises or in any cloud at scale, we can run on an edge device. We can take our core analytics engine and deploy it on a device or machine running at the edge. This enables a new opportunity, which is taking analytics itself to the edge.

A lot of Internet of Things (IoT) implementations are geared toward collecting data at the sensor, transferring it to a central location to be processed, and then analyzing it all there. What we want to do is push the analytics problem out to the edge so that the analytic data feeds can be processed at the edge. Then only the analytics events are transmitted back for central processing, which obviously has a huge impact from a data-scale perspective.

But more importantly, it creates a new opportunity to have the analytic context be very immediate in the field, where the point of occurrence is. So if you are sitting there on a sensor and you are doing analytics on the sensor, not only can you benefit at the sensor, you can send the analytics data back to the central point, where it can be analyzed as well.
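
The sketch below illustrates that reduction. It is not Qlik’s engine, just a toy: a window of raw sensor readings is processed locally and only a compact analytic event, not the raw stream, is sent back for central processing. The threshold and window size are assumptions.

```python
# Toy edge analytics: reduce raw sensor windows to occasional analytic events.
from statistics import mean

THRESHOLD = 80.0   # assumed alert threshold
WINDOW = 60        # readings per processing window

def edge_process(readings):
    """Yield at most one compact event per window of raw readings."""
    for i in range(0, len(readings), WINDOW):
        window = readings[i:i + WINDOW]
        if max(window) > THRESHOLD:
            # Only this event, not the raw data, crosses the network.
            yield {"event": "threshold_exceeded",
                   "max": max(window), "mean": round(mean(window), 2)}

raw = [70.0 + (i % 100) * 0.15 for i in range(600)]  # simulated sensor feed
events = list(edge_process(raw))
print(f"{len(raw)} raw readings -> {len(events)} events transmitted")
```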

Gardner: It’s auspicious the way that Qlik’s approach of cataloging, indexing, and abstracting out the information about where data lives can now be used really well in an edge environment.

Potter: Most definitely. Our entire data strategy is intricately linked with our architectural strategy in that respect, yes.

Gardner: Analytics and being data-driven across an organization is the way of the future. It makes sense to not cede that core competency of being good at analytics to a cloud provider or to a vendor. The people, process, and tribal knowledge about analytics seems essential.

Do you agree with that, and how does Qlik’s strategy align with keeping the core competency of analytics of, by, and for each and every enterprise?

Potter: Analytics is a specialization organizationally within all of our customers, and that’s not going to go away. What we want to do is parlay that into a broader discussion. So our focus is enabling three key strategies now.

It’s about enabling the analytics strategy, as we always have, but broadening the conversation to enabling the data strategy. More importantly, we want to close the organizational, technological, and priority gaps to foster creating an integrated data and analytics strategy.

By doing that, we can create what I describe as a raw-to-ready analytics platform, based on trust, because we own the process of the data from source to analysis. That not only makes the analytics better, it promotes the third part of our strategy, which is around data literacy. That’s about creating a trusted environment in which people can interact with their data and do the analysis they want to do without having to be data scientists or data experts.

So owning that whole end-to-end architecture is what we are striving to reach.

Gardner: As we have seen in other technology maturation trend curves, applying automation to the problem frees up the larger democratization process. More people can consume these services. How does automation work in the next few years when it comes to analytics? Are we going to start to see more artificial intelligence (AI) applied to the problem?

Automated, intelligent analytics

Potter: Automating those environments is an inevitability, not only from the standpoint of how the data is collected, but in how the data is pushed through a data operations process. More importantly, automation enables the consumption end, too, by embedding AI and machine learning (ML) techniques all the way along that value chain — from the point of source to the point of consumption.

Gardner: How does AI play a role in the automation and the capability to leverage data across the entire organization?

Potter: How we perform analytics within an analytic system is going to evolve. It’s going to be more conversational in nature, and less about just consuming a dashboard and looking for an insight into a visualization.

The analytics system itself will be an active member of that process, where the conversation is not only with the analytics system but the analytics system itself can initiate the conversation by identifying insights based on context and on other feeds. Those can come from the collective intelligence of the people you work with, or even from people not involved in the process.

Gardner: I have been at some events where robotic process automation (RPA) has been a key topic. It seems to me that there is this welling opportunity to use AI with RPA, but it’s a separate track from what’s going on with BI, analytics, and the traditional data warehouse approach.

Do you see an opportunity for what’s going on with AI and use of RPA? Can what Qlik is doing with the analytics and data assimilation problem come together with RPA? Would a process be able to leverage analytic information, and vice versa?

Potter: It gets back to the idea of pushing analytics to the edge, because an edge isn’t just a device-level integration. It can be the edge of a process. It can be the edge of not only a human process, but an automated business process. The notion of being able to embed analytics deep into those processes is already being done. Process analytics is an important field.

But the newer idea is that analytics is in service of the process, as opposed to the other way around. The world is getting away from analytics being a separate activity, done by a separate group, and as a separate act. It is as commonplace as getting a text message, right?

Gardner: For the organization to get to that nirvana of total analytics as a common strategy, this needs to be part of what the IT organization is doing with full-stack architecture and evolution. So AIOps and DataOps are also getting closer over time.

How does DataOps in your thinking relate to what the larger IT enterprise architects are doing, and why should they be thinking about data more?

Optimizing data pipelines

Potter: That’s a really good question. From my perspective, when I get a chance to talk to data teams, I ask a simple question: “You have this data lake. Is it meeting the analytic requirements of your organization?”

And often I don’t get very good answers. And a big reason why is because what motivates and prioritizes the data team is the storage and management of data, not necessarily the analytics. And often those priorities conflict with the priorities of the analytics team. 

What we are trying to do with the Qlik integrated data and analytic strategy is to create data pipelines optimized for analytics, and data operations optimized for analytics. And our investments and our acquisitions in Attunity and Podium are about taking that process and focusing on the raw-to-ready part of the data operations.
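To make that raw-to-ready idea concrete, here is a minimal sketch of such a pipeline in Python. It is illustrative only: the stage names, the Dataset class, and the lineage log are invented for this example and do not reflect actual Qlik, Attunity, or Podium APIs. The point is simply that every step from source to analytics-ready output is owned and recorded, which is what makes the result trustworthy.

# A minimal, hypothetical sketch of a "raw-to-ready" data pipeline. It
# illustrates owning data from source to analysis while recording
# lineage, so analysts can trust what they consume.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Dataset:
    name: str
    rows: list
    lineage: list = field(default_factory=list)  # trust comes from traceability

    def log(self, step):
        self.lineage.append(f"{datetime.now(timezone.utc).isoformat()} {step}")

def ingest(source_rows):
    ds = Dataset(name="sales_raw", rows=source_rows)
    ds.log("ingested from source system")
    return ds

def standardize(ds):
    # Normalize field names and types so downstream analytics are consistent.
    ds.rows = [{"region": r["Region"].strip().title(),
                "amount": float(r["Amount"])} for r in ds.rows]
    ds.log("standardized field names and types")
    return ds

def validate(ds):
    # Drop records that would mislead analysis, and record that we did so.
    before = len(ds.rows)
    ds.rows = [r for r in ds.rows if r["amount"] >= 0]
    ds.log(f"validated: dropped {before - len(ds.rows)} bad records")
    return ds

def publish(ds):
    # A real platform would register the dataset in a governed catalog;
    # here we just mark it analytics-ready.
    ds.name = "sales_ready"
    ds.log("published to analytics catalog")
    return ds

raw = [{"Region": " emea ", "Amount": "125.50"},
       {"Region": "Americas", "Amount": "-1"}]
ready = publish(validate(standardize(ingest(raw))))
print(ready.rows)     # analytics-ready records
print(ready.lineage)  # end-to-end trace from source to analysis

Because each stage appends to the lineage record, anyone consuming the published dataset can trace exactly what happened to the data on its way from raw source to catalog.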

Gardner: Mike, we have been talking at a fairly abstract level, but can you share any use cases where leading-edge organizations recognize the intrinsic relationship between DataOps and enterprise architecture? Can you describe some examples or use cases where they get it, and what it gets for them?

Potter: One of our very large enterprise customers deals in medical devices and related products and services. They realized an essential need to have an integrated strategy. And one of the challenges they have, like most organizations, is how to not only overcome the technology part but also the organizational, cultural, and change-management aspects as well.

They recognized the business has a need for data, and IT has data. If you intersect that, how much of that data is actually a good fit? How much data does IT have that isn’t needed? How much of the remaining need is unfulfilled by IT? That’s the problem we need to close in on.

Gardner: Businesses need to be thinking at the C-suite level about outcomes. Are there some examples where you can tie together such strategic business outcomes back to the total data approach, to using enterprise architecture and DataOps?

Data decision-making, democratized

Potter: The biggest ones center on end-to-end governance of data for analytics, the ability to understand where the data comes from, and building trust in the data inside the organization so that decisions can be made, and those decisions have traceability back to results.

The other aspect of building such an integrated system is a total cost of ownership (TCO) opportunity, because you are no longer expending energy managing data that isn’t relevant to adding value to the organization. You can make a lot more intelligent choices about how you use data and how you actually measure the impact that the data can have.

Gardner: On the topic of data literacy, how do you see the behavior of an organization — the culture of an organization — shifting? How do we get the chicken-and-egg relationship going between the data services that provide analytics and the consumers to start a virtuous positive adoption pattern?


Potter: One of the biggest puzzles a lot of IT organizations face is around adoption and utilization. They build a data lake and they don’t know why people aren’t using it. 

For me, there are a couple of elements to the problem. One is what I call data elitism. When you think about data literacy and you compare it to literacy in the pre-industrial age, the people who had the books were the people who were rich and had power. So church and state, that kind of thing. It wasn’t until technology created, through the printing press, a democratization of literacy that you started to see interesting behavior. Those with the books, those with the power, tried to subvert reading in the general population. They made it illegal. Some argue that the French Revolution was, in part, caused by rising rates of literacy.

If you flash-forward this analogy to today in data literacy, you have the same notion of elitism. Data is only allowed to be accessed by the senior levels of the organization. It can only be controlled by IT.

Ironically, the most data-enabled people in an organization are typically Millennials or younger users. But they are in the wrong part of the organizational chart to actually take advantage of that. They are not allowed to see the data they could use to do their jobs.

The opportunity from a democratization-of-data perspective is understanding the value of data for every individual and allowing that data to be made available in a trusted environment. That’s where this end-to-end process becomes so important.

Gardner: How do we make the economics of analytics an accelerant to that adoption and the democratization of data? I’ll use another historical analogy: the Model T and the assembly line. Ford didn’t sell Model Ts nearly to the degree expected until the company paid its own workers enough to afford one.

Is there a way of looking at that and saying, “Okay, we need to create an economic environment where analytics is paid for on-demand, it’s fit-for-purpose, and it’s consumption-oriented”? Wouldn’t that market effect help accelerate the adoption of analytics as a total enterprise cultural activity?

Think positive data culture

Potter: That’s a really interesting thought. The consumerization of analytics is a product of accessibility and of cost. When you build a positive data culture in an organization, data needs to be as readily accessible as email. From that perspective, turning it into a cost model might be a way to accomplish it. It’s about a combination of leadership and of making it occur at the grassroots level, where the value it presents is clear.

And, again, I reemphasize this idea of needing a positive data culture.

Gardner: Any added practical advice for organizations? We have been looking at what will be happening and what to anticipate. But what should an enterprise do now to be in an advantageous position to execute a “positive data culture”?

Potter: The simplest advice is to know that technology is not the biggest hurdle; it’s change management, culture, and leadership. When you think about the data strategy integrated with the analytics strategy, that means looking at how you are organized and prioritized around that combined strategy.

Finally, when it comes to a data literacy strategy, define how you are going to enable your organization to see data as a positive asset to doing their jobs. The leadership should understand that data translates into value and results. It’s a tool, not a weapon.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Qlik.


Happy employees equal happy customers — and fans. Can IT deliver for them all?

The next BriefingsDirect workplace productivity discussion explores how businesses are using the latest digital technologies to re-imagine the employee experience — and to transform their operations and results.

Employee experience isn’t just a buzz term. Research shows that engaged employees are happier, more productive, and deliver a superior customer experience, all of which translates into bottom line results.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To learn how, our panel will now explore how IT helps deliver a compelling experience that enables employees to work when, where, and how they want — and to perform at their best. Joining us are Adam Jones, Chief Revenue Officer, who oversees IT for the Miami Marlins Major League Baseball team and organization, and Tim Minahan, Executive Vice President of Strategy and Chief Marketing Officer at Citrix. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tim, when it comes to employee experience, Citrix has been at the forefront of the conversation and of the technology shaping it. In fact, I remember covering one of the first press conferences that Citrix had, and this is going back about 30 years, and the solutions were there for people to work remotely. It seemed crazy at the time, delivering apps over the wire, over the Internet.

But you are still innovating. You’re at it again. About a year ago, you laid out an aggressive plan to help companies power their way to even better ways to work. So, it begs the question: Tim, what’s wrong with the way people are working today and the way that employees are experiencing work today?

From daily grind to digital growth 

Minahan: That topic is top of mind both for C-level and board members around the globe. We are entering an era of a new talent crisis. What’s driving it is, number one, there are just too few workers. Demographically, McKinsey estimates that in the next few years we will be short by 95 million medium- to high-skilled workers around the globe.

Minahan

And that’s being exacerbated by our traditional work models, which tend to organize around physical hubs. I build an office building, call center, or manufacturing facility, and I do my best to hire the best talent around that hub. But the talent isn’t always located there.

The second thing is, as more companies become digital businesses — trying to develop new digital business lines, engage customers through new digital business channels, develop new digital business revenue streams — oftentimes they lack the right skills. They lack skills to help drive to this next level of transformation. If companies are fortunate enough to identify employees with those skills, there is a huge likelihood that they will be disengaged at work. 

In fact, the latest Gallup study finds that globally 85 percent of workers are disengaged at work. A key component of that frustration has to do with their work environment.

We spend a lot of time talking about vision alignment and career development — and all of that is important. But a key gap that many companies are overlooking is that they have a frustrating work environment. They are not giving their employees the tools or resources they need to do their jobs effectively.

In fact, all the choice we have around our applications and our devices has actually begun to create noise that distracts us from doing our core jobs in the best way possible.

Gardner: Is this a case of people being distracted by the interfaces? Is there too much information and overload? Are we not adding enough intelligence to provide a contextual approach? All of the above? 

Minahan: It is certainly “all of the above,” Dana. First off, there are just too many applications. The typical enterprise IT manager is responsible for more than 500 applications. At the individual employee level, a typical worker uses more than a dozen applications through the course of their day, and oftentimes needs to traverse four or five different applications just to do a single business process. That could be something as simple as finding the change in a deal status, or even executing one particular transaction.


And that would be bad enough, except consider that oftentimes we are distracted by apps that aren’t even core to our jobs. Last time I checked, Dana, neither you nor I, nor Adam were hired to spend our day approving expense reports in something like SAP Concur, which is a great application. But it’s not core to my job. Or, we are approving performance reviews in Workday, or a purchase request in SAP Ariba. Certainly, these distract from our day. In doing so, we need to constantly navigate new application interfaces. We need to learn new applications that aren’t even core to our jobs.

To your point around disruption and context switching, today — because we have all of these different channels, and not just e-mail, but Slack and Microsoft Teams and all of these applications – just finding information consumes a large part of our day. We can’t remember where we stored something, or we can’t remember the change in that deal status. So we have to spend about 20 percent of our day switching between all of these different contexts, just to get the information or insight we need to do our jobs.

Gardner: Clearly too much of a good thing. And to a large degree, IT has brought about that good thing. Has IT created this problem?

Minahan: In part. But I think employees share a bit of responsibility themselves. As an employee, I know I’m always pushing IT by saying, “Hey, absolutely, this is the one tool we need to do a more effective job at marketing, strategy, or what-have-you.”

We keep adding to the top of what we already have. And IT is in a tough position of either saying, “No,” or finding a way to layer on more and more choices. And that has the unintended side effect of what we have just mentioned — which is the complexity that frustrates today’s employee experience.

Workspace unity and security 

Gardner: Now, the IT people have faced complexity issues before, and many times they have come up with solutions to mitigate the complexity. But we also have to remember that you can’t just give employees absolute freedom. There have to be guardrails, and rules, compliance, and regulatory issues must be addressed. 

So, security and digital freedom need to be in balance. How do we get to the point, Tim, where we can create that balance, and give freedom — but not so much that they are at risk?

Minahan: You’re absolutely right. At Citrix, we firmly believe this problem needs to be solved. We are making the investments, working with our customers and our partners, to go out and solve it. We believe the right way to solve it is through a digital workspace that unifies everything your employees need to be productive in one, unified experience that wrappers those applications and content, and makes them available across any device or platform, no matter where you are.


If you are in the office, on the corporate network using your laptop, perfect. You also need to have access to the same content and applications to do your job on the train ride home, on your smartphone, and maybe while visiting a friend. You need to be able to log on through a web interface. You want your work to travel with you, so you can work anytime, anywhere. 

But such a workspace that’s just unified — but not secure — doesn’t fully address the needs of the enterprise. The second attribute of what’s required for a true digital workspace is that it needs to be secure. When you have those applications and content within the workspace, we believe the workspace should wrapper that in a layer of contextual security policies that know who you are, what you typically have access to, and how you typically access it. The security must know if you do your work through one device or another, and then apply the right policies when there are anomalies outside of that realm. 

For example, maybe you are logging in from a different place. If so, we are going to turn off certain capabilities within your applications, such as the capability to download, print, or screen-capture. Maybe we need to require a second layer of authentication, if you are logging on from a new device. 

And so, this approach brings together the idea of employee experience and balances it with the security that the enterprise needs. 
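To make the contextual-security idea more tangible, here is a rough sketch of what such policy logic might look like. This is a hypothetical illustration, not Citrix code: the AccessContext fields, the baseline store, and the capability flags are all invented, and a real workspace would learn the baseline rather than hard-code it.

# Hypothetical sketch of contextual workspace security; not Citrix code.
# Compare each access request to the user's normal pattern and adjust
# capabilities (download, print, screen capture) or demand a second
# authentication factor when something looks anomalous.

from dataclasses import dataclass

@dataclass
class AccessContext:
    user: str
    device_id: str
    country: str
    on_corporate_network: bool

# Baseline of "normal" access per user; a real system would learn this.
BASELINE = {"dana": {"devices": {"laptop-001"}, "countries": {"US"}}}

def policy_decision(ctx):
    normal = BASELINE.get(ctx.user, {"devices": set(), "countries": set()})
    new_device = ctx.device_id not in normal["devices"]
    new_location = ctx.country not in normal["countries"]

    decision = {"allow": True, "download": True, "print": True,
                "screen_capture": True, "require_second_factor": False}

    if new_device:
        # Unfamiliar device: ask for a second factor before granting access.
        decision["require_second_factor"] = True
    if new_location or not ctx.on_corporate_network:
        # Off-network or unusual location: keep access, limit leakage paths.
        decision["download"] = False
        decision["print"] = False
        decision["screen_capture"] = False
    return decision

# A log-on from an unknown phone in a new country keeps the session alive
# but steps up authentication and turns off download, print, and capture.
print(policy_decision(AccessContext("dana", "phone-77", "FR", False)))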

Gardner: We are also seeing more intelligence brought into this process. We are seeing more integration end-to-end, and we are anticipating the best worker experience. But companies, of course, are looking for productivity improvements to help their bottom line and their top line.


Is there a way to help businesses understand the economic benefits of the best digital workspace? How do we prove that this is the right way to go?

Minahan: Dana, you hit the nail on the head. I mentioned there are three attributes required for an effective digital workspace. We talked about the first two: unifying everything an employee needs to be productive into one experience, and securing that experience so that applications and content are more secure in the workspace than when accessed natively. That organizes your workday, and it’s a phenomenal head start.

Work smart, with intelligence 

But, to your point, we can do better by building on that foundation and injecting intelligence into the workspace. You can then begin to help employees work better. You can help employees remove that noise from their day by using things such as machine learning (ML), artificial intelligence (AI), simplified workflows, and what we call micro apps to guide an employee through their workday. The workspace is not just someplace they go to launch an application; it is someplace they go to get work done.

We have begun providing capabilities that literally reach into your enterprise applications and extract out the key insights and tasks that are personal to each employee. So when you log into the workspace, Dana, it would say, “Hey, Dana, it’s time for you to approve that expense report.”

You don’t need to log in to the app again. You just quickly review it. If you want, you can click “approve” and move on, saving yourself minutes. Multiply that throughout the course of the day and we estimate you can give an employee back 10 to 20 percent of their workweek. That’s up to an added day each week of improved productivity.

But it’s not just about streamlined tasks. It’s also about improved insights, of making sure that you understand the information you need. Maybe it’s that change in a deal status, presented to you so you don’t need to log in to Salesforce and check a dashboard. It’s presented because the workspace knows it’s of interest to you.

To your point, this could dramatically improve the overall productivity for an employee, improve their overall experience at work, and by extension allow them to serve their customers in a much better way. They have the resources, tools, and the information at their fingertips to deliver a superior customer experience. 
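As a rough sketch of how a micro app like the expense-approval example above might work, consider the following. Everything here is hypothetical: the feed format, field names, and build_action_cards helper are invented for illustration, and are not the actual Citrix Workspace or Concur APIs.

# Hypothetical micro-app sketch; not an actual Citrix Workspace API.
# A micro app reaches into a system of record, extracts just the task
# that matters to this user, and exposes a one-click action card.

import json

# Pretend response from an expense system's API; the fields are invented.
PENDING_FEED = json.loads("""
[{"id": "EXP-1041", "approver": "dana", "submitter": "alex",
  "amount": 212.40, "currency": "USD", "status": "pending"}]
""")

def build_action_cards(user):
    """Turn raw system-of-record records into workspace action cards."""
    cards = []
    for item in PENDING_FEED:
        if item["approver"] == user and item["status"] == "pending":
            cards.append({
                "title": f"Approve expense {item['id']} from {item['submitter']}",
                "summary": f"{item['amount']:.2f} {item['currency']}",
                "actions": ["approve", "reject", "open in app"],
            })
    return cards

def act(card, action):
    # A real integration would call back into the source system here.
    return f"'{action}' sent for: {card['title']}"

# The user approves from the card without ever opening the expense app.
for card in build_action_cards("dana"):
    print(card["title"], "->", act(card, "approve"))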


Gardner: We are entering an age, Tim, where we let the machines do what they do best and know the difference, so that then allows people to do what they can do best, creatively, and most productively. It’s an exciting time. 

Let’s look at a compelling use case. The Miami Marlins have a very sophisticated approach to user experience. And they are not just looking at their employees, they are looking at the end users: their fan base, across multiple forms of entertainment and ways of intercepting the baseball experience.

Baseball, in a sense, was hibernating over the winter, and now the new season has played out well in 2019. And your fans in Miami are getting treated to a world-class experience. 

Tell me, Adam, what went on behind-the-scenes that allows you in IT to make this happen? What is the secret sauce for providing such great experiences?

Marlins’ Major League IT advantage 

Jones: The Marlins are a 25-year-old franchise. We find ourselves in build mode coming into the mid-2019 season, following a change in ownership and leadership. We have elevated the standards and vision for the organization.

We are becoming a world-class sports and entertainment enterprise, and so are building a next-generation IT infrastructure to enable the 300-plus employees who operate across our lines of business and the various assets of the organization. We are very pleased to have our longtime partner, Citrix, deploy their digital workspace solutions to enable our employees to deliver against the higher standards that we have set. 

Gardner: Is it difficult to create a common technological approach for different types of user experience requirements? You have fans, scouts, and employees. There are a lot of different endpoints. How does a common technological approach work under those circumstances?

Jones: The diversity within our enterprise necessitates having tools and solutions that have a lot of agility and can be flexible across the various requirements of an organization such as ours. We are operating a very robust baseball operation — as well as a sophisticated business. We are looking to scale and engage a very diverse audience. We need to have the resources available to invest and develop talent on the baseball side. So, what we have within the Citrix environment is the capability to enable that very diverse set of activities within one environment.

Gardner: And we have become used to, in our consumer lives, having a sort of seamless segue between different things that we are doing. Are you approaching that same seamless integration when it comes to how people encounter your content across multiple channels? Is there a way for you to present yourselves in such a way that the technology takes over and allows people to feel like they are experiencing the same Miami Marlins experience regardless of how they actually intercept your organization and your sport?


Jones: Like many of our peers, we are looking to establish more robust, rounded relationships with our fans and community. And that means going beyond our home schedule to more of a 365-day relationship, with a number of touch points and a variety of content.

The mobility of our workforce to get out into the community — but not lose productivity — is incredibly important as we evolve into a more sophisticated and complex set of activities and requirements.

Gardner: Controlling your content, making sure you can make choices about who gets to see what, to protect your franchise, is important. Are you reaching a balance between offering a full experience of interesting content and technology, but at the same time protecting and securing your assets and your franchise?

Safe! at digital content distribution 

Jones: Security is our highest priority, particularly as we continue to develop more content and more intellectual property. What we have within the Citrix environment are very robust controls, with the capability to facilitate fairly broad collaboration among our workforce. So again, we are able to disseminate that content in near real-time so that we are creating impactful and timely moments with our fans.

Gardner: Tim, this sounds like a world-class use case for advanced technology. We have scale, security, omni-channel distribution, and a dynamic group of people who want to interact as much as they can. Why is the Miami Marlins such a powerful and interesting use-case from your perspective?

Minahan: The Marlins are a fantastic example of a world champion organization now moving into the digital edge. They are rethinking the fan experience, not just at the stadium but in how they engage across their digital properties and in the community. Adam and the other leadership there are looking across the board to figure out how the sport of baseball and fan experience evolve. They are exploring the linkage between the fan experience, or customer experience, and the employee experience, and they are learning about the role that technology plays in connecting the two.


They are a great example of a customer at the forefront of looking at these new digital channels and how they impact customer relationships — and how they impact values for employees as well.

Gardner: Tim, we have heard over the past decade about how data and information are so integral to making a baseball team successful. It’s a data-driven enterprise as much as any. How will the intelligence you are baking into more of the Citrix products help make the Miami Marlins baseball team a more intelligent organization? What are the tools behind the intelligent baseball future?

Minahan: A lot of the same intelligence capabilities we are incorporating into the workspace for our customers — around ML, AI, and micro apps — will ensure that the Marlins organization — everyone from the front office to the field manager — has the right insights and tasks presented to them at the right time. As a result, they can deliver the best experience, whether that is recruiting the best talent for the team or delivering the best experience for the fans. 

We are going to learn a lot, as we always have from our customers, from the Miami Marlins about how we can continue to adapt that to help them deliver that superior employee experience and, hence, the superior fan experience.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Citrix.


How enterprises like McKesson digitize procurement and automate spend management to slash waste

The next BriefingsDirect intelligent enterprise innovations discussion explores new ways that leading enterprises like McKesson Corp. are digitizing procurement and automating spend management.

We’ll now examine how new intelligence technologies and automation methods like robotic process automation (RPA) help global companies reduce inefficiencies, make employees happier, cut manual tasks, and streamline the entire source-to-pay process.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To learn more about the role and impact of automation in business-to-business (B2B) finance, please welcome Michael Tokarz, Senior Director of Source to Pay Processes and Systems at McKesson, in Alpharetta, Georgia. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: There’s never been a better time to bring efficiency and intelligence to end-to-end, source-to-pay processes. What is it about the latest technologies and processes that provides a step-change improvement?

Tokarz: Our internal customers are asking us to move faster and engage deeper in our supplier conversations. By procuring intelligently, we are able to shift where resources are allocated so that we can better support our internal customers.

Gardner: Is there a sense of urgency here? If you don’t do this, and others do, is there a competitive disadvantage? 

Tokarz: There’s a strategic advantage to first-movers. It allows you to set the standard within an industry and provide greater feedback and value to your internal customers.

Gardner: There are some major trends driving this. As far as new automation and the use of artificial intelligence (AI), why are they so important?

The AI advantage 

Tokarz

Tokarz: AI is important for a couple of reasons. Number one, we want to process transactions as cost-effectively as we possibly can. Leveraging a “bot” to do that, versus a human, is strategically advantageous to us. It allows us to write protocols that process automatically without any human touch, which, in turn, is extremely valuable to the organization.

AI also allows workers to change their value quotient within the organization. They can go from doing manual processes to working at a much higher level for the organization. They now work on things that are change-driven and that bring much more value, which is really important to the organization.

Gardner: What do you mean by bots? Is that the same as robotic process automation (RPA), or they overlapping? What’s the relationship?

Tokarz: I consider them the same technology, RPA and bots. It’s essentially a computer algorithm that’s written to help process transactions that meet a certain set of circumstances.
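To illustrate the kind of rule such a bot encodes, here is a minimal sketch of rule-based transaction routing. It is a generic example under assumed rules (the two-percent tolerance, the field names, and the route_invoice helper are all invented), not McKesson’s actual protocols or an SAP Ariba API.

# Illustrative sketch of an RPA-style rule; invented, generic logic.
# Transactions that meet defined circumstances are processed touchlessly;
# everything else is routed to a human queue.

def route_invoice(invoice, purchase_order):
    """Auto-approve an invoice when it matches its PO within tolerance."""
    same_supplier = invoice["supplier"] == purchase_order["supplier"]
    within_tolerance = (abs(invoice["amount"] - purchase_order["amount"])
                        <= 0.02 * purchase_order["amount"])  # assumed 2% band

    if same_supplier and within_tolerance:
        return "auto-approved"        # touchless path
    return "routed to human review"   # exception path

po = {"supplier": "Acme Supply", "amount": 1000.00}
print(route_invoice({"supplier": "Acme Supply", "amount": 1010.00}, po))
# prints: auto-approved
print(route_invoice({"supplier": "Acme Supply", "amount": 1100.00}, po))
# prints: routed to human review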

Gardner: E-sourcing technology is also a big trend and an enabler these days. Why is it important to you, particularly across your supplier base?

Tokarz: E-sourcing helps us drive conversations internally in the organization. It forces the businesses to pause. Everyone’s always in a hurry, and when they’re in a hurry they want to get something published for the organization and out on the street. Having the e-sourcing tool forces people to think about what they really need from the marketplace and to structure it in a format so that they can actually go faster.

With e-sourcing, while you have to do a little bit of work on the front end, you enable the speed of the transaction on the back end, because you have everything from all of the suppliers aligned in one central place, so that you can easily compare and make solid business decisions.

Gardner: Another important thing for large organizations like McKesson is the ability to extend and scale globally. Rather than region-by-region there is standardization. Why is that important?

Tokarz: First and foremost, getting to one technology across the board allows us to have a global standard. And what does a global standard mean? It doesn’t mean that we’re going to do the same things the same way in every country. But it gives us a common platform to build our processes on.

It gives us a way to unify our organization so that we can have more informed conversations within the organization. It becomes really important when you begin managing global relationships with large suppliers.

Gardner: Tell us about McKesson and your role within vendor operations and management.

Tokarz: McKesson is a global provider of healthcare solutions — from pharmaceuticals to medical supplies to services. We’re mainly in the United States, Canada, and Europe.

I’m responsible for indirect sourcing here in the United States, but I also oversee the global implementations of solutions in Ireland, Europe, and Canada in the near future. Currently in the United States, we process about $1.6 billion in direct transactions. That’s more than 60,000 transactions on our SAP Ariba system. We also leverage other vendor management solutions to help us process our services transactions.

Gardner: A lot of people like you are interested in becoming touchless – of leveraging automation, streamlining processes, and using data to apply analytics and create virtuous adoption cycles. How might others benefit from your example of using bots and why that works well for you?

Bots increase business 

Tokarz: The first thing we did was leverage SAP Ariba Guided Buying. We then reformatted our internal website to put Guided Buying front and center for all of our end users. We actually target it at novice users because Guided Buying works much like a tablet interface. It gives you smart icons that you can tap to begin making decisions for your organization. It now drives purchasing behavior.

The next thing we did is push as much buying through catalogs and indirect spend that we possibly could. We’ve implemented enough catalogs in the United States that we now have 80 percent of our transactions fully automated through catalogs. It provides people really nice visual cues and point-and-click accessibility. Some of my end users tell me they can find what they need within three minutes, and then they can go about their day, which is really powerful. Instead of focusing on buying or purchasing, it allows them to do their jobs, their specialty, which brings more value to the organization.


The last thing we’ve done is use RPA and bot technology to take the entire organization to the next level. We’re always striving to get to 90 percent touchless transactions. If we are at 80 percent, that means an additional 50 percent reduction in the touch transactions that we’re currently processing (going from 20 percent of transactions touched down to 10 percent), which is very significant.

That has allowed me to refocus some of my efforts with my business process outsourcing (BPO) providers where they’re not having to touch the transactions. I can have them instead focus on acquisitions, integrations, and doing different work that might have been at a cost increase. This all saves me money from an operations standpoint.

Gardner: And we all know how important user experience is — and also adoption. Sometimes you can bring a horse to water and they don’t necessarily drink.

So it seems to me that there is a double-benefit here. If you have a good interface like Guided Buying, using that as a front end, that can improve user satisfaction and therefore adoption. But by also using bots and automation, you are taking away the rote, manual processes and thereby making life more exciting. Tell us about any cultural and human capital management benefits.

Smarts, speed, and singular focus 

Tokarz: It allows my procurement team to focus differently. Before, they were focused on the transactions in the queue and how fast to get them processed, all to keep the internal customers happy. Now I have a bot that processes the queue three times a day, so we don’t have to worry about those transactions anymore. The team only watches the bot to make sure it isn’t kicking out any errors.

From an acquisition integration standpoint, when I need to add suppliers to the network I don’t have to go to my management team with a change request and ask for more money. I can operate within the original budget with my BPO providers. If there are another 300 suppliers that I need added to the network, for example, I can process them more effectively and efficiently.

Gardner: What have been some challenges with establishing the e-sourcing technology? What have you had to overcome to make e-sourcing more prevalent and to get as digital as possible?

Tokarz: Anytime I begin working on a project, I focus not only on the technology component, but also the process, organization, and policy components. I try to focus on all of them.

So first, we hired someone into an e-sourcing administrator role to manage the e-sourcing. That role becomes really important: we have a single point of contact, and everyone knows where to go within the organization to make things happen as people learn the technology and what it is actually capable of. Instead of having to train 50 people, I have one expert who can help guide them through the process.

From a policy standpoint, we’ve also dictated in policy that people are supposed to be leveraging the technology. We all know that not all policies are adhered to, but it sets the right framework for discussion internally. We can now go to a category manager and point them to the right technology to do the job better, faster, and cheaper.

As a result, category managers have a more intriguing job versus doing administrative work, which ultimately brings more value to the organization. They’re acting more as business consultants to our internal customers, driving value not just on price but by creating value through innovations, new technology, and new solutions in the marketplace.

To me, it’s not just about the technology — it’s about developing the ecosystem of the organization.

Gardner: Is there anything about Guided Buying and the added intelligence that helps with e-sourcing – of getting the right information to the right person in the right format at the right time?

Seamless satisfaction for employees

Tokarz: The beautiful thing about Guided Buying is that it’s seamless. People don’t realize how the application works behind the scenes, or even that they are using SAP Ariba. It’s interesting. They see Guided Buying and don’t realize it’s basically a looking glass into the SAP Ariba architecture underneath.

That helps with transparency for them to understand what they are buying and get to it as quickly as possible. It allows them to process a transaction via a really nice, simple checkout screen. Everyone knows what it costs, and it just routes seamlessly across the organization.

Gardner: So what do you get when you do e-sourcing right? Are there any metrics or impacts that you can point to such as savings, efficiencies, employee satisfaction?


Tokarz: The biggest impact is employee satisfaction. Instead of having a category manager working in Microsoft Outlook, sending e-mails to 30 different suppliers on a particular event, they have a simple dashboard where they can combine all of the questions, develop all of the answers, and push all of that information out seamlessly across all the participants. Instead of working administratively, they’re working strategically with internal customers. They are asking the hard questions about how to solve the business problems at hand and creating value for the organization.

Gardner: Let’s dig deeper into the need for extensibility for globalization. To me this includes seeking a balance between the best of centralized and the best of distributed. You can take advantage of regional particulars, but also leverage and exploit the repeatability and standard methods of centralization.

What have you been doing in procurement using SAP Ariba that helps get to that balance?

Global insights grow success 

Tokarz: We’re in the process of rolling out SAP Ariba globally. We have different regions, and they all have different requirements. What we’ve learned is that our EMEA region wants to do some things differently than we were doing them. It forces us to answer the question, “Why were we doing things the way we were doing them, and should we be changing? Are their insights valuable?”

We learned that their insights are valuable, whether it be the partners that they are working with, from an integration standpoint, or the people on the ground. They have valuable insights. We’re beginning to work with our Canadian colleagues as well, and they’ve done a tremendous amount of work around change management. We want to capitalize on that, and we want to leverage it. We want to learn so that we can be better here in the United States at how we implement our systems.

Gardner: Let’s look to the future. What would you like to see improved, not only in terms of the technology but the way the procurement is going? Do you see more AI, ML, and bots progressing in terms of their contribution to your success?

Tokarz: The bot technology is really interesting, and I think it’s going to change the way we work pretty dramatically. It’s going to take a lot of the manual work that we do in processing transactions and alleviate it.

And it’s not just about the transactions. You can leverage the bot technology or RPA technology to do manual work and then just have people do the audit. You’re eliminating three to five hours’ worth of work so that the workers can go focus their time on higher value-add.

For my organization, I’d like us to extend the utilization of the solutions that we currently own. I think we can do a better job of rolling out the technology broadly across the organization and leverage key features to make our business more powerful.

Gardner: We have been hearing quite a bit from SAP Ariba and SAP at-large about integrating more business applications and data sets to find process efficiencies across different types of spend and getting a better view of total spend. Does that fit into your future vision? 

Tokarz: Yes, it does. Data is really important. It’s a huge initiative at McKesson. We have teams that are specifically focused on integrating the data so that we have meaningful information for making broader decisions. Those decisions can be made not on a hunch of, “Hey, I think I have the right knowledge.” Instead, insights are based on the concrete details that guide you to making smart business decisions.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: SAP Ariba.


CEO Henshall on Citrix’s 30-year journey to make workers productive, IT stronger, and partners more capable

The next BriefingsDirect intelligent workspaces discussion explores how for 30 years Citrix has pioneered ways to make workers more productive, IT operators stronger, and a vast partner ecosystem more capable.

We will now hear how Citrix is by no means resting on its laurels by charting a new future of work that abstracts productivity above apps, platforms, data, and even clouds. The goal: To empower, energize, and enlighten disaffected workers while simplifying and securing anywhere work across any deployment model.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To hear more about Citrix’s evolution and ambitious next innovations, please welcome David Henshall, President and CEO of Citrix. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: To me Citrix is unique in that for 30 years it has been consistently disruptive, driven by vision, and willing to take on both technology and culture — which are truly difficult things to do. And you have done it over and over again.

As Citrix was enabling multiuser remote access — or cloud before there was even a word for it — you knew that changing technology for delivering apps necessitated change in how users do their jobs. What’s different now, 30 years later? How has your vision of work further changed from delivery of apps?

Do your best work

Henshall: I think you said it well. For 30 years, we have focused on connecting people and information on-demand. That has allowed us to let people be productive on their terms. The fundamental challenge of people is to have access to the tools and resources necessary to get their jobs done — or as we describe it, to do their best work.

We look at that as an absolute necessity. It’s one of the things that makes people feel empowered, feel accomplished, and it allows them to drive better productivity and output. It allows engagement at the highest levels possible. All of these have been great contributing factors.

What’s changed? The technology landscape continues to evolve as applications have evolved over the years – and so have we. You referred to the fact that we’ve reinvented ourselves many times in the last three decades. All great companies go through the same regeneration against a common idea, over-and-over again. We are now in what I would describe as the cloud-mobile era, which has created unprecedented flexibility from the way people used to manage IT. Everything from new software-as-a-service (SaaS) services are being consumed with much less effort, all the way to distributed edge services that allow us to compute in new ways that we’ve never imagined.

And then, of course, on the device side, the choices are frankly nearly infinite. Being able to support the device of your choice is a critical part of what we do — and we believe that matters.

Gardner: I was fortunate enough to attend a press conference back in 1995 when Citrix WinFrame, as it was called at that time, was delivered. The late Citrix cofounder Ed Iacobucci was leading the press conference. And to me, looking back, that set the stage for things like desktop as a service (DaaS), virtual desktop infrastructure (VDI), multi-tenancy, and later SaaS. We all think of these as major mainstream technologies.

Do you feel that what you’re announcing about the future of work, and of inserting intelligence in context to what people do at work, will similarly set off a new era in technology? Are we repeating the past in terms of the scale and magnitude of what you are biting off?

Future productivity goes beyond products 

Henshall: The interesting thing about the future is that it keeps changing. Against that backdrop we are rethinking the way people work. It’s the same general idea about just giving people the tools to be productive on their terms.

Henshall

A few years back that was about location, of being able to work outside of a traditional office. Today more than half the people do not work in a typical corporate headquarters environment. People are more distributed than ever before.

The challenge we are now trying to solve takes it another step forward. We think about it from a productivity standpoint and an engagement standpoint. The downside of technology is that it makes everything possible, so the level of complexity has gone up dramatically. The level of interruptions — and what we call context shifting — has gone up dramatically. And so, we are looking for ways to help simplify, automate common workflows, and modernize the way people engage with applications. All of these point toward the same common outcome: How do we make people more productive on their terms?

Gardner: To solve that problem of location flexibility years ago, Citrix had to deal with the network, servers, performance and capacity, and latency — all of which were invisible. End users didn’t know that it was Citrix behind-the-scenes.

Will people know the Citrix name and associate it with workspaces now that you are elevating your value above IT?

Henshall: We are solving broader challenges. We have moved gradually over the years from being a behind-the-scenes infrastructure technology. People have actually used the company’s name as a verb. “I have Citrixed into my environment,” for example. That will slowly evolve into still leveraging Citrix as a verb, but meaning something like, “I Citrixed to get my job done.” That takes on an even broader definition around productivity and simplification, and it allows us more degrees of freedom.

We are working with ecosystem partners across the infrastructure landscape, all types of application vendors. We therefore are a bridge between all of those. It doesn’t mean we necessarily have to have our name front and center, but Citrix is still a verb for most people in the way they think about getting their jobs done.

Gardner: I commend you for that because a lot of companies can’t resist making their name part-and-parcel of a solution. Perhaps that’s why you’ve been such a good partner over the years. You’ve been supplying a lot of the workhorses to get jobs done, but without necessarily having to strut your stuff.

Let’s get back to the issues around worker talent, productivity, and user experience. It seems to me we have a lot of the bits and parts for this. We have great apps, great technology, and cloud distribution. We are seeing interactivity via chatbots and robotic process automation (RPA).

Why do you think being at the middle is the right place to pull this all together? How can Citrix uniquely help, whereas none of the other individual parts can?

Empower the people, manage the tech

Henshall: It’s a problem they are all focused on solving. So take a SaaS application, for example. You have applications that are incredibly powerful, best of breed, and they allow for infinite flexibility. Therein lies part of the challenge. The vast majority of people are not power users. They are not looking for every single bell and whistle across a workflow. They are looking for the opportunity to get something done, and it’s usually something fairly simple.

We are designing an interface to help abstract away a lot of complexity from the end user so they can focus on the task more than the technology itself. It’s an interesting challenge, because so much technology is focused on the tech and how great and powerful and flexible it is, and vendors lose sight of what people are trying to accomplish.


We start by working backward. We start with the end user, understand what they need to be productive, empowered, and engaged. We let that be a guiding principle behind our roadmap. That gives us flexibility to empathize, to understand more about customers and end users more effectively than if we were building something purely for technology’s sake.

Gardner: For younger workers who have grown up all-digital all the time, they are more culturally attuned to being proactive. They want to go out and do things with choice. So culturally, time is on your side.

On the other hand, getting people to change their behaviors can be very challenging. They don’t know that it could be any better, so they can be resistant. This is more than working with an IT department on infrastructure. We are talking about changing people’s thinking and how they relate to technology.

How do you propose to do that? Do you see yourself working in an ecosystem in such a way that this is not just an “If we build it, they will come” affair, but evangelizing to the point where cognitive patterns can be changed?

Henshall: A lot of our relationships and conversations have been evolving over the last few years. We’ve been moving further up what I would call “the IT hierarchy.” We’re having conversations with CIOs now about broad infrastructure, about ways that we can help address the use cases of all their employees, not just those that historically needed all the power of virtualization.

But as we move forward, there is a large transformation going on. Whether we use terms like digital transformation or others, those are less technology conversations and more about business outcomes – more so than at any time in my 30-year career.

Because of that, you’re not only engaging the CIO; you may have the same conversation with a line-of-business executive, a chief people officer, the chief financial officer (CFO), or someone in another functional organization. And this is because they’re all trying to accomplish a specific outcome more than focusing on the technology itself.

And that allows us to elevate the discussion in a way that is much more interesting. It allows us to think about the human and business outcomes more so than ever before. And again, it’s just one more extension of how we are getting out of the “technology for technology’s sake” view and much more into the, “What is it that we are actually trying to accomplish” view.

Gardner: David, as we tackle these issues, elevate the experience, and let people work the way they want, it seems we are also opening up the floodgates for addition of more intelligence.

Whether you call it artificial intelligence (AI)machine learning (ML), or augmented intelligence, the fact is that we are able to deal with more data, derive analytics from it, learn patterns, reapply those learning lessons, and repeat. So injecting that into work, and how people get their jobs done, is the big question these days. People are trying to tackle it from a variety of different directions.

You have said that an advantage Citrix has is access to data. What kind of data are we talking about, and why is that going to put Citrix in a leadership position?

Soup to nuts supervision of workflow 

Henshall: We have a portfolio that spans everything from the client device through the application, files, and the network. We are able to instrument many different parts of the entire workflow. We can capture information about how people are using technologies, what their usage patterns look like, where they are coming in from, and how the files are being used.

In most cases, we take that and apply it to contextual outcomes. For example, in the case of security, we have an analytics platform, and we use those security analytics to create a risk score for an individual user’s behavior, very similar to a credit score, when something anomalous happens. For example, you’re here with me in front of your computer, but you also tried to log on from another part of the globe at the same time.

Things like that can be flagged almost instantaneously and allows the organization to identify and — in many cases — automatically address those types of scenarios. In that case, it may immediately ask for two-factor authentication.
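The simultaneous-logon example is often called “impossible travel” detection, and a toy version of it can show how such a risk score might be derived. The thresholds, scoring bands, and data shapes below are invented for illustration; this is not Citrix Analytics code.

# Toy "impossible travel" risk scoring; invented thresholds, not Citrix code.
from math import radians, sin, cos, asin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    h = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(h))

def risk_score(login_a, login_b):
    """Score 0-100 based on the travel speed implied by two logins."""
    km = distance_km(login_a["lat"], login_a["lon"],
                     login_b["lat"], login_b["lon"])
    hours = max(abs(login_a["t"] - login_b["t"]) / 3600, 1e-9)
    speed = km / hours
    if speed > 1000:  # faster than a commercial flight: near-certain anomaly
        return 95
    if speed > 500:
        return 70
    return 10

boston = {"lat": 42.36, "lon": -71.06, "t": 0}
sydney = {"lat": -33.87, "lon": 151.21, "t": 600}  # ten minutes later
score = risk_score(boston, sydney)
if score >= 90:
    # The response described above: challenge rather than block outright.
    print(f"risk {score}: step up to two-factor authentication")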

We are not capturing personally identifiable information (PII) and other types of broader data that fall under a privacy umbrella. We access a lot of anonymized things that provide the insights.


Every company has [had privacy discussions] and will continue to evolve over time as technology evolves because the underlying platforms are becoming very powerful. Citrix operates in about 100 countries around the world. We are already very familiar with local compliance and data privacy regulations. We are making sure that we can operate within those and certainly give our customers in those markets the tools to make sure that they are operating effectively within the constraints as well.

Gardner: The many resources people rely on to do their jobs come from different places — public clouds, private clouds, a hybrid between them, different SaaS providers, and different legacy systems of record.

You are in a unique position in the middle of that. You can learn from it and begin to suggest how people can improve. Those patterns can be really powerful. It’s not something we’ve been able to do before.

What do we call that? Is it AI? Or a valet or digital assistant to help in your work while protective of privacy and adhering to all the laws along the way? And where do you see that going in terms of having an impact on the economy and on companies?

AI, ML to assist and automate tasks

Henshall: Two very broad questions. From the future standpoint, AI and ML capabilities are helping turn all the data we have into more useful or actionable information. And in our case, you mentioned virtual assistance. We will be using intelligent assistance to help you automate simple tasks.

And many of those could be tasks that span applications. For example, you could ask your assistant to move a meeting to next Thursday, or to any time your other meeting participants happen to be available. The bots will go out, search for that optimal time, and take those actions. Those are the types of things that we envision for the virtual assistants going forward, and I think those will be interesting.

Beyond that, it becomes a learning mechanism whereby we can identify that your bot came back and told you you’ve had the same conflict two meetings in a row. Do you want to change all future meetings so that this doesn’t happen again? It can become much more predictive.
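At the core of that scheduling example is a simple computation: intersect the participants’ free time and pick the earliest common slot. The following sketch uses invented, hour-granularity calendars to show the idea; a real assistant would, of course, work against actual calendar APIs.

# Toy sketch of the assistant's slot search; invented calendars, no real APIs.

def free_hours(busy, day_hours=range(9, 17)):
    """Hours in a 9:00-17:00 working day not blocked on a calendar."""
    return {h for h in day_hours if h not in busy}

def first_common_slot(calendars):
    """Earliest hour on the target day when every participant is free."""
    common = set.intersection(*(free_hours(busy) for busy in calendars))
    return min(common) if common else None

# Busy hours for three meeting participants on next Thursday.
calendars = [{9, 10, 13}, {9, 11}, {10, 11, 14}]
slot = first_common_slot(calendars)
if slot is None:
    print("no common slot; suggest another day")
else:
    print(f"move the meeting to {slot}:00")  # prints: move the meeting to 12:00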

And so, this journey that Citrix has been on for many years started with helping to simplify IT so that it became easier to deliver the infrastructure. The second part of that journey was making it easier for people to consume those resources across the complexities we have talked about.

Now, the products we announced at our May 2019 Citrix Synergy Conference are more about guiding work to help simplify the workflows. We will be doing more in this last space on how to anticipate what you will need so that we can automate it ahead of time. And that’s an interesting journey. It will take a few years to get there, but it’s going to be pretty powerful when we do.

Gardner: As you’re conducting product development, I assume you’re reflecting these capabilities back to your own workforce, the Citrix global talent pool. Do you drink your own champagne? What are you finding? Does it give you a sense as the CEO that your workforce has an advantage by employing these technologies? Do we have any proof points that the productivity is in fact enhanced?

Henshall: It’s still early days. A lot of these are brand-new technologies that don’t have enough of a base of learning yet.

But some of the early learnings can identify areas where you’re multitasking too much or following an inefficient process. In my case, I tend to look at how much I am multitasking inside of a meeting. That helps me understand whether I should be in that meeting in the first place — whether I am 100 percent focused and committed, or have been distracted by other elements.

Those are interesting learnings that are more about personal productivity and how we can optimize from that respect.

More broadly speaking, our workforce is globally distributed. We absolutely drink our own champagne when it comes to engaging a global team. We have teams now in about 40 countries around the world and we are very, very virtual. In fact, among my leadership team, I am the only member who lives full-time at [Citrix’s headquarters] in South Florida. We make that work because we embrace all of our own technology, stay on top of common projects, communicate across all the various mediums, and collaborate wherever needed.

That allows us to tap into nontraditional workforce populations, to differentiate, and enable folks who need different types of flexibility for their own lifestyles. You miss great talent if you are far too rigid. Personally, I believe the days are gone when everybody is expected to work inside a corporate headquarters. It’s just not practical anymore.

Gardner: For those businesses that recognize there is tremendous change afoot, are using new models like cloud, and don’t want complexity to outstrip productivity – what advice do you have for them as they start digital transformation efforts? What should they be putting in place now to take advantage of what companies like Citrix will be providing them in a few years?

Business-first supports global collaboration 

Henshall: The number one thing on any digital transformation project is to be crystal clear about the outcome you are trying to achieve. Start with the outcome and work backward. You can leverage platforms like Citrix, for example, to look across multiple technologies, focus on those business outcomes, and leave the technology decision, in many cases, for last. It shouldn’t be the other way around; if it is, you will self-limit what those outcomes can be.

Make sure you have buy-in across all stakeholders. As I talked about earlier, have a conversation with the CFO, head of marketing, head of human resources, and many others. Look for breadth of outcomes, because you don’t want to solve problems for one small team, you want to solve problems across the enterprise. That’s where you get the best leverage. It allows you the best opportunity to simplify the complexity that has built up over the last 30 to 40 years. This will help people get out from under that problem.

Gardner: Lastly, for IT departments specifically, the people who have been most aware of Citrix as a brand, how should IT be thinking about entering this new era of focusing on work and productivity? What should IT be thinking about to transform themselves to be in the best position to attain these better business outcomes?

Henshall: I have already seen the transformation happening. Most IT administrators want to focus on larger business problems, more than just maintaining the existing infrastructure. Unfortunately, the budgets have been relatively limited for innovation because of all the complexity we have talked about.

But my advice for everyone is: take a step back and understand how to be the champion of the business — to be the hero by providing great outcomes, great experiences, and higher productivity. That’s not a technology conversation first and foremost. Obviously it has a technology element, but understand and be empathetic to the needs of the business. Then work backward, and Citrix will help you get there.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Citrix.


How real-time data streaming and integration set the stage for AI-driven DataOps

The next BriefingsDirect business intelligence (BI) trends discussion explores the growing role of data integration in a multi-cloud world.

Just as enterprises seek to gain more insights and value from their copious data, they’re also finding their applications, services, and raw data spread across a continuum of hybrid and public clouds. Raw data is also piling up closer to the edge — on factory floors, in hospital rooms, and anywhere digital business and consumer activities exist.

Stay with us now as we examine the latest strategies for uniting and governing data wherever it resides. By doing so, businesses are enabling rapid and actionable analysis — as well as entirely new levels of human-to-augmented-intelligence collaboration.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To learn more about the foundational capabilities that lead to total data access and exploitation, we’re now joined by Dan Potter, Vice President of Product Marketing at Attunity, a Division of Qlik. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Dan, what are the business trends forcing a new approach to data integration?

Potter: It’s all being driven by analytics. The analytics world has gone through some very interesting phases of late: Internet of Things (IoT), streaming data from operational systems, artificial intelligence (AI) and machine learning (ML), predictive and preventative kinds of analytics, and real-time streaming analytics.

So, it’s analytics driving the data integration requirements. Analytics has changed the way in which data is stored and managed. Things like cloud data warehouses, data lakes, and streaming infrastructure like Kafka — these are all a response to the business demand for a new style of analytics.


As analytics drives data management changes, the way in which the data is integrated and moved needs to change as well. Traditional approaches to data integration — batch processes, more ETL, and script-oriented integration — are no longer good enough. All of that is changing. It’s moving to a much more agile, real-time style of integration, driven by the movement to the cloud and the need to move more data — in greater volume and greater variety — into data lakes, and then to shape that data and make it analytics-ready.

With all of these movements, there have been new challenges and new technologies. The pace of innovation is accelerating, and the challenges are growing. The demand for digital transformation and the move to the cloud has changed the landscape dramatically. With that came great opportunities for us as a modern data integration vendor, but also great challenges for companies that are going through this transition.

Gardner: Companies have been doing data integration since the original relational database (RDB) was kicked around. But it seems the core competency of managing the integration of data is more important than ever.

Innovation transforms data integration

Potter: I totally agree, and if done right, in the future, you won’t have to focus on data integration. The goal is to automate as much as possible because the data sources are changing. You have a proliferation of NoSQL databases, graph databases; it’s no longer just an Oracle database or RDB. You have all kinds of different data. You have different technologies being used to transform that data. Things like Spark have emerged along with other transformation technologies that are real-time-oriented. And there are different targets to where this data is being transformed and moved to.

It’s difficult for organizations to maintain the skill sets — and you don’t want them to. We want to move to an automated process of data integration. The more we can achieve that, the more valuable all of this becomes. You don’t spend time on mundane data integration; you spend time on the analytics — and that’s where the value comes from.

Gardner: Now that Attunity is part of Qlik, you are an essential component of a larger undertaking, of moving toward DataOps. Tell me why automated data migration and integration translates into a larger strategic value when you combine it with Qlik?

Potter: DataOps resonates well for the pain we’re setting out to address. DataOps is about bringing the same discipline that DevOps has brought to software development. Only now we’re bringing that to data and data integration for analytics.

How do we accelerate and remove the gap between IT, which is charged with providing analytics-ready data to the business, and all of the various business and analytics requirements? That’s where DataOps comes in. DataOps is technology, but that’s just a part of it. It’s as much or more about people and process — along with enabling technology and modern integration technology like Attunity.

We’re trying to solve a problem that’s been persistent since the first bit of data hit a hard drive. Data integration challenges will always be there, but we’re getting smarter about the technology that you apply and gaining the discipline to not boil the ocean with every initiative.

The new goal is to get more collaboration between what business users need and to automate the delivery of analytics-ready data, knowing full-well that the requirements are going to change often. You can be much more responsive to those business changes, bring in additional datasets, and prepare that data in different ways and in different formats so it can be consumed with different analytics technologies.

That’s the big problem we’re trying to solve. And now, being part of Qlik gives us a much broader perspective on these pains as they relate to the analytics world. It also gives us a much broader portfolio of data integration technologies. The Qlik Data Catalyst product is a perfect complement to what Attunity does.

Our role in data integration has been to help organizations move data in real-time as that data changes on source systems. We capture those changes and move that data to where it’s needed — like a cloud, data lake, or data warehouse. We prepare and shape that data for analytics.


Qlik Data Catalyst then comes in to catalog all of this data and make it available to business users so they can discover and govern that data. And it easily allows that data to be further prepared and enriched, or used to create derivative datasets.

So, it’s a perfect marriage in that the data integration world brings together the strength of Attunity with Qlik Data Catalyst. We have the most purpose-fit, modern data integration technology to solve these analytics challenges. And we’re doing it in a way that fits well with a DataOps discipline.

Gardner: We not only have the different data types, we have another level of heterogeneity to contend with and that’s cloud, hybrid cloud, multi-cloud, and edge. We don’t even know what more is going to be coming in two or three years. How does an organization stay agile given that level of dynamic complexity?

Real-time analytics deliver agility 

Potter: You need a different approach for a different style of integration technology to support these topologies that are themselves very different. And what the ecosystem looks like today is going to be radically different two years from now.

The pace of innovation just within the cloud platform technologies is very rapid. New databases, transformation engines, and orchestration engines just keep proliferating. And now you have multiple cloud vendors. There are great reasons for organizations to use multiple clouds — to use the technologies or approaches that work best for your organization, your workgroup, your division. So you need to prepare yourself for that, and modern integration approaches definitely help.

One of the interesting technologies to help organizations provide ongoing agility is Apache Kafka. Kafka is a way to move data in real-time and make the data easy to consume even as it’s flowing. We see that as an important piece of the evolving data infrastructure fabric.

At Attunity we create data streams from systems like mainframes, SAP applications, and RDBs. These systems weren’t built to stream data, but we stream-enable that data. We publish it into a Kafka stream and that provides great flexibility for organizations to, for example, process that data in real time for real-time analytics such as fraud detection. It’s an efficient way to publish that data to multiple systems. But it also provides the agility to be able to deliver that data widely and have people find and consume that data easily.
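As a rough illustration of what stream-enabling change data can look like, here is a minimal sketch using the open-source kafka-python client. The topic name, keying scheme, and record layout are assumptions for the example, not Attunity's actual formats.

```python
# Minimal sketch using the open-source kafka-python client. The topic,
# key scheme, and record layout are invented; a CDC tool such as
# Attunity emits the real change events.

import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

change_event = {
    "table": "ORDERS",
    "op": "UPDATE",  # INSERT / UPDATE / DELETE
    "after": {"order_id": 42, "status": "SHIPPED"},
}

# Keying by table and primary key keeps all changes to one row in one
# partition, preserving their order for downstream consumers.
producer.send("cdc.orders", key=b"ORDERS:42", value=change_event)
producer.flush()
```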

Such new, evolving approaches enable a mentality that says, “I need to make sure that whatever decision I make today is going to future-proof me.” So, setting yourself up right and thinking about that agility and building for agility on day one is absolutely essential.

Gardner: What are the top challenges companies have for becoming masterful at this ongoing challenge — of getting control of data so that they can then always analyze it properly and get the big business outcomes payoff?

Potter: The most important competency is at the enterprise architecture (EA) level, more than with the people who traditionally build ETL scripts and integration routines. I think those are the pieces you want to automate.

The real core competency is to define a modern data architecture and build it for agility so you can embrace the changing technologies and requirements landscape. It may be that you have all of your eggs in one cloud vendor today. But you certainly want to set yourself up so you can evolve and push processing to the most efficient place, and to attain the best technology for the kinds of analytics or operational workloads you want.

That’s the top competency that organizations should be focused on. As an integration vendor, we are trying to reduce the reliance on technical people to do all of this integration work in a manual way. It’s time-consuming, error-prone, and costly. Let’s automate as much as we can and help companies build the right data architecture for the future.

Gardner: What’s fascinating to me, Dan, in this era of AI, ML, and augmented intelligence is that we’re not just creating systems that will get you to that analytic opportunity for intelligence. We are employing that intelligence to get there. It’s tactical and strategic. It’s a process, and it’s a result.

How do AI tools help automate and streamline the process of getting your data lined up properly?

Automated analytics advance automation 

Potter: This is an emerging area for integration technology. Our focus initially has been on preparing data and making it available for ML initiatives. We work with vendors such as Databricks, at the forefront of processing, using a high-performance Spark engine to process data for data science, ML, and AI initiatives.

We need to ask, “How do we apply cognitive engines, things like Qlik, within our own technology and get smarter about the patterns of integration that organizations are deploying, so we can automate further?” That’s really the next wave for us.

Gardner: You’re not just the president, you’re a client.

Potter: Yeah, that’s a great way to put it.

Gardner: How should people prepare for such use of intelligence?

Potter: If it’s done right — and we plan on doing it right — it should be transparent to the users. This is all about automation done right. It should just be intuitive. Going back 15 years, when we first brought out replication technology at Attunity, the idea was to automate and abstract away all of the complexity. You could literally drag your source and your target and make it happen. The technology does the mapping and the routing, and handles all the errors for you. It’s that same elegance. That’s where the intelligence comes in — to make it so intuitive that you are not seeing all the magic that’s happening under the covers.
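As a toy illustration of one small piece of that automation, consider matching source columns to target columns by normalized name and surfacing only the exceptions for a human. The matching rule and column names here are invented, not Attunity's engine.

```python
# Toy column-mapping pass: match source to target columns by
# normalized name and surface only the exceptions for a human.

def auto_map(source_cols, target_cols):
    norm = lambda name: name.lower().replace("_", "")
    targets = {norm(c): c for c in target_cols}
    mapping, unmatched = {}, []
    for col in source_cols:
        hit = targets.get(norm(col))
        if hit:
            mapping[col] = hit
        else:
            unmatched.append(col)
    return mapping, unmatched

mapping, unmatched = auto_map(
    ["CUST_ID", "OrderDate", "Region"],
    ["custid", "order_date", "region_code"],
)
print(mapping)    # {'CUST_ID': 'custid', 'OrderDate': 'order_date'}
print(unmatched)  # ['Region'] -- left for the user to resolve
```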


We follow that same design principle in our product. As the technologies get more complex, it’s harder for us to do that, so applying ML and AI becomes even more important to us. That’s really the future for us: automating more and more of these processes under the covers.

Gardner: Dan, are there any examples of organizations on the bleeding edge? They understand the data integration requirements and core competencies. They see this through the lens of architecture.

Automation ensures insights into data 

Potter: Zurich Insurance is one of the early innovators in applying automation to their data warehouse initiatives. Zurich had been moving to a modern data warehouse to better meet the analytics requirements, but they realized they needed a better way to do it than in the past.

Traditional enterprise data warehousing employs a lot of people, building a lot of ETL scripts. It tends to be very brittle. When source systems change you don’t know about it until the scripts break or until the business users complain about holes in their graphs. Zurich turned to Attunity to automate the process of integrating, moving it to real-time, and automatically structuring their data warehouse.

The time they need to respond to business users is now a fraction of what it was. They reduced 45-day cycles to two-day cycles for updating and building out new data marts for users. Their agility is off the charts compared to the traditional way of doing it. They can now better meet the needs of the business users through automation.

As organizations move to the cloud to automate processes, a lot of customers are embracing data lakes. It’s easy to put data into a data lake, but it’s really hard to derive value from the data lake and reconstruct the data to make it analytics-ready.

For example, you can take transactions from a mainframe and dump all of those into a data lake, which is wonderful. But how do I create any analytic insights? How do I ensure all those frequently updated files I’m dumping into the lake can be reconstructed into a queryable dataset? The way people have done it in the past is manually — writing scripts in Pig and other languages to try to reconstruct it. We fully automate that process. For companies using Attunity technology, our big investments in data lakes have had a tremendous impact on demonstrating value.
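In spirit, that automated reconstruction folds an ordered stream of change records back into the table's current state. A hedged sketch, with invented record fields:

```python
# Replay ordered CDC records to rebuild a table's latest state.

def reconstruct(change_records):
    state = {}
    for rec in change_records:
        if rec["op"] == "DELETE":
            state.pop(rec["key"], None)
        else:  # INSERT or UPDATE both carry the row's new image
            state[rec["key"]] = rec["after"]
    return state

changes = [
    {"op": "INSERT", "key": 1, "after": {"id": 1, "qty": 5}},
    {"op": "UPDATE", "key": 1, "after": {"id": 1, "qty": 7}},
    {"op": "DELETE", "key": 1, "after": None},
]
print(reconstruct(changes))  # -> {}: created, changed, then removed
```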

Gardner: Attunity recently became part of Qlik. Are there any clients that demonstrate the two-plus-two-equals-five effect when it comes to Attunity and the Qlik Data Catalyst catalog?

DataOps delivers the magic 

Potter: It’s still early days for us. As we look at our installed base — and there is a lot of overlap in who we sell to — the BI teams and the data integration teams are in many cases separate and distinct. DataOps brings them together.

In the future, as we take the Qlik Data Catalyst and make that the nexus of where the business side and the IT side come together, the DataOps approach leverages that catalog and extends it with collaboration. That’s where the magic happens.

So business users can more easily find the data, and they can send requirements back to the data engineering team as they arise. Applying AI and ML to the patterns we see on the analytics side will help us better match those patterns to the data that’s required, and automate the delivery and preparation of that data for different business users.

That’s the future, and it’s going to be very interesting. A year from now, as part of the Qlik family, we’ll have brought together the BI and data integration sides for our joint customers. We are going to see some really interesting results.

Gardner: As this next, third generation of BI kicks in, what should organizations be doing to get prepared? What should the data architect, who is starting to think about DataOps, do to put them in an advantageous position to exploit this when the market matures?

Potter: First they should be talking to Attunity. We get engaged early and often in many of these organizations. The hardest job in IT right now is [to be an] enterprise architect, because there are so many moving parts. But we have wonderful conversations because at Attunity we’ve been doing this for a long time, we speak the same language, and we bring a lot of knowledge and experience from other organizations to bear. It’s one of the reasons we have deep strategic relationships with many of these enterprise architects and others on the IT side of the house.

They should be thinking about what the next wave is and how best to prepare for it. Foundationally, moving to more real-time streaming integration is an absolute requirement. You can take our word for it, or go talk to analysts and other peers about the need for real-time data and streaming architectures and how important those are going to be in the next wave.


So, prepare for that, and think about the agility and the automation that will get you the desired results. If organizations are not preparing for that now, they are going to be left behind — and if IT is left behind, the business is left behind. It is a very competitive world, and organizations are competing on data and analytics. The faster you can deliver the right data and make it analytics-ready, the faster and better decisions you can make, and the more successful you’ll be.

It really is a do-or-die proposition. That’s why data integration is strategic: it unlocks the value of the data, and if you do it right, you set yourself up for long-term success.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Qlik


How HCI forms a simple foundation for hybrid cloud, edge, and composable infrastructure

The next BriefingsDirect Voice of the Innovator podcast discussion explores the latest insights into hybrid cloud and hyperconverged infrastructure (HCI) strategies.

Speed to business value and simplicity in deployments have been top drivers of the steady growth around HCI solutions. IT operators are now looking to increased automation, built-in intelligence, and robust security as they seek such turnkey appliance approaches for both cloud and traditional workloads.

Stay with us now as we examine the rapidly evolving HCI innovation landscape, which is being shaped just as much by composability, partnerships, and economics, as it is new technology.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to help us learn more about the next chapter of automated and integrated IT infrastructure solutions is Thomas Goepel, Chief Technologist for Hyperconverged Infrastructure at Hewlett Packard Enterprise (HPE). The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Thomas, what are the top drivers now for HCI as a business tool? What’s driving the market now, and how has that changed from a few years ago?


Goepel: HCI has gone through a really big transformation in the last few years. When I look at how it originally started, it was literally people looking for a better way of building virtual desktop infrastructure (VDI) solutions. They wanted to combine servers and storage in a single device and make it easier to operate.

What I am seeing now is HCI spreading throughout datacenters and becoming one of the core elements of a lot of the datacenters around the world. The use cases have significantly expanded. It started out with VDI, but now people are running all kinds of business applications on HCI — all the way to mission-critical databases like SAP HANA.

Gardner: People are using HCI in new ways. They are innovating in the market, and that often means they do things with HCI that were not necessarily anticipated. Do you see that happening with HCI?

Ease of use encourages HCI expansion 

Goepel: Yes, it’s happened with HCI quite a bit. The original use cases were very much focused on VDI and end-user computing. It was just a convenient way of having a platform for all of your virtual desktops and an easy way of managing them.

But people saw that ease of management could be extended to other use cases. They began to bring core business applications, such as Microsoft Exchange or SharePoint, onto the platform, saw there were more and more things they could put there, and gained the full simplicity that hyperconvergence brings to operating the environment.


You no longer had to build a separate server farm, separate storage farm, or even manage your network independently. You could now do all of that from a single interface, a single-entry point, and gain a single point of management. Then people said, “Well, this ease makes it so beneficial for me, why don’t we bring the other things in here?” And then we saw it spread out in the data centers.

What we now have is people saying, “Hey, let me take this a step further. If I have remote offices, branch offices, or edge use-cases where I also need compute resources, why not try to take HCI there? Because typically on the edge I don’t even have system administrators, so I can take this entire simplicity down to this point, too.”

And the nice thing with hyperconvergence is that — at least in the HPE version of hyperconvergence, which is HPE SimpliVity — it’s not only simple to manage, it also has all of the enterprise features built in, such as high availability and data efficiency, which makes it a really robust solution. It has come a very long way on this journey.

Gardner: Thomas, you mentioned the role of HCI at the edge gaining traction and innovation. What’s a typical use case for this sort of micro datacenter at the edge? How does that work?

Losing weight with HCI wins the race

Goepel: Let me give you a really good example of a super-fast-paced industry: Formula One car racing. It really illustrates how edge is having an impact — and also how this has a business impact.

One of our customers, Aston Martin Red Bull Racing, has been very successful in Formula One racing. The rules of the International Automobile Federation (FIA), the governing body of Formula One racing, say that each race team can bring only a certain amount of weight to a racetrack during the races.

This is obviously a high-tech race. They are adjusting the car during the race, lap by lap, based on its real-time performance, to get every last inch out of the car and win that race. All of these cars are very close to each other from a performance perspective.

Traditionally, they shipped racks and racks of IT gear to the racetrack to calculate the performance of the car and make adjustments during the race. They have now replaced all of these racks with HPE SimpliVity HCI gear and significantly reduced the amount of gear. It means having significantly less weight to bring to the racetrack.


There are two benefits. First, reducing the weight of the IT gear allows them to bring additional things to the racetrack because what counts is the total weight – and that includes the car, spare parts, people, equipment — everything. There is a certain mandated limit.

By taking that weight out — having less IT equipment on the racetrack — HCI allows them to bring extra personnel and spare parts. They can perform better in the races.

The other benefit is that HCI performs significantly better than traditional IT infrastructure. They can now make adjustments within one lap of the race, versus before, when it took them three laps to make adjustments to the car.

This is a huge competitive advantage. When you look at the results, they are doing great when it comes to Formula One racing, especially for being a smaller team compared to the big teams out there.

From that perspective, at the edge, HCI is making some big improvements, not only in a high-end industry like Formula One racing, but in all kinds of other industries, including manufacturing and retail. They are seeing similar benefits.

Gardner: I wrote a research paper about four years ago, Thomas, that laid out the case that HCI will become a popular on-ramp to private clouds and ultimately hybrid cloud. Was I ahead of my time?

HCI on-ramp to the clouds

Goepel: Yes, I think you were a little bit ahead of your time. But you were also a visionary to lay out that groundwork. When you look at the industry, hyperconvergence is a fast-growing industry segment. When it comes to server and data center infrastructure, HCI has the highest growth rate across the entire IT industry.


What you were foreseeing four years ago is exactly what we now have, and I don’t see an end anytime soon. HCI continues to grow as people discover new use cases. The edge is one new element, but we are just scratching the surface.

Edge use cases are a fascinating new world in general — from such distributed environments as smart cities and smart manufacturing. We are just starting to get into this world. There’s a huge opportunity for innovation and this will become an attractive area for hyperconvergence. 

Gardner: How does HCI innovation align with other innovations at HPE around automation, composability, and intelligence derived to make IT behave as total solutions? Is there a sense that the whole is greater than the sum of the parts?

HCI innovations prevent problems

Goepel: Absolutely there is. We have leveraged a lot of innovation in the broader HPE ecosystem, including the latest generation of the ProLiant DL380 Server, the most secure server in the industry. All of these elements flow into the HPE SimpliVity HCI platform, too.

But we are not stopping there. A lot of other innovations in the HPE ecosystem are being brought into hyperconvergence. A perfect example is HPE InfoSight, a management platform that allows you to operate your infrastructure better by understanding what’s going on in a very efficient way. It uses artificial intelligence (AI) to detect when something is going wrong in your IT environment so you can proactively take action and don’t end up with a disaster.
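InfoSight's models are far more sophisticated, but the underlying principle of learning a baseline from telemetry and flagging early deviations can be sketched in a few lines. The samples and threshold below are invented for illustration.

```python
# Toy version of the principle behind predictive monitoring: learn a
# baseline from telemetry, then flag readings that drift well above it.

from statistics import mean, stdev

def flag_anomalies(samples, sigmas=2.0):
    """Return samples more than `sigmas` standard deviations above the mean."""
    mu, sd = mean(samples), stdev(samples)
    return [x for x in samples if x > mu + sigmas * sd]

disk_latency_ms = [2.1, 2.3, 1.9, 2.2, 2.0, 2.4, 2.1, 9.8]
print(flag_anomalies(disk_latency_ms))  # -> [9.8]: investigate early
```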


HPE InfoSight originally started out in storage, but we are now taking it into the full HPE SimpliVity HCI ecosystem. It’s not just a support portal, it gives you intelligence to understand what’s going on before you run into problems. Those problems can be solved so your environment keeps running at top performance. You’ll have what you need to run any mission-critical business on HCI. 

More and more of these innovations in our ecosystem will be brought into the hyperconverged world. Another example is around composability. We have been developing a lot of platform capabilities around composability and we are now bringing HPE SimpliVity and composability together. This allows customers to actually change the infrastructure’s personality depending on the workload, including bringing on HPE SimpliVity. You can get the best of these two worlds.

This leads to building a private cloud environment that can be easily connected to a public cloud or clouds. You will ultimately build out a hybrid IT environment in such a way that your private cloud, or on-premises, environment runs in the most optimized way for your business and for your specific needs as a company.

Gardner: You are also opening up that HCI ecosystem with new partners. Tell us how innovation around hyperconverged is broadening and making it more ecumenical for the IT operations consumer.

Welcome to the hybrid world

Goepel: HPE has always been an open player. We never believed in locking down an environment or making it proprietary and basically locking out everyone else. We have always been a company that listens to what our customers want and need, and then gives them the best solution.

Now, customers are looking to run their HCI environment on HPE equipment and infrastructure because they know that this is reliable infrastructure. It is working, and they feel comfortable with it, and they trust it. But we also have customers who say, “Hey, you know, I want to run this piece of software or that solution on this HPE environment. Can you make sure this runs and works perfectly?”

We are in a hybrid world. And in a hybrid world there is not a single vendor that can cover the entire hybrid market. We need to innovate in such a way that we allow an ecosystem of partners to all come together and work collaboratively and jointly to provide new solutions.

We have recently announced new partnerships with other software vendors, and that includes HPE GreenLake Flex Capacity. With that, instead of making big, upfront investments in equipment, you can finance it in a more innovative way. It delivers the solution that solves the customer’s real problems, rather than locking the customer into a certain infrastructure.

Flexibility improves performance 

Gardner: You are broadening the idea of making something consumable when you innovate, not only around the technology and the partnerships, but also the economic model, the consumption model. Tell us more about how HPE GreenLake Flex Capacity and acquiring a turnkey HPE SimpliVity HCI solution can accelerate value when you consume it, not as a capital expense, but as an operating cost affair.

Goepel: No industry is 100 percent predictable — at least I haven’t seen or found one. Not even the most conservative government institution with a five-year plan is predictable. There are always factors that will disrupt that plan, and you have to react to them.


Traditionally, what we have done in the industry is oversize our environments to account for anticipated growth over five years — and then add another 25 percent on top of it, and then another 10 percent of cover on top of that, hoping we did not undersize the environment by the time the equipment reaches the end of its life.

That is a lot of capital invested in something that just sits there with no value and no use, and that you eventually take off your books from a financial perspective.

Now, HPE GreenLake gives you a flexible-capacity model. You literally pay only for what you consume. If you grow faster than you anticipated, you just use more. If you grow slower, you use less. If you have an extremely successful business — but then something in the economy changes and your business doesn’t perform as you anticipated — then you can reduce your spending. That flexibility better supports your business.
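Some back-of-the-envelope arithmetic makes the contrast concrete. All of the numbers below are invented for illustration; actual HPE GreenLake pricing depends on the contract.

```python
# Invented numbers: compare a five-year oversized capital buy with
# pay-per-use spend that tracks actual consumption. Not a full TCO
# model -- the point is only that flex spend follows real usage.

planned_units = 100            # capacity sized for five-year growth
padding = 1.25 * 1.10          # +25% growth pad, +10% safety pad
unit_capex = 1_000             # $ per unit, paid up front
unit_rate = 25                 # $ per unit per month, pay-as-you-go
monthly_use = [60, 62, 65, 70, 68, 72]  # units actually consumed

capex_total = planned_units * padding * unit_capex
flex_half_year = sum(u * unit_rate for u in monthly_use)

print(f"Oversized upfront buy: ${capex_total:,.0f}")        # $137,500
print(f"Six months of flex spend: ${flex_half_year:,.0f}")  # $9,925
```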


We are ultimately doing IT to help our businesses to perform better. IT shouldn’t be a burden that slows you down, it should be an accelerator. By having a flexible financial model, you get exactly that. HPE GreenLake allows you to scale up and scale down your environment based on your business needs with the right financial benefits behind it.

Gardner: There is such a thing as too much of a good thing. And I suppose that also applies to innovation. If you are doing so many new and interesting things — allowing for hybrid models to accelerate and employing new economic models — sometimes things can spin out of control.

But you can also innovate around management to prevent that from happening. How does management innovation fit into these other aspects of a solution, to keep it from getting out of control?

Checks and balances extend manageability

Goepel: You bring up a really good point. One of the things we have learned as an industry is that things can spin out of control very quickly. And for me, the best example is when I go back two years when people said, “I need to go to the cloud because that is going to save my world. It’s going to reduce my costs, and it’s going to be the perfect solution for me.”

What happened is people went all-in for the cloud, and every developer and IT person heard, “Hey, if you need a virtual machine, just get it from whatever your favorite cloud provider is. Go for it.” People very quickly learned that this exploded their costs. There was no control, no checks and balances.

On both the HCI and general IT side, we have learned from that initial mistake in the public cloud and have put the right checks and balances in place. HPE OneView is our infrastructure management platform that allows the system administrator to operate the infrastructure from a single-entry point or single point of view.


That gives you a very simple way of managing and plays along with the way HCI is operated — from a single point of view. You don’t have five consoles or five screens, you literally have one screen you operate from. 

You need to have a common way of managing checks and balances in any environment. You don’t want the end user or every developer to go in there and just randomly create virtual machines, because then your HCI environment quickly runs out of resources, too. You need to have the right access controls so that only people that have the right justification can do that, but it still needs to happen quickly. We are in a world where a developer doesn’t want to wait three days to get a virtual machine. If he is working on something, he needs the virtual machine now — not in a week or in two days.
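A minimal sketch of that guardrail idea, with hypothetical roles, quotas, and team names: self-service requests are granted instantly when role and quota allow, and queued for review otherwise.

```python
# Hypothetical guardrail: grant self-service VM requests instantly
# when the requester's role and the team's quota allow it; otherwise
# queue for review. Roles, quotas, and names are invented.

QUOTAS = {"dev-team": 20}   # max concurrent VMs per team
IN_USE = {"dev-team": 18}
ROLES = {"maria": "developer", "sam": "contractor"}

def request_vm(user, team, count=1):
    if ROLES.get(user) != "developer":
        return "denied: role not authorized for self-service"
    if IN_USE.get(team, 0) + count > QUOTAS.get(team, 0):
        return "queued: over quota, needs admin approval"
    IN_USE[team] = IN_USE.get(team, 0) + count
    return "provisioned immediately"

print(request_vm("maria", "dev-team", 2))  # provisioned immediately
print(request_vm("maria", "dev-team", 2))  # queued: over quota ...
print(request_vm("sam", "dev-team"))       # denied: role not authorized ...
```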

Similarly, when it comes to a hybrid environment — when we bring together the private cloud and the public cloud — we want a consistent view across both worlds. This is where HPE OneSphere comes in. HPE OneSphere is a cloud management platform that manages hybrid clouds — both private and public.

It allows you to gain a holistic view of what resources you are consuming, what’s the cost of these resources, and how you can best distribute workloads between the public and private clouds in the most efficient way. It is about managing performance, availability, and cost. You can put in place the right control mechanisms to curb rogue spending, and control how much is being consumed and where.

Gardner: From all of these advancements, Thomas, have you made any personal observations about the nature of innovation? What is it about innovation that works? What do you need to put in place to prevent it from becoming a negative? What is it about innovation that is a force-multiplier from your vantage point?

Faster is better 

Goepel: The biggest observation I have is that innovation is happening faster and faster. In the past, it took quite a while to get innovation out there. Now it is happening so fast that one innovation comes, then the next one just basically runs over it, and we are taking advantage of it, too. This is just the nature of the world we are living in; everything is moving much faster. 

There are obviously some really great benefits from the innovation we are seeing. We have talked about a few of them, like AI and how HCI is being used in edge use-cases. In manufacturing, hospitals, and these kinds of environments, you can now do things in better and more efficient ways. That’s also helping on the business side.


But there’s also the human factor, because innovation makes things easier for us or makes it better for us to operate. A perfect example is in hospitals, where we can provide the right compute power and intelligence to make sure patients get the right medication. It is controlled in a good way, rather than just somebody writing on a piece of paper and hoping the next person can read it. You can now do all of these things electronically, with the right digital intelligence to ensure that you are actually curing the patient.

I think we will see more and more of these types of examples happening and bringing compute power to the edge. That is a huge opportunity, and there is a lot of innovation in the next two to three years, specifically in this segment, and that will impact everyone’s life in a positive way. 

Gardner: Speaking of impacting people’s lives, I have observed that the IT operator is being greatly impacted by innovation. The very nature of their job is changing. For example, I recently spoke with Gary Thome, CTO for Composable Cloud at HPE, and he said that composability allows for the actual consumers of applications to compose their own supporting infrastructure.

Because of ease, automation, and intelligence, we don’t necessarily need to go to IT to say, “Set up XYZ infrastructure with these requirements.” Using composability, we can move innovation to the very people who are in the most advantageous position to define what it is they need.

Thomas, how do you see innovation impacting the very definition of what IT people do?

No more mundane tasks 

Goepel: This is a very positive impact, and I will give you a really good example. I spend a lot of time talking to customers and to a lot of IT people out there. And I have never encountered a single systems administrator in this industry who comes to work in the morning and says, “You know, I am so happy that I am here this morning so I can do a backup of my environment. It’s going to take me four hours, and I am going to be the happiest person in the world if the backup goes through.” Nobody wants to do this. 

Nobody goes to work in the morning and says, “You know, I really hope I get a hard problem to solve, like my network crashes and I am going to be the hero in solving the problem, or by making a configuration change in my virtual environment.”

These are boring tasks that nobody looks forward to, but we have to do them because we don’t have the right automation or management tools in our environments. We hand a lot of these mundane tasks to our administrators, and they don’t really look forward to them.


Innovation takes these burdens away from the systems administrator and frees up their time to do things that are not only more interesting, but also add to the bottom line of the company. They can better help drive the businesses and spend IT resources on something that makes the difference for the company’s bottom line.

Ultimately, you don’t want to be the one watching backups going through or restoring files. You want this to be automatic, with a couple of clicks, and then you spend your time on something more interesting.

Every systems administrator I talk to really likes the new ways. I haven’t seen anyone coming back to me and saying, “Hey, can you take this automation away and all this hyperconvergence away? I want to go back to the old way and do things manually so I know how to spend my eight hours of the day.” People have much more to do with the hours they have. This is just freeing them up to focus on the things that add value.

HCI to make IT life easier and easier 

Gardner: Before we close out, Thomas, how about some forward-looking thoughts about what innovation is going to bring next to HCI? We talked about the edge and intelligence, but is there more? What are we going to be talking about when it comes to innovation in two years in the HCI space?

Goepel: I touched on the edge. I think there will be a lot happening across the entire edge space, where HCI will clearly be able to make a difference. We will take advantage of the capabilities that HCI brings to all these segments — and that will drive innovation outside of the hyperconverged world as well, enabled by HCI.

But there are a couple of other things to look at. Self-healing using AI in IT troubleshooting, I think, will become a big innovation point in the HCI industry. What we are doing with HPE InfoSight is a start, but there is much more to come. This will continue to make the life of the systems administrator easier.


Ideally, we want HCI as a platform to be almost invisible to the end user because they shouldn’t care about the infrastructure. It will behave like a cloud, but just be on-premises and private, and in a better, more controlled way.

The next element of innovation you will see is HCI acting very much like a cloud environment. Some of the first steps are what we are doing around composability. This will drive toward changing the personality of the infrastructure depending on the workload needed. It becomes a huge pool of resources. And if you need it to look like a bare-metal server or a virtual server — a big one or a small one — you can just change it, all controlled by software. I think that innovation element will then enable a lot of other innovations on top of it.


If you take these three elements — AI, composability of the infrastructure, and driving that into the edge use cases — that will enable a lot of business innovation. It’s like the three legs of a stool. And that will help us drive even further innovation.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


How Ferrara Candy depends on automated IT intelligence to support rapid business growth

The next BriefingsDirect Voice of the Customer IT modernization journey interview explores how a global candy maker depends on increased insight for deploying and optimizing servers and storage.

We’ll now learn how Ferrara Candy Company boosts its agility as a manufacturer by expanding the use of analysis and proactive refinement in its data center operations, bringing more intelligence to IT infrastructure.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

Stay with us to hear about unlocking the potential for end-to-end process and economic efficiency with Stefan Floyhar, Senior Manager of IT Infrastructure at Ferrara Candy Co. in Oakbrook Terrace, Illinois. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What are the major reasons Ferrara Candy took a new approach in bringing added intelligence to your servers and storage operations?

Floyhar: The driving force behind utilizing intelligence at the infrastructure level specifically was to alleviate the firefighting operations that we were constantly undergoing with the old infrastructure.

Gardner: And what sort of issues did that entail? What was the nature of the firefighting?

Floyhar: We were constantly addressing infrastructure-related hardware failures and firmware issues, and we had no visibility into true growth factors. That included not knowing what was happening on the back end during an outage or a performance problem. We lacked visibility into true, real-time, fully scalable performance data.

Gardner: There’s nothing worse than being caught up in reactive firefighting mode when you’re also trying to be innovative, re-architect, and adjust to things like mergers and growth. What were some of the business pressures that you were facing even as you were trying to keep up with that old-fashioned mode of operations?

IT meets expanded candy demands

Floyhar: We have undergone a significant amount of growth in the last seven years — going from 125 virtual machines to 452, as of this morning. Those 452 virtual machines are all application-driven and application-specific. As we continued to grow, as we continued to merge and acquire other candy companies, that growth exploded exponentially.

The merger with Ferrara Pan Candy, and Farley’s and Sathers in 2012, for example, saw an initial growth explosion. More recently, in 2017 and 2018, we were acquired by Ferrero. We also acquired Nestlé Confections USA, which has essentially doubled the business overnight. The growth is continuing at an exponential rate.

Gardner: The old mode of IT operations just couldn’t keep up with that dynamic environment?

Floyhar: That is correct, yes.

Gardner: Ferrara Candy might not roll off the tongue for many people, but I bet they have heard a lot of your major candy brands. Could you help people understand how big and global you are as a confectionery manufacturer by letting us know some of your major brands?

Floyhar: We are the producers of Now and Later, Lemonheads, Boston Baked Beans, Atomic Fireballs, Bob’s Candy Canes, and Trolli Gummies, which is one of our major brands. We also recently acquired the Crunch Bar, Butterfinger, 100 Grand, Laffy Taffy, and Willy Wonka brands, among others.

We produce a little over 1 million pounds of gummies per week, and we are currently utilizing 2.5 million square feet of warehousing.


Gardner: Wow! Some of those brands bring me way back. I mean, I was eating those when I was a kid, so those are some age-old and favorite brands.

Let’s get back to the IT that supports that volume and diversity of favorite confections. What were some of the major drivers that brought you to a higher level of automation, intelligence, and therefore being able to get on top of operations rather than trying to play catch up?

Floyhar: We have a very lean staff of engineers. That forced us to seek the next generation of product, specifically around artificial intelligence (AI) and machine learning (ML). We absolutely needed that because we’re growing at this exponential rate. We needed to take the focus off of infrastructure-related tasks and leverage technology to manage and operate the application stack and get it up to snuff. And so that was the major driving force for seeking AI [in our operations and management].

Gardner: And when you refer to AI you are not talking about helping your marketers better factor which candy to bring into a region. You are talking about intelligence inside of your IT operations, so AIOps, right?

Floyhar: Yes, absolutely. We looked at things like Hewlett Packard Enterprise (HPE) InfoSight and some of the other providers with cloud-based operations for failure metrics and growth projections. We needed somebody with proven metrics. Proven technology was a huge factor in product determination.

Gardner: How about storage specifically? Was that something you targeted? It seems a lot of people need to reinvent and modernize their storage and server infrastructure in tandem and coordination.

Floyhar: Storage was actually the driving factor for us. It’s what started the whole renovation of IT within Ferrara. With our older storage, we constantly suffered bottlenecks with administrative tasks and lacked visibility into what was going on.


Storage drove that need for change. We looked at a lot of different storage area networks (SANs) and providers — everything from HPE Nimble to Pure, VNX, Unity, Hitachi, … insert major SAN provider here. We spent six or so months researching, working with those vendors, doing proofs of concept (POCs), and looking at different products to truly determine the best storage solution for Ferrara.

During that discovery process, during that research, HPE InfoSight really jumped off the page at us. That level of AI, the proven track record, being able to produce data around my actual work loads. I needed real-life examples, not a sales and marketing pitch.

Having a demo and seeing that data delivered on the fly, on request, was absolutely paramount in making our decision.

Gardner: And, of course, InfoSight, was a part of Nimble Storage and Nimble became acquired by HPE. Now we are even seeing InfoSight technology being distributed and integrated across HPE’s broad infrastructure offerings. Is InfoSight something that you are happy to see extended to other areas of IT infrastructure?

Floyhar: Yes, ever since we adopted the Nimble Storage solution I have been waiting for InfoSight to be adopted elsewhere. Finally it’s been added across the ProLiant series of servers. We are an HPE ProLiant DL560 shop.

I am ultra-excited to see what that level of AI brings for predictive failure monitoring, which is essentially going to help us avoid downtime. Any time we can predict a failure, it’s obviously better than a reactive approach where something fails and then we have to replace it.

Gardner: Stefan, how do you consume that proactive insight? What does InfoSight bring in terms of an operations interface? Or have you crafted a new process in your operations? How have you changed your culture to accommodate such a proactive stance? As you point out, being proactive is a fairly new way of avoiding failures and degraded performance.

Proactivity improves productivity

Floyhar: A lot of things have changed with that proactivity. First, the support model, with the automatic opening and closure of tickets with HPE support. The Nimble support is absolutely fantastic. I don’t have to wait for something reactive at 2 am, and then call HPE support. The SAN does it for me; InfoSight does it for me. It automatically opens the ticket and an engineer calls me at the beginning of my workday.

No longer are we getting interrupted with those 2, 3, 4 am emergency calls because our monitoring platform has notified us that, “Hey, a disk failed or looks like it’s going to fail.” That, in turn, has led to a complete culture change within my team. It takes us away from that firefighting, the constant, reactive methodologies of maintaining traditional three-tier infrastructure and truly into leveraging AI and the support behind it.

We are now able to turn the corner from reactive to proactive, including on applications redesign or re-work, or on tweaking performance improvements. We are taking that proactive approach with the applications themselves, which has rolled even further downhill to our end users and improved their productivity.

In the last six months, we have received significant praise for the applications performance, based on where it was three years ago compared with today. And, yes, part of that is because of the back-end upgrades in the infrastructure platform, but it’s also because we’ve been able to focus more on applications administration tasks and truly make it a more pleasant experience for our end users: less pain, less latency, fewer issues.

Gardner: You are a big SAP shop, so that improvement extends across all of your operations, to your logistics and supply chain, for example. How does having a stronger sense of confidence in your IT operations give you benefits on business-level innovation?

Floyhar: As you mentioned, we are a large SAP shop. We run any number of SAP-insert-acronym-here systems. Being proactive on addressing some of the application issues has honestly meant less downtime for the applications. We have seen four- and five-9s (99.99 to 99.999 percent) uptime from an application availability perspective.

Whether using HPE InfoSight or standard notifications, we have been able to proactively catch a number of issues that would have caused downtime, even downtime as minimal as 30 minutes. When you start talking about an operation that runs 24×7, 360 days a year, and truly depends on SAP as its backbone, SAP is the lifeblood of what we do on a business operations basis.

So 30 minutes makes all the difference on the production floor. Being able to turn that support corner has absolutely been critical in our success.
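
Those availability figures translate into a very small error budget. A quick back-of-the-envelope calculation, using the 24×7, 360-day operating year Floyhar cites, shows why a single 30-minute outage matters:

```python
# Downtime budget at "four 9s" and "five 9s" for a 24x7, 360-day year.
HOURS_PER_YEAR = 24 * 360  # 8,640 operating hours

for label, availability in [("four 9s (99.99%)", 0.9999),
                            ("five 9s (99.999%)", 0.99999)]:
    downtime_min = HOURS_PER_YEAR * 60 * (1 - availability)
    print(f"{label}: ~{downtime_min:.0f} minutes of downtime per year")

# four 9s (99.99%):  ~52 minutes of downtime per year
# five 9s (99.999%): ~5 minutes of downtime per year
# One 30-minute outage consumes most of a four-9s annual budget.
```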

Gardner: Let’s go back to data. When it comes to having storage confidence, you can extend that confidence across your data lifecycle. It’s not just storage and accommodating key mission-critical apps. You can start to modernize and gain efficiencies through backup and recovery, and to making the right cache and de-dupe decisions.

What’s it been like to extend your InfoSight-based intelligence culture into the full data lifecycle?

Sweet, simplified data backup and recovery

Floyhar: Our backup and recovery has gotten significantly less complex — and significantly faster — using Veeam with the storage API and Nimble snapshots. Our backup window went from about 22.5 hours a day, which was less than ideal, obviously, down to less than 30 minutes for a lot of our mission-critical systems.

We are talking about 8-10 terabytes of Microsoft Exchange data, 8-10 terabytes of SAP data — all being backed up, full backups, in less than 60 minutes, using Veeam with the storage API. Again, it’s transformed how much time and how much effort we put into managing our backups.
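
The scale of that change is easy to check with rough arithmetic. Taking roughly 9 TB as the midpoint of the figures above, the sustained throughput implied by the old and new windows differs by more than 20x:

```python
# Implied sustained backup throughput, old window vs. new window.
TB = 10**12  # decimal terabyte, in bytes
dataset_bytes = 9 * TB  # midpoint of the 8-10 TB figures above

for label, window_hours in [("old 22.5-hour window", 22.5),
                            ("new sub-60-minute window", 1.0)]:
    gb_per_s = dataset_bytes / (window_hours * 3600) / 10**9
    print(f"{label}: ~{gb_per_s:.2f} GB/s sustained")

# old 22.5-hour window:     ~0.11 GB/s sustained
# new sub-60-minute window: ~2.50 GB/s sustained
# The new window is feasible because storage-side snapshots shift the heavy
# lifting off the backup server and the production network.
```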

Again, we have turned the corner on managing our backups on an exception basis. So now it’s only upon failure. We have gained that much trust in the product and the back-end infrastructure.

We specifically watch for failure, and any time something comes up that’s what we address, as opposed to watching everything 100 percent of the time to make sure that it’s all working. Outside of the backups, just about every application has seen significant performance increases.

Gardner: Thinking about the future, a lot of organizations are experimenting more with hybrid cloud models and hybrid IT models. One of the things that holds them up from adoption is not feeling confident about having insight, clarity, and transparency across these different types of systems and architectures.

Does what HPE InfoSight and similar technologies bring to the table give you more confidence to start moving toward a hybrid model, or at least experimenting in that direction for better performance in price and economic payback?

Headed to hybrid, invested in IoT

Floyhar: Yes, absolutely, it does. We started to dabble in the cloud, and a mixed, hybrid infrastructure, a few years before Nimble came into play. We now have a significantly larger cloud presence. And we were able to scale that cloud presence easily specifically because of the data. With our growth trending and all of the pieces involved with InfoSight, we were able to use that data to scale out and know what it looks like from a storage perspective on Amazon Web Services (AWS).

We started with SAP HANA out in the cloud, and now we’re utilizing some of that data on the back end. We are able to size and scale significantly better than we ever could have in the past, so it has actually opened up the door to adopting a bit more cloud architecture for our infrastructure.
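
A minimal sketch, assuming monthly capacity samples and a simple linear trend, of the growth-trend sizing Floyhar describes; the figures are invented for illustration:

```python
def project_capacity(samples_tb: list, months_ahead: int) -> float:
    """Fit a least-squares line to monthly usage and project it forward."""
    n = len(samples_tb)
    xs = list(range(n))
    mean_x = sum(xs) / n
    mean_y = sum(samples_tb) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples_tb)) \
        / sum((x - mean_x) ** 2 for x in xs)
    future_x = n - 1 + months_ahead  # months past the last sample
    return mean_y + slope * (future_x - mean_x)

# Twelve months of used capacity, in TB (illustrative numbers only):
history = [40, 42, 45, 46, 49, 52, 54, 57, 59, 63, 65, 68]
print(f"Projected need in 12 months: ~{project_capacity(history, 12):.0f} TB")
```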

Gardner: And looking to the other end from cloud, core, and data center: for manufacturers like yourselves, and in the large warehouse environments you have described, the Internet of Things (IoT) is increasingly in demand. You can place sensors and measure things in ways we didn’t dream of before.

Even though IoT generates massive amounts of data — and it’s even processed at the edge — have you gained confidence to take these platform technologies in that direction, out to the edge, and expect that you can gain end-to-end insights, from edge to core?

Floyhar: The executives at our company have deemed that data is a necessity. We are a very data-driven company. Manufacturers of our size are truly benefiting from IoT and that data. People say “big data,” or insert-common-acronym-here. Everybody talks about big data, but nobody truly understands what that term means.

With our executives, we have gone through the entire process and said, “Hey, you know what? We have actually defined what big data means to Ferrara. We are going to utilize this data to help drive leaner manufacturing processes, to help drive higher-quality products out the door every single time to achieve an industry standard of quality that quite frankly has never been met before.”

We have very lofty goals for utilizing this data to drive the manufacturing process. We are working with a very large industrial automation company to assist us in utilizing IoT, not quite edge computing yet, but we might get there in the next couple of years. Right now we are truly adopting the IoT mentality around manufacturing.

And that is, as you mentioned, a huge amount of data. But it is also a very exciting opportunity for Ferrara. We make candy, right? We are not making cars, or tanks, or very expansive computer systems. We are not doing that level of intricacy. We are just making candy.

But to be able to leverage the machine data at almost every inch of the factory floor? If we could get that and utilize it to drive end-to-end process efficiency and manufacturing efficiencies? It not only helps us produce a better-quality product faster, it’s also environmentally conscious, because there will be less waste, if any waste at all.

The list of wonderful things that comes out of this goes on and on. It really is an exciting opportunity. We are trying to leverage that. The intelligent back-end storage and computer systems are ultra-imperative to us for meeting those objectives.

Gardner: Any words of advice for other organizations that are not as far ahead as you are when it comes to going to all-flash and highly intelligent storage — and then extending that intelligence into an AIOps culture? With 20/20 hindsight, for those organizations that would like to use more AIOps, who would like to get more intelligence through something like HPE InfoSight, what advice can you give them?

Floyhar: First things first — use it. For even small organizations, all the way up to the largest of organizations, it may almost seem like, “Well, what is that data really going to be used for?” I promise, if you use it, it is greatly beneficial to your IT operations.

If you don’t have it — get it. It’s very important. This is the future of technology. Using AI to predictively analyze all of the data — not just from your environment — but being able to take a conglomerate view of customer data and keep it together and use predictive analytics – that truly does allow IT organizations to turn the corner from reactive to proactive.

Historically we would constantly be fighting infrastructure-related issues — outages, performance bottlenecks, and so on. With the AI behind HPE InfoSight, and other providers, including cloud platforms, the AI makes all the difference. You don’t have to fight that fight when it becomes a problem because you get to nip it in the bud.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

As price transparency grows inevitable, healthcare providers need better tools to close the gap on patient trust

The next BriefingsDirect healthcare finance insights discussion explores ways that healthcare providers can become more proactive in financial and cost transparency from the patient perspective.

By anticipating rather than reacting to mandates on healthcare economics and process efficiencies, providers are becoming more competitive and building more trust and satisfaction with their patients — and caregivers.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To learn more about the benefits of a more proactive and data-driven approach to healthcare cost estimation, we are joined by expert Kate Pepoon, Manager of Revenue Cycle Operations at Baystate Health in Springfield, Mass., and Julie Gerdeman, President of HealthPay24 in Mechanicsburg, Penn. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: We are at the point with healthcare and medical cost transparency that the finger, so to speak, has been pulled out of the dike. We have had mandates and regulations, but it’s still a new endeavor. People are feeling their way through providing cost transparency and the need for more accurate estimations about what things will actually cost when you have a medical procedure.

Kate, why does it remain difficult and complex to provide accurate medical cost estimates?

Pepoon: It has to do with educating our patients. Patients don’t understand what a chargemaster is, which, of course, is the technical term for the data we are now required to post on our websites. For them to see a spreadsheet that lists 21,000 different codes and costs can be overwhelming. 

What Baystate Health does, as I’m sure most other hospitals in Massachusetts do, is give patients an option to call us if they have any questions. You’re right, this is in its infancy. We are just getting our feet wet. Patients may not even know what questions to ask. So we have to try and educate as much as possible.

Gardner: Julie, it seems like the intention is good, the idea of getting more information in peoples’ hands so they can make rational decisions, particularly about something as important as healthcare. The intent sounds great, but the implementation and the details are not quite there yet.

Given that providers need to become more proactive, look at the different parts of transparency, and make it user-friendly, where are we in terms of this capability? 

Gerdeman: We are still in the infancy. We had a race to the deadline, the Centers for Medicare and Medicaid Services (CMS) [part of the U.S. Department of Health and Human Services] deadline of Jan. 1, 2019. That’s when all the providers rushed to at least meet the bare minimum of compliance. A lot of what we have seen is just the publishing of the chargemaster with some context available.

But where there is competition, we have seen it taken a bit further. Where I live in Pennsylvania, for example, I could drive to a number of different healthcare providers. Because of that competition, we are seeing providers that don’t just provide context, they are leveraging the chargemaster and price transparency as competitive differentiation.

Gardner: Perhaps we should make clear that there are many areas where you don’t really have a choice and there isn’t much competition. There is one major facility that handles most medical procedures, and that’s where you go.

But that’s changing. There are places where it’s more of a marketplace, but that’s not necessarily the case at Baystate Health. Tell us why for your patients, they don’t necessarily do a lot of shopping around yet.

Clearing up charge confusion 

Pepoon: They don’t. That question you just asked Julie, it’s kind of the opposite for us because we have multiple hospitals. When we posted our chargemaster, we also posted it for our other three hospitals, not just for the main one, which is Baystate Medical Center (BMC). And that can create confusion for our patients as well.

We are not yet at the drive to be competitive with other area hospitals because BMC is the only level-1 trauma center in its geographical area. But when we had patients ask why costs are so different at our other hospitals, which are just 40 miles away, we had to step up and educate our staff. And that was largely guiding patients as to the difference between a chargemaster price and what they are actually going to pay. And that is more an estimate of charges from their particular insurance.

We have not yet had a lot of questions from patients, but we anticipate it will definitely increase. We are ready to answer the questions and guide our patients.

Gardner: The chargemaster is just a starting point, and not necessarily an accurate one from the perspective of an outsider looking in.

But it began the process to more accurate price transparency. And even while there is initially a regulatory impetus, one of the biggest drivers is gaining trust, loyalty, and a better customer experience, a sense of confidence about the healthcare payments process.

Julie, what does it take to get past this point of eroding trust due to complexity? How do we reverse that erosion and build a better process so people feel comfortable about how they pay for their healthcare?

Gerdeman: There is an opportunity for providers to create a trusted, unique, and personalized experience, even with this transparency regulation. In any experience when you are procuring goods and services, there is a need for information. People want to get information and do research. This has become an expectation now with consumerization — a superior consumer experience.

And what Kate described for her staff, that’s one way of providing a great experience. You train the staff. You have them readily available to answer questions to the highest level of detail. That’s necessary and expected by patients.

There is also a larger opportunity for providers, even just from a marketing perspective. We are starting to see healthcare providers define their brand uniquely and differently.  And patients will start to look for that brand experience. Healthcare is so personal, and it should be part of a personalized experience.

Gardner: Kate, I think it’s fair to say that things are going to get even more challenging.  Increasingly, insurance companies are implementing more co-pays, more and different deductibles, and offering healthcare plans that are more complex overall. 

What would you like to see happen in terms of the technologies and solutions that come to the market to help make this process better for you and your patients?

Accounting for transparency 

Pepoon: Dana, transparency is going to be the future. It’s only going to get more … transparent.

This infancy stage of the government attempting to help educate consumers — I think it was a great idea. The problem is that it did not come with a disclaimer. Now, each hospital is required to provide that disclaimer to help guide patients. The intent was fantastic, but there are so many different ways to look at the information provided. If you take it at face value, it can be quite shocking.

I heard a great anecdote recently, that a patient can go online and look at the chargemaster and see that aspirin is going to cost them $100 at a hospital. Obviously, you are taken aback. But that’s not the actual cost to a patient.

There needs to be much more robust education regarding what patients are looking at. Technology companies can help bring hospitals to the next level and assist with the education piece. Patients have to understand that there is a whole other layer, which is their actual insurance.

In Massachusetts we are pretty lucky because 12 years ago, then-Governor Mitt Romney [led a drive to bring health insurance to almost everyone]. Because of that, the share of self-pay patients has been reduced to the lowest level in the entire United States. Only around two to three percent of our patients don’t have insurance.

One of the benefits that other states see from the published chargemaster list is better engagement with patients and the chance to have conversations. Patients can say, “Well, I don’t have insurance and I would like to shop around. I’ll go with Hospital B, because Hospital A is $2,000 for the procedure and Hospital B is only $1,500.”

But Massachusetts, as part of its healthcare laws, further dedicates itself to educating patients about their benefits. MassHealth, the Medicaid program of Massachusetts, requires hospitals to have certified financial counselors.

Those counselors are there to help with patient benefits and answer questions like, “Is this going to cost me $20,000?” No, because if you sign up for benefits or based on the benefits you have, it’s not going to cost you that much. That chargemaster is more of a definition of what is charged to insurance companies.
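
To illustrate the distinction the counselors are drawing, here is a hedged sketch of how a patient's actual cost diverges from the chargemaster figure once a negotiated rate and benefit design are applied. The discount, deductible, coinsurance, and out-of-pocket figures are invented for illustration, not any real plan's terms:

```python
def estimate_patient_cost(chargemaster_price: float,
                          negotiated_discount: float,
                          deductible_remaining: float,
                          coinsurance_rate: float,
                          oop_max_remaining: float) -> float:
    """Estimate out-of-pocket cost for an insured patient (illustrative model)."""
    allowed = chargemaster_price * (1 - negotiated_discount)  # insurer's rate
    deductible_part = min(allowed, deductible_remaining)
    coinsurance_part = (allowed - deductible_part) * coinsurance_rate
    return min(deductible_part + coinsurance_part, oop_max_remaining)

# A $20,000 chargemaster line might translate to far less for the patient:
print(estimate_patient_cost(20_000, negotiated_discount=0.60,
                            deductible_remaining=1_500,
                            coinsurance_rate=0.20,
                            oop_max_remaining=4_000))
# -> 2800.0: $1,500 of deductible plus 20% of the remaining $6,500 allowed
```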

The fear is that this is not so easily explained to patients. Patients don’t always even get to the point where they ask questions. If they think that something is going to cost $20,000, they may just move on.

Gardner: The sticker shock is something you have to work with them on and bring them back to reality by looking at the particulars of their insurance as well as their location, treatment requirements, and the specific medical issues. That’s a lot of data, a lot of information to process.

Not only are the patients shopping for healthcare services, they will also be shopping for their next insurance policy. The more information, transparency, and understanding they have about their health payments, the better shopper they become the next time they pick an insurance company and plan. These are all choices. This is all data-driven. This is all information-dependent. 

So Julie, why is it so hard in the medical setting for that data to become actionable? We know in other businesses that it’s there. We know that we can even use machine learning (ML) and artificial intelligence (AI) to predict the weather, for example. And the way we predict the weather is we look at what happened the last 500 times a storm came up the East Coast as an example that sets a pattern.

Where do we go next? How can the same technologies we use to predict the weather be brought to the medical data processing problem?

Gerdeman: Kate said it well that transparency is here, and transparency is the future. But, honestly, transparency is table stakes at this point.

CMS has already indicated that they expect to expand the pricing transparency ruling to require even more. This was just the first step. They know that more has to be done to address complexity for patients as consumers.

Technology is going to play a critical role in all of this, because when you reference things like predicting the weather and other aspects of our lives, they all leverage technology. They look back in order to look forward. The same is true for healthcare, and the same approach will be used there. It’s already starting to happen.

So [patient support] teams like Kate’s use estimation tools to provide the most accurate costs possible to patients in advance of services and procedures. HealthPay24 has been involved as part of our mission, from pre-service to post-service, in that patient financial engagement.

But it is in arming providers and their staffs with that [predictive] technology that is most important for making a difference in the future. There will always be complexities in healthcare. There will always be things happening during procedures that physicians and surgeons can’t anticipate, and that’s where there will be modifications made later.

But given what we know of the costs around the 5,000 knee replacements some healthcare provider might already have done, I think we can begin to provide forward-looking data to patients so that they can make informed decisions like they never have before by comparing all of that.
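
A minimal sketch of that idea, using simulated history in place of real claims data: quote a typical cost and a range from the provider's own past procedures rather than a single chargemaster number.

```python
import random
import statistics

random.seed(7)
# Simulated allowed amounts for 5,000 past knee replacements, in dollars.
# A real estimator would draw on actual claims history, not a simulation.
history = sorted(random.gauss(23_000, 3_500) for _ in range(5_000))

low = history[len(history) // 10]         # 10th percentile
high = history[(len(history) * 9) // 10]  # 90th percentile
print(f"Typical cost: ~${statistics.median(history):,.0f}")
print(f"80% of cases fell between ~${low:,.0f} and ~${high:,.0f}")
```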

Gardner: We know from other industries that bringing knowledge and usability works to combat complexity. And one of the places that can be most powerful is for a helpdesk. Those people are on the other end of a telephone or a chatbot from consumers — whether you are in consumer electronics or information technology.

It seems to me that those people at Baystate Health, mandated by the Commonwealth of Massachusetts, who help patients are your helpdesk. So what tools would you like to see optimally in the hands of those people who are explaining away this complexity for your patients?

How to ask before you pay 

Pepoon: That’s a great question. Step one, I would love to see some type of education, perhaps a video from some hospitals if they partnered together, that helps patients understand what it is they are about to look at when they look at a chargemaster and the dollar amounts associated with certain procedures.

That’s going to set the stage for questions to come back through to the staff that you mentioned, the helpdesk people, who are there ready and willing to respond to patients.

But there is another problem with that. The problem is that these are moving targets. People like black-and-white. People like, “This is definitely what I’m going to pay,” before they get a procedure done.

We have heard the comparison to buying a car. This is very similar: educating yourself in advance, looking for a specific model you may like, going to different dealers, looking it up online, seeing what you’re going to pay, and then negotiating that before you buy the car.

That’s the piece that’s missing from this healthcare process. You can’t yet negotiate on it. But in the future – with the whole transparency thing, you never know. But it’s that moving target that’s going to make this hard to swallow for a lot of patients because, obviously, this is not like buying a car. It’s your life, it’s your health.

The future is going to have more price transparency. And the future is also going to bring higher costs to patients regardless of who they are and what plan they have. Plans 10 years ago didn’t have deductibles. The plans we had 10 years ago had a $5 co-pay; now those plans have a $60 co-pay and a $5,000 deductible.

That’s the direction our healthcare climate is moving to. We are only going to see more cost burdens on patients. As people realize they are going to need to pay out more money for their own healthcare services, it’s going to bring a greater sense of concern.

So, when they do call and talk to that helpdesk, it’s really important for all of us in all of our hospitals to make sure that we are answering patients properly. It was an amazing idea to have this new transparency, but we need to explain what it means. We need to be able to reach out personally to patients and explain what it is they are about to look at. That’s our future.

Gerdeman: I would just like to add that at HealthPay24 we work with healthcare providers all across the country. There are areas that have already had to do this. They have had to be proactive and jump into a competitive landscape with personalized marketing materials.

We are starting to see educational videos in places like Pennsylvania using the human touch, and the approach of, “Yes, we recognize that you’re a consumer, and we recognize that you have a choice.” They have even gone to the extent of creating online price-checkers and charge-checkers that give people the flexibility, from their homes, of conveniently clicking a box from a chargemaster to determine the procedure or service they are to be receiving. They can furthermore check those charges across multiple hospitals that are competing and that are making those calculators available to consumers proactively.

Gardner: I’m sensing a building urgency around this need for transparency from large organizations like Baystate Health. And they are large, with service providers in both Western Massachusetts as well as the “Knowledge Corridor” of Massachusetts and Connecticut. They have four hospitals, 80 medical practices, 25 reference laboratories, 12,000 employees, and 1,600 physicians.

They have a sense of urgency but aren’t yet fully aware of what is available and how to solve this problem. It’s a big opportunity. I think we can all agree it’s time now to be proactive and recognize what’s required to make transparency happen and be accurate.

What do you recommend, Kate, for organizations to be more proactive, to get out in front of this issue? How can vendors in the marketplace such as Julie and HealthPay24 help?

Use IT to explain healthcare costs

Pepoon: There needs to be a better level of education at the place where patients go to look at medical charge prices. That forms a disclaimer, in a way, of, “Listen, this is what you are about to look at. It’s a little bit like jargon, and that’s okay. You are going to feel that way because this is raw data coming from a hospital, and a lot of people have to go to school for a very long time to read and understand what it is that they are looking at.”

And I think there has to be a way that we can keep patients focused and able to call and ask questions. That’s going to help.

For the technology side going forward, I am very interested to see what it’s going to look like in about a year. I want to see the feedback from other hospitals and providers in Massachusetts as to how this has gone. Today, quite frankly, when I was doing research for us at Baystate, I reached out to find out what questions patients are asking. Patients are not really calling that much to talk about this subject yet. I don’t know if that’s a good thing or a bad thing. I think that’s a sentiment most hospitals in Massachusetts are feeling right now.

I don’t think there is one hospital system that’s ahead of the curve or running toward the goal of plastering all of this data out there. I don’t think everybody knows what to do with it yet. IT companies and partners that we have — our technical partners like HealthPay24 – can help take jargon and put it into some version that is easily digestible.

That is going to be the future. It ties back to the question of: Is transparency going to be the wave of the future? And the answer is absolutely, “Yes.” But it’s all about who can read the language. If Julie and I are the only two people in a room who can read the language, we are letting our patients down.

Gardner: Well, engineering complexity out is one of the things the technology does very well. Software has been instrumental in that for the past 15 or 20 years.

Julie, as we end our discussion, for organizations like Baystate Health that want to be more proactive, to be able to answer those patient phone calls in the best way, what do you recommend? What can healthcare provider organizations start doing to be in front of this issue when it comes to accurate and transparent healthcare cost information? 

Gerdeman: There is a huge opportunity to look at technology available today, as well as emerging technology and where it’s headed. If history proves anything, Dana, to your point, it’s that technology can provide new levels of clarity and reduce complexity. You can digitize processes that were completely manual and where everything needed to be done on the phone, via fax, and on paper.

In healthcare, there’s a big opportunity to embrace technology to become more proactive. We talk about being proactive, and it really means to stop reacting and take a strategic approach, just like in IT architectures of the past. When you take that strategic approach you can look at processes and workflows and see what can be completely digitized and automated in new ways. I think that’s a huge opportunity.

I also don’t want to lose sight of the humane aspect because this is healthcare and we are all human, and so it’s personal. But again, technology can help personalize experiences. People may not be calling because they want access online via their phone, or they want everything to be mobile, simple, beautiful, and digital because that’s what we increasingly experience in all of our lives.

Providers have a great opportunity to leverage technology to make things even more personal and humane and to differentiate themselves as brands, in Massachusetts and all across the country as they become leading brands in healthcare.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HealthPay24.

How Texmark Chemicals pursues analysis-rich, IoT-pervasive path to the ‘refinery of the future’

The next BriefingsDirect Voice of the Customer discussion revisits the drive to define the “refinery of the future” at Texmark Chemicals.

Texmark has been combining the best of operational technology (OT) with IT and now Internet of Things (IoT) to deliver data-driven insights that promote safety, efficiency, and unparalleled sustained operations.

Stay with us now as we hear how a team approach — including the plant operators, consulting experts, and the latest in hybrid IT systems — joins forces for rapid process and productivity optimization results.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn how, we are joined by our panel: Linda Salinas, Vice President of Operations at Texmark Chemicals, Inc. in Galena Park, Texas; Stan Galanski, Senior Vice President of Customer Success at CB Technologies (CBT) in Houston, and Peter Moser, IoT and Artificial Intelligence (AI) Strategist at Hewlett Packard Enterprise (HPE). The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Stan, what are the trends, technologies, and operational methods that have now come together to make implementing a refinery of the future approach possible? What’s driving you to be able to do things in ways that you hadn’t been able to do before?

Galanski: I’m going to take that in parts, starting with the technologies. We have been exposed to an availability of affordable sensing devices. These are proliferating in the market these days. In addition, the ability to collect large amounts of data cheaply — especially in the cloud — having ubiquitous Wi-Fi, Bluetooth, and other communications have presented themselves as an opportunity to take advantage of.

On top of this, the advancement of AI and machine learning (ML) software — often referred to as analytics — has accelerated this opportunity.

Gardner: Linda, has this combination of events dramatically changed your perspective as VP of operations? How has this coalescing set of trends changed your life?

Salinas: They have really come at a good time for us. Our business, and specifically Texmark, has morphed over the years to where our operators are more broadly skilled. We ask them to do more with less. They have to have a bigger picture as far as operating the plant.

Today’s operator is not just sitting at a control board running one unit. Neither is an operator just out in a unit, keeping an eye on one tower or one reactor. Our operators are now all over the plant operating the entire utilities and wastewater systems, for example, and they are doing their own lab analysis.

This technology has come at a time that provides information that’s plant-wide so that they can make more informed decisions on the board, in the lab, whenever they need.

Gardner: Peter, as somebody who is supplying some of these technologies, how do you see things changing? We used to have OT and IT as separate, not necessarily related. How have we been able to make those into a whole greater than the sum of their parts?

OT plus IT equals success 

Moser: That’s a great question, Dana, because one of the things that has been a challenge with automation of chemical plants is these two separate towers. You had OT very much separate from IT.

The key contributor to the success of this digitization project is the capability to bring those two domains together successfully.

Gardner: Stan, as part of that partnership, tell us about CBT and how you fit.

Galanski: CBT is a 17-year-old, privately owned company. We cut our teeth early on by fulfilling high-tech procurement orders for the aerospace industry. During that period, we developed a strength for designing, testing, and installing compute and storage systems for those industries and vendors.

It evolved into developing an expertise in high-performance computing (HPC), software design platforms, and so forth.

About three years ago, we recognized that the onset of faster computational platforms and massive amounts of data — and the capability for software to control that dataflow — was changing the landscape. Now, somebody needed to analyze that data faster over multiple mediums. Hence, we developed a practice around comprehensive data management and combined that with our field experience. That led us to become a systems integrator (SI), which is the role we play on this refinery of the future project.

Gardner: Linda, before we explore more on what you’ve done and how it improves things, let’s learn about Texmark. With a large refinery operation, any downtime can be a big problem. Tell us about the company and what you are doing to improve your operations and physical infrastructure.

Salinas: Texmark is a family-owned company, founded in 1970 by David Smith. And we do have a unique set of challenges. We sit on eight acres in Galena Park, and we are surrounded by a bulk liquid terminal facility.

So, as you can imagine, a plant that was built in the 1940s has older infrastructure. The layout is probably not as efficient as it could be. In the 1940s, we didn’t have a need for wastewater treatment. Things may not have been laid out in the most efficient ways, and so we have added these things over the years. So, one, we are landlocked, and, two, things may not be sited in the most optimal way.

For example, we have several control rooms sprinkled throughout the facility. But we have learned that siting is an important issue. So we’ve had to move our control room to the outskirts of the process areas.

As a result, we’ve had to reroute our control systems. We have to work with what we have, and that presents some infrastructure challenges.

Also, like other chemical plants and refineries, the things we handle are hazardous. They are flammable, toxic, and they are not things people want to have in the air that they breathe in neighborhoods just a quarter-mile downwind of us.

So we have to be mindful of safe handling of those chemicals. We also have to be mindful that we don’t disrupt our processes. Finding the time to shut down to install and deploy new technology is a challenge. Chemical plants and refineries need to find the right time to shut down and perform maintenance with a very defined scope, and on a budget.

And that capability to come up and down effectively is a strength for Texmark: because we are a smaller facility, we are able to come up and down quickly to deploy, test, and prove out some of these technologies.

Gardner: Stan, in working with Linda, you are not just trying to gain incremental improvement. You are trying to define the next definition, if you will, of a safe, efficient, and operationally intelligent refinery.

How are you able to leapfrog to that next state, rather than take baby steps, to attain an optimized refinery?

Challenges of change 

Galanski: First we sat down with the customer and asked what key functions and challenges they had in their operations. Once they gave us that list, we then looked at the landscape of technologies and the available categories of information that we had at our disposal and said, “How can we combine this to have a significant improvement and impact on your business?”

We came up with five solutions that we targeted and started working on in parallel. They have proven to be a handful of challenges — especially working in a plant that’s continuously operational.

Based on the feedback we’ve received from their personnel, we feel we are on the right track. As part of that, we are attacking predictive maintenance and analytics by sensoring some of their assets, their pumps. We are putting video analytics in place by capturing video scenes of various portions of the plant that are very restricted but still need careful monitoring. We are looking at worker safety and security by capturing biometrics and geo-referencing the location of workers so we know they are safe or if they might be in danger.
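
A minimal sketch of that geo-referencing safety check, assuming workers report positions in plant coordinates; the zone names, coordinates, and radii are invented for illustration:

```python
import math

HAZARD_ZONES = [  # (name, x_meters, y_meters, radius_meters) in plant coordinates
    ("reactor-2", 120.0, 45.0, 15.0),
    ("acid-tank-farm", 60.0, 210.0, 25.0),
]

def check_worker(worker_id: str, x: float, y: float) -> list:
    """Return the hazard zones the worker is currently inside."""
    return [name for name, zx, zy, r in HAZARD_ZONES
            if math.hypot(x - zx, y - zy) <= r]

alerts = check_worker("op-17", 118.0, 50.0)
if alerts:
    print(f"ALERT: op-17 inside {alerts} - notify the control room")
```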

The connected worker solution is garnering a lot of attention in the marketplace. With it, we are able to bring real-time data from the core repositories of the company to the hands of the workers in the field. Oftentimes it comes to them in a hands-free condition where the worker has wearables on his body that project and display the information without them having to hold a device.

Lastly, we are tying this all together with an asset management system that tracks every asset and ties them to every unstructured data file that has been recorded or captured. In doing so, we are able to put the plant together and combine it with a 3D model to keep track of every asset and make that useful for workers at any level of responsibility.

Gardner: It’s impressive, how this touches just about every aspect of what you’re doing.

Peter, tell us about the foundational technologies that accommodate what Stan has just described and also help overcome the challenges Linda described.

Foundation of the future refinery

Moser: Before I describe what the foundation consists of, it’s important to explain what led to the foundation in the first place. At Texmark, we wanted to sit down and think big. You go through the art of the possible, because most of us don’t know what we don’t know, right?

You bring in a cross-section of people from the plant and ask, “If you could do anything what would you do? And why would you do it?” You have that conversation first and it gives you a spectrum of possibilities, and then you prioritize that. Those prioritizations help you shape what the foundation should look like to satisfy all those needs.

That’s what led to the foundational technology platform that we have at Texmark. We look at the spectrum of use cases that Stan described and say, “Okay, now what’s necessary to support that spectrum of use cases?”

But we didn’t start by looking at use cases. We started first by looking at what we wanted to achieve as an overall business outcome. That led us to say, “First thing we do is build out pervasive connectivity.” That has to come first because if things can’t give you data, and you can’t capture that data, then you’re already at a deficit.

Then, once you can capture that data using pervasive Wi-Fi with HPE Aruba, you need a data center-class compute platform that’s able to deliver satisfactory computational capabilities and support, accelerators, and other things necessary to deliver the outcomes you are looking for.

The third thing you have to ask is, “Okay, where am I going to put all of this compute and storage?” So you need a localized storage environment that’s controlled and secure. That’s where we came up with the edge data center. It was those drivers that led to the foundation from which we are building out support for all of those use cases.

Gardner: Linda, what are you seeing from this marriage of modernized OT and IT and taking advantage of edge computing? Do you have an ability yet to measure and describe the business outcome benefits?

Hands-free data at your fingertips 

Salinas: This has been the perfect project for us to embark on our IT-OT journey with HPE and CBT, and all of our ecosystem partners. Number one, we’ve been having fun.

Two, we have been learning about what is possible and what this technology can do for us. When we visited the HPE Innovation Lab, we saw very quickly the application of IT and OT across other industries. But when we saw the sensored pump, that was our “aha moment.” That’s when we learned what IoT and its impact meant to Texmark.

As for key performance indicators (KPIs), we gather data and we learn more about how we can employ IoT across our business. What does that mean? That means moving away from the clipboard and spreadsheet toward having the data wherever we need it — having it available at our fingertips, having the data do analytics for us, and telling us, “Okay, this is where you need to focus during your next precious turnaround time.”

The other thing is, this IoT project is helping us attract and retain talent. Right now it’s a very competitive market. We just hired a couple of new operators, and I truly believe that the tipping point for them was that they had seen and heard about our IoT project and the “refinery of the future” goal. They found out about it when they Googled us prior to their interview.

We just hired a new maintenance manager who has a lot of IoT experience from other plants, and that new hire was intrigued by our “refinery of the future” project.

Finally, our modernization work is bringing in new business for Texmark. It’s putting us on the map with other pioneers in the industry who are dipping their toe into the water of IoT. We are getting national and international recognition from other chemical plants and refineries that are looking to also do toll processing.

They are now seeking us out because of the competitive edge we can offer them, and for the additional data and automated processes that that brings to us. They want the capability to see real-time data, and have it do analytics for them. They want to be able to experiment in the IoT arena, too, but without having to do it necessarily inside their own perimeter.

Gardner: Linda, please explain what toll processing is and why it’s a key opportunity for improvement?

Collaboration creates confidence 

Salinas: Texmark produces dicyclopentadiene, butyl alcohol, propyl alcohol, and some aromatic solvents. But alongside the usual products we produce and sell, we also provide “toll processing services.” The analogy I like to tell my friends is, “We have the blender, and our customers bring the lime and tequila. Then we make their margaritas for them.”

So our customers will bring to us their raw materials. They bring the process conditions, such as the temperatures, pressures, flows, and throughput. Then they say, “This is my material, this is my process. Will you run it in your equipment on our behalf?”

When we are able to add the IoT component to toll processing, when we are able to provide them data that they didn’t have whenever they ran their own processes, that provides us a competitive edge over other toll processors.

Gardner: And, of course, your optimization benefits can go right to the bottom line, so a very big business benefit when you learn quickly as you go.

Stan, tell us about the cultural collaboration element, both from the ecosystem provider team support side as well as getting people inside of a customer like Texmark to perhaps think differently and behave differently than they had in the past.

Galanski: It’s all about human behavior. If you are going to make progress in anything of this nature, you are going to have to understand the guy sitting across the table from you, or the person out in the plant who is working in some fairly challenging environments. Also, the folks sitting at the control room table with a lot of responsibility for managing the processes with lots of chemicals for many hours at a time. 

So we sat down with them. We got introduced to them. We explained to them our credentials. We asked them to tell us about their job. We got to know them as people; they got to know us as people.

We established trust, and then we started saying, “We are here to help.” They started telling us their problems, asking, “Can you help me do this?” And we took some time, came up with some ideas, and came back and socialized those ideas with them. Then we started attacking the problem in little chunks of accomplishments.

We would say, “Well, what if we do this in the next two weeks and show you how this can be an asset for you?” And they said, “Great.” They liked the fact that there was quick turnaround time, that they could see responsiveness. We got some feedback from them. We developed a little more confidence and trust between each other, and then more things started pouring out a little at a time. We went from one department to another and pretty soon we began understanding and learning about all aspects of this chemical plant.

It didn’t happen overnight. It meant we had to be patient, because it’s an ongoing operation. We couldn’t inject ourselves unnaturally. We had to be patient and take it in increments so we could actually demonstrate success.

And over time you sometimes can’t tell the difference between us and some of their workers because we all come to meetings together. We talk, we collaborate, and we are one team — and that’s how it worked.

Gardner: On the level of digital transformation — when you look at the bigger picture, the strategic picture — how far along are they at Texmark? What would be some of the next steps? 

All systems go digital 

Galanski: They are now very far along in digital transformation. As I outlined earlier, they are utilizing quite a few technologies that are available — and not leaving too many on the table. 

So we have edge computing. We have very strong ubiquitous communication networks. We have software analytics able to analyze the data. They are using very advanced asset integrity applications to be able to determine where every piece, part, and element of the plant is located and how it’s functioning.

I have seen other companies where they have tried to take this only one chapter at a time, and they sometimes have multiple departments working on these independently. They are not necessarily ready to integrate or to scale it across the company.

But Texmark has taken a corporate approach, looking at holistic operations. All of their departments understand what’s going on in a systematic way. I believe they are ready to scale more easily than other companies once we get past this first phase.

Gardner: Linda, any thoughts about where you are and what that has set you up to go to next in terms of that holistic approach?

Salinas: I agree with Stan. From an operational standpoint, now that we have some sensored pumps for predictive analytics, we might sensor all of the pumps associated with any process, rather than just a single pump within that process.

That would mean in our next phase that we sensor another six or seven pumps, either for toll processing or our production processes. We won’t just do analytics on the single pump and its health, lifecycle, and when it needs to be repaired. Instead we look at the entire process and think, “Okay, not only will I need to take this one pump down for repair, but instead there are two or three that might need some service or maintenance in the next nine months. But the fuller analytics can tell me that if I can wait 12 months, then I can do them all at the same time and bring down the process and have a more efficient use of our downtime.”

I could see something like that happening.
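
A hedged sketch of that consolidation logic: given a predicted service month per pump, group any pumps whose predictions fall within one shutdown window. Pump names and predictions are illustrative only.

```python
def batch_maintenance(predictions: dict, window_months: int) -> list:
    """Group pumps whose predicted service month falls within one shutdown window."""
    batches, current, start = [], [], None
    for pump, month in sorted(predictions.items(), key=lambda kv: kv[1]):
        if start is None or month - start > window_months:
            if current:
                batches.append(current)  # close out the previous window
            current, start = [], month
        current.append(pump)
    if current:
        batches.append(current)
    return batches

predicted = {"pump-A": 9, "pump-B": 11, "pump-C": 12, "pump-D": 20}
print(batch_maintenance(predicted, window_months=3))
# [['pump-A', 'pump-B', 'pump-C'], ['pump-D']] -> one shutdown covers three pumps
```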

Galanski: We have already seen growth in this area where the workers have seen us provide real-time data to them on hands-free mobile and wearable devices. They say, “Well, could you give me historical data over the past hour, week, or month? That would help me determine whether I have an immediate problem, not just one spike piece of information?”

So they have given us immediate feedback on that and that’s progressing.

Gardner: Peter, we are hearing about a more granular approach to sensors at Texmark, with the IoT edge getting richer. That means more data being created, and more historical analysis of that data.

Are you therefore setting yourself up to be able to take advantage of things such as AI, ML, and the advanced automation and analytics that go hand in hand? Where can it go next in terms of applying intelligence in new ways?

Deep learning from abundant data 

Moser: That’s a great question because the data growth is exponential. As more sensors are added, videos incorporated into their workflows, and they connect more of the workers and employees at Texmark their data and data traffic needs are going to grow exponentially.

But with that comes an opportunity. One is to better manage the data so they get value from it, because the data is not all the same or it’s not all created equal. So the opportunity there is around better data management, to get value from the data at its peak, and then manage that data cost effectively.

That massive amount of data is also going to allow us to better train the current models and create new ones. The more data you have, the better you can do ML and potentially deep learning.

Lastly, we need to think about new insights that we can’t create today. That’s going to give us the greatest opportunity, when we take the data we have today and use it in new and creative ways to give us better insights, to make better decisions, and to increase health and safety. Now we can take all of the data from the sensors and videos and cross-correlate that with weather data, for example, and other types of data, such as supply chain data, and incorporate that into enabling and empowering the salespeople, to negotiate better contracts, et cetera.

So, again, the art of the possible starts to manifest itself as we get more and more data from more and more sources. I’m very excited about it.

Gardner: What advice do you have for those just beginning similar IoT projects? 

Galanski: I recommend that they have somebody lead the group. You can try and flip through the catalogs and find the best vendors who have the best widgets and start talking to them and bring them on board. But that’s not necessarily going to get you to an end game. You are going to have to step back, understand your customer, and come up with a holistic approach of how to assign responsibilities and specific tasks, and get that organized and scheduled. 

There are a lot of parties and a lot of pieces on this chess table. Keeping them all moving in the right direction and at a cadence that people can handle is important. And I think having one contractor, or a department head in charge, is quite valuable.

Salinas: You should rent a party bus. And what I mean by that is when we first began our journey, actually our first lecture, our first step onto the learning curve about IoT, was when Texmark rented a party bus and put about 13 employees on it and we took a field trip to the HPE Innovation Lab.

When Doug Smith, our CEO, and I were invited to visit that lab we decided to bring a handful of employees to go see what this IoT thing was all about. That was the best thing we ever could have done, because the excitement was built from the beginning.

They saw, as we saw, the art of the possible at the HPE IoT Lab, and the ride home on that bus was exciting. They had ideas. They didn’t even know where to begin, but they had ideas just from what they had seen and learned in a two-hour tour about what we could do at Texmark right away. So the engagement, the buy-in was there from the beginning, and I have to say that was probably one of the best moves we have made to ensure the success of this project.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise. 

How the composable approach to IT aligns automation and intelligence to overcome mounting complexity

The next edition of the BriefingsDirect Voice of the Innovator podcast series explores the latest developments in hybrid IT and datacenter composability.

Bringing higher levels of automation to data center infrastructure has long been a priority for IT operators, but it’s only been in the past few years that they have actually enjoyed truly workable solutions for composability.

The growing complexities of hybrid cloud — along with the pressing need to conserve IT spend and to find high-level IT skills — mean there is no going back. Indeed, there is little time for even a plateau on innovation around composability.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

Stay with us now as we explore how pervasive increasingly intelligent IT automation and composability can be with Gary Thome, Vice President and Chief Technology Officer for Composable Cloud at Hewlett Packard Enterprise (HPE). The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Gary, what are the top drivers making composability top-of-mind and something we’re going to see more of?

Thome: It’s the same drivers for businesses as a whole, and certainly for IT. First, almost every business is going through some sort of digital transformation. And that digital transformation is really about leveraging IT to connect with customers, making IT the primary way businesses interact with customers and generate revenue.

Digital transformation drives composability 

With that comes a desire to go very fast: to connect with customers much more rapidly and to add features for them faster via software.

The whole idea of digital transformation and becoming a digital business is driving a whole new set of behaviors in the way enterprises run – and as a result – in the way that IT needs to support them.

From the IT standpoint, there is this huge driver to say, “Okay, I need to be able to go faster to keep up with the speed of the business.” That is a huge motivator. 

But at the same time, there’s the constant desire to keep IT cost in line, which requires higher levels of automation. That automation — along with a desire to flexibly align with the needs of the business — drives what we call composability. It combines the flexibility of being able to configure and choose what you need to meet the business needs — and ultimately customer needs — and do it in a highly automated manner.

Gardner: Has the adoption of cloud computing models changed the understanding of how innovation takes place in an IT organization? There used to be long periods between upgrades or a new revision. Cloud has given us constant iterative improvements. Does composability help support that in more ways?

Thome: Yes, it does. There has been a general change in the way of thinking, of shifting from occasional, large changes to frequent, smaller changes. This came out of an Agile mindset and a DevOps environment. Interestingly enough, it’s permeated to lots of other places outside of IT. More companies are looking at how to behave that way in general.

How to Achieve Composability Across Your Datacenter

On the technology side, the desire for rapid, smaller changes means a need for higher levels of automation. It means automating the changes to the next desired state as quickly as possible. All of those things lend themselves toward composability.

Gardner: At the same time, businesses are seeking economic benefits via reducing unutilized IT capacity. It’s become about “fit-for-purpose” and “minimum viable” infrastructure. Does composability fit into that, making an economic efficiency play?

Thome: Absolutely. Along with the small, iterative changes – of changing just what you need when you need it – comes a new mindset with how you approach capacity. Rather than buying massive amounts of capacity in bulk and then consuming it over time, you use capacity as you need it. No longer are there large amounts of stranded capacity.

Composability is key to this because it gives you the technical means to create an environment that achieves the desired economic result. You are simply using what you need when you need it, and then releasing it when it’s not needed, versus pre-purchasing large amounts of capacity upfront.

Innovation building blocks 

Gardner: As an innovator yourself, Gary, you must have had to rethink a lot of foundational premises when it comes to designing these systems. How did you change your thinking as an innovator to create new systems that accommodate these new and difficult requirements?

Thome: Anyone in an innovation role has to always challenge their own thinking, and say, “Okay, how do I want to think differently about this?” You can’t necessarily look to the normal sources for inspiration because that’s exactly where you don’t want to be. You want to be somewhere else.

For myself, it may mean looking at any other walk of life, drawing on what I do, read, and learn as possible sources of inspiration for rethinking the problem.

Interestingly enough, there is a parallel in the IT world of taking applications and decomposing them into smaller chunks. We talk about microservices that can be quickly assembled into larger applications — or composed, if you want to think of it that way. And now we’re able to disaggregate the infrastructure into elements, too, and then rapidly compose them into what’s needed. 

Those are really parallel ideas, going after the same goal. How do I just use what I need when I need it — not more, not less? And then automate the connections between all of those services.

That, in turn, requires an interface that makes it very easy to assemble and disassemble things together — and therefore very easy to produce the results you want. 

When you look at things outside of the IT world, you can see patterns of it being easy to assemble and disassemble things, like children’s building blocks. Before, IT tended to be too complex. How do you make the IT building blocks easier to assemble and disassemble such that it can be done more rapidly and more reliably?

Gardner: It sounds as if innovations from 30 years ago are finding their way into IT. Things such as simultaneous engineering, fit-for-purpose design and manufacturing, even sustainability issues of using no more than you need. Were any of those inspirations to you?

Cultivate the Agile mindset

Thome: There are a variety of sources, everything from engineering practices, to art, to business practices. They all start swiveling around in your head. How do I look at the patterns in other places and say, “Is that the right kind of pattern that we need to apply to an IT problem or not?”

The historical IT perspective of elongated steps and long development cycles led to an end state of very complex integrations to get all the piece-parts put together. Now, the different, Agile mindset says, “Why don’t you create what you need iteratively, but make sure it integrates together rapidly?”

Can you imagine trying to write a symphony by having 20 different people develop their own parts? There’s a separate trombone part, a timpani part, a violin part. And then you just say, “Okay, play it together once, and we will start debugging when it doesn’t sound right.” Well, of course, that would be a disaster. Either you think about how the parts fit together upfront, or you integrate as you go.

The same thing needs to go into how we develop IT — with both the infrastructure and applications. That’s where the Agile and the DevOps mindsets have evolved to. It’s also very much the mindset we have in how we develop composability within HPE.

Gardner: At HPE, you began bringing composability to servers and the data center stack, trying to make hardware behave more like software, essentially. But it’s grown past that. Could you give us a level-set of where we are right now when it comes to the capability to compose the support for doing digital business?

Intelligent, rapid, template-driven assembly 

Thome: Within the general category of composability, we have this new thing called Composable Infrastructure, and we have a product called HPE Synergy. Rather than treating the physical resources in the data center as discrete servers, storage arrays, and switches, it looks at them as pools of compute capacity, storage capacity, fabric capacity, and even software capacity, meaning images of what you want to use.

Each of those things can be assembled rapidly through what we call software-defined intelligence. It knows how to assemble the building blocks of compute, storage, and networking into something interesting. And that is template-driven. You have a template, which is a description of what you want the end state to look like, what you want your infrastructure to look like when you are done.

And the template says, “Well, I need a compute block of this size, this much storage, and this kind of network.” Whatever you want. “And then, by the way, I want this software loaded on it.” And so forth. You describe the whole thing as a template, and then we can assemble it based on that description.

That approach is one we’ve innovated on in the lab from the infrastructure standpoint. But what’s very interesting about it is that modern cloud applications use a very similar philosophical approach to assembly. In fact, just like with modern applications, you say, “Well, I’m assembling a group of services or elements. I am going to create it all via APIs.” Well, guess what? Our hardware is driven by APIs also. It’s an API-level assembly of the hardware, composing the hardware into whatever you want. It’s the same idea of composing, and it applies everywhere.
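
To make that concrete, here is a minimal sketch of what template-driven, API-level composition can look like. The base URL, endpoint path, field names, and token below are hypothetical placeholders for illustration; they are not the documented HPE Synergy or HPE OneView API schema.

```python
# A minimal, hypothetical sketch of template-driven composition.
# The base URL, endpoint, and field names are illustrative only;
# they are not the documented HPE OneView/Synergy API schema.
import requests

BASE = "https://composer.example.com"  # hypothetical composable manager
HEADERS = {"Authorization": "Bearer <token>", "Content-Type": "application/json"}

# The template describes the desired end state: compute, storage,
# network fabric, and the software image to load.
template = {
    "name": "web-tier-profile",
    "compute": {"cores": 16, "memoryGB": 128},
    "storage": {"volumes": [{"sizeGB": 500, "raid": "RAID5"}]},
    "network": {"fabric": "prod-fabric", "bandwidthGbps": 10},
    "software": {"image": "ubuntu-web-image"},
}

# The software-defined intelligence layer assembles resources from the
# compute, storage, and fabric pools to match the description.
resp = requests.post(f"{BASE}/profiles", json=template, headers=HEADERS)
resp.raise_for_status()
print("Composed profile at:", resp.json().get("uri"))
```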

Millennials lead the way

Gardner: The timing for this is auspicious on many levels. Just as you’re making it possible to craft hardware solutions, we’re dealing with an IT labor shortage. If, like many Millennials, you have a cloud-first mentality, you will find kinship with composability — even though you’re not necessarily composing a cloud. Is that right?

Thome: Absolutely. That cloud mindset, or services mindset, or as-a-service mindset, whatever you want to call it, is one where this is a natural way of thinking. Younger people may have grown up with this mindset; it wouldn’t occur to them to think any differently. Others may have to shift to a new way of thinking.

This is one of the challenges for organizations. How do they shift — not just the technologies or the tools — but the mindset within the culture in a different direction?

How to Remove Complexity From Multicloud and Hybrid IT

You have to start with changing the way you think. It’s a mindset change to ask, “How do I think about this problem differently?” That’s the key first thing that needs to happen, and then everything falls behind that mindset.

It’s a challenge for any company doing transformation, but it’s also true for innovation — shifting the mindset.

Gardner: The wide applicability of composability is impressive. You could take this composable mindset, use these methods and tools, and you could compose a bare-metal, traditional, on-premises data center. You could compose a highly virtualized on-premises data center. You could compose a hybrid cloud, where you take advantage of private cloud and public cloud resources. You can compose across multiple types of private and public clouds. 

Cross-cloud composability

Thome: We think composability is a very broad, useful idea. When we talk to customers they are like, “Okay, well, I have my own kind of legacy estate, my legacy applications. Then I have my new applications, and a new way of thinking being developed. How do I apply principles and technologies that are universal across them?”

The idea of being able to say, “Well, I can compose the infrastructure for my legacy apps and also compose my new cloud-native apps, and I get the right infrastructure underneath each,” is a very appealing one.

But we also take the same ideas of composability and say, “Well, I ultimately want to compose across multiple clouds.” More and more enterprises are leveraging clouds in various shapes and forms, and they are increasing the number of clouds they use. We are trending toward hybrid cloud, where people use different clouds for different reasons. They may actually have a single application that spans multiple clouds, including on-premises clouds.

When you get to that level, you start thinking, “Well, how do I compose my environment or my applications across all of those areas?” Not everybody is necessarily thinking about it that way yet, but we certainly are. It’s definitely something that’s coming.

Gardner: Providers are telling people that they can find automation and simplicity but the quid pro quo is that you have to do it all within a single stack, or you have to line up behind one particular technology or framework. Or, you have to put it all into one particular public cloud.

It seems to me that you may want to keep all of your options open and be future-proof in terms of what might be coming in a couple of years. What is it about composability that helps keep one’s options open?

Thome: With automation, there are two extremes that people wind up with. One is a grand automation framework that promises you can automate anything. The operative word is “can”: we don’t do it for you, but you can, if you are willing to invest all of the hard work. That’s one approach. The good news is that there are multiple vendors with parts of the total automation technology. But it can be a very large amount of work to develop and maintain systems across that kind of environment.

On the other hand, there are automation environments where, “Hey, it works great. It’s really simple. Oh, by the way, you have to completely stay within our environment.” And so you are stuck within the confines of their rules for doing things.

Both of these approaches, obviously, have a very significant downside because any one particular environment is not going to be the sum of everything that you do as a business. We see both of them as wrong.

Real composability shines when it spans the best of both of those extremes. On the one hand, composability makes it very easy to automate the composable infrastructure, and it also automates everything within it. 

In the case of HPE Synergy, composable management (HPE OneView) makes it easy to automate the compute, storage, and networking — and even the software stacks that run on it — through a trivial interface. And at the same time, you want to integrate into the broader, multivendor automation environments so you can automate across all things.

You need that because, guaranteed, no one vendor is going to provide everything you want, which is the failing of the second approach I mentioned. Instead, what you want is to have a very easy way to integrate into all of those automation environments and automation frameworks without throwing a whole lot of work to the customer to do.

We see composability’s strength in being API-driven. It makes it easy to integrate into automation frameworks, and, secondly, it completely automates the things underneath that composable environment. You don’t have to do a lot of work to get things operating.
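
As a sketch of how that API-driven integration can work in practice, the wrapper below exposes compose and release operations that an external automation framework (for example, a Terraform provider or a CI pipeline step) could call. The client class, endpoints, and payloads are assumptions for illustration, not a vendor-documented interface.

```python
# Hypothetical wrapper that lets a broader automation framework drive
# a composable-infrastructure API. Endpoints and payloads are assumed
# for illustration; consult the vendor's API reference for real names.
import requests


class ComposableClient:
    def __init__(self, base_url: str, token: str):
        self.base = base_url.rstrip("/")
        self.headers = {"Authorization": f"Bearer {token}"}

    def compose(self, template: dict) -> str:
        """Assemble infrastructure matching the template; return its ID."""
        r = requests.post(f"{self.base}/profiles", json=template,
                          headers=self.headers)
        r.raise_for_status()
        return r.json()["id"]

    def release(self, profile_id: str) -> None:
        """Disassemble the profile, returning capacity to the pools."""
        r = requests.delete(f"{self.base}/profiles/{profile_id}",
                            headers=self.headers)
        r.raise_for_status()
```

In this sketch, an automation tool would call compose() in its create hook and release() in its destroy hook, so capacity is held only while the workload actually needs it.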

So we see that as the best of those two extremes that have historically been pushed on the market by various vendors.

Gardner: Gary, you have innovated and created broad composability. In a market full of other innovators, have there been surprises in what people have done with composability? Has there been follow-on innovation in how people use composability that is worth mentioning and was impressive to you? 

Success stories 

Thome: One of my goals for composability was that, in the end, people would use it in ways I never imagined. I figured, “If you do it right, if you create a great idea and a great toolset, then people can do things with it you can’t imagine.” That was the exciting thing for me.

One customer created an environment where they used the HPE composable API in the Terraform environment. They were able to rapidly spin up a variety of different environments based on self-service mechanisms. Their scientist users actually created the IT environments they needed nearly instantly.

It was cool because it was not something that we set out specifically to do. Yet they were saying it solves business needs and their researchers’ needs in a very rapid manner.

Another customer recently said, “Well, we just need to roll out really large virtualization clusters.” In their case, it’s a 36-node cluster. It used to take them 21 days. But when they shifted to HPE composability, they got it down to just six hours.

How to Tame Multicloud Complexity

Obviously, it’s very exciting to see such real benefits for customers: putting IT resources to use faster and minimizing the burden on the people associated with getting things done.

When I hear those kinds of stories come back from customers — directly or through other people — it’s really exciting. It says that we are bringing real value to people to help them solve both their IT needs and their business needs.

Gardner: You know you’re doing composable right when you have non-IT people able to create the environments they need to support their requirements, their apps, and their data. That’s really impressive.

Gary, what else did you learn in the field from how people are employing composability? Any insights that you could share?

Thome: It’s in varying degrees. Some people get very creative in doing things that we never dreamed of. For others, the mindset shift can be challenging, and they are just not ready to shift to a different way of thinking, for whatever reasons.

Gardner: Is it possible to consume composability in different ways? Can you buy into this at a tactical level and a strategic level?

Thome: That’s one of the beautiful things about the HPE composability approach. The answer is absolutely, “Yes.” You can start by saying, “I’m going to use composability to do what I always did before.” And the great news is it’s easier than what you had done before. We built it with the idea of assembling things together very easily. That’s exactly what you needed.

Then, maybe later, some of the more creative things that you may want to do with composability come to mind. The great news is it’s a way to get started, even if you haven’t yet shifted your thinking. It still gives you a platform to grow from should you need to in the future.

Gardner: We have often seen that those proof-points tactically can start the process to change peoples’ mindsets, which allows for larger, strategic value to come about.

Thome: Absolutely. Exactly right. Yes.

Gardner: There’s also now at HPE, and with others, a shift in thinking about how to buy and pay for IT. The older ways of IT, with longer revisions and forklift upgrades, meant that paying for it was capital-intensive.

What is it about the new IT economics, such as HPE GreenLake Flex Capacity purchasing, that align well with composability in terms of making it predictable and able to spread out costs as operating expenses?

Thome: These two approaches are perfect together; they really are. They are hand-in-glove and best buddies. You can move to the new mindset of, “Let me just use what I need and then stop using it when I don’t need it.”

That mindset of rapid, small changes, whether in capacity or code or whatever you are doing, also allows a new economic perspective. And that is, “I only pay for what I need, when I need it, and I don’t pay for the things I am not using.”

Our HPE GreenLake Flex Capacity service brings that mindset to the economic side as well. We see many customers choose composability technology and then marry it with GreenLake Flex Capacity as the economic model. They can bring together that mindset of making minor changes when needed, and only consuming what is needed, to both the technical and the economic side.
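
As a back-of-the-envelope illustration of why the two models pair well, the arithmetic below compares an upfront bulk purchase with a consumption-based charge. Every figure is an invented assumption for the sketch, not actual HPE GreenLake Flex Capacity pricing.

```python
# Illustrative arithmetic only: all prices and utilization figures are
# invented assumptions, not actual HPE GreenLake Flex Capacity pricing.
UPFRONT_SERVERS = 100      # capacity bought in bulk on day one
COST_PER_SERVER = 10_000   # capex per server
AVG_SERVERS_USED = 60      # capacity actually composed, on average
MONTHLY_RATE = 350         # consumption rate per server per month
MONTHS = 36

capex_total = UPFRONT_SERVERS * COST_PER_SERVER               # $1,000,000
consumption_total = AVG_SERVERS_USED * MONTHLY_RATE * MONTHS  # $756,000

print(f"Upfront bulk purchase: ${capex_total:,}")
print(f"Pay-per-use over {MONTHS} months: ${consumption_total:,}")
# The consumption model charges only for what was composed, so the
# 40 servers of stranded capacity never hit the bill.
```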

We see this as a very compelling and complementary set of capabilities — and our customers do as well.

Gardner: We are also mindful nowadays, Gary, about the edge computing and the Internet of Things (IoT), with more data points and more sensors. We also are thinking about how to make better architectural decisions about edge-to-core relationships. How do we position the right amount of workload in the right place for the right requirements?

How does composability fit into the edge? Can there also be an intelligent fabric network impact here? Unpack for us how the edge and the intelligent network foster more composability.

Composability on the fly, give it a try 

Thome: I will start with the fabric. So the fabric wants to be composable. From a technology side, you want a fabric that allows you to say, “Okay, I want to very dynamically and easily assemble the network connections I want and the bandwidth I want between two endpoints — when I want them. And then I want to reconfigure or compose, if you will, on the fly.”

We have put this technology together, and we call it Composable Fabric. I find this super exciting because you can create a mesh simply by connecting the endpoints together. After that, you can reconfigure it on the fly, and the network meets the needs of the applications the instant you need them.
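
A rough sketch of what composing a fabric connection on the fly might look like from an application’s point of view appears below. The function name, endpoint, and payload shapes are hypothetical assumptions; they are not the actual HPE Composable Fabric interface.

```python
# Hypothetical sketch: an application requests a fabric connection of a
# given bandwidth between two endpoints, then recomposes it as needs
# change. The endpoint and payload shapes are assumptions.
import requests


def compose_connection(base: str, token: str,
                       src: str, dst: str, gbps: int) -> str:
    """Ask the fabric to connect two endpoints at the given bandwidth."""
    r = requests.post(
        f"{base}/fabric/connections",
        json={"source": src, "destination": dst, "bandwidthGbps": gbps},
        headers={"Authorization": f"Bearer {token}"},
    )
    r.raise_for_status()
    return r.json()["id"]


# Example: an application that suddenly needs more east-west bandwidth
# could, in principle, recompose its own link on the fly:
# conn_id = compose_connection("https://fabric.example.com", token,
#                              "app-node-3", "db-node-1", gbps=25)
```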

How to Achieve Composability Across Your Datacenter

This is the ultimate in composability, brought to the network. It also simplifies the management and operation of the network, because it is driven entirely by the needs of the applications. The application needs directly drive and control the behavior of the network, rather than a long list of complex changes that have to be implemented in the network, an approach that tends to be cumbersome, unresponsive to the real needs of the business, and too slow. It’s a super exciting idea, and we are really big on it, obviously.

Now, the edge is also interesting because we have been talking about conserving resources. There are even fewer resources at the edge, so conservation can be even more important. You only want to use what you need, when you need it. Being able to make those changes incrementally, when you need them, is the same idea as the composability we have been talking about. It applies to the edge as well. We see the edge as ultimately an important part of what we do from a composable standpoint.

Gardner: For those folks interested in exploring more about composability, methodologies, technologies, and getting some APIs to experiment with — what advice do you have for them? What are some good ways to unpack this and move into a proof-of-concept project?

Thome: We have a lot of information on our website, obviously, about composability. There is a lot you can read up on, and we encourage anybody to learn about composability through those materials.

They can also try composability because it is completely software-defined and API-driven. You can go in and play with the composable concepts through software. We suggest people try it directly. But they can also connect it to their automation tools and see how they can compose things via the automation tools they might already be using for other purposes. It can then extend into all things composable as well.

I definitely encourage people to learn more, but especially to move into the “doing phase.” Just try it out and see how easy it is to get things done.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


SAP Ariba COO James Lee on the best path to an intelligent and talented enterprise

The next BriefingsDirect enterprise management innovations discussion explores the role of the modern chief operating officer (COO) and how they are tasked with creating new people-first strategies in an age of increased automation and data-driven intelligence.

We will now examine how new approaches to spend management, process automation, and integrated procurement align with developing talent, diversity, and sustainability.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To learn more about the leadership trends behind making globally dispersed and complex organizations behave in harmony, please welcome James Lee, Chief Operating Officer at SAP Ariba and SAP Fieldglass. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: James, why has there never been a better time to bring efficiency and intelligence to business operations? Why are we in an auspicious era for bridging organizational and cultural gaps that have plagued businesses in the past?

Lee: If you look at the role of the modern COO, or anyone who is the head of operations, you are increasingly asked to be a jack-of-all-trades. The COO is responsible for budgeting and planning, for investment decisions, for organizational and people topics, and generally for orchestrating across all aspects of the business. To do this at scale, you really need to drive standardization and best practices, and this is why efficiency is so critical.

Now, in terms of the second part of your question, which has to do with intelligence, the business is increasingly asking us not just to report the news but to make the news. What does that mean? It means you have to offer insights to different parts of the business and help them make the right decisions; things they wouldn’t know otherwise. That requires leveraging all the available data to do thorough analysis and provide the data that all the functional leaders can use to make the best possible decisions.

Gardner: It seems that the COO is a major consumer of such intelligence. Do you feel like you are getting better tools?

Make sense of data

Lee: Yes, absolutely. We talk about being in the era of big data, so the information you can get from systems — even from a lot of devices, be it mobile devices or sensors — amounts to an abundance and explosion of data. But how to make sense of this data is very tricky.

As a COO, a big part of what I do is not only collect the data from different areas, but then to make sense of it, to help the business understand the insights behind this data. So I absolutely believe that we are in the age where we have the tools and the processes to exploit data to the fullest.

Gardner: You mentioned the COO needs to be a jack-of-all-trades. What in your background allows you to bring that level of Renaissance man, if you will, to the job?

Lee: As COO of SAP Ariba and now SAP Fieldglass, too, I have operational responsibilities across our entire, end-to-end business. I’m responsible for helping with our portfolio strategy and investments, sales excellence, our commercial model, data analytics, reporting, and then also our learning and talent development. So that is quite a broad purview, if you will. 

I feel like the things I have done before at SAP have equipped me with the tools and the mindset to be successful in this position. Before I took this on, I was COO and general manager of sales for the SAP Greater China business. In that position, I doubled the size of SAP’s business in China, and we were also involved in some of the largest product launches in China, including SAP S/4HANA.

Before that, having been with SAP for 11 years, I had the opportunity to work across North America, Europe, and Asia in product and operating roles, in investment roles, and also sales roles.

Before joining SAP, I was a management consultant by training. I had worked at Hewlett Packard and then McKinsey and Company.

Gardner: Clearly most COOs of large companies nowadays are tasked with helping extend efficiency into a global environment, and your global background certainly suits you for that. But there’s another element of your background that you didn’t mention – which is having studied and been a concert pianist. What do you think it is about your discipline and work toward a high level of musical accomplishment that also has a role in your being a COO?

The COO as conductor 

Lee: That’s a really interesting question. You have obviously done your research and know my background. I grew up studying classical music seriously, as a concert pianist, and it was always something that was very, very important to me. I feel even to this day — I obviously have pursued a different profession — that it is still a very key and critical part of who I am.

If I think about the two roles — as a COO and as a musician — there are actually quite a few parallels. To start, as a musician, you have to really be in tune with your surroundings and listen very carefully to the voices around you. And I see the COO team ultimately as a service provider, a shared services team, so it’s really critical for me to listen to and understand the requirements of my internal and external constituents. So that’s one area where I see similarities.

Secondly, the COO role in my mind is to orchestrate across the various parts of the business, to produce a strong and coherent whole. And again, this is similar to my experiences as a musician, in playing in ensembles, and especially in large symphonies, where the conductor must always know how to bring out and balance various musical voices and instruments to create a magical performance. And again, that’s very similar to what a COO must do.

Gardner: I think it’s even more appropriate now — given that digital transformation is a stated goal for so many enterprises – to pursue orchestration and harmony and organize across multiple silos.

Does digital transformation require companies to think differently to attain that better orchestrated whole?

Lee: Yes, absolutely. From the customers I have spoken to, for digital transformation to be successful, it has to be a top-down movement. It has to be an end-to-end movement. It’s no longer a case where management just says, “Hey, we want to do this,” without the full support and empowerment of people at the working level. Conversely, you can have people at the project-team level who are very well-intentioned, but without senior executive support, it doesn’t work.

In cases where I have seen a lot of success, companies have been able to break down those silos, paint an overarching vision and mission for the company, bring everyone onto the same bandwagon, empower and equip them with the tools to succeed, and then drive with ruthless execution. And that requires a lot of collaboration, a lot of synergy across the full organization.

Gardner: Another lens through which to view this all is a people-centric view, with talent cultivation. Why do you think that that might even be more germane now, particularly with younger people? Many observers say Millennials have a different view of things in many ways. What is it about cultivating a people-first approach, particularly to the younger workers today, that is top of mind for you?

People-first innovation 

Lee: We just talked about digital transformation. If we think about technology, no matter how much technology is advancing, you always need people to be driving the innovation. This is a constant, no matter what industry you are in or what you are trying to do.

And it’s because of that, I believe, that the top priority is to build a sustainable team and to nurture talent. There are a couple of principles I really adhere to as I think about building a “people-first team.”

First and foremost, it’s very important to go beyond just seeking work-life balance. In this day and age, you have to look beyond that and think about how you help the people on your team derive meaning from what they do.

This goes beyond work-life balance; it has to do with social responsibility, personal enrichment, personal aspiration, and finding commonality and community among your peers. And I find that now, especially with the younger generation, a lot of what they do is virtual. We are not necessarily in the office all together at the same time. So it becomes even more important to build a sense of connectivity, especially when people are not all present in the same room. And this is something that Millennials really care about.

Also, for Millennials, it’s important at the beginning of their careers to have a strong true north, meaning they need great mentors who can coach them through the process, work with them, develop them, and give them a good sense of belonging. That’s something I always try to do on my team: to ensure that young people get mentorship early in their careers and have one-on-one dedicated time. There should always be a sounding board for them to air their concerns or questions.

Gardner: Being a COO, in your case, means orchestrating a team of other operations professionals. What do you look for in them, in their background, that gives you a sense of them being able to fulfill the jack-of-all-trades approach?

Growth mindset drives success

Lee: I tend to think about successful individuals, or teams, along two metrics. One is domain expertise. Obviously if you are in charge of, say, data analytics then your background as a data scientist is very important. Likewise, if you are running a sales operation, a strong acumen in sales tools and processes is very important. So there is obviously a domain expertise aspect of it.

But equally, if not more, important is mentality. I tend to believe that people with a growth mindset, as opposed to a closed mindset, achieve more. What I mean by that is people who want to explore more, want to learn more, and are open to new suggestions and new ways of doing things. The world is constantly changing. Technology is changing. The only way to keep up is with a growth mindset.

It’s also important for a COO team to have a service mentality, of understanding who your ultimate customer is — be it internal or external. You must listen to them, understand what the requirements are, and then work backward and look at what you can create or what insights you can bring to them. That is very critical to me.

Gardner: I would like to take advantage of the fact that you travel quite a bit, because SAP Ariba and SAP Fieldglass are global in nature. What you are seeing in the field? What are your customers telling you?

Lee: As I travel the globe, I have the privilege of supporting our business across the Americas, Europe, the Middle East, and Asia, and it’s fascinating to see that there are a lot of differences and nuances — but there are a lot of commonalities. At the end of the day, what people expect from procurement or digital transformation are more or less very similar.

There are a couple of trends I would like to share with you and your listeners. One is, when we talk about procurement, end users are increasingly looking for a marketplace-like experience. Even though they are in a business-to-business (B2B) environment, they are used to the business-to-consumer (B2C) user experience. It’s like what they get on Amazon, where they can shop, they have choices, and it’s easy to compare value and features — but at the same time you have all of the policies and compliance that come with B2B. And that’s beginning to be the lowest common denominator.

Secondly, when we talk about Millennials, I think the Millennial experience is pushing everyone to think differently about the user experience. And not just for SAP Ariba and SAP Fieldglass, but for any software. How do we ensure that there is easy data access across different platforms — be it your laptop, your desktop, your iPad, your mobile devices? They expect easy, seamless access across all their different platforms. So that is something I call the Millennial experience.

Contingent, consistent labor

Thirdly, I have learned about the rise of contingent labor in a lot of regions. We, obviously, are very honored to now have Fieldglass as part of the SAP Ariba family. And I have spent more and more time with the Fieldglass team.

In the future, there may even be a situation where there are few permanent, contracted employees. Instead, you may have a lot of project-based, or function-based, contingent laborers. We hear a lot about that, and we are focused on how to provide them with the tools and systems to manage the entire process with contingent labor.

Gardner: It strikes me as an interesting challenge for COOs: how do you best optimize and organize workers who work with you, but not for you?

Lee: Right! It’s very different because when you look at the difference between indirect and direct procurement, you are talking about goods and materials. But when you are talking about contingent labor, you are talking about people. And when you talk about people, there is a lot more complexity than if you are buying a paper cup, pen, or pencil.

You have to think about what the end-to-end cycle looks like to the [contingent workers]. It extends from how you recruit them, to on-boarding, enabling, and measuring their success. Then, you have to ensure that they have a good transition out of the project they are working on.

SAP Fieldglass is one of the few solutions in the market that really understands that process and can adapt to the needs of contingent laborers. 

Gardner: One more area from your observations around the globe: The definition and concept of the intelligent enterprise. That must vary somewhat, and certain cultures or business environments might accept more information, data, and analytics differently than others. Do you see that? Does it mean different things to different people?

Intelligent enterprise on the rise

Lee: At its core, if you look at the evolution of enterprise software and solutions, we have gone from very transactional systems — systems of record that just track what is being done — to automation, and now to what we call the intelligent enterprise. That means making sense of all the information and data to create insight.

A lot of companies are looking to transform into an intelligent enterprise. That means you need to access an abundance of data around you. We talked about the different sources — through sensors, equipment, customers, suppliers, sometimes even from the market and your competitors — a 360-degree view of data. 

Then how do you have a seamless system that analyzes all of this data and actually makes sense of it? The intelligent enterprise takes it to the next level, which is leveraging artificial intelligence (AI). There is no longer a person or a team sitting in front of a computer and doing Excel modeling. This is the birth of the age of AI.

Now we are looking at predictive analytics, where, for example, at SAP Ariba, we look for patterns and trends on how you conduct procurement, how you contract, and how you do sourcing. We then suggest actions for the business to take. And that, to me, is an intelligent enterprise.
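
As a toy illustration of that pattern-and-suggestion loop, the sketch below scans hypothetical monthly supplier spend for anomalies and suggests a sourcing action. The data and the two-sigma threshold are invented for the example; real SAP Ariba predictive analytics are of course far more sophisticated.

```python
# Toy illustration of predictive procurement analytics: flag suppliers
# whose latest spend deviates sharply from their history and suggest an
# action. Figures and threshold are invented for the sketch.
from statistics import mean, stdev

monthly_spend = {  # hypothetical spend per supplier, last five months
    "Acme Fasteners":  [12_000, 11_500, 12_300, 11_900, 19_800],
    "Delta Logistics": [40_000, 41_200, 39_500, 40_800, 41_000],
}

for supplier, spend in monthly_spend.items():
    baseline, latest = spend[:-1], spend[-1]
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma and abs(latest - mu) > 2 * sigma:  # crude z-score trigger
        print(f"Review {supplier}: latest spend ${latest:,} vs "
              f"~${mu:,.0f} baseline; suggest a sourcing or contract check.")
```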

Gardner: How do you view the maturity of AI, in a few years, as an accelerant to the COO’s job? How important will AI be for COOs specifically?

Lee: AI is absolutely a critical, critical topic as it relates to not just procurement transformation, but any transformation. There are four main areas addressed with AI, especially the advanced AI we are seeing today.

Number one, it allows you to drive deeper engagement and adoption of your solution and what you are doing. If you think about how we interact with systems through conversations, sometimes even through gestures, that’s a different level of engagement than we had before. You are involving the end user in a way that was never done before. It’s interactive, it’s intuitive, and it avoids a lot of cost when it comes to training.

Secondly, we talk a lot about decision-making. AI gives you access to a broad array of data and you can uncover hidden insights and patterns while leveraging it.

Thirdly, we talked about talent, and I believe that having AI helps you attract and retain talent with state-of-the-art technology. We have self-learning systems that help you institutionalize a lot of knowledge.

And last, but not least, it’s all about improving business outcomes. Think about how you increase efficiency with personalized, context-specific information. In the context of procurement, you can improve approvals and accuracy, especially when you are dealing with contracts. An AI robot is a lot less prone to error than a human working on a contract. We have the statistics to prove it.

At the end of the day, we look at procurement and we see an opportunity to transform it from a very tactical, transactional function into a very strategic function. And what that means is AI can help you automate a lot of the repetitive tasks, so that procurement professionals can focus on what is truly value-additive to the organization.

Gardner: We seem to be on the cusp of an age where we are going to determine what it is that the people do best, and then also determine what the machines do best — and let them do it.

This whole topic of bots and robotic process automation (RPA) is prevalent now across the globe. Do you have any observations about what bots and RPA are doing to your customers of SAP Fieldglass and SAP Ariba?

Sophisticated bot benefits

Lee: When we talk about bots, there are two types that come to mind. One is on the shop floor, in a manufacturing setting, where you have physical bots replacing humans and what they do.

Secondly, you have virtual bots, if you will. For example, at SAP Ariba, we have bots that analyze data, make sense of the patterns, and provide insights and decision-making support to our end users.

In the first case, I absolutely believe that the bots are getting more sophisticated. The kinds of tasks they can take on on the shop floor go well beyond what they could do before, and that drives a lot of efficiency, cuts costs, and allows employees to be redeployed to more strategic, higher value-added roles. So I absolutely see that as a positive trend going forward.

When it comes to virtual bots, we see a lot of advancement now, not just in procurement, but in the way they are being used across sales and human resources systems. I was talking to a company just last week, and they are using virtual bots to handle the recruiting and interviewing process. Can you imagine that?

The next time that you are submitting your résumé to a company, on the other end of the line might not be a human that you are talking to, but actually a robot that’s screening you. And it’s now to the level of sophistication where it’s hard for you to tell the difference.

Gardner: I might feel better knowing there is less subjectivity; if the person interviewing me didn’t get a good night’s sleep the previous night, for example, I might be okay with that. So it’s like the Turing test, right? Do you know whether it’s a real person or a virtual bot?

Before we close out, James, do you have any advice for other COOs who are seeking to take advantage of all the ways that digital transformation is manifesting itself? What advice do you have for COOs who are seeking to up their game?

It’s up to you to up your COO game

Lee: Fundamentally, the COO role is what you make of it. A lot of companies don’t even have a COO. It’s a unique role. There is no predefined job scope or job definition.

For me, a successful COO — at least in the way I measure myself — is about the kind of business impact you have on the profit and loss (P&L). Everything that you do should have a direct impact on your top line as well as your bottom line. And if the things you are doing are not directly impacting the P&L, then it’s probably time to reconsider some of them.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: SAP Ariba.
