International Data Center Day highlights how sustainability and diversity will shape the evolving modern IT landscape

The next BriefingsDirect panel discussion explores how March 25's International Data Center Day provides an opportunity to look both at where things have been in the evolution of the modern data center and, more importantly, where they are going.

Those trends involve a lot more than just technology. Data center challenges and advancements alike will hinge around the next generation of talent operating those data centers and how diversity and equal opportunity best support that.

Our gathered experts also forecast that sustainability improvements — rather than just optimizing the speeds and feeds — will help determine the true long-term efficiency of IT facilities and systems.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To observe International Data Center Day with a look at ways to make the data centers of the future the best-operated and the greenest ever, we are joined by Jaime Leverton, Senior Vice President and Chief Commercial Officer at eStruxture Data Centers in Montreal; Angie McMillin, Vice President and General Manager of IT Systems at Vertiv; and Erin Dowd, Vice President of Global Human Resources at Vertiv. The International Data Center Day observance panel is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Erin, why — based on where we have come from — is there now a need to think differently about the next generation of data center talent?

Dowd: What's important to us is that we have a diverse population of employees. Traditionally, we think about diversity from the perspective of ethnicity and gender. But when we consider diversity, we also think about diversity of thought, diversity of behavior, and diverse backgrounds.

That all makes us a much stronger company; a much stronger industry. It’s representative of our customer base, frankly, and it’s representative of the globe. We are ensuring that we have people working within our company from around the world and contributing all of those diverse thoughts and perspectives that make us a much stronger company and much stronger industry.

Gardner: We have often seen that creative and innovative thought comes when you have a group of individuals who come from a variety of backgrounds, so it's often a big benefit. Why has it been slow-going? What's been holding back the diversity of the support talent for data centers?

Diversity for future data centers 

Dowd: It’s a competitive environment, so it’s a struggle to find diverse candidates. It goes beyond our tech type of roles and into sales and marketing. We look at our talent early in their careers, and we are working on growing talent, in terms of nurturing them, helping them to develop, and helping them to grow into leadership roles. It takes a proactive approach, and it’s more than just letting the talent pool evolve naturally. It is about taking proactive and definitive actions around attracting people and growing people.

Gardner: I don’t think I am going out on a limb by observing that over the past 30 years, it’s been a fairly male-dominated category of worker. Tell us why women in science, technology, engineering, and math, or the so-called STEM occupations, are going to be a big part of making that diversity a strength.

Dowd: That is a huge pipeline for us as we benefit from all the initiatives to increase STEM education for women and men. The results help expand the pool, frankly, and allow candidates across the board who are interested at an early age to best prepare for this type of industry. We know historically that girls have been less likely to pursue STEM types of interests at early ages.

So ensuring that we have people across the continuum, that we have women in these roles, to model and mentor — that’s really important in expanding the pool. There are a lot of things that we can be doing around STEM, and we are looking at all those opportunities.

Gardner: Statistically there are more women in universities than men, so that should translate into a larger share in the IT business. We will be talking about that more.

But we would also like to focus on International Data Center Day issues around sustainability. Jaime, why is sustainability the gift that keeps giving when it comes to improving our modern data centers?

Leverton: International Data Center Day is about the next generation of data center professionals. And we know that for the next generation, they are committed to preserving the environment, which is good news for all of us as citizens. And as one of the world’s biggest consumers of energy, I believe the data center industry has a fundamental duty to elevate its environmental stewardship with energy efficient infrastructure and renewable power resources. I think the conversation really does go well together with diversity.

Gardner: Alright, let’s dive in a little bit more to the issues around talent and finding the best future pool. First, Erin please tell us about your role at Vertiv.

Dowd: I am the Global Business HR Partner at Vertiv. So my focus is to help us design, build, and deliver the right people strategy for our teams that have a global presence. We focus on having super-engaged and productive people in the right places with the right skills, and in developing career opportunities across the continuum — from early level to senior level of contributors.

Gardner: We have heard a lot about the skills shortage in IT in general terms, but in your experience at Vertiv, what are your observations about the skills shortage? What challenges do you face?

Dowd: We have challenges in terms of a shortage of diverse candidates across the board. This is present in all positions. Increasing the diversity of candidates that we can attract and grow will help us address the shortage first-hand.

Gardner: And in addition to doing this on a purely pragmatic basis, there are other, larger benefits. Tell us why diversity is so important to Vertiv over the long term.

Dowd: Diversity is the right thing to do. Just hands down, it has business benefits, and it has cultural benefits. As I mentioned earlier, it reflects not only on our global presence but also on our customer base. And research shows that companies that have more diverse workforces outperform and out-innovate those that don’t.

For example, companies in the top quartile for workforce diversity are 33 percent more likely to financially outperform their less diverse counterparts, according to a 2018 study from McKinsey. We have been embracing diversity, which aligns with our core values. It's the right competitive strategy. It's going to allow us to compete in the marketplace and relate to our customers best.

Gardner: Is Vertiv an outlier in this? Or is this the way the whole industry is going?

Dive into competitive talent pool 

Dowd: This is the way the whole industry is going. I come from a line of IT companies prior to my tenure with Vertiv. Even the biggest, most established companies are still wrestling with the competitiveness affiliated with attracting candidates who have diversity of thought, diverse backgrounds, diverse behaviors, and diversity of ethnicity and gender as well.

The trend is toward engineering and services roles, and everywhere we are experiencing turnover because it's so competitive. It's a very competitive environment, and we are competing with brother and sister companies for the same types of talent.

As I mentioned previously, if we attract people who are diverse in terms of thought, ethnicity, and gender, we can expand our candidate pool and enhance our competitiveness. When our talent acquisition team looks at talent, they are expanding and enhancing diversity in our university relations and in our recruiting efforts. They are targeting diverse candidates as we hire interns and then folks who are later in their careers as well.

Gardner: We have been looking at this through the demand side, but on the supply-side, what are the incentives? Why should people from a variety of backgrounds consider and pursue these IT careers? What are the benefits to them?

Dowd: The career opportunities are amazing. This is a field that’s growing and that is not going to go away. We depend on IT infrastructure and data centers across our world, and we’re doing that more and more over time. There’s opportunity in the workplace and there are a lot of things that we are specifically doing at Vertiv to keep people engaged and excited. We think a lot about attracting talent.

But there is another piece, which is about retaining talent. Some of the things we are doing at Vertiv are specifically launching programs aligned with diversity.

So recently, and Angie has been involved in this, we launched a women's resource group at Vertiv called Women at Vertiv Excel (WAVE). That group is nurturing women and encouraging more women to pursue leadership positions within Vertiv. It focuses on diversity in leadership positions, but it also provides important training that women can apply in their current positions.

Together we are building one Vertiv culture, which is a really important framework for our company. We are creating solutions and resources that make us more competitive and reflect the global market. We find that diversity breeds new and different ideas, more innovation, and a deeper understanding of our customers, partners, employees, and our stakeholders all around the globe. We are a global company, so this is very important to us. It’s going to make us more successful as we grow into the future.

Another thing that we are doing is creating end-to-end management of Vertiv programs. This is new, and we continue to improve it. It integrates behavioral skills and training designed to look at the work that we do through the eyes of others. We utilize experiences and talent effectively to grow stronger and stronger teams. Part of this is about recruiting and hiring, with an emphasis on finding potential employees who possess diverse experiences, thought, and perspectives. And diversity of thought comes from field experiences and from different backgrounds, and all of this contributes to the value each employee brings to our organization.

We also are launching the Vertiv Operating System. This is being created, launched, and built with an emphasis on better understanding our differences, bridging gaps where there are differences, and bringing out the best in everybody. It's designed to encourage thought leadership, and to help all of us work through change management together.

Finally, another program that we’ve been implementing across the globe is called Intrinsic. And Intrinsic supplies a foundational assessment designed to improve our understanding of ourselves and also of our colleagues. It’s a formal experiential program that’s going to help us all learn more about ourselves, what makes our individual values and styles unique, but then also it allows us to think about the people that we are working with. We can learn more about our colleagues, potentially our customers, and it allows us to grow in terms of our team dynamics and the techniques that we are using to manage conflict, stress, and change.

Collectively, as we look at the full continuum of how we behave at Vertiv in the future we are building for ourselves, all of these efforts work together toward changing the way we think as individuals, how we behave in groups, and ultimately evolving our organizational culture to be more diverse, more inclusive, and more innovative.

Gardner: Jaime at eStruxture, when we look at sustainability, it aligns quite well with these issues around talent and diversity because all the polling shows that the younger generation is much more focused on energy efficiency and consciousness around their impact on the natural world — so sustainability. Tell us why the need for sustainability is key and aligns so well with talent and retaining the best people to work for your organization.

Sustainability inspires next generation 

Leverton: What we know to be true about the next generation is when they look to choose a career path, or take on an assignment, they want to make sure that it aligns with their values. They want to do work that they believe in. So, our industry offers them that opportunity to be value-aligned and to make an impact where it counts.

As you can see all around us, people are working and learning remotely now more than ever, and data centers are what make all of that possible. They are crucial to our society and to our everyday lives. The data center industry is only going to continue to grow, and with our dependence on energy we have to have a focus on sustainability.

It represents a substantial opportunity to make a difference. It’s a fast-paced environment where we truly believe there is a career path for the next generation that will matter to them.

Gardner: Jaime, tell us about eStruxture Data Centers and your role there.

Leverton: eStruxture is a relatively new data center company. It was established just over three years ago, and we have grown rapidly from our original acquisition of our first data center in Montreal. We now have three data centers in Montreal, two in Vancouver, and one in Calgary. We are a Canadian pure-play — Canadian-owned, -operated, and -financed. We really believe in the Canadian landscape, the Canadian story, and we are going to continue to focus on growth in this nation.

Gardner: When it comes to efficiency and sustainability, we often look at power usage effectiveness (PUE). Where are we in terms of getting to complete sustainability? Is that so farfetched?

Leverton: I don't think it is. Huge strides have been made in reducing PUE, especially by us in our most recent construction, which has a PUE below 1.2. Organizations in our industry continue to innovate every day, trying to get as close to that 1.0 as humanly possible.

We are very lucky that we partner with Vertiv. Vertiv solutions are key in driving efficiency in our data centers, and we know that progress can also be made continually by addressing the efficiency of the IT load itself, which is a savings that is incremental to PUE. PUE is specifically the ratio of the facility's total power usage to the power used by the IT equipment it supports. But we look at our data center and our business holistically to drive sustainability even beyond what PUE covers.
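As a quick aside for readers new to the metric, PUE itself is simple arithmetic: total facility power divided by IT equipment power, approaching 1.0 as overhead shrinks. Here is a minimal illustrative sketch; the numbers are made up, chosen only to land near the sub-1.2 figure Leverton cites:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power over IT equipment power."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Illustrative example: 1,180 kW total facility load supporting a 1,000 kW
# IT load gives a PUE of 1.18, i.e. 18 percent overhead for cooling, power
# distribution, lighting, and so on.
print(round(pue(1180, 1000), 2))  # 1.18
```

A sub-1.2 PUE therefore means less than 20 percent of the power entering the facility is spent on anything other than the IT equipment itself.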

Gardner: It sounds like sustainability is essentially your middle name. Tell me more about that. How did you focus the construction and placement of your data centers to be focused so much on sustainability?

Leverton: All of our facilities have been designed with a focus on sustainability. When we have purchased facilities, we have immediately gone in to upgrade them and make them more efficient. We take advantage of free cooling wherever possible. As I mentioned, three of our data centers are in Montreal, so we get free cooling for about eight months of the year, and the majority of our data centers run on 99.5 percent hydro power, which is the cleanest possible energy that we can use.

We virtualize our environments as much as possible. We carefully select eco-responsible technologies and suppliers, and we are committed to continuing to improve our power usage effectiveness without ever sacrificing the performance, scalability, or uptime of our data centers, of course.

Gardner: And more specifically, when you look at that holistic approach to sustainability, how does working with a supplier like Vertiv augment and support that? How does that become a tag-team when it comes to the power source and the underlying infrastructure?

Leverton: Vertiv has just been such a great partner. They were there with us from the very beginning. We work together as a team, trying to make sure that we’re designing the best possible environment for our customers and for our community. One of our favorite solutions from Vertiv is around their thermal management, which is a water-free solution.

That is absolutely ideal in keeping with our commitment to operate as sustainably as possible. In addition to being water-free, it’s 75 percent more efficient because it has advanced controls and economization. Being able to partner with Vertiv and build their solutions into our design right from the beginning has made a huge, huge impact.

Gardner: And, like I mentioned, sustainability is the gift that keeps giving. This is not just a nice to have. This is a bottom-line benefit. Tell us about the costs and how that reinforces sustainability initiatives.

Leverton: Yes, while there is occasionally a higher cost in the short term, we firmly believe that the long-term total cost of ownership is lower — and the benefits far outweigh any initial incremental costs.

Obviously, it’s about our values. It’s critical that we do the right thing for the environment, for the community, for our staff, and for our customers. But, as I say, over the long-term, we believe the total cost is less. So far and above, sustainability is the right thing to do.

Gardner: Jaime, when it comes to that sustainability formula, what really works? It’s not just benefiting the organization that’s supplying, it’s also benefiting the consumer. Tell us how sustainability is also a big plus when it comes to those people receiving the fruits of what the data centers produce.

Leverton: Sustainability is huge for our customers, and it’s increasingly a key component of their decision-making criteria. In fact, many hyperscale cloud providers and corporations — large corporate enterprises — have declared very ambitious environmental responsibility objectives and are shifting to green energy.

Microsoft, as an example, is targeting over 70 percent renewable energy for its data centers by 2023. Amazon reached a 50 percent renewable energy target in 2018 and is now aiming for 100 percent.

Women and STEM step IT up 

Gardner: Let’s look at the sustainability issue again through the lens of talent and the people who are going to be supporting these great initiatives. Angie, when it comes to bringing more women into the STEM professions, how does the IT industry present itself as an attractive career path, say for someone just graduating from high school?

McMillin: When I look at children today, they're growing up with IT as part of their lives. That's a huge advantage for them. They see firsthand the value and impact it has on everything they do. I look at my nieces and nephews, and even grandkids, and they can flip through phones and tablets and use Xboxes, you name it, all faster than adults.

They're the next generation of IT. And now, with the COVID-19 situation, children are learning how to do schooling collaboratively — but also remotely. I believe we can engage children early with the devices they already know and use. The tools that they're now learning for schoolwork are a bridge to learning about what makes it all work: the data center industry. All of our data centers can be a part of that as they complete their schooling and go into higher education. They will remember this experience that we're all living through right now forever — so why not build upon that?

Gardner: Jaime, does that align with your personal experience in terms of technology being part of the very fabric of life?

Leverton: Oh, absolutely. I’m really proud of what I’ve seen happening in Canada. I have two young daughters and they have been able to take part in STEM camps, coding clubs, and technology is part of their regular curriculum in elementary school. The best thing we can do for our children is to teach them about technology, teach them how to be responsible with tech, and to keep them engaged with it so that over time they can be comfortable looking toward STEM careers later on.

Gardner: Angie, to get people focused on being part of the next generation of data centers, are there certain degrees, paths, or educational strategies that they should be pursuing?

Education paths lead to STEM careers 

McMillin: Yes. It's a really interesting time in education. There are countless degrees specifically geared toward the IT industry. So those are good bets, but specifically in networking and computers there's coding, there is cybersecurity, which is becoming even more important, and the list goes on.

We currently see a very large skill set gap specifically around the science and technology functions. So these offer huge opportunities for a young person's future. But I also want to highlight that the industry still needs the traditional engineering skills, such as power management, thermal management, and services, and equally important are the trade skills in this industry. There's a current gap in that workforce, and the training for it may be different, but it still has a really vital role to play.

And then finally, we'd be remiss if we didn't recognize the support functions: finance, HR, and marketing. People often think that you must be in the science or engineering part of the business to work in a particular market, and that really isn't true. We need skill sets across a broad range to really help make us successful.

Leverton: I am an IT leader and have been in this business for 20 years, and my undergraduate degrees are in political science and psychology. So I really think that it's all about how you think, and the other skills that you can bring to bear. More and more, we see emotional intelligence (EQ) and communication skills as the difference-maker in somebody's career success or career trajectory. We just need to make sure that people aren't afraid of coming out of more generalized degrees.

Gardner: We have heard a lot about the T-shaped structure, where we need the vertical technology background but also the cultural leadership, liberal arts, and collaboration skills.

Angie, you are involved with mentoring young women specifically. What’s your take on the potential? What do you see now as the diversity is welling up and the available pool of talent is shifting?

McMillin: I am, and I absolutely love it. One of the things I do is support a women's engineering summer camp, probably much like the ones Jaime's daughters attend, and other events at my alma mater, the University of Dayton. I support mentoring interns and other early career individuals, be they male or female. There is just so much potential in young people. They are absolutely eager to learn and play their part. They want to have relevance in the growing data center market and in the IT and sustainability issues that we talked about earlier. It's really fun and enjoyable to help them along that journey.

I get asked for advice, and there are two key themes that I repeat. One is that success doesn’t happen overnight. So enjoy those small steps on the journey that we take to much greater things, and the important part of that, is really just keep taking the steps, learn as much as you can, and don’t give up. The second thing is to keep an open mind in your career, being willing to try new things and opportunities and sometimes doors are going to open that you didn’t even imagine, which is absolutely okay.

As a prime example, I started my education in the aerospace industry. When that industry was hurting, I switched to mechanical. There is a broader range of that field of study, and I spent a large part of my career in automotive. I then moved to consumer and now I am in data center and IT. I am essentially a space geek and car junkie engineer with experience in engineering, strategy, sales, portfolio transformation, and operations. And now I am a general manager for an IT management portfolio.

If I hadn’t been open to new opportunities and doors along my career path, I wouldn’t be here today. So it’s an example for the younger generation. There are broad possibilities. You don’t have to have it all figured out now, but keep taking those steps and keep trying and keep learning — and the world awaits you, essentially.

Gardner: Angie, what sort of challenges have you faced over the years in your career? And how is that changing?

Women rise, challenges continue 

McMillin: It's a great question. My experience at Vertiv has been wonderful, with a support structure of diversity for women and leadership. We talked about the new WAVE program that Erin mentioned earlier. You can feel that across the organization; it starts at the top. I also had the benefit, as I think many of us on this podcast have had, of good sponsors along the way in our career journeys to help us get to where we are.

But that doesn't mean we haven't faced challenges throughout our careers. And there are challenges that still arise for many in the industry. In all the industries I have worked in, which have all been male-dominated, there is this necessity to prove yourself as a woman — like 10 times over — for your right to be at the table with a voice, regardless of the credentials you have coming in. It gets exhausting, and it's not consistent with male counterparts. It's "show me first, and then I might believe," and it's also BS. That's something that a lot of women in this industry, as well as in other industries, continue to have to surpass.

The other common challenge is that you need to over-prove yourself so that people know the position was earned. I always want people to know I got my position because I earned it and have something to offer, not because of a diversity quota. And that's a lot better today than it's been in years past. But I can tell you, I can still hear those words, the accusations made about female colleagues I have known throughout my career. When one woman gets elevated into a position and fails, it makes it a lot harder for other women to get the chance at an opportunity or promotion.

Now, again, it's getting better. But to give you a real-world example, think about the industries where there are women CEOs. If they don't succeed, boards get very nervous about putting another woman in a CEO position. If a male CEO doesn't succeed, he is often just not the right fit. So we still have a long way to go.

Gardner: Jaime at eStruxture, what’s been your experience as a woman in the technology field?

Leverton: Well, eStruxture has been an incredible experience for me. We have diversity throughout the organization. Actually, almost 50 percent of our population identifies as something other than a white heterosexual male, which is quite different from what I've experienced over the rest of my career in technology. From a female perspective, our senior leadership team is 35 percent women; our director population is almost 50 percent women.

So it's been a real breath of fresh air for me. In fact, I would say it really speaks to the values of our founder, who started this company three years ago with the intention of having a diverse organization. Not only does it better mirror our customers, but it absolutely reflects the values of our organization and the culture we wanted to create, and ultimately it drives better returns.

Gardner: Angie, why is the data center industry a particularly attractive career choice right now? What will the future look like in say five years? Why should people be thinking about this as a no-brainer when it comes to their futures?

Wanted: Skilled data center pros 

McMillin: We are in a fascinating time for data center trends. The future is very, very strong. We know now — and the kids of today certainly know — that data isn’t going away. It’s part of our everyday lives and it’s only going to expand — it’s going to get faster with more compute power and capability. Let’s face it, nobody has patience for slow anymore. There are trends in artificial intelligence (AI), 5G, and others that haven’t even been thought of yet that are going to offer enormous potential for careers for those looking to get into the IT space.

And when we think about that new trend — with the increase of working or schooling remotely as many of us are doing currently — that may permanently alter how people work and learn going forward. There will be a need for different tools, capabilities, and data management. And how this all remains secure and efficient is also very important.

Likewise, more data centers will need to operate independently and be managed remotely. They will need to be more efficient. Sustainability is going to remain very prevalent, especially for edge-of-the-network data centers that enable connectivity and productivity wherever they are.

Gardner: Now that we are observing International Data Center Day 2020, where do you see the state of the data center in just the next few years? Angie, what's going to be changing that makes this even more important to almost every aspect of our lives and businesses?

McMillin: We know now that the data center ecosystem is changing dramatically. The hybrid model is enabling a diversification of data workloads, where customers get the best of all the options available: cloud, data center, and edge. Our regional and global surveys of data center professionals show phenomenal growth in these deployments. And we also see a lot more remote management to operate and maintain these disparate locations securely.

We need more people with all the skill sets capable of supporting the advancements on the horizon like 5G, the industrial internet of things (IIoT), and AI.

Gardner: Erin, where do you see the trends of technology and human resources going that will together shape the future of the data center?

Dowd: I will piggyback on the technology trends that Angie just referenced and say the future requires more skilled professionals. It will be more competitive in the industry to hire those professionals, and so it’s really a great situation for candidates.

It makes it important for companies like Vertiv to continue creating environments that favor diversity. Diversity should manifest in many different ways, in an environment where we welcome and nurture a broad variety of people. That's the direction of the future and, naturally, the secret to success.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Vertiv.

Business readiness provides an agile key to surviving and thriving in these uncertain times

Just as the nature of risk has been a whirling dervish of late, the counter-forces of business continuity measures have had to turn on a dime as well. What used to mean better batteries for servers and mirrored, distributed datacenters has recently evolved into anywhere, any-circumstance solutions that keep workers working — no matter what.

Out-of-the-blue workplace disruptions — whether natural disasters, political unrest, or the current coronavirus pandemic — have shown how true business continuity means enabling all employees to continue to work in a safe and secure manner.

The next BriefingsDirect business agility panel discussion explores how companies and communities alike are adjusting to a variety of workplace threats using new ways of enabling enterprise-class access and distribution of vital data resources and applications.

And in doing so, these public and private sector innovators are setting themselves up to be more agile, intelligent, and responsive to their workers, customers, and citizens once the disaster inevitably passes.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to share stories on making IT systems and people evolve together to overcome workplace disruptions are Chris McMasters, Chief Information Officer (CIO) at the City of Corona, California; Jordan Catling, Associate Director of Client Technology at The University of Sydney in Australia; and Tim Minahan, Executive Vice President of Strategy and Chief Marketing Officer at Citrix. The panel is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tim, how has business readiness changed over the past few years? It seems to be a moving target.

Minahan: The very nature of business readiness is not about preparing for what's happening today — or responding to a specific incident. It's about having a plan to ensure that your work environment is ready for any situation.

That certainly means having in place the right policies and contingency plans, but it also — with today’s knowledge workforce — goes to enabling a very flexible and dynamic workspace infrastructure that allows you to scale up, scale down, and move your entire workforce on a moment’s notice.

You need to ensure that your employees can continue to work safely and remotely while giving your company the confidence that they’re doing that all in a very secure way, so the company’s information and infrastructure remains secure.

Gardner: Chris McMasters, as a CIO, you surely remember the days when IT systems were brittle, not easily adjusted, and hard to change. Has the nature of work and these business continuity challenges forced IT to be more agile?

McMasters: Yes, absolutely. There’s no better example than in government. Government IT is known for being on-premises and very resistant to change. In the current environment everything has been flipped on its head. We’re having to be flexible, more dynamic in how we deploy services, and in how users get those services.

Gardner: Jordan, higher education hasn’t necessarily been the place where we’d expect business continuity challenges to be overcome. But you’ve been dealing with an aggressive outbreak of the coronavirus in China.

Catling: It’s been a very interesting six months for us, particularly in higher education, with the Australian fires, floods, and now the coronavirus. But generally, as an institution that operates over 22 locations, with teaching hospitals and campuses — our largest campus has its own zip code — this is part of our day, enabling people to work from wherever they are.

The really interesting thing about this situation is we’re having to enable teaching from places that we wouldn’t ordinarily. We’re having to make better use of the tools that we have available to come up with innovative solutions to keep delivering a distinctive education that The University of Sydney is known for.

Gardner: And when you’re trying to anticipate challenges, something like COVID-19, the disease that emanates from the coronavirus, did you ever think that you’d have to virtually overnight provide students stuck in one location with the opportunity to continue to learn from a distance?

Catling: We need to always be preparing for a number of scenarios. We need to be able to rapidly deploy solutions to enable people to work from wherever they are. The flexibility and dynamic toolsets are really important for us to be able to scale up safely and securely.

Gardner: Tim, the idea of business continuity including workers not only working at home but perhaps in far-flung countries where they’ve been stuck because of a quarantine, for example — these haven’t always been what we consider IT business continuity. Why is worker continuity more important than ever?

Minahan: Globally we’re recognizing the importance of the overall employee experience and how it’s becoming a key differentiator for companies and organizations. We have a global shortage of medium- to high-skilled talent. We’re short about 85 million workers.

So companies are battling for the high ground on providing preferred ways to work. One way they do that is ensuring that they can provide flexible work environments, ones that rely on effective workplace technologies that enable employees to do their very best work wherever that might be. That might be in an office building. It might be in a remote location, or in certain situations they may need to turn on a dime and move from their office to the home force to keep operations going. Companies are planning to be flexible not just for business readiness but also for competitive advantage.

Gardner: Making this happen with enterprise-caliber, mission-critical reliability isn’t just a matter of renting some new end-devices and throwing up a few hotspots. Why is this about an end-to-end solution, and not just point solutions?

Be proactive, not reactive

Minahan: One of the most important things to recognize is that companies often first react to a crisis environment. Currently, you're hearing a lot of, "Hey, we just purchased 250,000 laptops to distribute to students and teachers to maintain their education," like the school system in Miami, for example.

However, while that may enable and empower students and employees, it may not come with proper security measures, putting the company's, workers', and customers' personal information at risk.

You need to plan from the get-go for a very flexible, remote workplace infrastructure — one that embeds security. That way — no matter where the work needs to get done, no matter on what device, or even on whatever unfamiliar network — you can be assured that the appropriate security policies are in place to protect the private information of your employees, the critical information of your business, and certainly any kind of customer or constituent information.

Gardner: Let’s hear what you get when you do this right. Jordan at The University of Sydney, you had more than 14,000 students unexpectedly quarantined in China, yet they still needed to somehow do their coursework. Tell us how this came about, and what you’ve done to accommodate them.

Quality IT during quarantine

Catling: Exactly right. As this situation began to develop in late January, we quite quickly began to scenario plan around the possible eventualities. A significant part of our role, as the technologists within the university, is making sure that we’re providing a toolset that can adapt to the needs of the community.

So we looked at various platforms that we were already using — and some that we hadn't — to work out what to do. Within the academic community, we needed the best set of tools for our staff to use in different and innovative ways. We quickly had to develop solutions, and we had to lean on our partners to help us with developing those.

Gardner: Did you know where your students were going to be housed? Was this a case where you knew that they were going to be in a certain type of facility with certain types of resources or are they scattered around? How did you deal with that last mile issue, so to speak?

Catling: The last mile issue is a really tricky one. We knew that people were going to be in various locations throughout mainland China, and elsewhere. We needed to quickly build a solution capable of supporting our students — no matter where they were, no matter what device they were using, and no matter what their local Internet connection was like.

We have had variability in the quality of our connections even within Australia. But now we needed a solution that would cater to as many people as possible and be considerate of quite a number of different scenarios that our students and staff would be facing.

Gardner: How were you able to provide that quality of service across so many applications given that level of variability?

Catling: The biggest focus for us, of course, is the safety and security of our staff and students. It’s paramount. We very quickly tried to work out where our people would be connecting from and tried to make sure that the resources we were providing, the connection to the resources, would be as close to them as possible to minimize the impact of that last mile.

We worked with Citrix to put together a set of application delivery controllers into Hong Kong to make sure that the access to the solution was nice and fast. Then we worked to optimize the connection back from Hong Kong to Sydney to maximize the user experience for our staff and students.
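The general pattern here, steering each user toward the nearest point of presence and then carrying traffic back to the origin over an optimized link, can be sketched in a few lines. This is only an illustrative sketch of the idea; the gateway hostnames are hypothetical placeholders, not the University's or Citrix's actual endpoints:

```python
import socket
import time

# Hypothetical gateways for illustration only; these hostnames do not exist.
POINTS_OF_PRESENCE = {
    "hong-kong": "gateway-hk.example.edu",
    "sydney": "gateway-syd.example.edu",
}

def tcp_latency_ms(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Time one TCP handshake to a gateway as a rough latency probe."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.monotonic() - start) * 1000

def nearest_gateway() -> str:
    """Pick the reachable point of presence with the lowest measured latency."""
    latencies = {}
    for name, host in POINTS_OF_PRESENCE.items():
        try:
            latencies[name] = tcp_latency_ms(host)
        except OSError:
            continue  # skip gateways that are unreachable from this network
    if not latencies:
        raise RuntimeError("no gateway reachable")
    return min(latencies, key=latencies.get)
```

Under this kind of scheme a student in mainland China would naturally land on the Hong Kong entry point, while a student in Australia would connect straight to Sydney, with the long-haul leg optimized separately.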

Gardner: So this has very much been a cloud-enabled solution. You couldn’t have really done this eight or 10 years ago.

Catling: Certainly not this quickly. Literally from putting a call in to Citrix, we went from design to a production environment within seven days. For me, that's unheard of, really. Regardless of whether it's 10 years ago or 10 weeks ago, it was quite a monumental effort. It highlighted the importance of having partners that seek to understand the business problems you're facing, come up with innovative solutions rapidly, and are able to deploy those at scale. And cloud is obviously a really important part of that.

We are still delivering on this solution. We have capabilities now that we didn't have a couple of months ago. We're able to provide applications to students no matter where they are, and they're able to continue their studies.

Obviously, the solution needs to remain flexible to the evolving needs. The situation is changing frequently and we are discovering new needs and new requirements. As our academics start to use the technology in different ways, we’re evolving the solution based on their feedback to try and maximize the experience for both our staff and students.

Gardner: Tim, when you hear Jordan describe this solution, does it strike you as a harbinger of more business continuity things to come? How has the coronavirus issue — and not just China but in Europe and in North America — reinforced your idea of what a workplace-enhanced business continuity solution should be?

Business continuity in crisis

Minahan: We continue to field a rising number of inquiries from customers and other companies. They are trying to assess the best ways to ensure continuity of their business operations and switch to a remote workforce in a very short period of time.

Situations like this remind us that we need to be planning today for any kind of business-ready situation. Using these technologies ensures that you can quickly adapt your work models, moving entire employee groups from an office to a remote environment, if needed, whether it’s because of virus, flood, or any other unplanned event.

What’s exciting for me is being able to use such agile work models and digital workspace technology to arm companies with new sources for growth and competitive advantage.

One good example is we recently partnered with the Center for Economics and Business Research to examine the impact remote work models and technologies have on business and economic growth. We found that 69 percent of people who are currently unemployed or economically inactive would be willing to start working if given the opportunity to work flexibly by having the right technology.

They further estimate that activating these untapped pools of talent by enabling flexible work-from-home models (especially for parents, workers in rural areas, retirees, part-time workers, and gig workers, folks who are normally outside of the traditional work pool) could drive upward of an initial $2 trillion in economic gains across the US economy. So the investment in readiness that folks are making is now being applied to drive ongoing business results even in non-crisis times.

Gardner: The coronavirus has certainly been leading the headlines recently, but it wasn’t that long ago that we had other striking headlines.

Chris, in California last fall the wildfires proved a recurring problem. Tell us about Corona and how adjusting to a very dangerous environment, while still requiring your key employees to continue to work, allowed you to meet a major business continuity challenge.

Fighting fire with cloud

McMasters: Corona is like a lot of local governments within the United States. I came from the private sector and have been in city IT for about four years now. When I first got there, everything was on-premises. Our backup was literally three miles away, on the other side of the freeway.

If there was a disaster and something totaled the city, literally all of our technology assets would be down, which concerned me. I used to work for a national company with offices all over, and we backed up across the country. So this was a much different environment. Yet we were dealing with public safety, with police, fire, and 911 services that can never go down. Citizens depend on all of that.

That was a wake-up call for me. At that time, we didn’t really have any virtual desktop infrastructure (VDI) going on. We did have server virtualization, but nothing in the cloud. In the government sector, we have a lot of regulation that revolves around the cloud and its security, especially when we are dealing with police and fire types of information. We have to be very careful. There are requirements both from the State of California and the federal government that we have to comply with.

At first, we used a government cloud, which was a little bit slower in terms of innovation because of all the regulations. But that was a first step to understanding what was ahead for us. We started this process about two years ago. At the time, we felt like we needed to push more of our assets to the cloud to give us more continuity.

At the end of the day, we realized we also needed to get the desktops up there, too: Using VDI and the cloud. And at the time, no one was doing that. But we went and talked to Citrix. We flew out to their headquarters, sat with their people, and discussed our initiative, what we are trying to accomplish, and how that would extend out to support our environment for public safety. And that means all of the people out at the edge who actually touch citizens and provide emergency support services.

That was the beginning of the journey, and Citrix has been there since day one. They have developed products around that particular idea for us right up to today.

In the last two years, we’ve had quite a few fires in the State of California. Corona butts right up against the forest line and so we have had a lot of damage done by fires, both in our city and in the surrounding county. And there have been the impacts that occur after fires, too, which include mudslides. We get the whole gamut of that stuff.

But now we find that those first responders have the data to take action. We get the data into their hands quickly, make sure it's secure on the way there, and we make it continuous so that it never fails. Those are the last people that we want to have fail.

We’ve been able to utilize this type of a platform where our data currently resides in two different datacenters in two different states. It’s on encrypted arrays at rest.

We are operating on a software-defined network so we can look at security from a completely different perspective. The old way was, “Let’s build a moat around it and a big wall, and hopefully no one gets in.” Now, instead we look at it quite differently. Our assets are protected outside of our facilities.

Those personnel riding in fire engines, in police cars, right up at the edge — they have to be secure right up to that edge. We have to maintain and understand the identity of that person. We need to know what applications they are accessing, or should not be accessing, and be secure all along that path.
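A minimal sketch of the kind of identity- and application-aware check McMasters is describing might look like the following. The roles and application names are hypothetical, invented only to illustrate evaluating access per user and per application rather than trusting a network perimeter:

```python
# Hypothetical role-to-application policy; not Corona's actual configuration.
ALLOWED_APPS = {
    "firefighter": {"dispatch", "incident-maps", "email"},
    "police-officer": {"dispatch", "records", "email"},
    "permit-clerk": {"permits", "email"},
}

def can_access(role: str, app: str) -> bool:
    """Grant access only if the user's role explicitly includes the application."""
    return app in ALLOWED_APPS.get(role, set())

# Every request is evaluated against identity, not network location:
assert can_access("firefighter", "incident-maps")
assert not can_access("permit-clerk", "records")
```

In a real deployment this policy lives in the workspace and identity layer rather than in application code, but the principle is the same: know who the person is and which applications they should, and should not, be reaching.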

This has all changed our outlook on how we deal with things and what a modern-day work environment looks like. The money we use comes from the taxes people pay, and we provide services for our citizens. The interesting thing about that is we're now driving toward the idea of government on-demand.

Before, when you would come home, right after a hard day’s work, city hall would be closed. Government was open 8 to 5, when people are normally working. So, when you want to conduct business at city hall, you have to take some time off of work. You try to find one day of the week, or a time when you might sneak in there to get your permits for something and proceed with your business.

But our new idea is different. Most of our services can be provided online for people. If we can do that, that’s fantastic, right? So, you can come home and say, “Hey, you know what? I was thinking about building an addition to my house.” So you go online, file your permits, and submit all of your documents electronically to us.

The difference that VDI provides for our employees is that I can now tap into a workforce of, let's say, a single mother who has a special-needs child and can't work normal hours, but she can work at night. That person can look at a permit at 6 or 7 pm and process it, and by 5 am the next day that process is done. You wake up in the morning, and your permit has been processed by the city and completed. That type of flexibility is integral to making government more effective for people.

It's not necessarily just the public safety support we are concerned about. It's also about generally providing flexible services for people and making sure government continues to operate.

Gardner:  Tim, it’s interesting that by addressing business continuity issues and disasters we are able to move very rapidly to a government on-demand or higher education on-demand. So, what are some of the larger goals when it comes to workforce agility?

Flexibility extends the business

Minahan: The examples that Chris and Jordan just gave are what excites me about flexible work models, empowered by digital workplace technologies, and the ability to embrace entirely new business models.

I used the example from the Center for Economics and Business Research and how to tap into untapped talent pools. Another example of a company using similar technology is eBay. eBay, like many of its competitors, would build a big call center, hire a bunch of people, and train them up, and then one of its competitors would build a call center down the street and steal them away. They would have rapid turnover. They finally said, "Enough is enough, we have to think of a different model."

Well, they used the same approach of providing a secure digital workspace to reach into new talent pools outside of big cities. They could now hire gig workers, stay-at-home parents, etc., and re-engage them in the workforce by using the workplace platform to arm them at the edge and provide a service that was formally only provided in a big work hub, a big call center.

They went from having zero home force workers to 600 by the end of last year, and they are on a path to 4,000 by the end of this year. eBay solved a big problem, which is providing support for customers. How do I have a call center in a very competitive market? Well, I turn the tables and create new pools of talent, using technology in an entirely different way.

Gardner: Jordan, now that you’ve had help from organizations like Citrix to deal with your tough issue of students stuck in China, or other areas where there’s a quarantine, are you going to take that innovation and use it in other ways? Is this a gift that keeps giving?

Catling: It's a really interesting question. What it's demonstrated to me is that, as technologists, we need to be working with all of our people across the organization to understand their needs and to provide the right tools, but not necessarily to be prescriptive in how they are used. This current coronavirus situation has demonstrated to us that a combination of just a few tools — for example, the Citrix platform, Zoom, Echo, and Canvas — means a very different thing to one person than to another person.

There’s such large variability in the way that education is delivered across the university, across so many disciplines, that it becomes about providing a flexible set of tools that all of our people can use in different and exciting ways. That extends not only to the current situation but to more normal times.

If we can provide the right toolset that’s flexible and meets the users where they are, and also make sure that the solutions provide a natural experience, that’s when you are really geared up well for success. The technology kind of fades into the background and becomes a true enabler of the bright minds across the institution.

Gardner: Chris, now that you’re able to do more with virtual desktops and delivering data regardless of the circumstances to your critical workers as well as to your citizens, what’s the next step?

Can you add a layer of intelligence rather than just being about better feeds and speeds? What comes next, and how would Citrix help you with that?

Intelligence improves government

McMasters: We’re neck deep in data analytics and in trying to understand how we can make impacts correctly by analyzing data. So adding artificial intelligence (AI) on top of those layers, understanding utilization of our resources, is the amazing part of where we’re going.

There's so much unused hardware and processing power tied up in our normal desktop machines. Being able to disrupt that and flip it up on its end is a fundamental change in how government operates. This is literally turning it on-end. I mean, AI can impact all the way down to how we do helpdesk, how it minimizes our response times and turnaround times, to increased productivity, and in how we service 160,000 people in my city. All of that changes.

Already I’m saving hundreds of thousands of dollars by using the cloud and VDI models and at the same time increasing all my service levels across the board. And now we can add this layer of business continuity to it, and that’s before we start benefitting from predictive AI and using data to determine asset utilization.

Moving from a CAPEX model to this OPEX model for government is something very new. It's something the private sector has definitely capitalized on, and I think the public sector is ripe for doing the same. So for us, it's changing everything, including our budget, how we deliver services, how we do helpdesk support, and the ways that we're assessing our assets and leveraging citizens' tax dollars correctly.

Gardner: Tim, organizations, both public and private sector, get involved with these intelligent workspaces in a variety of ways. Sometimes it might be a critical issue such as business continuity or a pandemic.

But ultimately, as Chris just mentioned, this is about digital business transformation. How are you able to take whatever on-ramp organizations are getting into an intelligent workspace and then give them more reasons to see ongoing productivity? How is this something that has a snowball effect on productivity?

AI, ML works with you

Minahan: Chris hit the nail on the head. Certainly, the initial on-ramp to a digital workspace provides employees with unified and secure access to everything they need to be productive, in one experience. That means all of their apps and all of their content, regardless of where it’s stored, what device they’re accessing it from, and where they’re accessing it from.

However, it gets really exciting when you go beyond that foundation of a unified experience in a secure environment toward infusing things like machine learning (ML), digital assistants, and bots to change the way that people work. These tools can extract the key insights and tasks employees need to act on and offer them up in real time in a very personalized way. Employees can then quickly take care of those tasks, remove that noise from their day, and even be guided toward the right next steps to be more productive, more engaged, and do much more innovative and creative work.

So, absolutely, AI and ML and the rise of bots are the next phase of all of this, where it’s not just a place you go to launch apps and work securely, but a place where you go to get your very best work done.

Gardner: Jordan, you were very impressively able to get more than 14,000 students to continue their education regardless of what Mother Nature threw at them. And you were able to do it in seven days. For those organizations that don’t want to be caught under such circumstances, that want to become proactive and prepared, what lessons have you learned in your most recent journey that you can share with them? How can they be better positioned to combat any unfortunate circumstances they might face?

Prioritize when and how you work

Catling: It’s almost becoming cliché to say, but work is something that you do — it’s not a place anymore. So when we’re looking at and assessing tools for how we support the university, we’re focusing on taking a cloud-first approach where it doesn’t matter where a student or staff member is. They have access to all the resources they need on-demand. That’s one of the real guiding principles we should be using in our decision-making process.

Scalability is also very important to us. Given the way education is delivered today with an on-campus model, demand is very peaky. We need to be mindful of how far, and how rapidly, a solution can scale. That’s important to consider, particularly in the higher education context. How quickly can you scale your environments up and down to meet varying demands?

We can use the Citrix platform in many different ways. It’s not only for us to provide applications out to students to complete coursework. It can also be used for providing secure access to data and workspaces. 

Also, it’s important to consider the number of flexible ways that each of the technology products you choose can be used. The Citrix platform, for example, is not only for providing applications out to students to complete their coursework. It can also be used to provide secure access to data and to workspaces. There are so many different ways it can be extended, and that’s a really important consideration when deciding which platform to use.

The final really important takeaway for us has been the establishment of true partnerships. We’ve had extremely good support from our partners, such as Citrix and Zoom, who very rapidly sought to understand and work with us to solve the unique business problems we’re facing. A real, true partnership is not one of just providing products, but of sitting down shoulder-to-shoulder, trying to understand, and also suggesting ways to use a technology that we may not have been thinking of, or that may never have been done before.

As Chris mentioned earlier, virtual desktops in the cloud weren’t a big thing all that many years ago. About a decade ago, we began working with Citrix to stream desktops to physical devices across campus.

At the time, that was a very unusual use of the technology. So I think that partnership is very important and something that organizations should develop and be ready to use. It goes in both directions at all times.

Gardner: Chris, now that you have, unfortunately, dealt with these last few harsh wildfire seasons in Southern California, what lessons have you learned? How do you make yourselves more like local government on demand?

Public-private partnerships

McMasters: That’s a big question. For us, we looked at breaking some of the paradigms that exist in government. Governments don’t have the same impetus to change as the private sector; they are less willing to take risks. However, there are ways to work with vendors and partners to mitigate a lot of that risk, ways to pilot and test cutting-edge technologies that don’t put you at risk as you push these things out.

There are very few vendors that I would consider such partners. I probably can count them on one hand in total, and the interesting thing is that when we were selecting a vendor for this particular project, we were looking for a true partner. In our case, it was Citrix and Microsoft who came to the table. And when I look back at what’s happened in our relationship with those two in particular, I couldn’t ask for anything better.

We have literally had technicians, engineers, everyone on-site, on the phone every step of the way as we have been developing this. They took a lot of the risk out for us, because we are dealing with public dollars and we need to make sure these projects work. To have that level of comfort and stability in the background and knowing that I can rely on these people was huge. It’s what allowed us to develop to where we are today, which is far advanced in the government world.

That’s where things have to change. This kind of public-private partnership is what the public sector needs in order to mature. It’s bidirectional; it goes both ways. There is a lot of information that we offer to them; there are a lot of things they do for us. And so it goes back and forth as we develop through the product cycle. It’s advantageous for both of us to be in it.

That’s where we sometimes lose focus, especially in the public sector. We don’t always understand what the private sector wants and where it is moving. It’s about being aligned on both sides of that equation, and it benefits both parties.

Technology is going to change, and it just keeps driving faster. There’s always another thing around the corner, but building these types of partnerships with vendors and understanding what they want helps them understand what you want, and then be able to deliver.

Gardner: Tim, how should businesses better work with vendor organizations to prepare themselves and their workers for a flexible future?

Minahan: First off, I would echo Chris’s comments. We all want government on-demand, and you need a solution like that. But how should they work together? There are two great examples here in The University of Sydney and the City of Corona.

It really starts by listening. What are the problems we are trying to solve in planning for the future? How do we create a digitally agile organization and infrastructure that allows us to pursue new business opportunities, and just as easily ensure business continuity? So start by listening, map out a joint roadmap together and innovate toward that.

We are collectively as an industry constantly looking to innovate, constantly looking to leverage new technologies to drive business outcomes — whether those are for our citizens, students, or clientele. Start by listening, doing joint and co-development work, and constantly sharing that innovation with the rest of the market. It raises all boats.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Citrix.


As containers go mainstream, IT culture should pivot to end-to-end DevSecOps


Container-based deployment models have rapidly gained popularity from cloud models to corporate data centers. IT operators are now looking to extend the benefits of containers to more use cases, including the computing edge.

Yet in order to push containers further into the mainstream, security concerns need to be addressed across this new end-to-end container deployment spectrum — and that means addressing security during development under the rubric of DevSecOps best practices.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Stay with us now as the next BriefingsDirect Voice of Innovation discussion examines the escalating benefits that come from secure and robust container use with Simon Leech, Worldwide Security and Risk Management Practice at Hewlett Packard Enterprise (HPE) Pointnext Services. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Simon, are we at an inflection point where we’re going to see containers take off in the mainstream? Why is this the next level of virtualization?

Leech: We are certainly seeing a lot of interest from our customers when we speak to them about the best practices they want to follow in terms of rapid application development.

One of the things that always held people back a little bit with virtualization was that you were always reliant on an operating system (OS) to manage the applications sitting on top of that OS, and to manage the application code that you would deploy to that environment.

But what we have seen with containers is that, as everything starts to follow a cloud-native approach, we start to deal with our applications as lots of individual microservices that all communicate with one another to provide the application experience to the user. It makes a lot more sense from a development perspective to address development through these small, microservice-based or module-based approaches.

So, while we are not seeing a massive influx of container-based projects going into mainstream production at the moment, there are certainly a lot of customers dipping their toes in the water to identify the best opportunities to adopt and address container use within their own application development environments.

Gardner: And because we saw developers grok the benefits of containers early and often, we have also seen them operate within a closed environment — not necessarily thinking about deployment. Is now the time to get developers thinking differently about containers — as not just perhaps a proof of concept (POC) or test environment, but as ready for the production mainstream?

Leech: Yes. One of the challenges I have seen with what you just described is that a lot of container projects start as a developer’s project on his laptop. The developer goes out there, identifies a container-based technology as something interesting to play around with, and, as time goes by, realizes he can actually make a lot of progress by developing his applications using a container-based architecture.

This is often done under the radar of management. One of the things we are discussing with customers as we address DevSecOps and DevOps is to make sure you get buy-in from the executive team and enable top-down integration.

What that means from an organizational perspective is that this is often done under the radar of management. One of the things we are discussing with our customers as we go and talk about addressing DevSecOps and DevOps initiatives is to make sure that you do get that buy-in from the executive team and so you can start to enable some top-down integration.

Don’t just see containers as a developer’s laptop project but look at it broadly and understand how you can integrate that into the overall IT processes that your organization is operating with. And that does require a good level of buy-in from the top.

Gardner: I imagine this requires a lifecycle approach to containers thinking — not just about the development, but in how they are going to be used over time and in different places.

Now, 451 Research recently predicted that the market for containers will hit $2.7 billion this year. Why do you think that the IT operators — the people who will be inheriting these applications and microservices — will also take advantage of containers? What does it bring to their needs and requirements beyond what the developers get out of it?

Quick-change code artists

Leech: One of the biggest advantages from an operational perspective is the ability to make fast changes to the code you are using. Whereas in a traditional application development environment a developer who needed to make a change to some code would have to request downtime to update the complete application, with a container-based architecture you only have to update parts of the architecture.

So, it allows you to make many more changes than you previously would have been able to deliver to the organization — and it allows you to address those changes very rapidly.

Gardner: How does this allow for a more common environment to extend across hybrid IT — from on-premises to cloud to hybrid cloud and then ultimately to the edge?


Leech: Well, applications developed in containers and within a cloud-native approach typically are very portable. So you aren’t restricted to a particular environment or OS version, for example. The container itself runs on top of any OS of the same genre. Obviously, you can’t run a Windows container on top of a Linux OS, or vice versa.

But within the general Linux space there is pretty much full compatibility. So it makes it very easy for containers to be developed in one environment and then released into different environments.

Gardner: And that portability extends to the hyperscale cloud environments, the public cloud, so is there a multi-cloud extensibility benefit?

Leech: Yes, definitely. You see a lot of developers developing their applications in an on-premises environment with the intention that they are going to be provisioned into a cloud. If they are done properly, it shouldn’t matter if that’s a Google Cloud Platform instance, a Microsoft Azure instance, or Amazon Web Services (AWS).

Gardner: We have quite an opportunity in front of us with containers across the spectrum of continuous development and deployment and for multiple deployment scenarios. What challenges do we need to think about to embrace this as a lifecycle approach?

What are the challenges to providing security specifically, making sure that the containers are not going to add risk – and, in fact, improve the deployment productivity of organizations?

Make security a business priority 

Leech: When I address the security challenges with customers, I always focus on two areas. The first is the business challenge of adopting containers, and the security concerns and constraints that come along with that. And the second one is much more around the technology or technical challenges.

If you begin by looking at the business challenges of how to adopt containers securely, this requires a cultural shift, as I already mentioned. If we are going to adopt containers, we need to make sure we get the appropriate executive support and move past the concept of the developer doing everything on his laptop. We also need to train our coders on the need for secure coding.

A lot of developers are not trained as security specialists. It makes sense to put a program into place that trains coders to think more about security, especially in a container environment where you have fast release cycles. 

A lot of developers have producing high-quality software fast as their main goal, and they are not trained as security specialists. It makes a lot of sense to put an education program in place that trains those internal coders to understand the need to think a little bit more about security, especially in a container environment where you have fast release cycles and the security checks sometimes get missed or aren’t properly applied. It’s good to start with a very secure baseline.

And once you have addressed the cultural shift, the next thing is to think about the role of the security team in your container development team, your DevOps development teams. And I always like to try and discuss with my customers the value of getting a security guy into the product development team from day one.

Often, we see in a traditional IT space that the application gets built, the infrastructure gets designed, and then the day before it’s all going to go into production someone calls security. Security comes along and says, “Hey, have you done risk assessments on this?” And that ends up delaying the project.

If you introduce the security person into the small, agile team as you build it to deliver your container development strategy, then they can think together with the developers. They can start doing risk assessments and threat modeling right from the very beginning of the project. That allows us to reduce the delays you might otherwise have with security testing.

At the same time, it also allows us to shift our testing left. In a traditional waterfall model, testing happens right before the product goes live, but in a DevOps or DevSecOps model it’s much better to embed the security best practices and proper tooling right into the continuous integration/continuous delivery (CI/CD) pipeline.

The last point around the business view is that, going back to the comment I made earlier, developers often are not aware of secure coding and how to make things secure. Providing a secure-by-default approach, or even a security self-service approach, gives developers access to a secure registry, for example, that provides known good instances of container images, or provides infrastructure and compliance as code, so that they can follow a much more template-based approach to security. That also pays a lot of dividends in the quality of the software as it goes out the door.
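
As a rough sketch of what that looks like in a pipeline, the snippet below gates a build on an image scan before anything reaches the registry. The image name and the image-scanner command are placeholders rather than a specific product; substitute whatever scanning tool your pipeline already uses.

```python
import subprocess
import sys

# Hypothetical image name -- substitute your own registry and tag.
IMAGE = "registry.example.com/shop-frontend:1.4.2"

# Fail the pipeline stage if the scanner reports findings above the allowed
# severity, so an unscanned or vulnerable image never reaches the registry.
# "image-scanner" is a placeholder for your organization's chosen tool.
result = subprocess.run(["image-scanner", "--fail-on", "high", IMAGE])
sys.exit(result.returncode)
```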

Gardner: Are we talking about the same security precautions that traditional IT people might be accustomed to but now extending to containers? Or is there something different about how containers need to be secured?

Updates, the container way 

Leech: A lot of the principles are the same. So, there’s obviously still a need for network security tools. There’s still a need to do vulnerability assessments. There is still a need for encryption capabilities. But the difference with the way you would go about using technical controls to protect a container environment is all around this concept of the shared kernel.

An interesting white paper has been released by the National Institute of Standards and Technology (NIST) in the US, SP 800-190, which is their Application Container Security Guide. And this paper identifies five container security challenges around risks with the images, registry, orchestrator, the containers themselves, and the host OS.

So, when we’re looking at defining a security architecture for our customers, we always look at the risks within those five areas and try to define a security model that protects those best of all.

One of the important things to understand when we’re talking about securing containers is that we have a different approach to the way we do updates. In a traditional environment, we take a gold image for a virtual machine (VM) and deploy it to the hypervisor. Then, if there is a missing patch or a required update, we roll that update out using whatever patch management tools we use.

In a container environment, we take a completely different approach. We never update running containers. The source of your known good image is your registry. The registry is where we update container images and keep their updated versions, and we use the container orchestration platform to make sure that the next time somebody calls for a new container, it’s launched from the new container image.

It’s important to remember we don’t update things in the running environment. We always use the container lifecycle and involve the orchestration platform to make those updates. And that’s really a change in the mindset for a lot of security professionals, because they think, “Okay, I need to do a vulnerability assessment or risk assessment. Let me get out my Qualys and my Rapid7,” or whatever, and, “I’m going to scan the environment. I’m going to find out what’s missing, and then I’m going to deploy patches to plug the gaps.”

So we need to make sure that our vulnerability assessment process gets built right into the CI/CD pipeline and into the container orchestration tools we use to address that needed change in behavior.
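
A minimal sketch of that registry-driven update flow, using the Kubernetes Python client as one example of an orchestrator API; the deployment, namespace, and image names are hypothetical.

```python
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in-cluster
apps = client.AppsV1Api()

# Point the Deployment at the updated, scanned image held in the registry.
# The orchestrator rolls out fresh containers; running ones are never patched.
patch = {"spec": {"template": {"spec": {"containers": [
    {"name": "web", "image": "registry.example.com/web:1.4.3"}]}}}}
apps.patch_namespaced_deployment(name="web", namespace="production", body=patch)
```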

Gardner: It certainly sounds like the orchestration tools are playing a larger role in container security management. Do those in charge of the container orchestration need to be thinking more about security and risk?

Simplify app separation 

Leech: Yes and no. I think the orchestration platform definitely plays a role and the individuals that use it will need to be controlled in terms of making sure there is good privileged account management and integration into the enterprise authentication services. But there are a lot of capabilities built into the orchestration platforms today that make the job easier.

One of the challenges we’ve seen for a long time in software development, for example, is that developers take shortcuts by hard-coding clear-text passwords into the code, because it’s easier. And, yeah, that’s understandable. You don’t need to worry about managing or remembering passwords.

But what you see a lot of orchestration platforms offering is the capability to deliver secrets management. So rather than storing the password within the code, you can now request the secret from the secrets management service that the orchestration platform offers to you.
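
In practice, that means the application reads the secret at runtime rather than carrying it in the source. A minimal sketch, assuming the orchestrator’s secrets store injects the value as an environment variable (the variable name is hypothetical):

```python
import os

# Injected at runtime by the orchestration platform's secrets management
# (for example, a Kubernetes Secret exposed as an environment variable),
# so no clear-text password ever lands in source control.
db_password = os.environ["DATABASE_PASSWORD"]
```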

Orchestration tools give you the capability to separate container workloads for differing sensitivity levels. This provides separation between the applications without having to think too much about it.

These orchestration tools also give you the capability to separate container workloads for differing sensitivity levels within your organization. For example, you would not want to run containers that operate your web applications on the same physical host as containers that operate your financial applications. Why? Because although you have the capability with the container environment using separate namespaces to separate the individual container architectures from one another, it’s still a good security best practice to run those on completely different physical hosts or in a virtualized container environment on top of different VMs. This provides physical separation between the applications. Very often the orchestrators will allow you to provide that functionality within the environment without having to think too much about it.
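
One way to express that separation, again sketched with the Kubernetes Python client, is to pin sensitive workloads to a dedicated pool of hosts; the node label, names, and image below are hypothetical.

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# A hypothetical node label keeps finance workloads off the hosts that run
# the public web tier; the orchestrator enforces the physical separation.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="ledger-service"),
    spec=client.V1PodSpec(
        node_selector={"workload-tier": "finance"},
        containers=[client.V1Container(
            name="ledger",
            image="registry.example.com/ledger:2.0.1")]))
core.create_namespaced_pod(namespace="finance", body=pod)
```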

Gardner: There is another burgeoning new area where containers are being used. Not just in applications and runtime environments, but also for data and persistent data. HPE has been leading the charge on making containers appropriate for use with data in addition to applications.

How should the all-important security around data caches and different data sources enter into our thinking?

Save a slice for security 

Leech: Because containers are temporary instances, it’s important that you’re not actually storing any data within the container itself. At the same time, and just as importantly, you’re not storing any of that data on the host OS either.

It’s important to provide persistent storage on an external storage array. Looking at storage arrays from HPE, for example, we have Nimble Storage and Primera. They have the capability, through plug-ins, to interact with the container environment and provide you with persistent storage that remains even as the containers are being provisioned and de-provisioned.

So your container itself, as I said, doesn’t store any of the data, but a well-architected application infrastructure will allow you to store that on a third-party storage array.
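
From the application’s point of view, that simply means writing to a path the platform has mapped to the external array instead of to the container’s own filesystem. A minimal sketch with a hypothetical mount path:

```python
import json
import os

# /mnt/orders is assumed to be a volume backed by external persistent storage
# (for example, via a storage-array plug-in); nothing is written inside the
# container's own, temporary filesystem.
DATA_DIR = os.environ.get("ORDERS_DIR", "/mnt/orders")

def save_order(order_id: str, order: dict) -> None:
    with open(os.path.join(DATA_DIR, f"{order_id}.json"), "w") as handle:
        json.dump(order, handle)
```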

Gardner: Simon, I’ve had an opportunity to read some of your blogs and one of your statements jumped out … “The organizational culture still lags behind when it comes to security.” What did you mean by that? And how does that organizational culture need to be examined, particularly with an increased use of containers?

Leech: It’s about getting the security guys involved in the DevSecOps projects early on in the lifecycle of that project. Don’t bring them to the table toward the end of the project. Make them a valuable member of that team. There was a comment made about the idea of a two-pizza team.

A two-pizza team means a meeting should never have more people in it than can be fed by two pizzas, and I think that applies equally to development teams when you’re working on container projects. They don’t need to be big; they don’t need to be massive.

It’s important to make sure there’s enough pizza saved for the security guy! You need to have that security guy in the room from the beginning to understand what the risks are. That’s a lot of where this cultural shift needs to change. And as I said, executive support plays a strong role in making sure that that happens.

Gardner: We’ve talked about people and process. There is also, of course, that third leg of the stool — the technology. Are the people building container platforms like HPE thinking along these lines as well? What does the technology, and the way it’s being designed, bring to the table to help organizations be DevSecOps-oriented?

Select specific, secure solutions 

Leech: There are a couple of ways that technology solutions are going to help. The first are the pre-production commercial solutions. These are the things that tend to get integrated into the orchestration platform itself, like image scanning, secure registry services, and secrets management.

A lot of those are going to be built into any container orchestration platform that you choose to adopt. There are also commercial solutions that support similar functions. It’s always up to an organization to do a thorough assessment of whether their needs can be met by the standard functions in the orchestration platform or if they need to look at some of the third-party vendors in that space, like Aqua Security or Twistlock, which was recently acquired by Palo Alto Networks, I believe.

No single solution covers all of an enterprise’s requirements. It’s a task to assess security shortcomings, what products you need, and then decide who will be the best partner for those total solutions.

And then there are the solutions that I would gather up as post-production commercial solutions. These are for things such as runtime protection of the container environment, container forensic capabilities, and network overlay products that allow you to separate your workloads at the network level and provide container-based firewalls and that sort of stuff.

Very few of these capabilities are actually built into the orchestration platforms. They tend to come from third parties such as Sysdig, Guardicore, and NeuVector. And then there’s another bucket of solutions, which are open source. These typically focus on a single function in a very cost-effective way and are generally community-led. These are solutions such as SonarQube, Platform as a Service (PaaS) tooling, and Falco, which is the open source project that Sysdig runs. You also have Docker Bench and Calico, a networking security tool.

But no single solution covers all of an enterprise customer’s requirements. It remains a bit of a task to assess where you have security shortcomings, what products you need, and who’s going to be the best partner to deliver those products with those technology solutions for you.

Gardner: And how are you designing Pointnext Services to fill that need to provide guidance across this still dynamic ecosystem of different solutions? How does the services part of the equation shake out?

Leech: We obviously have the technology solutions that we have built. For example, the HPE Container Platform, which is based around technology that we acquired as part of the BlueData acquisition. But at the end of the day, these are products. Companies need to understand how they can best use those products within their own specific enterprise environments.

I’m part of Pointnext Services, within the advisory and professional services team. A lot of the work that we do is around advising customers on the best approaches they can take. On one hand, we’d like them to purchase our HPE technology solutions, but on the other hand, a container-based engagement needs to be a services-led engagement, especially in the early phases where a lot of customers aren’t necessarily aware of all of the changes they’re going to have to make to their IT model.

At Pointnext, we deliver a number of container-oriented services, both in the general container implementation area as well as more specifically around container security. For example, I have developed and delivered transformation workshops around DevSecOps.

We also have container security planning workshops where we can help customers to understand the security requirements of containers in the context of their specific environments. A lot of this work is based around some discovery we’ve done to build our own container security solution reference architecture.

Gardner: Do you have any examples of organizations that have worked toward a DevSecOps perspective on continuous delivery and cloud native development? How are people putting this to work on the ground?

Edge elevates container benefits 

Leech: A lot of the customers we deal with today are still in the early phases of adopting containers. We see a lot of POC engagements where a particular customer wants to understand how they could take traditional applications and modernize or re-architect them into cloud-native or container-based applications.

There’s a lot of experimentation going on. A lot of the implementations we see start off small, so the customer may buy a single technology stack for the purpose of testing and playing around with containers in their environment. But they have intentions within 12 to 18 months of being able to take that into a production setting and reaping the benefits of container-based deployments.

Gardner: And over the past few years, we’ve heard an awful lot of the benefits for moving closer to the computing edge, bringing more compute and even data and analytics processing to the edge. This could be in a number of vertical industries, from autonomous vehicles to manufacturing and healthcare.

But one of the concerns, if we move more compute to the edge, is will security risks go up? Is there something about doing container security properly that will make that edge more robust and more secure?

Leech: Yes, a container project done properly can actually be more secure than a traditional VM environment. This begins from the way you manage the code in the environment. And when you’re talking about edge deployments, that rings very true.

In terms of the resources it has to use, when you’re talking about something like autonomous driving, having a shared kernel is going to be a lot lighter than running lots of instances of a VM, for example.

From a strictly security perspective, if you deal with container lifecycle management effectively, involve the security guys early, have a process around releasing, updating, and retiring container images in your registry, and have a process around introducing security controls and code scanning into your software development lifecycle (making sure that every container that gets released is signed with an appropriate enterprise signing key), then you have something that is very repeatable compared with a traditional virtualized approach to application delivery.

That’s one of the big benefits of containers. It’s very much a declarative environment. It’s something that you prescribe … This is how it’s going to look. And it’s going to be repeatable every time you deploy that. Whereas with a VM environment, you have a lot of VM sprawl. And there are a lot of changes across the different platforms as different people have connected and changed things along the way for their own purposes.

There are many benefits with the tighter control in a container environment. That can give you some very good security benefits.

Gardner: What comes next? How do organizations get started? How should they set themselves up to take advantage of containers in the right way, a secure way?

Begin with risk evaluation 

Leech: The first step is to do the appropriate due diligence. Containers are not going to be for every application. There are going to be certain things that you just can’t modernize, and they’re going to remain in your traditional data center for a number of years.

I suggest looking for the projects that are going to give you the quickest wins and use those POCs to demonstrate the value that containers can deliver for your organization. Make sure that you do appropriate risk awareness, work with the services organizations that can help you. The advantage of a services organization is they’ve probably been there with another customer previously so they can use the best practices and experiences that they have already gained to help your organization adopt containers.

Just make sure that you approach it using a DevSecOps model. There is a lot of discussion in the market at the moment about what to call it. Should it be DevSecOps, SecDevOps, or DevOpsSec? My personal opinion is to call it DevSecOps, because security in a DevSecOps model sits right in the middle of development and operations, and that’s really where it belongs.


In terms of assets, a Google search will find you plenty of information out there. But as I mentioned earlier, the NIST white paper SP 800-190 is a great starting point, not only to understand the container security challenges but also to get a good understanding of what containers can deliver for you.

At the same time, at HPE we are also committed to delivering relevant information to our customers. If you look on our website and also our enterprise.nxt blog site, you will see a lot of articles about best practices on container deployments, case studies, and architectures for running container orchestration platforms on our hardware. All of this is available for people to download and to consume.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


AI-first approach to infrastructure design extends analytics to more high-value use cases


The next BriefingsDirect Voice of artificial intelligence (AI) Innovation discussion explores the latest strategies and use cases that simplify the use of analytics to solve more tough problems.

Access to advanced algorithms, more cloud options, high-performance compute (HPC) resources, and an unprecedented data asset collection have all come together to make AI more attainable — and more powerful — than ever.

Major trends in AI and advanced analytics are now coalescing into top competitive differentiators for most businesses. Stay with us as we examine how AI is indispensable for digital transformation through deep-dive interviews on prominent AI use cases and their escalating benefits.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about analytic infrastructure approaches that support real-life solutions, we’re joined by two experts, Andy Longworth, Senior Solution Architect in the AI and Data Practice at Hewlett Packard Enterprise (HPE) Pointnext Services, and Iveta Lohovska, Data Scientist in the Pointnext Global Practice for AI and Data at HPE. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Andy, what are the top drivers for making AI more prominent in business use cases?

Longworth: We have three main things driving AI at the moment for businesses. First of all, we know about the data explosion. These AI algorithms require huge amounts of data. So we’re generating that, especially in the industrial setting with machine data.

Andy Longworth

Longworth

Also, the relative price of computing is coming down, giving the capability to process all of that data at accelerating speeds as well. You know, the graphics processing units (GPUs) and tensor processing units (TPUs) are becoming more available, enabling us to get through that vast volume of data.

And thirdly, the algorithms. If we look to organizations like Facebook, Google, and academic institutions, they’re making algorithms available as open source. So organizations don’t have to go and employ somebody to build an algorithm from the ground up. They can begin to use these pre-trained, pre-created models to give them a kick-start in AI and quickly understand whether there’s value in it for them or not.
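
As a small illustration of that kick-start, the sketch below loads a publicly available pre-trained image model and swaps in a new output layer for a two-class task. It assumes PyTorch and torchvision are installed, and the exact weights argument varies by torchvision version.

```python
import torch.nn as nn
from torchvision import models

# Start from an open, ImageNet-pre-trained network instead of training from scratch.
model = models.resnet18(weights="IMAGENET1K_V1")  # older torchvision: pretrained=True

# Replace the final layer for a hypothetical two-class task (e.g., defect / no defect)
# and fine-tune on the organization's own data.
model.fc = nn.Linear(model.fc.in_features, 2)
```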

Gardner: And how do those come together to impact what’s referred to as digital transformation? Why are these actually business benefits?

Longworth: They allow organizations to become what we call data-driven. They can use the massive amounts of data that they’ve previously generated but never tapped into to improve business decisions, impacting the way they drive the business through AI. It’s transforming the way they work.

AI data boost to business 

Across several types of industry, data is now driving the decisions. Industrial organizations, for example, improve the way they manufacture. Without the processing of that data, these things wouldn’t be possible.

Gardner: Iveta, how do the trends Andy has described make AI different now from a data science perspective? What’s different now than, say, two or three years ago?

Lohovska: Most of the previous AI algorithms were 30, 40, and even 50 years old in terms of the linear algebra and their mathematical foundations. The higher levels of computing power enable newer computations and larger amounts of data to train those algorithms.

Iveta Lohovska

Lohovska

Those two components are fundamentally changing the picture, along with improved taxonomies and the way people now think of AI as differentiated between classical statistics and deep learning algorithms. Now, it’s not just technical people who can interact with these technologies and analytic models. Semi-technical people can, with a simple drag-and-drop interaction based on the new products in the market, adopt and fail fast, or succeed faster, in the AI space. The models are also getting better and better in their performance based on the amount of data they get trained on and their digital footprint.

Gardner: Andy, it sounds like AI has evolved to the point where it is mimicking human-like skills. How is that different and how does such machine learning (ML) and deep learning change the very nature of work?

Let simple tasks go to machines 

Longworth: It allows organizations to move some of the jobs that were previously very tedious for people over to machines, and to repurpose people’s skills for more complex jobs. Take computer vision applied to quality control, for example. If you’re creating the same product again and again and paying somebody to look at that product to say whether there’s a defect on it, it’s probably not the best use of their skills. And, they become fatigued.

If you look at the same thing again and again, you start to miss features of that and miss the things that have gone wrong. A computer doesn’t get that same fatigue. You can train a model to perform that quality-control step and it won’t become tired over time. It can keep going for longer than, for example, an eight-hour shift that a typical person might work. So, you’re seeing these practical applications, which then allows the workforce to concentrate on other things.

Gardner: Iveta, it wasn’t that long ago that big data was captured and analyzed mostly for the sake of compliance and business continuity. But data has become so much more strategic. How are businesses changing the way they view their data?

Lohovska: They are paying more attention to the quality of the data and the variety of the data collection they are focused on. From a data science perspective, even if I want to say that the performance of models is extremely important, and that my data science skills are a critical component of the AI space and ecosystem, it’s ultimately about the quality of the data and the way it’s pipelined and handled.

Organizations will realize that being more selective and paying more attention to the foundations of how they handle big data — or small data — will get them to the data science part of the process.

This process of data manipulation, getting to the so-called last mile of the data science contribution, is extremely important. I believe it’s the critical step and foundation. Organizations will realize that being more selective and paying more attention to the foundations of how they handle big data — or small data – will get them to the data science part of the process.

You can already see the maturity as many customers, partners, and organizations pay more attention to the fundamental layers of AI. Then they can get better performance at the last mile of the process.

Gardner: Why are the traditional IT approaches not enough? How do cloud models help?

Cloud control and compliance 

Longworth: The cloud brings opportunities for organizations insomuch as they can try before they buy. So if you go back to the idea of processing all of that data, before an organization spends real money on purchasing GPUs, they can try them in the cloud to understand whether they work and deliver value. Then they can look at the delivery model. Does it make sense with my use case to make a capital investment, or do I go for a pay-per-use model using the cloud?

You also have the data management piece, which is understanding where your data is. From that sense, cloud doesn’t necessarily make life any less complicated. You still need to know where the data resides, control that data, and put in the necessary protections in line with the value of the data type. That becomes particularly important with legislation like the General Data Protection Regulation (GDPR) and the use of personally identifiable information (PII).

If you don’t have your data management under control and understand where all those copies of that data are, then you can’t be compliant with GDPR, which says you may need to delete all of that data.

So, you need to be aware of what you’re putting in the cloud versus what you have on-premises and where the data resides across your entire ecosystem.

Gardner: Another element of the past IT approaches has to do with particulars vs. standards. We talk about the difference between managing a cow and managing a herd.

How do we attain a better IT infrastructure model to attain digital business transformation and fully take advantage of AI? How do we balance between a standardized approach, but also something that’s appropriate for specific use cases? And why is the architecture of today very much involved with that sort of a balance, Andy?

Longworth: The first thing to understand is the specific use case and how quickly you need insights. We can process, for example, data in near real-time or we can use batch processing like we did in days of old. That use case defines the kind of processing.

If, for example, you think about an autonomous vehicle, you can’t batch-process the sensor data coming from that car as it’s driving on the road. You need to be able to do that in near real-time — and that comes at a cost. You not only need to manage the flow of data; you need the compute power to process all of that data in near real-time.

So, understand the criticality of the data and how quickly you need to process it. Then we can build solutions to process the data within that framework and within the right time that it needs to be processed. Otherwise, you’re putting additional cost into a use case that doesn’t necessarily need to be there.

When we build those use cases we typically use cloud-like technologies. That allows us portability of the use case, even if we’re not necessarily going to deploy it in the cloud. It allows us to move the use case as close to the data as possible.

When we build those use cases we typically use cloud-like technologies — be that containers or scalar technologies. That allows us portability of the use case, even if we’re not necessarily going to deploy it in the cloud. It allows us to move the use case as close to the data as possible.

For example, if we’re talking about a computer vision use case on a production line, we don’t want to be sending images to the cloud and have the high latency and processing of the data. We need a very quick answer to control the production process. So you would want to move the inference engine as close to the production line as possible. And, if we use things like HPE Edgeline computing and containers, we can place those systems right there on the production line to get the answers as quickly as we need.

So being able to move the use case where it needs to reside is probably one of the biggest things that we need to consider.

Gardner: Iveta, why is the so-called explore, experiment, and evolve approach using such a holistic ecosystem of support the right way to go?

Scientific methods and solutions

Lohovska: Because AI is not easy. If it were easy, then everyone would be doing it and we would not be having this conversation. It’s not a simple statistical use case or a program or business intelligence app where you already have the answer or even an idea of the questions you are asking.

The whole process is in the data science title. You have the word “science,” so there is a moment of research and uncertainty. It’s about the way you explore the data, the way you understand the use cases, starting from the fact that you have to define your business case, and you have to define the scope.

My advice is to start small, and not exhaust your resources or the trust of the different stakeholders. Also define the correct use case and the desired return on investment (ROI). HPE is even working on the definitions and the business case when approaching an AI use case, trying to understand the level of complexity and the level of prediction accuracy needed to achieve the use case’s success.

Such an exploration phase is extremely important so that everyone is aligned and finds a right path to minimize failure and get to the success of monetizing data and AI. Once you have the fundamentals, once you have experimented with some use cases, and you see them up and running in your production environment, then it is the moment to scale them.

I think we are doing a great job bringing all of those complicated environments together, with their data complexity, model complexity, and networking and security regulations into one environment that’s in production and can quickly bring value to many use cases.

This flow of experimenting, rather than approaching things as if you already have a fixed answer or a fixed approach, is extremely important, and it is the way we at HPE are approaching AI.

Gardner: It sounds as if we are approaching some sort of a unified reference architecture that’s inclusive of systems, cloud models, data management, and AI services. Is that what’s going to be required? Andy, do we need a grand unifying theory of AI and data management to make this happen?

Longworth: I don’t think we do. Maybe one day we will get to that point, but what we are reaching now is a clear understanding of which architectures work for which use cases and business requirements. We are then able to apply them without having to experiment every time, which complements what Iveta said.

When we start to look at these use cases, when we engage with customers, what’s key is making sure there is business value for the organization. We know AI can work, but the question is, does it work in the customer’s business context?

If we can take out a good deal of that experimentation and come in with a fairly good answer to the use case in a specific industry, then we have a good jump start on that.

As time goes on and AI develops, we will see more generic AI solutions that can be used for many different things. But at the moment, it’s really still about point solutions.

Gardner: Let’s find out where AI is making an impact. Let’s look first, Andy, at digital prescriptive maintenance and quality control. You mentioned manufacturing a little earlier. What’s the problem, context, and how are we getting better business outcomes?

Monitor maintenance with AI

Longworth: The problem is the way we do maintenance schedules today. If you look back in history, we had reactive maintenance that was basically … something breaks and then we fix it.

Now, most organizations are in a preventative mode so a manufacturer gives a service window and says, “Okay, you need to service this machinery every 1,000 hours of running.” And that happens whether it’s needed or not.

Read the White Paper on Digital Prescriptive Maintenance and Quality Control

When we get into prescriptive and predictive maintenance, we only service those assets as they actually need it, which means having the data, understanding the trends, recognizing if problems are forthcoming, and then fixing them before they impact the business.

That machinery data may include temperature, vibration, and speed readings, giving you a condition-based monitoring view and a real-time understanding of what’s happening with the machinery. You can then also use past history to predict what is going to happen with that machine in the future.

We can get to a point where we know in real time what’s happening with the machinery and have the capability to predict the failures before they happen.

The prescriptive piece comes in when we understand the business criticality or the business impact of an asset. If you have a production line and you have two pieces of machinery on that production line, both may have the identical probability of failure. But one is on your critical manufacturing path, and the other is some production buffer.

The prescriptive piece goes beyond the prediction to understand the business context of that machinery and applying that to how you are behaving, and then how you react when something happens with that machine.

As a business, the way that you are going to deal with those two pieces of machinery is different. You will treat the one on the critical path differently than the one where you have a product buffer. And so the prescriptive piece goes beyond the prediction to understanding the business context of that machinery and applying that to how you are behaving, and then how you react when something happens with that machine.

That’s the idea of the solution when we build digital prescriptive maintenance. The side benefit that we see is the quality control piece. If you have a large piece of machinery that you can test to it running perfectly during a production run, for example, then you can say with some certainty what the quality of the outcoming product from that machine will be.
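
A highly simplified sketch of that predictive-plus-prescriptive idea: an anomaly detector is trained on historical sensor readings, and the response to a flagged machine is weighted by its business criticality. The feature values, machine names, and criticality table are all hypothetical.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Historical readings per machine: temperature, vibration, speed.
history = np.array([[71.2, 0.031, 1480],
                    [70.8, 0.029, 1495],
                    [71.5, 0.033, 1490]])  # ... many more rows in practice

detector = IsolationForest(random_state=0).fit(history)

# Hypothetical business context: is the asset on the critical manufacturing path?
criticality = {"press-01": "critical-path", "press-02": "buffered"}

def prescribe(machine_id: str, latest_reading: list) -> str:
    anomalous = detector.predict([latest_reading])[0] == -1  # -1 flags an outlier
    if not anomalous:
        return "no action"
    if criticality.get(machine_id) == "critical-path":
        return "schedule immediate service"
    return "service at next planned window"

print(prescribe("press-01", [78.9, 0.062, 1470]))
```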

Gardner: So we have AI watching over manufacturing and processing. It’s probably something that would make you sleep a little bit better at night, knowing that you have such a powerful tool constantly observing and reporting.

Let’s move on to our next use case. Iveta, video analytics and surveillance. What’s the problem we need to solve? Why is AI important to solving it?

Scrutinize surveillance with AI 

Lohovska: For video surveillance and video analytics in general, the overarching field is computer vision. This is the most mature and currently the trendiest AI field, simply because the amount of data is there, the diversity is there, and the algorithms are getting better and better. It’s no longer state-of-the-art, where it’s difficult to grasp, adopt, and bring into production. So, now the main goal is moving into production and monetizing these types of data sources.

Read the White Paper on Video Analytics and Surveillance

When you talk about video analytics or surveillance, or any kind of quality assurance, the main problem is detecting and improving on human errors, behaviors, and environments. Telemetry plays a huge role here, and there are many components and constraints to consider in this environment.

That makes it hardware-dependent and also requires AI at the edge, where most of the algorithms and decisions need to happen. If you want to detect fire, detect fraud or prevent certain types of failure, such as quality failure or human failure — time is extremely important.

As HPE Pointnext Services, we have been working on our own solutions and reference architectures to approach those problems, because the complexity of the environment, with different cameras and different hardware handling the data acquisition process, is enormous and very diverse even at the beginning. There is no one-size-fits-all. There is no one provider or one solution that can handle surveillance use cases, or broad analytical use cases at the manufacturing plant or the oil and gas rig where you are trying to detect fire or spills, across all the different environments. So being able to approach it holistically, choose the right solution for the right component, and design the architecture is key.

Also, it’s essential to have the right hardware and edge devices to acquire the data and handle the telemetry. Let’s say when you are positioning cameras in an outside environment and you have different temperatures, vibrations, and heat. This will reflect on the quality of the acquired information going through the pipeline.

Some of the benefits of use cases that employ computer vision and video surveillance include getting real-time information from manufacturing plants and knowing that all of the safety and security standards are being met, that the people operating there are following the instructions, and that they have the safeguards required for that specific manufacturing plant.

When you have a quality assurance use case, video analytics is one source of information to tackle the problem; improving the quality of your products or batches is just one application in the computer vision field. Having the right architecture, being agile and flexible, and finding the right solution for the problem, with the right models deployed at the right edge device or the right camera, is something we are doing right now. We have several partners working to solve the challenges of video analytics use cases.
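
As a stripped-down sketch of that edge inference loop, the snippet below reads frames from a camera and passes them to a locally deployed model, so only detection events need to leave the site. The camera index and the detect_safety_violation call are placeholders for whatever camera and trained model a given plant actually uses; OpenCV is assumed to be available.

```python
import cv2

def detect_safety_violation(frame) -> bool:
    """Placeholder for a locally deployed model (e.g., missing hard hat, smoke)."""
    return False

cap = cv2.VideoCapture(0)  # hypothetical camera index on the edge device
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if detect_safety_violation(frame):
        # Only the event, not the raw video, needs to leave the site.
        print("alert: safety rule violated")
cap.release()
```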

Gardner: When you have a high-scaling, high-speed AI to analyze video, it’s no longer a gating factor that you need to have humans reviewing the processes. It allows video to be used in so many more applications, even augmented reality, so that you are using video on both ends of the equation, as it were. Are we seeing an explosion of applications and use cases for video analytics and AI, Iveta?

Lohovska: Yes, absolutely. The impact of algorithms in this space is enormous. Also, the open source datasets and pre-trained models, such as ImageNet and ResNet, give you a huge amount of data on which to train almost any kind of algorithm. You can adjust them and fine-tune them for your own use cases, whether it's healthcare, manufacturing, or video surveillance. It's very enabling.
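To make the "adjust and fine-tune for your own use case" point concrete, here is a minimal transfer-learning sketch: start from a ResNet pre-trained on ImageNet and retrain only the final layer for a custom task, such as defect detection on a production line. It assumes a recent torchvision release; the dataset directory and the two-class setup are hypothetical.

```python
# Minimal sketch of transfer learning: reuse ImageNet-trained features and
# retrain only the classifier head for a custom two-class task.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)  # ImageNet weights
for p in model.parameters():
    p.requires_grad = False                        # keep the pre-trained features
model.fc = nn.Linear(model.fc.in_features, 2)      # new head for 2 classes

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
data = datasets.ImageFolder("factory_images/", transform=tfm)   # hypothetical path
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for images, labels in loader:                      # a single pass is enough for the sketch
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```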

You can see the diversity of the solutions people are developing and the different problems they are tackling using computer vision capabilities — not only on the algorithm side, but also on the hardware side, because the cameras are getting more and more powerful.

Currently, we are working on several projects in the non-visible human spectrum. This is enabled by the further development of the hardware acquiring those images that we can’t see.

Gardner: If we can view and analyze machines and processes, perhaps we can also listen and talk to them. Tell us about speech and natural language processing (NLP), Iveta. How is AI enabling businesses to transform themselves with these technologies?

Speech-to-text to protect

Lohovska: This is another strong field where AI is used and still improving. It's not as mature as computer vision, simply because of the complexity of human language and speech, and of the way speech gets recorded and transferred. It's a bit more complex, so it's not only a problem for technologists and people writing algorithms; it also takes linguists who can frame the grammar problems and write the right equations to solve them.

Read the White Paper on

Speech and Natural Language Processing 

But one very interesting field in the speech and NLP area is speech-to-text — being able to transcribe speech into text. It's very helpful for organizations handling emergency calls, or for fraud detection, where you need to detect fraud or danger in real time. Detecting that someone is in danger is a very common use case for law enforcement and security organizations, and the same technology can simply improve the quality of service in call centers.
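As a small, hedged illustration of that speech-to-text pipeline, the sketch below transcribes a recorded call and flags it for human review when danger-related keywords appear. The open source SpeechRecognition package, the audio file name, and the keyword list are assumptions for illustration only, not part of any reference architecture.

```python
# Minimal sketch: transcribe a recorded call and flag it for escalation if
# danger-related keywords appear in the transcript.
import speech_recognition as sr

DANGER_KEYWORDS = {"help", "fire", "weapon", "threat"}   # hypothetical watch list

recognizer = sr.Recognizer()
with sr.AudioFile("call_recording.wav") as source:       # hypothetical recording
    audio = recognizer.record(source)

text = recognizer.recognize_google(audio)                # any STT backend would do here
hits = DANGER_KEYWORDS.intersection(text.lower().split())
if hits:
    print("Escalate call, keywords detected:", hits)
else:
    print("No keywords detected; log transcript for quality review.")
```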

This example is industry- and vertical-independent. You can be in finance, manufacturing, or retail — all of them have some kind of customer support. This is the most common use case: being able to record calls and improve the quality of your services based on the analysis you can apply. Similar to the video analytics use case, the problem here, too, is handling the complexity of different algorithms, different languages, and the varying quality of the recordings.

A reference architecture, where the different components are designed with exactly this holistic approach, allows the user to explore, evolve, and experiment in this space. We choose the right component for the right problem and decide how to approach it.

And in this case, if we combine the right data science tools with the right processing tools and the right algorithms on top of them, then you can design the solution and solve the specific problem.

Gardner: Our next and last use case for AI is one people are probably very familiar with, and that’s the autonomous driving technology (ADT).

Andy, how are we developing highly automated-driving infrastructures that leverage AI and help us get to that potential nirvana of truly self-driving and autonomous vehicles?

Data processing drives vehicles 

Longworth: There are several problems around highly autonomous driving as we have seen. It’s taking years to get to the point where we have fully autonomous cars and there are clear advantages to it.

If you look at, for example, what the World Health Organization (WHO) says, there are more than 1 million deaths per year in road traffic accidents. One of the primary drivers for ADT is that we can reduce the human error in cars on the road — and reduce the number of fatalities and accidents. But to get to that point we need to train these immensely complex AI algorithms that take massive amounts of data from the car.

Just purely from the sensor point of view, we have high-definition cameras giving 360-degree views around the car. You have radar, GPS, audio, and vision systems. Some manufacturers use light detection and ranging (LIDAR), some not. But you have all of these sensors giving massive amounts of data. And to develop those autonomous cars, you need to be able to process all of that raw data.

Read the White Paper on

Development of Self-Driving Infrastructure 

Typically, in an eight-hour shift, an ADT car generates somewhere between 70 and 100 terabytes of data. If you have an entire fleet of cars, you need to get that data off each car very quickly so you can get them back out on the road as soon as possible. Then you need to move that data from where you offload it into the data center, so that the developers, data scientists, analysts, and engineers can build the next iteration of the autonomous driving strategy.

When you have built that, tested it, and done all the good things that you need to do, you need to next be able to get those models and that strategy from the developers back into the cars again. It’s like the other AI problems that we have been talking about, but on steroids because of the sheer volume of data and because of the impact of what happens if something should go wrong.

At HPE Pointnext Services, we have developed a set of solutions that address several of the pain points in the ADT development process. First is the ingest: how can we use HPE Edgeline processing in the car to pre-process data and reduce the amount that has to be sent back to the data center? Also, after the eight-hour drive, you want to send back the most important data first, and then send the routine, backup data later.
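Here is a minimal sketch of that "most important data first" idea: after a drive, order the recorded segments so flagged events are uploaded before routine footage. The segment names, priorities, and sizes are made up; a real ingest pipeline would also pre-process data on the car, as described above.

```python
# Minimal sketch: prioritize the offload queue so critical events reach the
# data center before routine footage.
from dataclasses import dataclass

@dataclass
class Segment:
    name: str
    size_gb: float
    priority: int      # 0 = critical event, 1 = interesting, 2 = routine

segments = [
    Segment("cam_front_0700.bin", 850.0, 2),
    Segment("disengagement_0812.bin", 12.5, 0),
    Segment("lidar_sweep_0900.bin", 430.0, 1),
]

# Critical events first; within a priority level, smaller files first so the
# developers see something useful as early as possible.
upload_order = sorted(segments, key=lambda s: (s.priority, s.size_gb))
for seg in upload_order:
    print(f"uploading {seg.name} ({seg.size_gb} GB)")
```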


The second piece is the data platform itself, building a massive data platform that is extensible to store all the data coming from the autonomous driving test fleet. That needs to also expand as the fleet grows as well as to support different use cases.

The data platform and the development platform are massive not only in terms of the amount of data they need to hold and process, but also in terms of the required tooling. We have been developing reference architectures to enable automotive manufacturers, along with the suppliers of those automotive systems, to build their data platforms and provide all the processing they need — so their data scientists can continuously develop autonomous driving strategies, test them in a highly automated way, and also give the relevant suppliers access to the data.

For example, the sensor suppliers need to see what’s happening to their sensors while they are on the car. The platform that we have been putting together is really concerned with having the flexibility for those different use cases, the scalability to be able to support the data volumes of today, but also to grow — to be able to have the data volumes of the not-too-distant future.

The platform also supports speed and data locality — being able to provide high-speed parallel file systems, for example, to feed the ML development systems and help train the models.

So all of this pulls together the different components we have talked about with the different use cases, but at a scale that is much larger than several of the other use cases, probably put together.

Gardner: It strikes me that the ADT problem, if solved, enables so many other major opportunities. We are talking about micro data centers that provide high-performance compute (HPC) at the edge. We are talking about the right hybrid approach to the data management problem — what to move, what to keep local, and how to apply a lifecycle approach to it all. So, ADT is really a key use-case scenario.

Why is HPE uniquely positioned to solve ADT that will then lead to so many enabling technologies for other applications?

Longworth: Like you said, it's a micro data center — every autonomous driving car essentially becomes a data center on wheels. So you have to provide that compute at the edge to process all of the sensor data.

If you look at the HPE portfolio of products, there are very few organizations that have edge compute solutions with the required processing power in such small packages. But it's also about being able to wrap it up — not only the hardware, but the solution on top and the support — and to provide a flexible delivery model.

Lots of organizations want a cloud-like experience, not just in the way they consume the technology, but also in the way they pay for it. By providing everything as a service, HPE lets you pay for your autonomous driving platform as you use it. Again, there are very few organizations in the world that can offer that end-to-end value proposition.

Collaborate and corroborate 

Gardner: Iveta, why does it take a team-sport and solution-approach from the data science perspective to tackle these major use cases?


Lohovska: I agree with Andy. What matters is the way we approach those complex use cases, and the fact that you can have them as a service — not only infrastructure-as-a-service (IaaS) or data-as-a-service (DaaS), but AI and modeling-as-a-service (MaaS). You can have a marketplace for models, plug-and-play different technologies, experiment, and rapidly deploy them, which lets you quickly get value out of those technologies. That is something we are doing on a daily basis with amazing experts and people with knowledge of the different layers. They can attack the complexity of those use cases from each side, because it requires not just data science and hardware, but a lot of domain-specific expertise to solve those problems. This is something we are doing in-house.

And I am extremely happy to say that I have the pleasure to work with all of those amazing people and experts within HPE.

Gardner: And there is a great deal more information available on each of these use cases for AI. There are white papers on the HPE website in Pointnext Services.

What else can people do, Andy, to get ready for these high-level AI use cases that lead to digital business transformation? How should organizations be setting themselves up on a people, process, and technology basis to become adept at AI as a core competency?

Longworth: It is about people, technology, process, and all these things combined. You don’t go and buy AI in a box. You need a structured approach. You need to understand what the use cases are that give value to your organization and to be able to quickly prototype those, quickly experiment with them, and prove the value to your stakeholders.

Where a lot of organizations get stuck is moving from that prototyping, proof of concept (POC), and proof of value (POV) phase into full production. It is tough getting the processes and pipelines that enable you to transition from that small POV phase into a full production environment. If you can crack that nut, then the next use-cases that you implement, and the next business problems that you want to solve with AI, become infinitely easier. It is a hard step to go from POV through to the full production because there are so many bits involved.

You have that whole value chain — from grabbing hold of the data at the point of creation, to processing that data, to making sure you have the right people and processes around it. And when you come out with an AI solution that gives some form of inference — some form of answer — you need to be able to act upon that answer.

You can have the best AI solution in the world, giving you the best predictions, but if you don't build those predictions into your business processes, you might as well never have made them in the first place.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Automation and connectivity will enable the modern data center to extend to many more remote locations

Enterprise IT strategists are adapting to new demands from the industrial edge, 5G networks, and hybrid deployment models that will lead to more diverse data centers across more business settings.

That’s the message from a broad new survey of 150 senior IT executives and data center managers on the future of the data center. IT leaders and engineers say they must transform their data centers to leverage the explosive growth of data coming from nearly every direction.

Yet, according to the Forbes-conducted survey, only a small percentage of businesses are ready for the decentralized and often small data centers that are needed to process and analyze data close to its source.

The next BriefingsDirect discussion on the latest data center strategies unpacks how more self-healing and automation will be increasingly required to manage such dispersed IT infrastructure and support increasingly hybrid deployment scenarios.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Joining us to help learn more about how modern data centers will efficiently extend to the computing edge is Martin Olsen, Vice President of Global Edge and Integrated Solutions at VertivTM. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Martin, what’s driving this movement away from mostly centralized IT infrastructure to a much more diverse topology and architecture?

Martin Olsen

Olsen

Olsen: It’s an interesting question. The way I look at it is it’s about the cloud coming to you. It certainly seems that we are moving away from centralized IT or centralized locations where we process data. It’s now more about the cloud moving beyond that model.

We are on the front steps of a profound re-architecting of the Internet. Interestingly, there’s no finish line or prescribed recipe at this point. But we need to look at processing data very, very differently.

Over the past decade or more, IT has become an integral part of our businesses. And it’s more than just back-end applications like customer relationship management (CRM), enterprise resource planning (ERP), and material requirements planning (MRP) systems that service the organization. It’s also become an integrated fabric to how we conduct our businesses.

Meeting at the edge 

Gardner: Martin, Cisco predicts there will be 28.5 billion connected devices by 2022, and KPMG says 5G networks will carry 10,000 times more traffic than current 4G networks. We’re looking at an “unknown unknown” here when it comes to what to expect from the edge.

Olsen: Yes, that’s right, and the starting point is well beyond just content distribution networks (CDNs), it’s also about home automation, so accessing your home security cameras, adjusting the temperature, and other things around home automation.

That’s now moving to business automation, where we use compute and generate data to develop, design, manufacture, deploy, and operate our offerings to customers in a much better and differentiated fashion.

We’re also trying to improve the customer experience and how we interact with consumers. So billions of devices generating an unimaginable amount of data out there, is what has become known as edge computing, which means more computing done at or near the source of data.

In the past, we pushed that data out to be consumed centrally; now it's much more about data meeting people — data interacting with people in a distributed IT environment. And then, going beyond that, is 5G.


We see a paradigm shift in the way we use IT. Take, for example, the amount of tech that goes into a manufacturing facility, especially high-tech manufacturing. It’s exploding, with tens of thousands of sensors deployed in just one facility to help dramatically improve productivity, differentiate, and drive efficiency into the business.

Retail operations, from a compute standpoint, now require location services to offer a personalized experience — in the pre-shop phase, when you go into the store, and potentially in the post-shop, follow-up experience.

We need to deliver these services quickly, and that requires lower latency and higher levels of bandwidth. It’s increasingly about pushing out from a central standpoint to a distributed fashion. We need to be rethinking how we deploy data centers. We need to think about the future and where these data centers are going to go. Where are we going to be processing all of this data?

Where does the data go?

Gardner: The complexity over the past 10 years about factoring cloud, hybrid cloud, private cloud, and multi-cloud is now expanding back down into the organization — whether it’s an environment for retail, home and consumer, and undoubtedly industrial and business-to-business. How are IT leaders and engineers going to update their data centers to exploit 5G and edge computing opportunities despite this complexity?

Olsen: You have to think about it differently around your physical infrastructure. You have the data aspect of where data moves and how you process it. That’s going to sit on physical infrastructure somewhere, and it’s going to need to be managed somehow.

Learn How Self-Healing and Automation 

Help Manage Dispersed IT Infrastructure 

You should, therefore, think differently about redesigning and deploying the physical infrastructure. How do you operate and manage it? The concept of a data center has to transform and evolve. It’s no longer just a big building. It could be 100, 1,000, or 10,000 smaller micro data centers. These small data centers are going to be located in places we had previously never imagined you would put in IT infrastructure.

And so, the reliance on onsite technical and operational expertise has to evolve, too. You won't necessarily have that technical support — a data center engineer walking the halls of a massive facility all day, for example. The equipment is going to be in places like the backroom of a retail store, a manufacturing facility, or the base of a cell tower. It could be highly inaccessible.

You'll need solutions that offer predictive operations and have self-healing capabilities — where components can fail in place but the system still operates thanks to built-in redundancy. You want to deploy solutions with zero-touch provisioning, so you don't have to go to every site to set it up and configure it. It needs to be done remotely, with automation built in.
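To show what such a self-healing loop might look like in the simplest terms, here is a hedged sketch that polls each remote site, attempts an automated remediation when a health check fails, and escalates to a human only if that fails too. The site list, health endpoint, and restart action are hypothetical placeholders, not any particular vendor's API.

```python
# Minimal sketch: remote monitoring with automated first-response remediation.
import time
import requests

SITES = ["https://edge-01.example.net", "https://edge-42.example.net"]  # hypothetical sites

def healthy(site: str) -> bool:
    try:
        return requests.get(f"{site}/health", timeout=5).status_code == 200
    except requests.RequestException:
        return False

while True:
    for site in SITES:
        if healthy(site):
            continue
        # First response is automated: ask the site to restart its services.
        try:
            requests.post(f"{site}/actions/restart", timeout=5)
        except requests.RequestException:
            pass
        time.sleep(30)
        if not healthy(site):
            print(f"ESCALATE: {site} still unhealthy after automated remediation")
    time.sleep(300)   # poll every five minutes
```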

You should also consider where the applications are going to be hosted, and that’s not clear now. How much bandwidth is needed? It’s not clear. The demand is not clear at this point. As I said in the beginning, there is no finish line. There’s nothing that we can draw up and say, “This is what it’s going to be.” There is a version of it out there that’s currently focused around home automation and content distribution, and that’s just now moving to business automation, but again, not in any prescribed way yet.


So it's hard to know which are the "right" technologies to adopt now. And that becomes a real concern for your ability to compete over time, because you can outdate yourself really, really quickly if you don't make the right choices.

Gardner: When you face such change in your architecture and potential decentralization of micro data centers, you still need to focus on security, backup and recovery, and contingency plans for emergencies. We still need to be mission-critical, even though we are distributed. And, as you point out, many of these systems are going to be self-healing and self-configuring, which requires a different set of skills.

We have a people, process, and technology sea change coming. You at Vertiv wanted to find out what people in the field are thinking and how they are reacting to such change. Tell us about the Vertiv-Forbes survey, what you wanted to accomplish, and the top-line findings.

Survey says seek strategic change

Olsen: We wanted to gauge the thinking and gain a sense of what the C-suite, the data center engineers, and the data center community were thinking as we face this new world of edge computing, 5G, and Internet of things (IoT). The top findings show a need for fundamental strategic change. We face a new mixture of architectures that is far more decentralized and with much more modularity, and that will mean a new way to manage and operate these data centers, too.

Based on the survey, only 11 percent of C-suite executives believe their data centers are currently ahead of current needs — and they certainly don't have the infrastructure ready for what's needed in the future. It's even lower among the data center engineers we polled: only 1 percent of them believe they are ready. That means the vast majority, 99 percent, don't believe they have the right infrastructure.

There is also broad agreement that security and bandwidth need to be updated. Concern about security is a big thing. We know from experience that security concerns have stunted remote monitoring adoption. But the sheer quantity of disparate sites required for edge computing makes it a necessity to access, assess, and potentially reconfigure and remotely fix problems through remote monitoring and access.

Vertiv is driving a high level of configurability into its equipment, so you can take our components and products and put them together in a multitude of ways for the utmost flexibility when you deploy. We are driving modularized solutions in terms of both modular data centers and modularity in how it all goes together onsite. And we are adding much more intelligence into our offerings for the remote sites, as well as the connectivity to be able to access, assess, and optimize these systems remotely.

Gardner: Martin, did the survey indicate whether the IT leaders in the field are anticipating or demanding such self-configuration technologies?

Olsen: Some 24 percent of the executives reported that they expect more than 50 percent of data centers will be self-configuring or have zero-touch provisioning by 2025. And about one-third of them say that more than 50 percent of their data centers will be self-healing by then, too.

That’s not to say that they have all of the answers. That’s their prediction and their responses to what’s going to be needed to solve their needs. So, 29 percent of engineers say they don’t know what percentage of the data centers will be self-configuring and self-healing, but there is an overwhelming agreement that it is a capability they need to be thinking about. Vertiv will develop and engineer our offerings going forward based on what’s going to be put in place out there.

Gardner: So there may be more potential points of failure, but there is going to be a whole new set of technologies designed to ameliorate problems, automate, and allow the remote capability to fix things as needed. Tell us about the proper balance between automation and remote servicing. How might they work together?

Make intelligent choices before you act 

Olsen: First of all, it’s not just a physical infrastructure problem. It has everything to do with the data and workloads as well. They go hand-in-hand; it certainly requires a partnership, a team of people and organizations that come together and help.

Driving intelligence into our products and taking data off of our systems as they operate provides actionable data. You can then offer that analysis up to non-technical people, with guidance on how to rectify situations and make changes.
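A minimal sketch of that "actionable data" idea follows: translate a raw telemetry reading into a plain-language recommendation that someone non-technical on site can act on. The metric names, thresholds, and recommendation text are invented for illustration.

```python
# Minimal sketch: map raw telemetry readings to plain-language recommendations.
from typing import Optional

def recommend(metric: str, value: float) -> Optional[str]:
    rules = {
        "ups_battery_pct": (lambda v: v < 20, "Battery low: schedule replacement within two weeks."),
        "inlet_temp_c":    (lambda v: v > 32, "Inlet temperature high: check that cooling unit fans are unobstructed."),
        "humidity_pct":    (lambda v: v > 80, "Humidity high: inspect room sealing and condensate drain."),
    }
    if metric in rules:
        breached, advice = rules[metric]
        if breached(value):
            return advice
    return None

for metric, value in [("inlet_temp_c", 35.2), ("ups_battery_pct", 64.0)]:
    advice = recommend(metric, value)
    if advice:
        print(f"[ALARM] {metric}={value}: {advice}")
```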

Learn How Self-Healing and Automation 

Help Manage Dispersed IT Infrastructure 

These solutions also need to communicate with the hypervisor platforms — whether that’s via traditional virtualization or containerization. Fundamentally, you need to be able to decide how and when to move your applications and workloads to the optimal points on the network.

We are trying to alleviate that challenge by making our offerings more intelligent and offering up actionable alarms, warnings, and recommendations to weigh choices across an overall platform. Again, it takes a partnership with the other vendors and services companies. It’s not just from a physical infrastructure standpoint.

Gardner: And when that ecosystem comes together, you can provide a constellation of data centers working in harmony to deliver services from the edge to the consumer and back to the data centers. And when you can do that around and around, like a circuit, great things can happen.

So let’s ground this, if we can, to the business reality. We are going to enable entirely new business models, with entirely new capabilities. Are there examples of how this might work across different verticals? Can you illustrate — when you have constructed decentralized data centers properly — the business payoffs?

Improving remote results 

Olsen: As you point out, it’s all about the business outcomes we can deliver in the field. Take healthcare. There is a shortage of healthcare expertise in rural areas. Being able to offer specialized doctors and advanced healthcare in places that you wouldn’t imagine today requires a new level of compute and network that delivers low latency all the way to the endpoints.

Imagine a truck fitted with a medical imaging suite. That’s going to have to operate somewhat autonomously. The 5G connectivity becomes essential as you process those images. They have to be graphically loaded into a central repository to be accessed by specialists around the world who read the images.

That requires two-way connectivity. A huge amount of data from these images needs to move to provide that higher level of healthcare and a better patient experience in places where we couldn’t do it before.


So 5G plays into that, but it also means being able to process and analyze some of the data locally. There need to be aggregation points throughout the network. You will need compute to reside at multiple levels of the infrastructure. Places like the base of a cell tower could become a focal point for this.

You can imagine having four, five, or six times as much compute power sitting in places along a remote highway that are not easily accessible. So, enabling technical staff to troubleshoot those sites remotely becomes vital.

There are also use cases that will use augmented reality (AR). Think of technicians in the field using AR when a field engineer is dispatched to troubleshoot a system somewhere. We can make them as effective as possible and bring in expertise from around the world to help troubleshoot these sites. AR becomes a massive part of this, because you can overlay what the onsite people are seeing through 3D or virtual reality glasses and guide them through troubleshooting, fixing, and optimizing whatever system they might be working on.

Again, that requires compute right at the endpoint device. It requires aggregation points and connectivity all the way back to the cloud. So, it requires a complex network working together. The more advanced these use cases become — the more remote locations we have to think through — we are going to have to deploy infrastructure and access it as well.

Gardner: Martin, when I listen to you describe these different types of data centers with increased complexity and capabilities in the networks, it sounds expensive. But are there efficiencies you gain when you have a comprehensive design across all of the parts of the ecosystem? Are there mitigating factors that help with the total cost?

Olsen: Yes, as the net footprint of compute increases, I don’t think the cost is linear with that. We have proven that with the Vertiv technologies we have developed and already deployed. As the compute footprint increases, there is a fundamental need for driving energy efficiency into the infrastructure. That comes in the form of using more efficient ways of cooling the IT infrastructure, and we have several options around that.

It also comes from new battery technologies. Start thinking about lithium-ion batteries, which Vertiv has solutions around. Lithium-ion batteries make the solution far more resilient and more compact, and they need much less maintenance when they sit out there.

Learn How Self-Healing and Automation 

Help Manage Dispersed IT Infrastructure 

So, the amount of infrastructure that’s going to go out there will certainly increase. We don’t think it’s necessarily going to be linear in terms of the cost when you pay close attention to how, as an organization, you deploy edge computing. By considering these new technologies, that’s going to help drive energy efficiency, for example.

Gardner: Were there any insights from the Forbes survey that went to the cost equation? How do the IT executives expect this to shake out?

Energy efficiency partnerships 

Olsen: We found that 71 percent of the C-suite executives said that future data centers will reduce costs. That speaks to both the fact that there will be more infrastructure out there, but that it will be more energy efficient in how it’s run.

It’s also going to reduce the cost of the overall business. Going back to the original discussion around the business outcomes, deploying infrastructure in all these different places will help drive down the overall cost of doing business.

It’s an energy efficiency play both from a very fundamental standpoint in the way you simply power and cool the equipment, and overall, as a business, in the way you deliver improved customer experience and how you deliver products and services for your customers.

Gardner: How do organizations prepare themselves to get out in front of this? As we indicated from the survey findings, not that many say they are prepared. What should they be doing now to change that?

Olsen: Yes, most organizations are unprepared for the future — and not necessarily even in agreement on the challenges. A very small percentage of respondents — 11 percent of executives — believe that their data centers are ahead of current needs, and even fewer data center engineers do. Only 44 percent say that their data centers are updated regularly, and only 29 percent say their data centers even meet current needs.

To prepare going forward, they should seek partnerships. Get the data centers upgraded, but also think through how organizations like Vertiv, with decades of experience in designing, deploying, and operating large data centers from a physical infrastructure standpoint, can help. We use that experience and knowledge base for the data center of tomorrow, which can be a single IT rack or two going to any location.


We take all of that learning and experience and drive it into what becomes the smallest common denominator data center, which could be just a rack. So it's about working with someone who has that experience, who already has the data, and who offers configurable, modular solutions that are intelligent and can be accessed, assessed, and optimized remotely. And it's about managing the data that comes off these systems and extracting the value from it, the way we do with offerings such as Vertiv LIFE Services, with very prescriptive, actionable alarms and alerts sent from our systems.

Very few organizations can do this on their own. It’s about the ecosystem, working with companies like Vertiv, working closely with our strategic partners on the IT side, storage networks, and all the way through to the applications that make it all work in unison.

Think through how to efficiently add compute capacity across all of these new locations, what those new locations should look like, and what the requirements are from a security standpoint.

There is a resiliency aspect to it as well. In harsh environments such as high-tech manufacturing, you need to ensure the infrastructure is scalable and minimizes capital expenditure. The modular approach allows building for a future that may be somewhat unknown at this point. Deploying modular systems that you can easily augment with capacity or redundancy over time — and that operate via robust remote management platforms — is one of the things you want to be thinking about.

Gardner: This is one of the very few empirical edge computing research assets that I have come across, the Vertiv and Forbes collaboration survey. Where can people find out more information about it if they want more details? How is this going to be available?

Learn How Self-Healing and Automation 

Help Manage Dispersed IT Infrastructure 

Olsen: We want to make this available to everybody to review. In the interest of sharing the knowledge about this new frontier, the new world of edge computing, we will absolutely be making this research and study available. I want to encourage people to go visit vertiv.com to find more information and download the research results.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Vertiv.


How Intility uses HPE Primera intelligent storage to move to 100 percent data uptime


The next BriefingsDirect intelligent storage innovation discussion explores how Norway-based Intility sought and found the cutting edge of intelligent storage.

Stay with us as we learn how this leading managed platform services provider improved uptime — on the road to 100 percent — and reduced complexity for its end users.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To hear more about the latest in intelligent storage strategies that lead to better business outcomes, please welcome Knut Erik Raanæs, Chief Infrastructure Officer at Intility in Oslo, Norway. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Knut, what trends and business requirements have been driving your need for Intility to be an early adopter of intelligent storage technology?

Knut Erik Raanæs

Raanæs

Raanæs: For us, it is important to have good storage systems that are easy to operate, to lower our management costs. At the same time, that delivers great uptime for our customers.

Gardner: You are dealing not only with quality of service requirements; you also have very rapid growth. How does intelligent storage help you manage such rapid growth?

Raanæs: It easily shows us performance and capacity trends, so we can spot when we are about to run full and react before we run out of capacity.
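As a simple illustration of that kind of trend-spotting, the sketch below fits a straight line to recent daily capacity readings and estimates how many days remain before the array hits its usable limit. The sample numbers and the 500 TB limit are made up; tools such as HPE InfoSight do this sort of projection automatically.

```python
# Minimal sketch: project days-until-full from a week of capacity readings.
import numpy as np

used_tb = np.array([311.0, 314.2, 318.1, 321.0, 324.6, 328.9, 332.4])  # last 7 days
days = np.arange(len(used_tb))
limit_tb = 500.0

growth_per_day, intercept = np.polyfit(days, used_tb, 1)   # linear trend
if growth_per_day > 0:
    days_until_full = (limit_tb - used_tb[-1]) / growth_per_day
    print(f"Growing {growth_per_day:.1f} TB/day; ~{days_until_full:.0f} days until full.")
else:
    print("Usage flat or shrinking; no capacity action needed.")
```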

Gardner: As a managed cloud service provider, it’s important for you to have strict service level agreements (SLAs) met. Why are the requirements of cloud services particularly important when it comes to the quality of storage services?

Intelligent, worry-free storage 

Raanæs: It's very important to have good quality-of-service separation, because we have lots of different kinds of customers. We don't want the noisy-neighbor problem, where one customer affects another customer — or even where one virtual machine (VM) of a customer affects another VM. The applications should work independently of each other.

That's why we have been using Hewlett Packard Enterprise (HPE) Nimble Storage. Without it, our quality of service would be much worse at the VM disk level. It's very good technology.

Gardner: Tell us about Intility, your size, scope, how long you have been around, and some of the major services you provide.

Raanæs: Intility was founded in 2000. We have always been focused on being a managed cloud service provider. From the start, there have been central shared services, a central platform, where we on-boarded customers and they shared email systems, and Microsoft Active Directory, along with all the application backup systems.

Over the last few years, the public cloud has made our customers more open to cloud solutions in general, and to not having servers in the local on-premises room at the office. We have now grown to more than 35,000 users, spread over 2,000 locations across 43 countries. We have 11 shared-services data centers, and we also have customers with edge deployments due to high latency or unstable Internet connections. They need to have the data close to them.

Gardner: What is required when it comes to solving those edge storage needs?


Raanæs: Those customers often want inexpensive solutions. So we have to look at different solutions and pick the one that gives the best stability but that also doesn’t cost too much. We also need easy remote management of the solution, without being physically present.

Gardner: At Intility, even though you’re providing infrastructure-as-a-services (IaaS), you are also providing a digital transformation benefit. You’re helping your customers mature and better manage their complexity as well as difficulty in finding skills. How does intelligent IaaS translate into digital transformation?

Raanæs: When we meet with potential customers, we focus on taking away concerns about infrastructure. They are just going to leave that part to us. The IT people can then just move up in [creating value] and focus on digitalizing the business for their customers.

Gardner: Of course, cloud-based services require overcoming challenges with security, integration, user access management, and single sign on. How are those higher-level services impacted by the need for intelligent storage?

Smart storage security 

Raanæs: With intelligent storage, we can have our security operations center (SOC) monitor and respond the instant they see something on our platforms. We keep a keen eye on our storage systems to make sure nothing unusual is happening on the storage, because that can be an early signal of something bigger happening.

Gardner: Please describe the journey you have been on when it comes to storage. What systems you have been using? Why have intelligence, insights, and analysis capabilities been part of your adoption?

Raanæs: We started back in 2013 with HPE 3PAR arrays. Before that we used IBM storage. We had multiple single-RAID (Redundant Array of Inexpensive Disks) sets and had to manage hotspots ourselves — even moving one VM meant trying to balance things out manually.

In 2013, when we went with the first 3PAR array, we had huge benefits. That 3PAR array used less space and at the same time we didn’t have to manage or even out the hotspots. 3PAR and its active controllers were a great plus for us for many years.

But about one-and-a-half years ago, we started using HPE Nimble arrays, primarily due to the needs of VMware vCenter and quality of service requirements. Also, with the Nimble arrays, the InfoSight technology was quite nice.

Gardner: Right. And, of course, HPE is moving that InfoSight technology into more areas of their infrastructure. How important has InfoSight been for you?

Raanæs: It’s been quite useful. We had some systems that required us to use other third-party applications to give an expansive view of the performance of the environment. But those applications were quite expensive and had functionality that we really didn’t need. So at first we pulled data from the vCenter database and visualized the data. That was a huge start for us. But when InfoSight came along later it gave us even more information about the environment.

Gardner: I understand you are now also a beta customer for HPE Primera storage. Tell us about your experience with Primera. How does that move the needle forward for you?

For 100 percent uptime 

Raanæs: Yes, we have been beta testing Primera, and it has been quite interesting. It was easy to set up. I think maybe 20 minutes from getting it into the rack and just clicking through the setup. It was then operational and we could start provisioning storage to the whole system.

And with Primera, HPE is going in with a 100 percent uptime guarantee. Of course, I still expect to deal with some rare incidents or outages, but it's nice to see a company that's willing to put its money where its mouth is and say, "Okay, if there is any downtime or an outage happens, we are going to give you something back for it."

Gardner: Do you expect to put HPE Primera into production soon? How would you use it first?


Raanæs: We are currently waiting for the next software upgrade for HPE Primera. Then we are going to look at putting it into production. The use case is going to be general storage, because we have so much more storage demand and need to keep it consistent, to make it easier to manage.

Gardner: And do you expect to be able to pass along these benefits of speed of deployment and 100 percent uptime to your end users? How do you think this will improve your ability to deliver SLAs and better business outcomes?

Raanæs: Yes, our end users are going to be quite happy with 100 percent uptime. No one likes downtime — not us, not our customers. And HPE Primera’s speed of deployment means that we have more time to manage other parts of the platform and to get better service out to the customers.

Gardner: I know it's still early and you are still in the proof of concept stage, but how about the economics? Do you expect that having such high levels of advanced intelligence across storage will translate into your ability to do more for less, and perhaps pass some of those savings on?

Raanæs: Yes, I expect that's going to be quite beneficial for us. Because we are based in Norway, one of our largest expenses is people. So, the more we can automate by using the systems, the better. I am really looking forward to seeing this improve, with systems that are easier to manage and performance we can analyze within a few hours.

Gardner: On that issue of management, have you been able to use HPE Primera to the degree where you have been able to evaluate its ease of management? How beneficial is that?

Work smarter, not harder 

Raanæs: Yes, the ease of management was quite nice. With Primera you can do the service upgrade more easily. So with 3PAR, we had to schedule an upgrade with the upgrade team at HPE and had to wait a few weeks. Now we can just do the upgrade ourselves.

And hardware replacements are easier, too. We just get a nice PDF showing how to replace the parts. So that's also quite nice.

I also like that the service processor, which was separate with 3PAR, is now integrated with Primera; it's in with the array. So that's one less thing to worry about managing.

Gardner: Knut, as we look to the future, other technologies are evolving across the infrastructure scene. When combined with something like HPE Primera, is there a whole greater than the sum of the parts? How will you will be able to use more intelligence broadly and leverage more of this opportunity for simplicity and passing that onto your end users?

Raanæs: I'm hoping that more will come in the future. We are also looking at non-volatile memory express (NVMe). That can serve as a caching layer and is ready to be built into HPE Primera, too. So it will be quite interesting to see what the future brings there.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


A new status quo for data centers–seamless communication from core to cloud to edge

As 2020 ushers in a new decade, the forces shaping data center decisions are extending compute resources to new places.

With the challenging goals of speed, agility, and efficiency, enterprises and service providers alike will be seeking new balance between the need for low latency and optimal utilization of workload placement. Hybrid models will therefore include more distributed, confined, and modular data centers at or near the edge.

These are a few of the top-line predictions on the future state of modern data center design. The next BriefingsDirect data center strategies discussion with two leading IT and critical infrastructure executives examines how these new data center variations nonetheless must also interoperate seamlessly from core to cloud to edge.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to help us learn more about the new state of extensible data centers is Peter Panfil, Vice President of Global Power at VertivTM, and Steve Madara, Vice President of Global Thermal at Vertiv. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: The world is rapidly changing in 2020. Organizations are moving past the debate around hybrid deployments, from on-premises to public clouds. Why do we need to also think about IT architectures and hybrid computing differently?

Peter Panfil

Panfil

Panfil: We noticed a trend at Vertiv in our customer base. That trend is toward a new generation of data centers. We have been living with distributed IT, client-server data centers moving to cloud, either a public cloud or a private cloud.

But what we are seeing is the evolution of an edge-to-core, near-real-time data center generation. And it’s being driven by devices everywhere, the “connected-all-the-time” model that all of us seem to be going to.

And so, when you are in a near-real-time world, you have to have infrastructure that supports your near-real-time applications. And that is what the technology folks are facing. I refer to it as a pack of dogs chasing them — the amount of data that’s being generated, the applications running remotely, and the demand for availability, low latency, and driving cost down as much as you possibly can. This is what’s changing how they approach their critical infrastructure space.

Gardner: And so, a new equilibrium is emerging. How is this different from the past?

Madara: If we go back 20 years, everything was centralized at enterprise data centers. Then we decided to move to decentralized, and then back to centralized. We saw a move to colocation as people decided that’s where they could get lower cost to run their apps. And then things went to the cloud, as Peter said earlier.

Steve Madara

Madara

And now, we have a huge number of devices connected locally. Cisco says that by late 2020 there will be 23 billion connected devices, and over half of those are going to be machine-to-machine communications, where, as Peter mentioned earlier, latency is going to be very, very critical.

An interesting read is Michael Lewis’s book Flash Boys about the arbitrage that’s taking place with the low latency that you have in stock market trading. I think we are going to see more of that moving to the edge. The edge is more like a smart rack or smart row deployment in an existing facility. It’s going to be multi-tenant, because it’s going to be able to be throughout large cities. There could be 20 or 30 of these edge data center sites hosting different applications for customers.

This move to the edge is also going to provide IT resources in a lot of underserved markets that don't yet have pervasive compute, especially in emerging countries.

Gardner: Why is speed so important? We have been talking about this now for years, but it seems like the need for speed to market and speed to value continues to ramp up. What’s driving that?

Moving to the edge, with momentum 

Panfil: There is more than one kind of speed. First, there is the speed of response of the application — something all of us demand. I have to have low latency in the transactions I am performing with my data and my applications. So there is the speed of the actual data being transmitted.

There is also speed of deployment. When Steve talked earlier about centralized cloud deployments in these core data centers, your data might be going over a significant distance, hopping along the way. Well, if you can’t live with that latency that gets inserted, then you have to take the IT application and put it closer to the source and consumer of the data. So there is a speed of deployment, from core to edge, that happens.

And the third type of speed is you have to have low-first-cost, high-asset-utilization, and rapid-scalability. So that’s a speed of infrastructure adaptation to what the demands for the IT applications are.


So when we mean speed, I often say it's speed, speed, and speed. First, it's the speed of the data and the IT. And how did I achieve that speed? I did it by deploying fast, at the scale needed for the applications, and lastly at a cost and reliability that makes it tolerable for the business.

Gardner: So I guess it’s speed-cubed, right?

Panfil: At least, speed-cubed. Steve, if we had a nickel for every time one of our customers said “speed,” we wouldn’t have to work anymore. They are consumed with the different speeds that they have to deal with — and it’s really the demands of their customers.

Gardner: Vertiv for years has been looking at the data center of the future and making some predictions around what to expect. You have been rather prescient. To continue, you have now identified several areas for 2020, too. Let’s go through those trends.

Steve, Vertiv predicts that “hybrid architectures will go mainstream.” Why did you identify that, and what do you mean?

The future goes hybrid 

Madara: If we look at the history of going from centralized to decentralized, and going to colocation and cloud applications, it shows the ongoing evolution of Internet of Things (IoT) sensors, 5G networks, smart cities, autonomous cars, and how more and more of that data is generated and will need to be processed locally. A lot of that is from machine-to-machine applications.

So when we now talk about hybrid, we have to get very, very close to the source, as far as the processing is concerned. That's going to be a large-scale evolution that's going to drive the need for hybrid applications. There is going to be processing at the edge as well as centralized applications — whether it's in a cloud or hosted in colocation-based applications.

Panfil: Steve, you and I both came up through the ranks. I remember when the data closet down the hall was basically a communications matrix. Its intent was to get communications from wherever we were to wherever our core data center was.

Well, the cloud is not going away. Number two, enterprise IT is not going away. What the enterprise is saying is, “Okay, I am going to take my secret sauce and I am going to put it in an edge data center. I am going to put the compute power as close to my consumer of that data and that application as I possibly can. And then I am going to figure out where the rest of it’s going to go.”

If I can live with the latency I get out of a core data center, I am going to stay in the cloud. If I can’t, I might even break up my enterprise data center into small or micro data centers that give me even better responses.


Dana, it's interesting — a recent wholesale market summary said the difference between the smaller and the larger wholesale deals has widened. What that says is the large wholesale deals are getting bigger, the small wholesale deals are getting smaller, and the enterprise-based demand, in deployments under 600 kilowatts, is focused on low latency and multi-cloud access.

That tells us that our customers, the users of that critical space, are trying to place their IT appliances as close as they can to their customers, eliminating the latency, responding with speed, and then figuring out how to mesh that edge deployment with their core strategy.

Gardner: Our second trend gets back to the speed-cubed notion. I have heard people describe this as a new arms race, because while it might be difficult to differentiate yourself when everyone is using the same public cloud services, you can really differentiate yourself on how well you can conduct yourself at speed.

What kinds of capabilities across your technologies will make differentiation around speed work to an advantage as a company?

The need for speed 

Panfil: Well, I was with an analyst recently, and I said the new reality is not that the big will eat the small — it's that the fast will eat the slow. Any advantage you can get in speed of applications, speed of deployment, deploying those IT assets — or morphing the data center or critical space infrastructure — helps improve capital efficiency. What many customers tell us is that they have to shorten the time between deciding to spend money on IT assets and the time those assets start creating revenue.

They want help being creative in lowering their first-cost, in increasing asset utilization, and in maintaining reliability. If, holy cow, my application goes down, I am out of business. And then they want to figure out how to manage things like supply chains and forecasting, which is difficult to do in this market, and to help them be as responsive as they can to their customers.

Madara: Forecasting and understanding the new applications — whether it's artificial intelligence (AI) or 5G — the CIOs need to decide where they need to put those applications, whether they should be in the cloud or at the edge. Technology is changing so fast that nobody can predict far out into the future as far as to where I will need that capacity and what type of capacity I will need.

So, it comes down to being able to put that capacity in the place where I need it, right when I need it, and not too far in advance. Again, I don’t want to spend the capital, because I may put it in the wrong place. So it’s got to be about tying the demand with the supply, and that’s what’s key as far as the infrastructure.

And the other element I see is technology is changing fast, even on the infrastructure side. For our equipment, we are constantly making improvements every day, making it more efficient, lower cost, and with more capability. And if you put capacity in today that you don’t need for a year or two down the road, you are not taking advantage of the latest, greatest technology. So really it’s coupling the demand to the actual supply of the infrastructure — and that’s what’s key.

Another consideration is that many of these large companies, especially in the colocation market, have their financial structure as a real estate investment trust (REIT). As a result, they need to tie revenue with expenses tighter and tighter, along with capital spending.

Panfil: That’s a good point, Steve. We redesigned our entire large power portfolio at Vertiv specifically to be able to address this demand.

In previous generations, for example, the uninterruptible power supply (UPS) was built as a complete UPS. The new generation is built as a power converter, plus an I/O section, plus an interface section that can be rapidly configured to the customer, or, in some cases, put into a vendor-managed inventory program. This approach allows us to respond to the market and customers quicker.

We were forced to change our business model in such a way that we can respond in real time to these kinds of capacity-demand changes.

Madara: And to add to that, we have to put together more and more modules and solutions where we are bundling the equipment to deliver it faster, so that you don’t have to do testing on site or assembly on site. Again, we are putting together solutions that help the end-user address the speed of the construction of the infrastructure.

I also think that this ties into the relationship that the person who owns the infrastructure has with their supplier base. Those relationships have to build in, as Peter mentioned earlier, the ability to do stocking of inventory, of having parts available on-site to go fast.

Gardner: In summary so far, we have this need for speed across multiple dimensions. We are looking at more hybrid architectures, up and down the scale — from edge to core, on-premises to the cloud. And we are also looking at crunching more data and making real-time analytics part of that speed advantage. That means being able to have intelligence brought to bear on our business decisions and making that as fast as possible.

So what’s going on now with the analytics efficiency trend? Even if average rack density remains static due to a lack of space, how will such IT developments as high performance computing (HPC) help make this analysis equation work to the business outcome’s advantage?

High-performance, high-density pods 

Madara: The development of AI applications, machine learning (ML), and what could be called deep learning are evolving. Many applications are requiring these HPC systems. We see this in the areas of defense, gaming, the banking industry, and people doing advanced analytics and tying it to a lot of the sensor data we talked about for manufacturing.

It’s not yet widespread, it’s not across the whole enterprise or the entire data center, and these are often unique applications. What I hear in large data centers, especially from the banks, is that they will need to put these AI applications up on 30-, 40-, 50- or 60-kW racks — but they only have three or four of these racks in the whole data center.

The end-user will need to decide how to tune or adjust facilities to accommodate these small but growing pods of high-density compute. And if they are in their own facility, if it’s an enterprise that has its own data center, they will need to decide how they are going to facilitize for that type of equipment.

A lot of the colocation hosting facilities have customers saying, “Hey, in the future I am going to be bringing in a couple of racks that are very high density.” A lot of these multi-tenant data centers are asking, “How do I provision for these, because my data center was laid out for an average of maybe 8 kW per rack? How do I manage that, especially in a data center that didn’t previously have chilled water to provide liquid to the rack?”

We are now seeing a need to provide chilled water cooling that would go to a rear door heat exchanger on the back of the rack. It could be chilled water that would go to a rack for chip cooling applications. And again, it’s not the whole data center; it’s a small segment of the data center. But it raises questions of how I do that without overkill on the infrastructure needed.
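As a rough illustration of the sizing question behind that, the chilled water flow a rear door heat exchanger needs follows from the basic heat balance Q = m·cp·ΔT. The Python sketch below is an editorial aside, not from the panel; the rack powers and the 6 K water temperature rise are illustrative assumptions.

```python
# Rough sizing sketch for a rear door heat exchanger on a high-density rack.
# Rack powers and the 6 K temperature rise are illustrative assumptions.

CP_WATER = 4.186  # specific heat of water, kJ/(kg*K)

def chilled_water_flow(rack_kw: float, delta_t_k: float) -> float:
    """Chilled water mass flow (kg/s) needed to absorb rack_kw of heat
    with a delta_t_k rise across the coil (Q = m * cp * dT)."""
    return rack_kw / (CP_WATER * delta_t_k)

for rack_kw in (30, 50, 60):  # example high-density racks
    flow = chilled_water_flow(rack_kw, delta_t_k=6.0)
    print(f"{rack_kw} kW rack -> ~{flow:.1f} kg/s (~{flow * 60:.0f} L/min)")
```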

Gardner: Steve, do you expect those small pods of HPC in the data center to make their way out to the edge when people do more data crunching for the low-latency requirements, where you can’t move the data to a data center? Do you expect to have this trend grow more distributed?

Madara: Yes, I expect this will be for more than the enterprise data center and cloud data centers. I think you are going to see analytics applications developed that are going to be out at the edge because of the requirements for latency.

When you think about the autonomous car, none of us knows what’s going to be required there for that high-performance processing, but I would expect there is going to be a need for that down at the edge.

Gardner: Peter, looking at the power side of things when we look at the batteries that help UPS and systems remain mission-critical regardless of external factors, what’s going on with battery technology? How will we be using batteries differently in the modern data center?

Battery-powered savings 

Panfil: That’s a great question. Battery technology has been evolving at an incredibly fast rate. It’s being driven by the electric vehicles. That growth is bringing to the market batteries that have a size and weight advantage. You can’t put a big, heavy pack of batteries in a car and hope to have it perform well.

It also gives a long-life expectation. Data centers used to have to decide between long-life, high-maintenance wet cells and the shorter-life, high-maintenance, valve-regulated lead-acid (VRLA) batteries. With lithium-ion batteries (LIBs) and thin plate pure lead (TPPL) batteries, the total cost of ownership (TCO) has started to become very advantageous.

Our sales leadership team sent me the most recent TCO comparison of TPPL and LIBs versus traditional VRLA batteries, and the TCO is a winner for the LIBs and the TPPL batteries. In some cases, over a 10-year period, the TCO is a factor of two lower for LIB and TPPL.
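To make that factor-of-two shape concrete, here is a minimal, hedged sketch of a 10-year TCO comparison. All of the cost, replacement, and maintenance figures are hypothetical placeholders chosen only to show the arithmetic; they are not Vertiv or industry pricing.

```python
# Illustrative 10-year TCO comparison of battery options for a UPS string.
# Every figure below is a hypothetical placeholder, not vendor pricing.

def ten_year_tco(first_cost, replacements, maintenance_per_year, years=10):
    """TCO = purchase + mid-life replacements + annual maintenance."""
    return first_cost + replacements * first_cost + maintenance_per_year * years

vrla = ten_year_tco(first_cost=40_000, replacements=2, maintenance_per_year=4_000)
lib  = ten_year_tco(first_cost=70_000, replacements=0, maintenance_per_year=1_000)

print(f"VRLA 10-year TCO: ${vrla:,}")  # $160,000 in this sketch
print(f"LIB  10-year TCO: ${lib:,}")   # $80,000 in this sketch
```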

Whereas the cloud generation of data centers was all about lowest first cost, in this edge-to-core mentality of data centers, it’s about TCO. There are other levers that they can start to play with, too.

So, for example, they have life cycle and operating temperature variables. That used to be a real limitation. Nobody in the data center wanted their systems to go on batteries. They tried everything they could to not have their systems go on the battery because of the potential for shortening the life of their batteries or causing an outage.

Today we are developing IT systems infrastructure that takes advantage of not only LIBs, but also pure lead batteries that can increase the number of [discharge/recharge] cycles. Once you increase the number of cycles, you can think about deploying smart power configurations. That means using batteries not only in the critical infrastructure for a very short period of time when the power grid utility fails, but to use that in critical infrastructure to help offset cost.

If I can reduce utility use at peak demand periods, for example, or I can reduce stress on the grid at specified times, then batteries are not only a reliability play – they are also a revenue-offset play. And so, we’re seeing more folks talking to us about how they can apply these new energy storage technologies to change the way they think about using their critical space.
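As a back-of-the-envelope view of that revenue-offset idea, the sketch below estimates the demand-charge saving from shaving a site’s utility peak with batteries. The tariff and load figures are assumptions for illustration only.

```python
# A minimal peak-shaving sketch: discharge batteries during the utility's
# peak window to reduce billed demand. Tariff and loads are assumptions.

DEMAND_CHARGE = 15.0  # $/kW per month, hypothetical tariff

def monthly_saving(peak_kw: float, battery_kw: float) -> float:
    """Demand-charge saving if the battery shaves battery_kw off the peak."""
    shaved_peak = max(peak_kw - battery_kw, 0.0)
    return (peak_kw - shaved_peak) * DEMAND_CHARGE

# e.g. shaving 200 kW off a 1,500 kW site peak
print(f"~${monthly_saving(1_500, 200):,.0f} saved per month")  # ~$3,000
```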

Also, folks used to think that the longer the battery time, the better off they were because it gave more time to react to issues. Now, folks know what they are doing, they are going with runtimes that are tuned to their operations team’s capabilities. So, if my operations team can do a hot swap over an IT application — either to a backup critical space application or to a redundant data center — then all of a sudden, I don’t need 5 to 12 minutes of runtime, I just need the bridge time. I might only need 60 to 120 seconds.

Now, if I can have these battery times tuned to the operations’ capabilities — and I can use the batteries more often or in higher temperature applications — then I can really start to impact my TCO and make it very, very cost-effective.
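The effect of tuning runtime to the operations team’s capabilities shows up directly in the energy a battery plant has to hold. The following sketch uses an assumed 500 kW load and a range of bridge times; the numbers are illustrative, not a design guide.

```python
# Sketch of how shorter, operations-tuned runtimes shrink the energy storage
# you have to buy. The 500 kW load and runtimes are illustrative only.

def stored_energy_kwh(load_kw: float, runtime_s: float) -> float:
    """Usable energy needed to bridge load_kw for runtime_s seconds."""
    return load_kw * runtime_s / 3600.0

load_kw = 500.0
for runtime_s in (12 * 60, 5 * 60, 120, 60):
    print(f"{runtime_s:>4} s bridge -> {stored_energy_kwh(load_kw, runtime_s):6.1f} kWh")
```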

Gardner: It’s interesting; there is almost a power analog to hybrid computing. We can either go to the cloud or the grid, or we can go to on-premises or the battery. Then we can start to mix and match intelligently. That’s really exciting. How does lessening dependence on the grid impact issues such as sustainability and conserving energy?

Sustainability surges forward 

Panfil: We are having such conversations with our key accounts virtually every day. What they are saying is, “I am eventually not going to make smoke and steam. I want to limit the number of times my system goes on a generator. So, I might put in more batteries, more LIBs or TPPL batteries, in certain applications because if my TCO is half the amount of the old way, I could potentially put in twice as much, and have the same cost basis and get that economic benefit.”

And so from a sustainability perspective, they are saying, “Okay, I might need at some point in the useful life of that critical space to not draw what I think I need to draw from my utility. I can limit the amount of power I draw from that utility.”

This is not a criticism, because I love all of you out there in data center design, but most data centers are designed for peak usage. What these changes allow them to do is design more for the norm of the requirements. That means they can put in less infrastructure and potentially less battery. They have the potential to right-size their generators; same thing on the cooling side, to right-size the cooling to what they need and not for the extremes of what that data center is going to see.

From a sustainability perspective, we used to talk about the glass as half-full or half-empty. Now, we say there is too much of a glass. Let’s right-size the glass itself, and then all of the other things you have to do in support of that infrastructure are reduced.
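One way to picture right-sizing the glass is to compare the absolute peak of a load profile with a high percentile of it. The sketch below generates a synthetic hourly profile, an assumption rather than real facility data, and shows how much capacity a peak-only design would strand.

```python
# Sketch of "design for the norm, not the peak": compare the absolute peak
# of a load profile with its 95th percentile. The profile is synthetic.

import random
import statistics

random.seed(0)
# hypothetical hourly IT load for a year, in kW, with occasional spikes
profile = [800 + random.gauss(0, 60) + (300 if random.random() < 0.01 else 0)
           for _ in range(8760)]

peak = max(profile)
p95 = statistics.quantiles(profile, n=100)[94]  # 95th percentile

print(f"Peak load:            {peak:7.0f} kW")
print(f"95th percentile load: {p95:7.0f} kW")
print(f"Capacity avoided if sized to the norm: {peak - p95:.0f} kW")
```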

Madara: As we look at the edge applications, many will not have backup generators. We will have alternate energy sources, and we will probably be taking more hits to the batteries. Is the LIB the better solution for that?

Panfil: Yes, Steve, it sure is. We will see customers with an expectation of sustainability, a path to an energy source that is not fossil fuel-based. That could be a renewable energy source. We might not be able to deploy that today, but they can now deploy what I call foundational technologies that allow them to take advantage of it. If I can have a LIB, for example, that stores excess energy and allows me to absorb energy when I’m creating more than I need — then I can consume that energy on the other side. It’s better for everybody.

Gardner: We are entering an era where we have the agility to optimize utilization and reduce our total costs. The thing is that it varies from region to region. There are some areas where compliance is a top requirement. There are others where energy issues are a top requirement because of cost.

What’s going on in terms of global cross-pollination? Are we seeing different markets react to their power and thermal needs in different ways? How can we learn from that?

Global differences, normalized 

Madara: If you look at the size of data centers around the world, the data centers in the U.S. are generally much larger than in Europe, and those in Europe are much larger than what we have in other developed countries. So there are a couple of factors, as you mentioned: energy availability, cost of energy, the size of the market, and the users it serves. We may be looking at more edge data centers in very underserved markets in developing countries.

So, you are going to see the size of the data center and the technology used potentially different to better fit needs of the specific markets and applications. Across the globe, certain regions will have different requirements with regard to security and sustainability.

Even though we have these potential differences, we can meet the end-user needs to right-size the IT resources in that region. We are all more common than we are different in many respects. We all have needs for security, we all have needs for efficiency, it may just be to different degrees.

Panfil: There are different regional agency requirements and different governmental regulations that companies have to comply with. And so what we find, Dana, is that our customers are trying to normalize their designs. I won’t say they are standardizing their designs, because standardization says I am going to deploy exactly the same way everywhere in the world. I am a fan of Kit Kats, and Kit Kats are not the same globally; they vary by region. The same is true for data centers.

So, when you look at how the customers are trying to deal with the regional and agency differences that they have to live with, what they find themselves doing is trying to normalize their designs as much as they possibly can globally, realizing that they might not be able to use exactly the same power configuration or exactly the same thermal configuration. But we also see pockets where different technologies are moving to the forefront. For example, China has data centers that are running at high-voltage DC, 240 volts DC, and we have always had 48-volt DC IT applications in the Americas and in Europe. Customers are looking at three things — speed, speed, and speed.

And so when we look at the application, for example, of DC, there used to be a debate, is it AC or DC? Well, it’s not an “or” it’s an “and.” Most of the customers we talk to, for example, in Asia are deploying high-voltage DC and have some form of hybrid AC plus DC deployment. They are doing it so that they can speed their applications deployments.

In the Americas, the Open Compute Project (OCP) deploys either 12 or 48 volts to the rack. I look at it very simply. We have been seeing a move from 2N architecture to N+1 architecture in the power world for a decade; this is nothing more than adopting the N+1 architecture at the rack level versus the 2N architecture at the rack level.
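For readers unfamiliar with the shorthand, the utilization difference between the two redundancy schemes is simple arithmetic: 2N mirrors the whole capacity, so at most half of it can ever carry load, while N+1 reserves a single spare unit. A small sketch, with example unit counts as assumptions, makes the point.

```python
# Maximum safe utilization under 2N versus N+1 redundancy.
# Unit counts are examples only.

def max_utilization_2n() -> float:
    return 0.5                # half the capacity is reserved for the mirror

def max_utilization_n_plus_1(n: int) -> float:
    return n / (n + 1)        # one spare unit out of n + 1 installed

for n in (2, 4, 8):
    print(f"N+1 with N={n}: {max_utilization_n_plus_1(n):.0%} usable "
          f"vs 2N: {max_utilization_2n():.0%}")
```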

And so what we see is when folks are trying to, number one, increase the speed; number two, increase their utilization; number three, lower their total cost, they are going to deploy infrastructures that are most advantageous for either the IT appliances that they are deploying or for the IT applications that they are running, and it’s not the same for everybody, right Steve?

You and I have been around the planet way too many times; you are a million miler, and so am I. It’s amazing how a city might be completely different in a different time zone, but once you walk into that data center, you see how very consistent they have gotten, even though they have done it completely independently from anybody else.

Madara: Correct!

Consistency lowers costs and risks 

Gardner: A lot of what we have talked about boils down to a need to preserve speed-to-value while managing total cost of ownership. What is there about these multiple trends that people can consider when it comes to getting the right balance, the right equilibrium, between TCO and that all-important speed-to-value?

Madara: Everybody strives to drive cost down. The more you can drive the cost down of the infrastructure, the more you can do to develop more edge applications.

I think we are seeing a very large rate of change of driving cost down. Yet we still have a lot of stranded capacity out there in the marketplace. And people are making decisions to take that down without impacting risk, but I think they can do it faster.

Peter mentioned standardization. Standardization helps drive speed, whether it’s normalization or similarity. What allows people to move fast is to repeat what they are doing instead of snowflake data centers, where every new one is different.

Repeating allows you to build a supply base ecosystem where everybody has the same goal, knows what to do, and can be partners in driving out cost and in driving speed. Those are some of the key elements as we go forward.

Gardner: Peter when we look to that standardization, you also allow for more seamless communication from core to cloud to edge. Why is that important, and how can we better add intelligence and seamless communication among and between all these different distributed data centers?

Panfil: When we normalize designs globally, we take a look at the regional differences, sort out what the regional differences have to be, and then put a proof of concept deployment. And out of that comes a consistent method of procedure.

When we talk about managing the data center effectively and efficiently, first of all, you have to know what you have. And second, you have to know what it’s doing. And so, we are seeing more folks normalizing their designs and getting consistency. They can then start looking at how much of their available capacity, from a design perspective, they are actually using on both a normal basis and a peak basis, and then they can determine how much of that they are willing to use.

We have some customers who are very risk-averse. They stay in the 2N world, which is a 50 percent maximum utilization. We applaud them for it because they are not going to miss a transaction.

There are others who will say, “I can live with the availability that an N+1 architecture gives me. I know I am going to have to be prepared for more failures. I am going to have to figure out how to mitigate those failures.”

So they are working constantly at figuring out how to monitor what they have, figure out what the equipment is doing, and how they can best optimize the performance. We talked earlier about battery runtimes, for example. Sometimes they might be short, and sometimes they might be long.

As these companies get into this step and repeat function, they are going to get consistency of their methods of procedure. They’re going to get consistency of how their operations teams run their physical infrastructure. They are going to think about running their equipment in ways that are nontraditional today but will become the norm in the next generation of data centers. And then they are going to look at us and say, “Okay, now that I have normalized my design, can I use rapid deployment configuration? Can I put it on a skid, in a container? Can I drop it in place as the complete data center?”

Well, we build it one piece of equipment at a time and stitch it all together. The question that you asked about monitoring, it’s interesting because we talked to a major company just last month. Steve and I were visiting them at their site. And they said, “You know what? We spend an awful lot of time figuring out how our building management system and our data exchange happens at the site. Could Vertiv do some of that in the factory? Could you configure our data acquisition systems? Could you test them there in the factory? Could we know that when the stuff shows up on site that it’s doing the things that it’s supposed to be doing instead of us playing hunt and peck to figure out what the issues are?”

We said, “Of course.” So we are adding that capability now into our factory testing environment. What we see is a move up the evolutionary scale. Instead of buying separate boxes, we are seeing them buying solutions — and those solutions include both monitoring and controls.

Steve didn’t even get a chance to mention the industry-leading Vertiv Liebert® iCOM™ control for thermal. These controls and monitoring systems allow them to increase their utilization rates because they know what they have and what it’s doing.

Gardner: It certainly seems to me, with all that we have said today, that the data center status quo just can’t stand. Change and improvement is inevitable. Let’s close out with your thoughts on why people shouldn’t be standing still; why it’s just not acceptable.

Innovation is inevitable 

Madara: At the end of the day, the IT world is changing rapidly. Whether in the cloud or down at the edge, the infrastructure needs to adjust to those needs. It needs to cut enough out of the cost structure. There is always a demand to drive cost down.

If we don’t change with the world around us, if we don’t meet the requirements of our customers, things aren’t going to work out – and somebody else is going to take it and go for it.

Panfil: Remember, it’s not the big that eats the small, it’s the fast that eats the slow.

Madara: Yes, right.

Panfil: And so, what I have been telling folks is, you got to go. The technology is there. The technology is there for you to cut your cost, improve your speed, and increase utilization. Let’s do it. Otherwise, somebody else is going to do it for you.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Vertiv.

Intelligent spend management supports better decision-making across modern business functions

The next BriefingsDirect discussion on attaining intelligent spend management explores the findings of a recent IDC survey on paths to holistic business process improvement.

We’ll now learn how a long history of legacy systems and outdated methods holds companies back from their potential around new total spend management optimization. The payoffs of gaining such a full and data-rich view of spend patterns across services, hiring, and goods include reduced risk, new business models, and better strategic decisions.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To help us chart the future of intelligent spend management, and to better understand how the market views these issues, we are joined by Drew Hofler, Vice President of Portfolio Marketing at SAP Ariba and SAP Fieldglass. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What trends or competitive pressures are prompting companies to seek better ways to get a total spend landscape view? Why are they incentivized to seek broader insights?

Drew Hofler

Hofler

Hofler: After years of grabbing best-of-breed or niche solutions for various parts of the source-to-pay process, companies are reaching the limits of this siloed approach. Companies are now being asked to look at their vendor spend as a whole. Whereas before they would look just at travel and expense vendors, or services procurement, or indirect or direct spend vendors, chief procurement and financial officers now want to understand what’s going on with spend holistically.

And, in fact, from the IDC report you mentioned, we found that 53 percent of respondents use different applications for each type of vendor spend that they have. Sometimes they even use multiple applications within a process for specific types of vendor spend. We find that a lot of folks have cobbled together a number of different things — from in-house billing to niche vendors — to keep track of all of that.

Managing all of that when there is an upgrade to one particular system — and having to test across the whole thing — is very difficult. They also have trouble being able to reconcile data back and forth.

One of our competitors, for example — to show how this Frankenmonster approach has taken root — tried to build a platform for every source and category of spend across the entire source-to-pay process by acquiring 14 different companies in six years. That creates a patchwork of applications where there is a skin of user interfaces across the top for people to enter data, but the data underneath is disconnected. The processes are disconnected. You have to manage all of the different code bases. It’s untenable.

Gardner: There is a big technology component to such a patchwork, but there’s a people level to this as well. More and more, we hear about the employee experience and trying to give people intelligent tools to make higher-level decisions and not get bogged down in swivel-ware and cutting and pasting between apps. What do the survey results tell us about the people, process, and technology elements of total spend management?

Unified data reconciliation

Hofler: It really is a combination of people, process, and technology that drives intelligent spend. It’s the idea of bringing together every source, every category, every buying channel for all of your different types of vendor spend so that you can reconcile on the technology side; you can reconcile the data.

For example, one of the things that we are building is master vendor unification across the different types of spend. A vendor that you see — IBM, for example — in one system is going to be the same as in another system. The data about that vendor is going to be enriched by the data from all of the other systems into a unified platform. But to do that you have to build upon a platform that uses the same micro-services and the same data that reconciles across all of the records so that you’re looking at a consistent view of the data. And then that has to be built with the user in mind.
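A minimal sketch of what master-vendor unification means at the data level follows; the record fields, system names, and merge rule are illustrative assumptions rather than a description of the SAP implementation.

```python
# Toy master-vendor unification: merge vendor records from separate spend
# systems into one enriched record keyed on a normalized name.
# All records and field names below are made up for illustration.

from collections import defaultdict

records = [
    {"system": "travel",   "vendor": "IBM Corp.", "country": "US"},
    {"system": "indirect", "vendor": "ibm corp",  "payment_terms": "NET30"},
    {"system": "services", "vendor": "IBM Corp",  "risk_score": 0.12},
]

def normalize(name: str) -> str:
    return name.lower().rstrip(".").strip()

master = defaultdict(dict)
for rec in records:
    key = normalize(rec["vendor"])
    # later systems enrich, but never overwrite, fields already captured
    for field, value in rec.items():
        master[key].setdefault(field, value)

print(dict(master)["ibm corp"])
```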

So when we talk about every source, category, and channel of spend being unified under a holistic intelligent spend management strategy, we are not talking about a monolithic user experience. In fact, it’s very important that the experience of the user be tailored to their particular role and to what they do. For example, if I want to do my expenses and travel, I don’t want to go into a deep, sourcing-type of system that’s very complex and based on my laptop. I want to go into a mobile app. I want to take care of that really quickly.

If I’m sourcing some strategic suppliers I certainly can’t do that on just a mobile app. I need data, details, and analysis. And that’s why we have built the platform underneath it all to tie this together.

On the other hand, if I’m sourcing some strategic suppliers I certainly can’t do that on just a mobile app. I need data, details, and analysis. And that’s why we have built the platform underneath it all to tie this together even while the user interfaces and the experience of the user is exactly what they need.

When we did our spend management survey with IDC, we had more than 800 respondents across four regions. The survey showed a high amount of dissatisfaction because of the wide-ranging nature of how expense management systems interact. Some 48 percent of procurement executives said they are dissatisfied with spend management today. It’s kind of funny to me because the survey showed that procurement itself had the highest level of dissatisfaction. They are talking about their own processes. I think that’s because they know how the sausages are being made.

Gardner: Drew, this dissatisfaction has been pervasive for quite a while. As we examine what people want, how did the survey show what is working? What gives them the data they need, and where does it go next?

Let go of patchwork 

Hofler: What came out of the survey is that part of the reason for that dissatisfaction is the multiple technologies cobbled together, with lots of different workflows. There are too many of those, too much data duplication, too many discrepancies between systems, and it doesn’t allow companies to analyze the data, to really understand in a holistic view what’s going on.

In fact, 47 percent of the procurement leaders said they still rely on spreadsheets for spend analysis, which is shocking to me, having been in this business for a long time. But we are much further along the path in helping that out by reconciling master data around suppliers so they are not duplicating data.

It’s also about tying together, in an integrated and seamless way, the entire process across different systems. That allows workflow to not be based on the application or the technology but on the required processes. For example, when it comes to installing some parts to fix a particular machine, you need to be able to order the right parts from the right suppliers but also to coordinate that with the right skilled labor needed to install the parts.

If you have separate systems for your services, skilled labor, and goods, you may be very disconnected. There may be parts available but no skilled labor at the time you need in the area you need. Or there may be the skilled labor but the parts are not available from a particular vendor where that skilled labor is.

What we’ve built at SAP is the ability to tie those together so that the system can intelligently see the needs, assess the risks such as fluctuations in the labor market, and plan and time that all together. You just can’t do that with cobbled together systems. You have to be able to have a fully and seamlessly integrated platform underneath that can allow that to happen.
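Conceptually, that parts-plus-labor coordination is an intersection of availability windows. The sketch below is a toy model with made-up availability data, not the SAP logic, but it shows why the two datasets have to be visible to one system.

```python
# Toy coordination of parts and skilled labor for one work order: the job
# can only be scheduled in weeks where both the part and an installer are
# available. All availability data below is made up.

parts_available = {"pump-seal-kit": {"week-14", "week-15", "week-17"}}
labor_available = {"hydraulics-tech": {"week-15", "week-16", "week-17"}}

def schedulable_weeks(part: str, skill: str) -> set:
    return parts_available.get(part, set()) & labor_available.get(skill, set())

print(sorted(schedulable_weeks("pump-seal-kit", "hydraulics-tech")))
# -> ['week-15', 'week-17']
```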

Gardner: Drew, as I listen to you describe where this is going, it dovetails with what we hear about digital transformation of businesses. You’re talking not just about goods and services, you are talking about contingent labor, about all the elements that come together from modern business processes, and they are definitely distributed with a lifecycle of their own. Managing all that is the key.

Now that we have many different moving parts and the technology to evaluate and manage them, how does holistic spend management elevate what used to be a series of back-office functions into a digital business transformation value?

Hofler: Intelligent spend management makes it possible for all of the insights that come from these various data points — by applying algorithms, machine learning (ML), and artificial intelligence (AI) — to look at the data holistically. It can then pull out patterns of spend across the entire company, across every category, and it allows the procurement function to be at the nexus of those insights.

If you think of all the spend in a company, it’s a huge part of their business when you combine direct, indirect, services, and travel and expenses. You are now able to apply those insights to see where there are price fluctuations, and peaks and valleys in purchasing, versus what the suppliers and their suppliers can provide at a certain time.

It’s an almost infinite amount of data and insights that you can gain. The procurement function is being asked to bring to the table not just the back-office operational efficiency but the insights that feed into a business strategy and the business direction. It’s hard to do that if you have disconnected or cobbled-together systems and a siloed approach to data and processes. It’s very difficult to see those patterns and make those connections.

But when you have a common platform such as SAP provides, then you’re able to get your arms around the entire process. The Chief Procurement Officer (CPO) can bring to the table quite a lot of data and the insights that show the company what it needs to know in order to make the best decisions.

Gardner: Drew, what are the benefits you get along the way? Are there short-, medium-, and long-term benefits? Were there any findings in the IDC survey that alluded to those various success measurements?

Common platform benefits 

Hofler: We found that 80 percent of today’s spend managers’ time is spent on low-level tasks like invoice matching, purchase requisitioning, and vendor management. That came out of the survey. With the tying together of the systems and the intelligence technologies infused throughout, those things can be automated. In some cases, they can become autonomous, freeing up time for more valuable pursuits for the employees.
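Invoice matching is a good example of a low-level task that lends itself to automation with exception handling. Here is a minimal invoice-to-purchase-order match; the tolerance, record layout, and sample documents are assumptions for illustration, not SAP functionality.

```python
# Minimal invoice-to-PO match: auto-approve within tolerance, route the
# rest to a person. Tolerance and sample data are assumptions.

TOLERANCE = 0.02  # accept up to 2% variance between PO and invoice amount

purchase_orders = {"PO-1001": 5_000.00, "PO-1002": 1_250.00}
invoices = [
    {"id": "INV-77", "po": "PO-1001", "amount": 5_000.00},
    {"id": "INV-78", "po": "PO-1002", "amount": 1_400.00},
    {"id": "INV-79", "po": "PO-9999", "amount": 300.00},
]

for inv in invoices:
    po_amount = purchase_orders.get(inv["po"])
    if po_amount is None:
        print(f"{inv['id']}: no matching PO -> route to a person")
    elif abs(inv["amount"] - po_amount) / po_amount <= TOLERANCE:
        print(f"{inv['id']}: auto-approved against {inv['po']}")
    else:
        print(f"{inv['id']}: amount variance too large -> route to a person")
```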

New technologies can also help, like APIs for ecosystem solutions. This is one of the great short-term benefits if you are on an intelligent spend management platform such as SAP’s. You become part of a network of partners and suppliers. You can tap into that ecosystem of partners for solutions aligned with core spend management functions.

Celonis, for example, looks at all of your workflows across the entire process because they are all integrated. It can see it holistically and show duplication and how to make those processes far more efficient. That’s something that can be accessed very quickly.

Longer-term, companies gain insights into the ebbs and flows of spending, cost, and risk. They can begin to make better decisions on who to buy from based on many criteria. They can better choose who to buy from. They can also in a longer-term situation start to understand the risks involved across entire supply chains.

One of the great things about having an intelligent spend platform is the ability to tie in through that network to other datasets, to other providers, who can provide risk information on your suppliers and on their suppliers. It can see deep into the supply chain and provide risk analytics to allow you to manage that in a much better way. That’s becoming a big deal today because there is so much information, and social media allows information to pass along so quickly.

When a company has a problem with their supply chain — whether that’s reputational or something that their suppliers’ suppliers are doing — that will damage their brand. If there is a disruption in services, that comes out very quickly and can very quickly hit the bottom line of a company. And so the ability to moderate those risks, to understand them better, and to put strategies together longer term and short-term makes a huge difference. An intelligent spend platform allows that to happen.

Gardner: Right, and you can also start to develop new business models or see where you can build out the top line and business development. It makes procurement not just about optimization, but with intelligence to see where future business opportunities lie.

Comprehend, comply, control 

Hofler: That’s right, you absolutely can. Again, it’s all about finding patterns, understanding what’s happening, and getting deeper understanding. We have so much data now. We have been talking about this forever, the amount of data that keeps piling up. But having an ability to see that holistically, have that data harmonized, and the technological capability to dive into the details and patterns of that data is really important.

And that data network has, in our case, more than 20 years’ worth of spend data, with more than $13 trillion in lifetime spend data and more than $3 trillion a year of transactions moving through our network – the Ariba Network. So not only do companies have the technologies that we provide in our intelligent spend management platform to understand their own data, but there is also the capability to take advantage of rationalized data across multiple industries, benchmarks, and other things, too, that affect them outside of their four walls.

So that’s a big part of what’s happening right now. If you don’t have access into those kinds of insights, you are operating in the dark these days.

Gardner: Are there any examples that illustrate some of the major findings from the IDC survey and show the benefits of what you have described?

Hofler: Danfoss, a Danish company, is a customer of ours that produces heating and cooling, drives, and power solutions; they are a large company. They needed to standardize disparate enterprise resource planning (ERP) systems across 72 factories and implement services for indirect spend control and travel across 100 countries. So they have a very large challenge, where there is a very high probability for data to become disconnected and broken down.

That’s really the key. They were looking for the ability to see one version of truth across all the businesses, and one of the things that really drives that need is the need for compliance. If you look at the IDC survey findings, close to half of executive officers are particularly concerned with compliance and auditing in spend management policy. Why? Because it allows both more control and deeper trust in budgeting and forecasting, but also because if there are quality issues they can make sure they are getting the right parts from the right suppliers.

The capability for Danfoss to pull all of that together into a single version of truth — as well as with their travel and expenses — gives them the ability to make sure that they are complying with what they need to, holistically across the business without it being spotty. So that was one of the key examples.

Another one of our customers, Swisscom, a telecommunications company in Switzerland, a large company also, needed intelligent spend management to manage their indirect spend and their contingent workforce.

They have 16,000 contingent workers, with 23,000 emails and a couple of thousand phone calls from suppliers on a regular basis. Within that supply chain they needed to determine supplier selection and rates on receipt of purchase requisitions. There were questions about supplier suitability in the subsequent procurement stages. They wanted a proactive, self-service approach to procurement to achieve visibility into that, as well as into its suppliers and the external labor that often use and install the things that they procure.

So, by moving from a disconnected system to the SAP intelligent spend offering, they were able to gain cohesive information and a clear view of their processes, which includes those around consumer, supplier, procurement, and end user services. They said that using this user-friendly platform allowed them to quickly reach compliance and usability by all of their employees across the company. It made it very easy for them to do that. They simplified the user experience.

And they were able to link suppliers and catalogs very closely to achieve a vision of total intelligent spend management using SAP Fieldglass and SAP Ariba. They said they transformed procurement from a reactive processing role to one of proactively controlling and guiding, thanks to uniform and transparent data, which is really fundamental to intelligent spend.

Gardner: Before we close out, let’s look to the future. It sounds like you can do so much with what’s available now, but we are not standing still in this business. What comes next technologically, and how does that combine with process efficiencies and people power — giving people more intelligence to work with? What are we looking for next when it comes to how to further extend the value around intelligent spend management?

Harmony and integration ahead 

Hofler: Extending the value into the future begins with the harmonization of data and the integration of processes seamlessly. It’s process-driven, and it doesn’t really matter what’s below the surface in terms of the technology because it’s all integrated and applied to a process seamlessly and holistically.

What’s coming in the future on top of that, as companies start to take advantage of this, is that more intelligent technologies are being infused into different parts of the process. For example, chatbots and the ability for users to interact with the system in a natural language way. Automation of processes is another example, with the capability to turn some processes into being fully autonomous, where the decisions are based on the learning of the machines.

The user interaction can then become one of oversight and exception management, where the autonomous processes take over and manage when everything fits inside of the learned parameters. It then brings in the human elements to manage and change the parameters and to manage exceptions and the things that fall outside of that.

There is never going to be removal of the human, but the human is now able with these technologies to become far more strategic, to focus more on analytics and managing the issues that need management and not on repetitive processes that can be handled by the machine. When you have that connected across your entire processes, that becomes even more efficient and allows for more analysis. So that’s where it’s going.
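That oversight-and-exception pattern can be reduced to a very small routing rule: anything inside the learned parameters is handled automatically, and everything else goes to a person. The limits and categories below are hypothetical, meant only to show the shape of the idea.

```python
# Toy oversight-and-exception routing: requests inside learned limits are
# auto-approved, the rest go to human review. Limits are hypothetical.

learned_limits = {"office-supplies": 500.0, "it-hardware": 2_500.0}

def route(category: str, amount: float) -> str:
    limit = learned_limits.get(category)
    if limit is not None and amount <= limit:
        return "auto-approve"
    return "human review"  # exception: outside what the model has learned

print(route("office-supplies", 120.0))  # auto-approve
print(route("it-hardware", 9_800.0))    # human review
print(route("consulting", 400.0))       # human review (unknown category)
```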

Plus, we’re adding more ecosystem partners. When you have a networked ecosystem on intelligent spend, that allows for very easy connections to providers who can augment the core intelligent spend functions with data. For example, partners like American Express and Thomson Reuters can supply global tax, compliance, risk, and VAT rules. All of these things can be added. You will see that ecosystem growing to continue to add exponential value to being a part of an intelligent spend management platform.

Gardner: There are upcoming opportunities for people to dig into this and understand it and find the ways that it makes sense for them to implement, because it varies from company to company. What are some ways that people can learn details?

Hofler: There is a lot coming up. Of course, you can always go to ariba.com, fieldglass.com or sap.com and find out about our intelligent spend management offerings. We will be having our SAP Ariba Live conference in Las Vegas in March, and so tons and tons of content there, and lots of opportunity to interact with other folks who are in the same situation and implementing these similar things. You can learn a lot.

We are also doing a webinar with IDC to dig into the details of the survey. You can find information about that on ariba.com, and certainly if you are listening to this after the fact, you can hear the recording of that on ariba.com and download the report.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: SAP Ariba.

How an MSP brings comprehensive security services to diverse clients

As businesses move more of their IT services to the cloud, reducing complexity and making sure that security needs are met throughout the migration process are now top of mind.

For a UK managed services provider (MSP), finding the right mix of security strength and ease-of-use for its many types of customers became a top priority.

The next managed services security management edition of BriefingsDirect explores how Northstar Services, Ltd. in Bristol-area England adopted Bitdefender Cloud Security for Managed Service Providers (MSPs) to both improve security for their end users and to make managing that security at scale and from the cloud easier than ever.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to discuss the role of the latest Bitdefender security technology — and making MSPs more like security services providers — is John Williams, Managing Director at Northstar Services, Ltd. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What are some of the top trends driving the need for an MSP such as Northstar to provide even better security services?

Williams: We used to get lots of questions regarding stability for computers. They would break fairly regularly and we’d have to do hardware changes. People were interested in what software we were going to load — what the next version of this, that, and the other was — but those discussions have changed a great deal. Now everybody is talking about security in one form or another.

Gardner: Whenever you change something — whether it’s configurations, the software, or the service provider, like a cloud — it leaves gaps that can create security problems. How can you be doubly sure when you make changes that the security follows through?

The value of visibility, 24-7 

John Williams

Williams

Williams: We used to install a lot of antivirus software on centralized servers. That was very common. We would set up a big database and install security software on there, for example. And then we would deploy it to the endpoints from those servers, and it worked fairly well. Yet it was quite a lot of work to maintain it.

But now we are supporting people who are so much more mobile. Some customers are all out and about on the road. They don’t go to the office. They are servicing their customers, and they have their laptop. But they want the same level of security as they would have on a big corporate network.

So we have to find security products that give us visibility of what’s happening. We have to know that they are up to date, and we have to manage those clients wherever they are, on whatever device they have — all from one place.

Gardner: Even though these customers are on the frontline, you’re the one as the MSP they are going to call up when things don’t go right.

Williams: Yes, absolutely. We have lots of customers who don’t have on-site IT resources. They are not experts. They often have small businesses with hundreds of users. They just want to call us, find out what’s going on when they see a problem on their computers, and we have got to know whether that’s a security issue or an application that’s broken.

But they are very concerned that we have that visibility all of the time. Our engineers need to be able to access that easily and address it as soon as a call comes in.

Gardner: Before we learn more about your journey to solving those issues, tell us about Northstar. How long have you been around and what’s the mainstay of your business?

Williams: I have been running Northstar for more than 20 years now, since January 1999. I had been working in IT as an IT support engineer in large organizations for a few years, but I really wanted to get involved in looking after small businesses.

I like that because you get direct feedback. People appreciate it when you make an effort. They want to tell you that you did a good job, and they want to know that someone is paying attention to them.

So it was a joy to be able to get that up and going. We have a great team here now and that’s what gets me out of bed in the morning — working with our team to look after our customers.

Gardner: Smaller businesses moving to the cloud has become more the norm lately. How are your smaller organizations managing that? Sometimes with the crossover — the hybrid period between having both on-premises devices as well as cloud services — can be daunting. Is that something you are helping them with?

Moving to cloud step-by-step 

Williams: Yes, absolutely. We often see circumstances where they want to move one set of systems to the cloud before they move everything to the cloud. They generally are on a trend of getting rid of in-house services, especially at the smaller end of the market. But they often have legacy systems that they can’t easily move the services off of. Those systems might have been custom written, or they are older versions that they can’t afford to upgrade at this point. So we end up supporting partly in the cloud and partly on-premises.

And some customers, that’s their strategy. They take a particular workload, a database, for example, or some graphics software that they use, that runs brilliantly on servers in their offices. But they want to outsource other applications.

So, when we look at security, we need software that’s going to be able to work across those different scenarios. It can’t just be one or the other. It’s no good if it’s just on-premises, and no good if it’s just in the cloud. It has to be able to do all of that, all from one console because that’s what we are supporting.

Gardner: John, what were your requirements when you were looking for the best security to accomplish this set of requirements? What did you look for and how did your journey end?

Williams: Well, you can talk about the things being easy to manage, things being visible and with good reporting. All those things are important, and we assessed all of those. But the bottom line is, does it pick up infections? Is it able to keep those units secure and safe? And when an infection has happened, does it clean them up or stop them in their tracks quickly?

That has to be the number one thing, because whatever other savings you might make in looking after security, the fact that something that’s trying to do something bad is blocked — that has to be number one; stopping it in its tracks and getting it off that unit as quickly as possible. The sooner it’s stopped, the less damage and the less time the engineers have to spend rebuilding the units that have been killed by viruses or malware.

And we used to do quite a lot of that. With the previous antivirus security software we used, there was a constant stream of cleaning up after infections. Although it would detect and alert us, very often the damage was already done. So, we had a long period of repairing that, often rebuilding the whole operating system (OS), which is really inconvenient for customers.

And again, coming back to the small businesses, they don’t have spare PCs hanging around that they can just get out of the cupboard and carry on. Very often that’s the most vital kit that they own. Every moment it’s out of action directly affects their bottom line. So detecting infections and stopping them in their tracks was our number-one criterion when we were looking.

Gardner: In the best of all worlds, the end user is not even aware that they were infected, not aware it was remediated, not having to go through the process of rebuilding. That’s a win-win for everyone.

Automation around security is therefore top of mind these days. What have you been able to do with Bitdefender Cloud Security for MSPs that accomplishes that invisibility to the end user — and also helps you with automation behind the scenes?

Stop malware in its tracks 

Williams: Yes, the software was easy to deploy. But what it boils down to is that we just don’t get as many issues to have to automate the resolution for. So automation is important, and the things it does are useful. But the number of issues that we have to deal with is so few now that even if we were to automate 100 percent of them, it wouldn’t yield massive savings, because it’s not interrupting us very much.

It’s stopping malware in its tracks and cleaning it up. Most of the time we are seeing that it has done it, rather than us having to automate a script to do some removal or some changes or that kind of thing. It has already done it. I suppose that is automated, if you think about it, yes.

Gardner: You said it’s been a dramatic difference between the past and now with the number of issues to deal with. Can you quantify that?

Williams: In the three or four years we have used Bitdefender, when we look at the number of tickets that we used to get in for antivirus problems on people’s laptops and PCs, they have just dropped to such a low level now, it’s a tiny proportion. I don’t think it’s even coming up on a graph.

You record the type of ticket that comes in, and it’s a printer issue, a hardware issue. The virus removal tickets are not featuring high enough to even appear on the graph because Bitdefender is just dealing with those infections and fixing them without having to get to them and rebuild PCs.

Gardner: When you defend a PC, Mac or mobile device, that can have a degradation effect. Users will complain about slow apps, especially when the antivirus software is running. Has there been an improvement in terms of the impact of the safety net when it comes to your use of Bitdefender Cloud Security for MSPs?

Williams: Yes, it’s much lighter on the OS than the previous software that we were using. We were often getting calls from customers to say that their units were running slowly because of the heavy load it was having to do in order to run the security software. That’s the exact opposite of what you want. You are putting this software on there so that they get a better experience; in other words, they are not getting infected as often.

But then you’re slowing down their work every day, I mean, that’s not a great trade-off. Security is vital but if it has such a big impact on them that they are losing time by just having it on there — then that’s not working out very well.

Now [with Bitdefender Cloud Security for MSPs] it’s light enough on the OS that it just isn’t an issue. We don’t get customers saying, “Since you put the antivirus on my laptops, it seems to be slower.” In fact, it’s usually the opposite.

Gardner: I’d like to return to the issue of cloud migration. It’s such a big deal when people move across a continuum of on-premises, hybrid, and cloud – and need security to be maintained while they move. It’s like changing the wings on an airplane and keeping it flying at the same time.

What is it about the way that Bitdefender has architected its solution that helps you, as a service provider, guide people through that transition but not lose a sense of security?

Don’t worry, be happy 

Williams: It’s because we are managing all of the antivirus licenses in the cloud, whether they are on-premises, inside an office where they are using those endpoints,  or whether they are out and about; whether it’s a client-server running in cloud services or running on-premises, we are putting the same software on there and managing it in the same console. It means we don’t worry about that security piece. We know that whatever they change to, whatever they are coming from, we can put the same software on and manage it in the same place — and we are happy.
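Operationally, “one console for everything” boils down to being able to ask a single service which endpoints need attention, wherever they run. The sketch below is a generic illustration with made-up endpoint records; it is not the GravityZone API or any Bitdefender interface.

```python
# Generic single-console view (illustrative only, not a vendor API): flag
# endpoints that are out of date or unprotected, wherever they run.

endpoints = [
    {"name": "office-pc-01",   "location": "on-prem", "agent_up_to_date": True,  "protected": True},
    {"name": "sales-laptop-7", "location": "mobile",  "agent_up_to_date": False, "protected": True},
    {"name": "cloud-vm-db01",  "location": "cloud",   "agent_up_to_date": True,  "protected": False},
]

needs_attention = [e["name"] for e in endpoints
                   if not (e["agent_up_to_date"] and e["protected"])]

print("Endpoints needing attention:", needs_attention)
```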

Gardner: As a service provider I’m sure that the amount of man hours you have to apply to different solutions directly affects your bottom line. Is there something about the administration of all of this across your many users that’s been an improvement? The GravityZone Cloud Management console, for example, has that allowed you to do more with less when it comes to your internal resources?

Williams: Yes, and the way that I gauge that is by the amount of time the engineers spend. Engineers want to do an efficient job; that’s what they like. They want to get to the root of problems and fix them quickly. So any piece of software or tool that doesn’t work efficiently for them, I get a long list of complaints about on a regular basis. All engineers want to fix things fast because that’s what the customer wants, and they are working on their behalf.

Before, I would have constant complaints about how difficult it was to manage and deploy software on the units if they needed to be decommissioned. It was just troublesome. But now I don’t get any complaints over it. The staff is nothing but complimentary about the software. That just makes me happy because I know that they are able to work with it, which means that they are doing the job that they want to do, which is helping our customers and keeping them happy. So yes, it’s much better.

Gardner: Looking to the future, is there something that you are interested in seeing more of? Perhaps around encryption or the use of machine learning (ML) to give you more analytics as to what’s going on? What would you like to see out of your security infrastructure and services from the cloud in the next couple of years?

The devil’s in the data detail 

Williams: One thing that customers are talking to us about quite a bit now is data security. So they are thinking more about the time when they are going to have to report the fact that they’ve been attacked. And no software on earth is perfect. The whole point of security is that the threat continually evolves.

At the point where you’ve had a breach of some kind, you want to understand what happened. So having information back from the security software that helps you understand how the breach happened — and the extent of it — is becoming really important to customers. When they submit those reports, as legally they have to do, they want accurate information to say, “We had an infection, and that’s the extent of it.” If they don’t know the extent — whether any data was accessed, infected, or encrypted — that’s a problem.

So the more information we can gain from the security software about the extent of an incident, the better. That’s going to be more important going forward.

Gardner: Anything else come to mind about what you’d like to see from the technology side?

Williams: Automation is important, and so is the artificial intelligence (AI) side of it, where the software itself learns about what’s happening and can give you an idea when it spots something out of the ordinary. That will become more useful as time goes on.

Gardner: John, what advice do you have for other MSPs when it comes to a better security posture?

Williams: Don’t be afraid of defining the security services. You have to lead that conversation, I think. That’s what customers want to know. They want to know that you have thought about it, and that it’s at the very forefront of your mind.


We go to meet our customers regularly, and we usually have a standard agenda. The first item on the agenda is security. That journey is different for each customer; they are starting from different places. So we like to talk about where they are and what’s the next thing they can do to make sure they are doing everything they can to protect the data they have gathered from their customers, to look after the data about their staff, and to keep their services running.

We put that at the top of the agenda for every meeting. That’s a great way of behaving as a service provider. But, of course, in order to deliver on that, you have to have the right tools. You have to say, “Okay, if I am going to be in that role to help people with security, I have to have those tools in place.”

If they are complicated, difficult to use, and hard to implement — then that’s going to make it horrible. But if they are simple and give you great visibility, then you are going to be able to deliver a service that customers will really want to buy.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Bitdefender.


Better IT security comes with ease in overhead for rural Virginia county government

The next public sector security management edition of BriefingsDirect explores how managing IT for a rural Virginia county government means doing more with less — even as the types and sophistication of cybersecurity threats grow.

For County of Caroline, a small team of IT administrators has built a technically advanced security posture that blends the right amounts of automation with flexible, cloud-based administration.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to share their story on improving security in a local government organization are Bryan Farmer, System Technician, and David Sadler, Director of Information Technology, both for County of Caroline in Bowling Green, Virginia. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Dave, tell us about County of Caroline and your security requirements. What makes security particularly challenging for a public sector organization like yours?

Sadler: As everyone knows, small governments in the State of Virginia — and all across the United States and around the world — are being targeted by a lot of bad guys. For that reason, we have the responsibility to safeguard the data of the citizens of this county — and also of the customers and other people that we interact with on a daily basis. It’s a paramount concern for us to maintain the security and integrity of that data so that we have the trust of the people we work with.

Gardner: Do you find that you are under attack more often than you used to be?

David Sadler

Sadler

Sadler: The headlines of nearly any major newspaper you see, or news broadcasts that you watch, show what happens when the bad guys win and the local governments lose. Ransomware, for example, happens every day. We have seen a major increase in these attacks, or attempted attacks, over the past few years.

Gardner: Bryan, tell us a bit about your IT organization. How many do you have on the frontlines to help combat this increase in threats?

Farmer: You have the pleasure today of speaking with the entire IT staff in our little neck of the woods. It’s just the two of us. For the last several years it was a one-man operation, and they brought me on board a little over a year-and-a-half ago to lend a hand. As the county grew, and as the number of users and data grew, it just became too much for one person to handle.

Gardner: You are supporting how many people and devices with your organization?

Small-town support, high-tech security

Bryan Farmer

Farmer

Farmer: We are mainly a Microsoft Windows environment. We have somewhere in the neighborhood of 250 to 300 users. If you wrap up all of the devices, Internet of Things (IoT) stuff, printers, and things of that nature, it’s 3,000 to 4,000 devices in total.

Sadler: But the number of devices that actually touch our private network is in the neighborhood of 750.

Farmer: We are a rural area, so we don’t have the luxury of having fiber between all of our locations and sites. So we have to resort to virtual private networks (VPNs) to get traffic back and forth. There are airFiber connections, and we are doing some stuff over the air. We are a mixed batch. There is a little bit of everything here.

Gardner: Just as with any business, you have to put your best face forward to your citizens, voters, and taxpayers. They are coming for public services and going online for important information. How large is your county, and what sort of applications and services are you providing to your citizens?

Farmer: Our population is 30,000?

Sadler: Probably 28,000 to 30,000 people, yes.

Farmer: A large portion of our county is covered by a U.S. Army training base, so it’s a lot of nonliving area, so to speak. The population is condensed into a couple of small areas.


We host a web site and forum. It’s not as robust as what you would find in a big city or a major metropolitan area, but people can look up their taxes, permit prices, things of that nature; basic information that the average citizen will need such as utility information.

Gardner: With a potential of 30,000 end users — and just two folks to help protect all of the infrastructure, applications, and data — automation and easy-to-use management must be super important. Tell us where you were in your security posture before and how you have recently improved on that.

Finding a detection solution

Sadler: Initially when I started here, having come over from the private sector, we were running a product from one of the companies with a huge name, but it was basically not giving us the right amount of protection, you could say.

So we switched to a second company, Kaspersky, and immediately we started finding detections of existing malware and different anomalies in the network that had existed for years without protection from Symantec. So we settled on Kaspersky. And anytime you go to an enterprise-level antivirus (AV) endpoint solution, the setup, adjustment, and on-boarding process takes longer than what a lot of people would lead you to believe.

I was by myself, so it took me about six months to get everything with Kaspersky set up and running like it should, and it performed extremely well. I had a lot of granularity as far as control of firewalls and that type of thing.

The granularity is what we like because we have users that have a broad range of needs. We have to be able to address all of those broad ranges under one umbrella.


Unfortunately, when the US Department of Homeland Security decided first to recommend against using [Kaspersky] and then later banned that product from use, we were forced to look for a replacement solution, and we evaluated multiple different products.

Again, what we were looking for was granularity, because we wanted to be able to address the needs of everyone under the umbrella with one particular product. Many of the different AV endpoint solutions we evaluated lacked that granularity. They were, more or less, another version of the software that we started with: they didn’t give a real high level of protection or didn’t allow for adjustment.

When we started evaluating a replacement, we were finding things that we could not do with a particular product. We spent probably about six months evaluating different products — and then we landed on Bitdefender.

Now, coming from the private sector and dealing with a lot of home users, my feelings about Bitdefender were based on the reputation of their consumer-grade product. They had an extremely good reputation in the consumer market, so right off the bat they had a higher score when we started evaluating. It doesn’t matter how easy a product is to use or adjust; if its basic detection level is low, then everything else is a waste of time.

Bitdefender right off the bat has had a reputation for having a high level of detection and protection as well as a low impact on the systems. Being a small, rural county government, we use machines that are unfortunately a little bit older than what would be recommended, five to six years old. We are using some older machines that have lower processing power, so we could not take a product that would kill the performance of the machine and make it unusable.

During our evaluations we found that Bitdefender performed well. It did not have a lot of system overhead, and it gave us a high level of protection. What’s really encouraging is when you switch to a different product, start scanning your network, and find threats that had existed there for years undetected. Now you know at least you are getting something for your money, and that’s what we found with Bitdefender.

Gardner: I have heard that many times. It has to, at the core, be really good at detecting. All the other bells and whistles don’t count if that’s not the case. Once you have established that you are detecting what’s been there, and what’s coming down the wire every day, the administration does become important.

Bryan, what is the administration like? How have you improved in terms of operations? Tell us about the ongoing day-to-day life using Bitdefender.

Managing mission-critical tech

Farmer: We are Bitdefender GravityZone users. We host everything in the cloud. We don’t have any on-premises Bitdefender machines, servers, or anything like that, and it’s nice. Like Dave said, we have a wide range of users and those users have a wide range of needs, especially with regards to Internet access, web page access, stuff like that.

For example, a police officer or an investigator needs to be able to access web sites that a clerk in the treasurer’s office just doesn’t need to access. To be able to sit at my desk, or take my laptop anywhere I have an Internet connection, and make an adjustment if someone cannot get to something they need, is invaluable. It saves so much time.

We don’t have to travel to different sites. We don’t have to log in to a server. I can make adjustments from my phone. It’s wonderful to be able to set up these different profiles and to have granular control over what a group of people can do.

We can adjust which programs they can run. We can remove printing from a network. There are so many different ways that we can do it, from anywhere as long as we have a computer and Internet access. Being able to do that is wonderful.

Gardner: Dave, there is nothing more mission-critical than a public safety officer and their technology. And that technology is so important to everybody today, including a police officer, a firefighter, and an emergency medical technician (EMT). Any feedback when it comes to the protection and the performance, particularly in those mission-critical use cases?

Sadler: Bitdefender has allowed us the granularity to adjust so that we don’t interfere with those mission-critical activities that the police officer or the firefighter is trying to perform.


So initially there was an adjustment period. Thank goodness everybody was patient during that process, and I think now we are finally — about a year into the process, a little over a year — at the point where we have gotten things set pretty well. The adjustments that we have to make now are minor. Like Bryan said, we don’t have an on-premises security server here. Our service is hosted in the cloud, and we have found that that is an actual benefit. Before, with a security server and the software hosted on-premises, there were machines that didn’t touch the network. We are looking at probably 40 to 50 percent of our machines that we would have had to manage and protect [manually] because they never touch our network.

The Bitdefender GravityZone cloud-based security product offers us the capability to be able to monitor for detections, as well as adjust firewalls, etc., on machines that we never touch or never see on our network. It’s been a really nice product for us and we are extremely happy with its performance.


Gardner: Any other metrics of success for a public sector organization like yours with a small support organization? In a public sector environment you have to justify your budget. When you tell the people overseeing your budget why this is a good investment, what do you usually tell them?

Sadler: The benefit we have here is that our bosses are aware of the need to secure the network. We have cooperation from them. Because we are diligent in our evaluation of different products, they pretty much trust our decisions.

Justifying or proving the need for a security product has not been a problem. And again, the day-to-day announcements that you see in the newspaper and on web sites about data breaches or malware infections — all that makes justifying such a product easier.

Gardner: Any examples come to mind that have demonstrated the way that you like to use these products and these services? Anything come to mind that illustrates why this works well, particularly for your organization?

Stop, evaluate, and reverse infections

Farmer: Going back to the cloud hosting, all a machine has to do is touch the Internet. We have a machine in our office right now that one of our safety officials had, and we received an email notification that something was going on: the machine needed to be disinfected, and we needed to take a look at it.

The end user didn’t even have to notice it. We didn’t have to wait until it was a huge problem, or a ransomware event, or whatever the case may be. We were notified automatically, in advance. We were able to contact the user and get to the machine. Thankfully, we don’t think it was anything super-critical, but it could have been.

That automation was fantastic, and it means not having to react so aggressively, so to speak. The proactivity that a solution like Bitdefender offers is outstanding.

Gardner: Dave, anything come to mind that illustrates some of the features or functions or qualitative measurements that you like?

Sadler: Yes, with Bitdefender GravityZone, it will sandbox a suspicious activity and watch its actions and then roll back if something bad is going on.

We actually had a situation where a vendor that we use on a regular basis, from a large, well-respected company, called in to support a machine that they had in one of our offices. We were immediately notified via email that a ransomware attack was being attempted.


So this vendor was using a remote desktop application. Somehow the end-user got directed to a bad site, and when it failed the first time on their end, all they could tell was, “Hey, my remote desktop software is not working.” They stopped and tried it again.

We were notified on our end that a ransomware attack had been stopped, evaluated, and reversed by Bitdefender. Not once, but twice in a row. So we were immediately able to contact that office and say, “Hey, stop what you are doing.”

Then we followed up by disconnecting that computer from the network and evaluating it for infection, to make sure that everything had been reversed. Thank goodness, Bitdefender was able to stop that ransomware attack and actually reverse the activity. We were able to get a clean scan and return that computer back to service fairly quickly.
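The automatic notifications described here lend themselves to light scripting as well. Below is a minimal sketch of a webhook receiver that accepts pushed detection events and emails a small IT team; the JSON payload shape, the recipient address, and the local mail relay are assumptions for illustration, not the event format of any particular security product, and in production a listener like this would sit behind TLS and request authentication.

```python
# Illustrative sketch: receive pushed detection events and alert a small IT team.
# The payload fields ("severity", "hostname", "threat"), the recipient address,
# and the local mail relay are hypothetical; adapt them to whatever your
# security console actually pushes. No TLS or request authentication is shown.
import json
import smtplib
from email.message import EmailMessage
from http.server import BaseHTTPRequestHandler, HTTPServer

ALERT_RECIPIENT = "it-team@example.gov"   # placeholder address

def notify_team(event: dict) -> None:
    """Email a short summary of the detection event to the IT staff."""
    msg = EmailMessage()
    msg["Subject"] = f"[SECURITY] {event.get('threat', 'unknown')} on {event.get('hostname', '?')}"
    msg["From"] = "alerts@example.gov"
    msg["To"] = ALERT_RECIPIENT
    msg.set_content(json.dumps(event, indent=2))
    with smtplib.SMTP("localhost") as smtp:   # assumes a local mail relay exists
        smtp.send_message(msg)

class DetectionHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        if event.get("severity") in ("high", "critical"):
            notify_team(event)
        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), DetectionHandler).serve_forever()
```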

Gardner: How about looking to the future? What would you like to see next? How would you improve your situation, and how could a vendor help you do that?

Meeting government requirements

Sadler: The State of Virginia just passed a huge bill dealing with election security, and everybody knows that’s a huge, hot topic when it comes to security right now. Because most of the localities in Virginia are independent localities, the state passed a bill that allows the state Department of Elections and the US Department of Homeland Security to step in a little bit more with the local governments and monitor or control their security, which in the end is going to be a good thing.

But a lot of the requirements we are now being asked to answer for are already answered by the Bitdefender product; for example, automated patch management and notification of security issues.

So Bitdefender right now already answers a lot of the new requirements. The one thing that I would like to see … from what I understand, the cloud-based version of Bitdefender does not allow you to do mobile device management, and that’s going to be required by some of these regulations that are coming down. So it would be really nice if we could have one product that would do the mobile device management as well as the cloud-based security protection for a network.

Gardner: I imagine they hear you loud and clear on that. When it comes to compliance like you are describing from a state down to a county, for example, many times there are reports and audits that are required. Is that something that you feel is supported well? Are you able to rise to that occasion already with what you have installed?

Farmer: Yes, Bitdefender is a big part of us being able to remain compliant. The Criminal Justice Information Services (CJIS) audit is one we have to go through on a regular basis. Bitdefender helps us address a lot of the requirements of those audits as well as some of the upcoming audits that we haven’t seen yet that are going to be required by this new regulation that was just passed this past year in the Commonwealth of Virginia.

But from the previews that we are getting of the requirements of those newly passed regulations, it does appear that Bitdefender is going to be able to help us address some of those needs, which is good. The capability to answer some of those needs with Bitdefender is far superior to the products that we have been using in the past.

Gardner: Given that many other localities, cities, towns, municipalities, counties are going to be facing similar requirements, particularly around election security, for example, what advice would you give them, now that you have been through this process? What have you learned that you would share with them so that they can perhaps have an easier go at it?

Research reaps benefits in time, costs 

Farmer: I have seen in the past a lot of places that look at the first line item, so to speak, and then make a decision on that. Then, when they get down the page a little bit and see some of the other requirements, they end up in situations where they have two, three, or four pieces of software, and a couple of different pieces of hardware, working together to accomplish one goal. Certainly, in our situation, Bitdefender checks a lot of different boxes for us. If we had not taken the time to research everything properly and get into the full breadth of what it’s capable of, we could have spent a lot more money and created a lot more work and headaches for ourselves.

A lot of people in IT will already know this, but you have to do your homework. You have to see exactly what you need and get a wide-angle view of it and try to choose something that helps do all of those things. Then automate off-site and automate as much as you can to try to use your time wisely and efficiently.

Gardner: Dave, any advice for those listening? What have you learned that you would share with them to help them out?


Sadler: The breadth of the protection that we are getting from Bitdefender has been a major plus. So again, like Bryan said, find the product that you can put together under one big umbrella — so that you have one point of adjustment. For example, we are able to adjust firewalls, virus protection, and off-site USB protection — all this from one single control panel instead of having to manage four or five different control panels for different products.

It’s been a positive move for us. We look forward to continuing to work with the product, and we are watching the parts of it still under development. We see new features coming out constantly. So if anyone from Bitdefender is listening, keep up the good work. We will hang in there with you and keep working.

But the main thing for IT operators is to evaluate your possibilities and evaluate whatever changes you are going to make before you do it. It can be an investment of money and time that goes to waste if you are not sure of the direction you are going in. Use a product that has a good reputation and one that checks off all the boxes, like Bitdefender.

Farmer: In a lot of these situations, when you are working with a county government or a school you are not buying something for 30, 60, or 90 days – instead you are buying a year at a time. If you make an uninformed decision, you could be putting yourself in a jam time- and labor-wise for the next year. That stuff has lasting effects. In most counties, we get our budgets and that’s what we have. There are no do-overs on stuff like this. So, it speaks back to making a well-informed decision the first time.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Bitdefender.


Delivering a new breed of patient access best practices requires an alignment of people, process, and technology

The next BriefingsDirect healthcare finance insights discussion explores the rapidly changing ways that caregiver organizations on-board and manage patients.

How patients access their healthcare is transitioning to the digital world — but often in fits and starts. This key process nonetheless plays a major role in how patients perceive their overall experiences and determines how well providers manage both care and finances.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

Stay with us to unpack the people, process, and technology elements behind modern patient access best practices. To learn more, we are joined by an expert panel: Jennifer Farmer, Manager of Patient Access and Admissions at Massachusetts Eye and Ear Infirmary in Boston; Sandra Beach, Manager of the Central Registration Office, Patient Access, and Services and Pre-Services at Cooley Dickinson Healthcare in Northampton, Mass., and Julie Gerdeman, CEO of HealthPay24 in Mechanicsburg, Penn. The panel is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Jennifer, for you and your organization, how has the act of bringing a patient into a healthcare environment — into a care situation — changed in the past five years?

Jennifer Farmer headshot

Farmer

Farmer: The technology has exploded and it’s at everyone’s fingertips. So five years ago, patients would come to us, from referrals, and they would use the old-fashioned way of calling to schedule an appointment. Today it is much easier for them. They can simply go online to schedule their appointments.

They can still do walk-ins as they did in the past, but it’s much easier access now because we have the ways and means for the patients to be triaged and given the appropriate information so they can make an appointment right then and there, versus waiting for their provider to call to say, “Hey, we can schedule your appointment.” Patients just have it a lot easier than they did in the past.

Gardner: Is that due to technology? It seems to me that when I used to go to a healthcare organization they would be greeting me by handing me a clipboard, but now they are always sitting at a computer. How has the digital experience changed this?

Farmer: It has changed it drastically. Patients can now complete their accounts online and so the person sitting at the desk already has that patient’s information. So the clipboard is gone. That’s definitely something patients like. We get a lot of compliments on that.

It’s easier to have everything submitted to us electronically, whether it’s medical records or health insurance. It’s also easier for us to communicate with the patient through the electronic health record (EHR). If they have a question for us or we have a question for them, the health record is used to go back and forth.


There are not as many phone calls as there used to be, not as many dropped ends. There is also the advent of telemedicine these days so doctors can have a discussion or a meeting with the patient on their cell phones. Technology has definitely changed how medicine is being distributed as well as improving the patient experience.

Gardner: Sandra, how important is it to get this right? It seems to me that first impressions are important. Is that the case with this first interception between a patient and this larger, complex healthcare organization and even ecosystem?

Sandra Beach headshot

Beach

Beach: Oh, absolutely. I agree with Jennifer that so many things have changed over the last five years. It’s a benefit for patients because they can do a lot more online. They can electronically check in now, for example; that’s a new function that’s going to be coming with [our healthcare application] Epic, so that patients can do it all online.

The patient portal experience is really important, too, because patients can go in there and communicate with the providers. That is really important for our patients, as telemedicine has come a huge distance over the years.

Gardner: Julie, we know how important getting that digital trail of a patient from the start can be; the more data the better. How have patient access best practices been helped or hindered by technology? Are the patients perceiving this as a benefit?

Gerdeman: They are. There has been a huge improvement in the patient experience from the advent and increase of technology. A patient is also a consumer. We are all just people, and in our daily lives we do more research.

So, for patient access, even before they book an appointment, either online or on the phone, they pull out their phones and do a ton of research about the provider institution. That’s just like folks do for anything personal, such as a local service like dry cleaning or a haircut. Just as you would for anything in your neighborhood or community, you do the same for your healthcare, because you are a consumer.


Julie Gerdeman headshot

Gerdeman

The same level of consumer support that’s expected in our modern daily lives has now come to be expected with our healthcare experiences. Leveraging technology for access — and, as Jennifer and Sandra mentioned, for the actual clinical experience, via telemedicine and digital transformation — is just getting started and will continue to impact healthcare.

Gardner: We have looked at this through the lens of the experience and initial impressions — but what about economics? When you do this right, is there a benefit to the provider organization? Is there a benefit to the patient in terms of getting all those digital bits and bytes and information in the right place at the right time? What are the economic implications, Jennifer?

Technology saves time and money

Farmer: They are two-fold. One, the economic implication for a patient is that they don’t necessarily have to take a day off from work or leave work early. They are able to continue via telemedicine, which can be done through the evening. When providers offer evening and weekend appointments, that satisfies patients so they don’t have to spend as much time trying to rearrange things, get daycare, or pay for parking.

For the provider organization, the economic implications are that we can provide services to more patients, even as we streamline certain services so that it’s all more efficient for the hospital and the various providers. Their time is just as valuable as anyone else’s. They also want to reduce the wait times for someone to see a patient.

The advent of using technology across different avenues of care reduces that wait time for available services. The doctors and technicians are able to see more patients, which obviously is an economic positive for the hospital’s bottom line.

Gardner: Sandra, patients are often not just having one point of intersection, if you will, with these provider organizations. They probably go to a clinic, then a specialist, perhaps rehabilitation, and then use pharmaceutical services. How do we make this more of a common experience for how patients intercept such an ecosystem of healthcare providers?

Beach: I go back to the EHRs that Jennifer talked about. With us being in the Partners system, no matter where you go — you could go to a rehab appointment, a specialist, to the cancer center in Boston — all your records are accessible for the physicians and for the patients. That’s a huge step in the right direction because, no matter where the patient goes, you can access the records, at least within our system.

Gardner: Julie, to your point that the consumer experience is dictating people’s expectations now, this digital trail and having that common view of a patient across all these different parts of the organization is crucial. How far along are we with that? It seems to me that we are not really fully baked across that digital experience.

Gerdeman: You’re right, Dana. I think the Partners approach is an amazing exception to the rule because they are able to see and share data across their own network.

Throughout the rest of the country, it’s a bit more fractured and splintered. There remains a lot of friction in accessing records as you move — even in some cases within the same healthcare system — from a clinic or the emergency department (ED) into the facility or to a specialist.

The challenge is one of interoperability of data and integration of that data. Hospitals continue to go through a lot of mergers and acquisitions, and every acquisition creates a new challenge.

From the consumer perspective, they want that to be invisible. It should be invisible; the right data should be on their phones regardless of what the encounter was, what the financial obligation for the encounter was — all of it. That’s the expectation, but it’s not yet what’s happening. There is a way to go in terms of interoperability and integration on the healthcare side.

Gardner: We have addressed the process and the technology, but the third leg on the stool here is the people. How can the people who interact with patients at the outset foster a better environment? Has the role and importance of who is at that initial intercept with the patient been elevated? So much rides on getting the information up front. Jennifer, what about the people in the role of accessing and on-boarding patients, what’s changed with them?

Get off to a great start

Farmer: That is the crux of the difference between a good patient experience and a terrible patient experience, that first interaction. So folks who are scheduling appointments and maybe doing registration — they may be at the information desk — they are all the drivers to making sure that that patient starts off with a great experience.

Most healthcare organizations are delving into different facets of customer service in order to ensure that the patient feels great and like they belong when they come into an organization. Here at Mass. Eye and Ear, we practice something called Eye Care. Essentially, we think about how you would want yourself and your family members to be treated, to make sure that we all treat patients who walk in the door like they are our family members.

When you lead with such a positive approach, it flows downstream into the patient’s feeling of, “I am in the right place. I expect my care to be fantastic. I know that I’m going to receive great care.” Their positive initial outlook generally reflects the positive outcome of their overall visit.


This has changed dramatically even within the past two to three years. Most providers are siloed, with different areas or departments. That means patients would hear, “Oh, sorry, we can’t help you. That’s not our area.” To make it a more inclusive experience, everyone in the organization is a brand ambassador.

We have to make sure that people understand that, to make it more inclusive for the patient and less hectic for the patient, no matter where you are within a particular organization. I’m sure Sandra can speak to this as well. We are all important to that patient, so if you don’t know the answer, you don’t have to say, “I don’t know.” You can say, “Let me get someone who can assist you. I’ll find some information for you.”

It shouldn’t be work for patients when they walk in the door. They should be treated as guests, welcomed and treated like family members. Three or four years ago, the mindset was definitely, “Not my job.” At other organizations that I visit, I now see more of a helpful environment, which has changed the patient perception of hospitals as well.

Beach: I couldn’t agree more, Jennifer. We have the same thing here as with your Eye Care. I ask our staff every day, “How would you feel if you were the patient walking in our door? Are we greeting patients with a nice, warm, friendly smile? Are we asking, ‘How can I help you today?’ Or, ‘Good morning, what can I do for you today?’”


We keep that at the forefront for our staff so they are thinking about this every time that they greet a patient, every day they come to work, because patients have choices, patients can go to other facilities, they can go to other providers.

We want to keep our patients within our healthcare system. So it’s really important that we provide a really good patient experience on the front end because, as Jennifer said, it has a positive outcome on the back end. If they start off in the very beginning with a scheduler, a registrar, or an ED check-in person, and they are not greeted in a friendly, warm atmosphere, then typically that sets the tone for their total visit. That first interaction is really what they remember.

Gardner: Julie, this reflects back on what’s been happening in the consumer world around the user experience. It seems obvious.

So I’m curious about this notion of competition between healthcare providers. That might be something new as well. Why do healthcare provider organizations need to be thinking about this perception issue? Is it because people could pick up and choose to go somewhere else? How has competition changed the landscape when it comes to healthcare?

Competing for consumers’ care 

Gerdeman: Patients have choices. Sandra described that well. Patients, particularly in metropolitan or suburban areas, have lots of options for primary care, specialty care, and elective procedures. So healthcare providers are trying to respond to that.

In the last few years you have seen not just consumerism from the patient experience, but consumerism in terms of advertising, marketing, and positioning of healthcare services — like we have never seen before. That competition will continue and become even more fierce over time.

Providers should put the patient at the center of everything that they do. Just as Jennifer and Sandra talked about, that means putting the patient at the heart and showing empathy from the very first interaction. The digital interaction needs to show empathy, too. And there are ways to do that with technology and, of course, with the human interaction when you are in the facility.

Patients don’t want to be patients most of the time. They want to be humans and live their lives. So, the technology supporting all of that becomes really crucial. It has to become part of that experience. It has to arm the patient access team and put the data and information at their fingertips so they can look away from a computer or a kiosk and interact with that patient on a different level. It should arm them to have better, empathic interactions and build trust with the patient, with the consumer.

Gardner: I have seen that building competition where I live in New Hampshire. We have had two different nationally branded critical-care clinics open up — pop-up like mushrooms in the spring rain — in our neighborhood.

Let’s talk about the experience not just for the patient but for that person who is in the position of managing the patient access. The technology has extended data across the Partners organization. But still, technology is often not integrated on the back end for the poor people who are jumping between four and five different applications — often multiple systems — to on-board patients.

What’s the challenge from the technology for the health provider organization, Jennifer?

One system, one entry point, it’s Epic

Farmer: That used to be our issue until we gained the Epic system in 2016. People going into multiple applications was part of the issue with having a positive patient experience. Every entry point that someone would go to, they would need to repeat their name and date of birth. It looked one way in one system and it looked another way in a different system. That went away with Epic.

Epic is one system: the registration and patient access side, but also coding, billing, medical records, clinical care, and medications. It’s everything.

So for us here at Mass. Eye and Ear, no matter where you go within the organization, and as Sandra mentioned earlier, we are part of the same Partners HealthCare system. You can actually go to any Partners facility and that person who accesses your account can see everything. From a patient access standpoint, they can see your address and phone number, your insurance information, and who you have as an emergency contact.

There isn’t that anger that patients had been feeling before, because now they are literally giving their name and date of birth only as a verification point. It does make it a lot easier for our patients to come through the door, go to different departments for testing, for their appointment, for whatever reason that they are here, and not have to show their insurance card 10 times.

If they get a bill in the mail and they are calling our billing department, they can see the notes that our financial coordinators, our patient access folks, put on the account when they were here two or three months ago and help explain why they might have gotten a bill. That’s also a verification point, because we document everything.

So, a financial coordinator can tell a patient they will get a bill for a co-pay or for co-insurance. Then, when they get that bill and call our billing team and say, “I was never told that,” we have documentation that they were told. So it’s really one-stop shopping for the folks who are working within Epic. For the patient, nine times out of 10 they can just go from floor to floor, doctor to doctor, and they don’t have to show ID again, because everything is already stored in Epic.

Beach: I agree because we are on Epic as well. Prior to that, three years ago, it would be nothing for my registrars to have six, seven systems up at the same time and have to toggle back and forth. You run a risk by doing that, because you have so many systems up and you might have different patients in the system, so that was a real concern.

If a patient came in and didn’t have an order from the provider, we would have to call their office. The patient would have to wait. We might call two or three times.


Now, we have one system. If the patient doesn’t have the order, it’s in the computer system. We just have to bring it up, validate it, patient gets checked in, patient has their exam, and there is no wait. It’s been a huge win for us for sure — and for our patients.

Gardner: Privacy and compliance regulations play a more important role in the healthcare industry than perhaps anywhere else. We have to not only be mindful of the patient experience, but also address these very important technical issues around compliance and security. How are you able to both accomplish caring for the patient and addressing these hefty requirements?

It’s healthy to set limits on account access

Farmer: Within Epic, access is granted by your role. Staff may be working in admitting, in the ED, or anywhere within patient access, but they don’t have access to someone’s medication list or their orders. Another role, however, may have that access.

Compliance is extremely important. Access is definitely something that is taken very seriously. We want to make sure that staff are accessing accounts appropriately and that there are guardrails built in place to prevent someone from accessing accounts if they should not be.

For instance, within the Partners HealthCare system, we do tend to get people of a certain status: we get politicians, celebrities, heads of state, and public figures receiving care at various hospitals, even outside of Partners. So we have locks on those particular accounts for employees; their accounts are locked.


So if you try to access the account, you get a hard stop. You have to complete why you are accessing this account, and then it is reviewed immediately. And if it’s determined that your role has nothing to do with it, that you should not be accessing this particular account, then the organization does take the necessary steps to investigate and either say yes, they had a reason to be in this account, or no, they did not, and the potential of termination is there.

But we do take privacy very seriously, within the system and then outside of the system. We make sure we are providing a safe space for people to provide us with their information. It is at the forefront, it drives us, and folks definitely are aware because it is part of their training.
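The pattern Farmer describes, role-based access plus a hard stop that forces a justification and an immediate review, is often called break-the-glass access control. The sketch below is a generic Python illustration of that pattern; the roles, record sections, and audit store are invented for the example, and it is in no way how Epic implements it.

```python
# Generic break-the-glass illustration: role-based permissions plus a forced,
# audited justification for restricted (locked) accounts. Roles, sections, and
# the audit store are hypothetical; this is not how Epic implements it.
from datetime import datetime, timezone

ROLE_PERMISSIONS = {
    "patient_access": {"demographics", "insurance", "emergency_contact"},
    "clinical":       {"demographics", "medications", "orders", "notes"},
    "billing":        {"demographics", "insurance", "charges"},
}

AUDIT_LOG = []  # in practice, a tamper-evident store reviewed by compliance staff

def access_record(user: str, role: str, section: str,
                  restricted: bool, justification: str = "") -> bool:
    """Grant access only if the role permits the section; restricted accounts
    also require a justification, and every attempt is written to the audit log."""
    allowed = section in ROLE_PERMISSIONS.get(role, set())
    if restricted and allowed and not justification:
        allowed = False  # the hard stop: no justification, no access
    AUDIT_LOG.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "user": user, "role": role, "section": section,
        "restricted": restricted, "justification": justification,
        "granted": allowed,
    })
    return allowed

# A patient-access user hitting a locked account: denied without a reason,
# then granted (and logged for review) once a justification is supplied.
print(access_record("jdoe", "patient_access", "insurance", restricted=True))
print(access_record("jdoe", "patient_access", "insurance", restricted=True,
                    justification="Verifying coverage for today's visit"))
```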

Beach: You said it perfectly, Jennifer. Because we do have a lot of people who are high profile coming through our healthcare systems, the security, I have to say, is extremely tight on records. And so it should be. If you are in a record and you shouldn’t be there, then there are consequences to that.

Gardner: Julie, in addition to security and privacy we have also had to deal with a significant increase in the complexity around finances and payments given how insurers and the payers work. Now there are more copays, more kinds of deductibles. There are so many different plans: platinum, gold, silver, bronze.

In order to keep the goal of a positive patient experience, how are we addressing this new level of complexity when it comes to the finances and payments? Do they go hand-in-hand, the patient experience, the access, and the economics?

A clean bill of health for payment

Gerdeman: They do, and they should, and they will continue to. There will remain complexity in healthcare. It will improve certainly over time, but with all of the changes we have seen complexity is a given. It will be there. So how to handle the complexity, with technology, with efficient process, and with the right people becomes more and more important.

There are ways to make the complex simple with the right technology. On the back end, behind that amazing patient experience — both the clinical experience and the financial experience — we try to shield the patient. At HealthPay24 we are focused on the financial experience: taking all of the data behind it and presenting it very simply to a patient.

That means one small screen on the phone — across different encounters and different back ends — that presents everything very simply so our patients can meet their financial obligations. They are not concerned that the ED had a different electronic medical record (EMR) than the specialist. That’s really not the concern of the patient, nor should it be. The concern is how the providers can use technology on the back end to make it simple and change that experience.

We talked about loyalty, and that’s what drives loyalty. You are going to keep coming back to a great experience, with great care, and ease of use. So for me, that’s all crucial as we go forward with healthcare – the technology and the role it plays.

Gardner: And Jennifer and Sandra, how do you see the relationship between the proper on-boarding, access, and experience and this higher complexity around the economics and finance? Do you see more of the patient experience addressing the economics?

Farmer: We have done an overhaul of our system where it concerns patients paying bills or not having health insurance. Our financial coordinators are there to assist our patients, whether by phone, email, or in person. There are lots of different programs we can introduce patients to.

We are certified counselors for the Commonwealth of Massachusetts. That means we are able to help the patient apply for health insurance through the Health Connector for Massachusetts as well as for the state Medicaid program called MassHealth. And so we are here to help those patients go through that process.

We also have an internal program that can assist patients with paying their bills. We talk to patients about different credit cards that are available for those who may qualify. And essentially, sometimes the bottom line is simply putting somebody on a payment plan. So we take many factors into account, and we try to make it work as best as we can for the patient.

At the end of the day, it’s about that patient receiving care and making sure that they are feeling good about it. We definitely try to meet their needs and introduce them to different things. We are here to support them, and at the end of the day it’s again about their care. If they can’t pay anything right now, but they obviously need immediate medical services, then we assure them, let’s focus on your care. We can talk about the back end or we can talk about your bills at a different point.

We do provide them with different avenues, and we are pretty proud of that because I like to believe that we are successful with it and so it helps the patient overall.

Gerdeman: It really comes down to the fact that patients want to meet their obligations, but they need options to be able to do that. Those options become really important — whether it’s a loan program, a payment plan, or applying for financial assistance — and technology can enable all of these things.

For HealthPay24, we enable an eligibility check right in the platform so you don’t have to worry about others knowing. You can literally check for eligibility by clicking a button and entering a few fields to know if you should be talking to financial counseling at a provider.

You can apply for payment plans, if the providers opt for that. It will be proactively offered based on demographic data to a patient through the platform. You can also apply for loans, for revolving credit, through the platform. Much of what patients want and need financially is now available and enabled by technology.

Gardner: Sandra, such unification across the financial, economic, and care giving roles strikes me as something that’s fairly new.

Beach: Yes, absolutely it is. We have a program in our ED, for example, that we instituted a year ago. We offer an ED discharge service so when the patient is discharged, they stop at our desk and we offer these patients a wide variety of payment options. Or maybe they are homeless and they are going through a tough time. We can tell them where they can go to get a free meal or spend the night. There are a whole bunch of programs available.


That’s important because we will never turn a patient away. And when patients come through our ED, they need care. So when they leave, we want to be able to help them as much as we can by supporting them and giving them these options.

We have also made phone calls for our patients as well. If they need to get someplace just to spend the night, we will call and we will make that arrangement for those patients. So when they leave, they know they have a place to go. That’s really important because people go through hard times.

Gardner: Sandra, do you have any other examples of processes or approaches to people and technology that you have put in place recently? What have been some of the outcomes?

Check-in at home, spend less time waiting 

Beach: Well, the ED discharge service has made a huge impact. We saw probably 7,000-8,000 patients through that desk over the last year. We really have helped a lot of patients. But we are also there just to lend an ear. Maybe they have questions about what the doctor just said to them, but they really weren’t sure what he said. So it’s just made a huge impact for our patients here.

Gardner: Jennifer, same question, any processes you have put in place, examples of things that have worked and what are the metrics of success?

Farmer: We just rolled out e-check-in. So I don’t have any metrics on it just yet, but this is a process where the patient can go to their MyChart or their EHR and check in for an appointment prior to the day. They can also pay their copay. They can provide us with updates to their insurance information, address, or phone number, so when they actually come to their appointment, they are not stopping at the desk to sign in or check in.

That seems to be a popular option at the office currently piloting this, and we are hoping for a big success. It will be rolled out to other entities, but right now that is something that we are working on. It ties together the technology and the patient care with patient access, and it ties the ease of check-in to the patient experience. And so again, we are hoping that we have some really positive metrics on that.

Gardner: What sort of timeframe are we talking about here in terms of start to finish from getting that patient into their care?

Farmer: If they are walking in the door having already done e-check-in, they go immediately in for their appointment. They are showing up on time, they are expected, and they go right in. The time the patient spends waiting in line or sitting in the waiting area is reduced, and the time they have to spend talking to someone about any changes, or confirming everything we have on their account, is also reduced.

And then we are hoping to test this in a pilot program for the next month to six weeks to see what kind of data we can get. Hopefully that will, across the board, help with the check-in process for patients and reduce that time for the folks at the desk so they can focus on other tasks as well. So we are giving them back their time.

Gardner: Julie, this strikes me in the parlance of other industries as just-in-time healthcare, and it’s a good move. I know you deal with a national group of providers and payers. Any examples, Julie, that demonstrate and illustrate the positive direction we are going with patient access and why technology is an important part of that?

Just-in-time wellness

Gerdeman: I refer to Christopher Penn’s model of People, Process, and Technology here, Dana, because when people touch process, there is scale, and when process and technology intersect, there is automation. But most importantly, when people intersect with technology, there is innovation, and what we are seeing is not just incremental innovation — but huge leaps in innovation.

What Jen just described as that experience of just-in-time healthcare, that is literally a huge need, and it’s a leap, right? We have come to expect it when we reserve a table via OpenTable or e-check-in for a hair appointment. I go back to that consumer experience, but that innovation is happening all across healthcare.

One of the things that we just launched, which we are really excited about, is predictive analytics tied to the payment platform. If you know and can anticipate the behaviors and the patterns of a demographic of patients, financially speaking, then it will help ease the patient experience in what they owe, how they pay, and what’s offered to them. It boosts the bottom line of providers, because they are going to get increased revenue collection.

So where predictive analytics is going in healthcare — tying it to the patient experience and to the financial systems — I think will become more and more important. And that leads to even more: there is so much emerging technology on the clinical side, and we will continue to see more on the back-end systems and the financial side as well.
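
As a rough illustration of predictive analytics tied to a payment platform, the sketch below trains a simple model on past accounts to estimate how likely a patient is to pay a balance, then picks an engagement option from that score. The features, data, and thresholds are invented for illustration and are not HealthPay24’s actual model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: balance owed ($), prior on-time payments, insurance on file (0/1)
X = np.array([[120, 5, 1], [900, 0, 0], [300, 2, 1], [1500, 1, 0], [60, 8, 1]])
y = np.array([1, 0, 1, 0, 1])  # 1 = paid in full historically

model = LogisticRegression().fit(X, y)

def payment_offer(balance, prior_on_time, insured):
    """Choose a patient engagement strategy from the predicted likelihood of payment."""
    p = model.predict_proba([[balance, prior_on_time, insured]])[0][1]
    if p > 0.7:
        return "standard statement"
    if p > 0.4:
        return "offer a payment plan up front"
    return "offer financial counseling and charity-care screening"

print(payment_offer(800, 1, 0))
```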

Gardner: Before we close out, perhaps a look to the future, and maybe even a wish list. Jennifer, if you had a wish list for how this will improve in the next few years, what’s missing, what’s yet to come, what would you like to see available with people, process, and technology?

Farmer: I go back to just patient care, and while we are in a very good spot right now, it can always improve. We need more providers, we need more technicians, we need more patient access folks, and a stronger capacity to take care of people, because the population is growing and, whether you know it or not, you are going to need a doctor at some point.

So I think we should continue on the path we are on: providing excellent customer service, listening to patients, and being empathetic. Also providing them with options — different appointment times, different financing options, different providers. It can only get better.

Beach: I absolutely agree. We have a really good computer system, we have the EMRs, but I would have to agree with Jennifer as well that we really need more providers. We need more nurses to take care of our patients.

Gardner: So it comes down to human resources. How about those front-line people who are doing the patient access intercept? Should they have an elevated status, role, and elevated pay schedule?

Farmer: It’s really tough for the patient access people. They are working on that front line every minute of every day, eight to 10 hours a day, and sometimes that’s tough.

It’s really important that we keep training them. We give them the option of going to customer service classes, because their role has changed from basically checking in a patient to now making sure their insurance is correct. We have so many different insurance plans these days that knowing each of them elevates that registrar to almost an expert in the field — able to help the patient, get them through the registration process, and, bottom line, get reimbursed for those services. So it’s really come a long way.

Gardner: Julie, on this future perspective, what do you think will be coming down the pike for provider organizations like Jennifer and Sandra’s in terms of technology and process efficiency? How will the technology become even more beneficial?

Gerdeman: It’s going to be a big balancing act. What I mean by that is we are now officially more of an older country than a younger country in terms of age. People are living longer, they need more care than ever before, and we need the systems to be able to support that. So, everything that was just described is critical to support our aging population.

But what I mean by the balancing act is that we have a whole other generation entering into healthcare — as patients, as providers, and as technologists. This new generation has a completely different expectation of what that experience should and will be. They might expect their wearable device to give all of that data to a provider — that they wouldn’t need to explain it, that it should all be there already. Not just just-in-time care when they walk in, but all the health data communicated ahead of time, so that when they arrive they are having a meaningful conversation about what to do.

This new generation is going to shift us to wellness care, not just care when we are sick or injured. I think that’s all changing. We are starting to see the beginnings of that focus on wellness. And providers are going to have to juggle wearables and devices, and how they are used, with the aging population and traditional services — as well as the new. Technology is going to be a key, core part of that going forward.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HealthPay24.

How security designed with cloud migrations in mind improves an enterprise’s risk posture top to bottom

The next BriefingsDirect data security insights discussion explores how cloud deployment planners need to be ever-vigilant for all types of cybersecurity attack vectors. Stay with us as we examine how those moving to and adapting to cloud deployments can make their data and processes safer and easier to recover from security incidents.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about taking the right precautions for cloud and distributed data safety we welcome two experts in this field, Mark McIntyre, Senior Director of Cybersecurity Solutions Group at Microsoft, and Sudhir Mehta, Global Vice President of Product Management and Strategy at Unisys. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Mark, what’s changed in how data is being targeted for those using cloud models like Microsoft Azure? How is that different from two or three years ago?

Mark McIntyre

McIntyre

McIntyre: First of all, the good news is that we see more and more organizations around the world, including the US government, but broadly more global, pursuing cloud adoption. I think that’s great. Organizations around the world recognize the business value and I think increasingly the security value.

The challenge I see is one of expectations. Who owns what, as you go to the cloud? And so we need to be crisper and clearer with our partners and customers as to who owns what responsibility in terms of monitoring and managing in a team environment as you transition from a traditional on-premises environment all the way up into a software-as-a-service (SaaS) environment.

Gardner: Sudhir, what’s changed from your perspective at Unisys as to what the cloud adoption era security requirements are?

Sudhir Mehta

Mehta

Mehta: When organizations move data and workloads to the cloud, many of them underestimate the complexities of securing hybrid, on-premises, and cloud ecosystems. A lot of the failures, or what we would call security breaches or intrusions, you can attribute to inadequate security practices, policies, procedures, and misconfiguration errors.

As a result, cloud security breach reports have been on the rise. Container technology adds flexibility and speed-to-market, but it is also introducing a lot of vulnerability and complexity.

A lot of customers have legacy, on-premises security methodologies and technologies, which obviously they can no longer use or leverage in the new, dynamic, elastic nature of today’s cloud environments.

Gartner estimates that through 2022 at least 95 percent of cloud security failures will be the customers’ fault. So the net effect is cloud security exposure, the attack surface, is on the rise. The exposure is growing.

Change in cloud worldwide 

Gardner: People, process, and technology all change as organizations move to the cloud. And so security best practices can fall through the cracks. What are you seeing, Mark, in how a comprehensive cloud security approach can be brought to this transition so that cloud retains its largely sterling reputation for security?

McIntyre: I completely agree with what my colleague from Unisys said. Not to crack a joke — this is a serious topic — but my colleagues and I meet a lot with both US government and commercial counterparts. And they ask us, “Microsoft, as a large cloud provider, what keeps you awake at night? What are you afraid of?”

It’s always a delicate conversation because we need to tactfully turn it around and say, “Well, you, the customer, you keep us awake at night. When you come into our cloud, we inherit your adversaries. We inherit your vulnerabilities and your configuration challenges.”

As our customers plan a cloud migration, it will invariably include a variety of resources being left on-premises, in a traditional IT infrastructure. We need to make sure that we help them understand the benefits already built into the cloud, whether they are seeking infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), or SaaS. We need to be really clear with our customers — through our partners, in many cases – about the technologies that they need to make themselves more secure. We need to give them awareness into their posture so that it is built right into the fabric of the cloud service.

Gardner: Sudhir, it sounds as if organizations who haven’t been doing things quite as well as they should on-premises need to be even more mindful of improving on their security posture as they move to the cloud, so that they don’t take their vulnerabilities with them.

From Unisys’s perspective, how should organizations get their housecleaning in order before they move to the cloud?

Don’t bring unsafe baggage to the cloud 

Mehta: We always recommend that customers should absolutely first look at putting their house in order. Security hygiene is extremely important, whether you look at data protection, information protection, or your overall access exposure. That can be employees working at home, or vendors and third parties — wherever they have access to a lot of your information and data.

First and foremost, make sure you have the appropriate framework established. Then compliance and policy management are extremely important when you move to the cloud and to virtual and containerized frameworks. Today, many companies do their application development in the cloud because it’s a lot more dynamic. We recommend that our customers make sure they have the appropriate policy management, assessments, and compliance checks in place both on-premises and for the journey to the cloud.

The net of it is, if you are appropriately managed when you are on-premises, chances are as you move from hybrid to more of a cloud-native deployment and/or cloud-native services, you are more likely to get it right. If you don’t have it all in place when you are on-premises, you have an uphill battle in making sure you are secured in the cloud.

Gardner: Mark, are there any related issues around identity and authentication as organizations move from on-premises to outside of their firewall into cloud deployment? What should organizations be thinking about specifically around identity and authentication?

Avoid an identity crisis

McIntyre: This is a huge area of focus right now. Even within our own company, at Microsoft, we as employees operate in essentially an identity-driven security model. And so it’s proper that you call this out on this podcast.

The idea that you can monitor and filter all traffic, and that you are going to make meaningful conclusions from that in real time — while still running your business and pursuing your mission — is not the best use of your time and your resources. It’s much better to switch to a more modern, identity-based model where you can actually incorporate newer concepts.

Within Microsoft, we have a term called Modern Workplace. It reflects the fact that government organizations and enterprises around the world have to anticipate and provide a collaborative work environment where people can work in a way that reflects their personal preferences — around devices, working at home, or on the road at a coffee shop or restaurant. The concept of work has changed across the enterprise, and it is definitely forcing this opportunity to create a more modern identity framework.

If you look at some of the initiatives in the US government right now, we hear the term Zero Trust. That includes Zero Trust networking and micro-segmentation. Initiatives like these recognize that we know people need to keep working and doing their jobs wherever they are. The idea is to accept the fact that people will always cause some level of risk to the organization.

We are curious, reasonably smart, well-intentioned people, and we make mistakes, just like anybody else. Let’s create an identity-driven model that allows the organization to get better insight and control over authentications, requests for resources, end-to-end, and throughout a lifecycle.

Gardner: Sudhir, Unisys has been working with a number of public-sector organizations on technologies that support a stronger posture around authentication and other technologies. Tell us about what you have found over the past few years and how that can be applied to these challenges of moving to a cloud like Microsoft Azure.

Mehta: Dana, going back in time, one of the requests we had from the US Department of Defense (DoD) on the networking side, was a concern around access to sensitive information and data. Unisys was requested by the DoD to develop a framework and implement a solution. They were looking at more of a micro-segmentation solution, very similar to what Mark just described.

So, fast forward: since then we have deployed and released a military-grade capability called Unisys Stealth®, with which we manage what we classify as key-based, encrypted micro-segmentation that controls access to different hosts or endpoints based on the identity of the user. It permits only authorized users to communicate with approved endpoints, denies unauthorized communications, and so prevents the spread of east-to-west, lateral attacks.

Gardner: Mark, for those in our audience who aren’t that technology savvy, what does micro-segmentation mean? Why has it become an important foundational capability for security across a cloud-use environment?

Need-to-know access 

McIntyre: First of all, I want to call out Unisys’s great work here and their leadership in the last several years. Micro-segmentation means a Zero Trust environment can essentially gauge or control east-to-west behavior or activity in a distributed environment.

For example, in a traditional IT environment, devices are really only well-managed when they are centralized, corporate-issued devices. You can’t take them out of the facility, of course. You don’t re-authenticate once you are on the network because you are already in a physical campus environment. But it’s different in a modern, collaborative environment. Enterprises are generally ahead on this change, but it’s now coming into government requirements, too.

And so now you can essentially parse out your subjects and your objects — your subjects trying to access objects. You can split them out and say, “We are going to create all user accounts with a certain set of parameters.” It amounts to a privileged, need-to-know model. You can enforce strong controls with a certain set of least-privilege rights. And, of course, in an ideal world, you could go a step further and start implementing biometrics [to authenticate] to get off of password dependencies.

But number one, you want to verify the identity. Is this a person? Is this the subject who we think they are? Are they that subject based on a corroborating variety of different attributes, behaviors, and activities? Things like that. And then you can also apply the same controls to a device and say, “Okay, this user is using a certain device. Is this device healthy? Is it built to today’s image? Is it patched, clean, and approved to be used in this environment? And if so, to what level?”

And then you can even go a step further and say, “In this model, now that we can verify the access, should this person be able to use our resources through the public Internet and access certain corporate resources? Should we allow an unmanaged device to have a level of access to confidential documents within the company? Maybe that should only be on a managed device.”

So you can create these flexible authentication scenarios based on what you know about the subjects at hand, about the objects, and about the files that they want to access. It’s a much more flexible, modern way to interact.
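A minimal sketch of that kind of flexible, attribute-driven access decision follows. The attribute names and rules are illustrative only and do not represent any vendor’s actual policy engine.

```python
from dataclasses import dataclass

@dataclass
class Subject:
    identity_verified: bool      # e.g., MFA or biometric check passed
    device_managed: bool         # corporate-enrolled device
    device_healthy: bool         # patched and compliant with the current image

def access_level(subject: Subject, resource_sensitivity: str) -> str:
    if not subject.identity_verified:
        return "deny"
    if resource_sensitivity == "confidential":
        # Confidential documents only on managed, healthy devices
        return "allow" if subject.device_managed and subject.device_healthy else "deny"
    if subject.device_managed and subject.device_healthy:
        return "allow"
    # Verified users on unmanaged devices get limited access only
    return "allow-limited"

print(access_level(Subject(True, False, False), "internal"))      # allow-limited
print(access_level(Subject(True, True, True), "confidential"))    # allow
```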

Within Azure cloud, Microsoft Azure Active Directory services offer those capabilities – they are just built into the service. So micro-segmentation might sound like a lot of work for your security or identity team, but it’s a great example of a cloud service that runs in the background to help you set up the right rules and then let the service work for you.

Gardner: Sudhir, just to be clear, the Unisys Stealth(cloud) Extended Data Center for Microsoft Azure is a service that you get from the cloud? Or is that something that you would implement on-premises? Are there different models for how you would implement and deploy this?

A stealthy, healthy cloud journey 

Mehta: We have been working with Microsoft over the years on Stealth, and we have a fantastic relationship with Microsoft. If you are a customer going through a cloud journey, we deploy what we call a hybrid Stealth deployment. In other words, we help customers do what we call isolation through communities of interest that we create — basically groupings of hosts, users, and resources based on like interests.

Then, when there is a request to communicate, you create the appropriate Stealth-encrypted tunnels. If you have a scenario where you are doing the appropriate communication between an on-premises host and a cloud-based host, you do that through a secure, encrypted tunnel.

We have also implemented what we call cloaking. With cloaking, if someone is not authorized to communicate with a certain host or a certain member of a community of interest, you basically do not give a response back. So cloaking is also part of the Stealth implementation.
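
The two behaviors just described — communities of interest and cloaking — can be sketched roughly as follows. The group names and the function are invented for illustration and are not the Stealth API.

```python
# Community membership: only hosts and users in the same group may communicate.
COMMUNITIES = {
    "finance": {"host-a", "host-b", "alice"},
    "devops":  {"host-c", "bob"},
}

def handle_request(sender: str, target: str):
    shared = any(sender in members and target in members
                 for members in COMMUNITIES.values())
    if not shared:
        # Cloaking: drop silently -- to the sender the target simply isn't there
        return None
    return f"establish encrypted tunnel {sender} <-> {target}"

print(handle_request("alice", "host-a"))  # tunnel established
print(handle_request("bob", "host-a"))    # None: no response is given at all
```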

And in working closely with Microsoft, we have further established an automated capability through a discovery API. So when Microsoft releases new Azure services, we are able to update the overall Stealth protocol and framework with the updated Azure services. For customers who have Azure workloads protected by Stealth, there is no disruption from a productivity standpoint. They can always securely leverage whatever applications they are running on Azure cloud.

The net of it is being able to establish the appropriate secure journey for customers, from on-premises to the cloud, the hybrid journey. For customers leveraging Azure cloud with different workloads, we maintain the appropriate level of secure communications just as they would have in an on-premises deployment.

Gardner: Mark, when does this become readily available? What’s the timeline on how these technologies come together to make a whole greater than the sum of the parts when it comes to hybrid security and authentication?

McIntyre: Microsoft is already offering Zero Trust, identity-based security capabilities through our services. We haven’t traditionally named them as such, although we definitely are working along that path right now.

Microsoft Chief Digital Officer and Executive Vice President Kurt DelBene is on the US Defense Innovation Board and is playing a leadership role in establishing essentially a DoD or US government priority on Zero Trust. In the next several months, we will be putting more clarity around how our partners and customers can better map capabilities that they already own against emerging priorities and requirements like these. So definitely look for that.

In fact, Ignite DC is February 6 and 7, in downtown Washington, DC, and Zero Trust is certainly on the agenda there, so there will be updates at that conference.

But generally speaking, any customer can take the underlying services that we are offering and implement this now. What’s even better, we have companies that are already out there doing this. And we rely greatly on our partners like Unisys to go out and really have those deep architecture conversations with their stakeholders.

Gardner: Sudhir, when people use the combined solution of Microsoft Azure and Stealth for cloud, how can they react to attacks that may get through to prevent damage from spreading?

Contain contagion quickly 

Mehta: Good question! Internally within Unisys’s own IT organization, we have already moved on this cloud journey. Stealth is already securing our Azure cloud deployments and we are 95 percent deployed on Azure in terms of internal Unisys applications. So we like to eat our own dog food.

If there is an incident of compromise, we have a capability called dynamic isolation. In a managed security operations center (SOC) situation, we have empowered the SOC to contain a risk very quickly.

We are able to isolate a user and their device within 10 seconds. If someone turns nefarious, intentionally or inadvertently, we are able to isolate the user and then apply different thresholds of isolation. If a high threshold level is breached — say, 8 out of 10 — we completely isolate that user.

If the threshold level is 5 or 6, we may still give the user certain levels of access, so within a certain group they can continue to access resources and communicate.

Dynamic isolation isolates a user and their device at different threshold levels while the managed SOC goes through its cycles of identifying what really happened, as part of what we would call an advanced response. Unisys is the only solution that can actually isolate a user or device within a span of seconds — we can do it now within 10 seconds.
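
A simplified sketch of that dynamic isolation logic is below. The 8 and 5 thresholds follow the example in the discussion; the function and scoring model are illustrative, not the actual Stealth or SOC implementation.

```python
def isolation_action(risk_score: int) -> str:
    """Map a SOC-assigned risk score (0-10) to an isolation response."""
    if risk_score >= 8:
        return "fully isolate user and device from all communication"
    if risk_score >= 5:
        return "restrict user to their own community of interest only"
    return "monitor; no isolation applied"

for score in (9, 6, 2):
    print(score, "->", isolation_action(score))
```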

McIntyre: Getting back to your question about Microsoft’s plans, I’m very happy to share how we’ve managed Zero Trust. Essentially it relies on Intune for device management and Azure Active Directory for identity. It’s the way that we right now internally manage our own employees.

My access to corporate resources can come via my personal device and work-issued device. I’m very happy with what Unisys already has available and what we have out there. It’s a really strong reference architecture that’s already generally available.

Gardner: Our discussion began with security for the US DoD, among the largest enterprises you could conceive of. But I’m wondering if this is something that goes down market as well, to small- to medium-sized businesses (SMBs) that are using Azure and/or are moving from an on-premises model.

Do Zero Trust and your services apply to the mom and pop shops, SMBs, and the largest enterprises?

All sizes of businesses

McIntyre: Yes, this is something that would be ideally available for an SMB because they likely do not have large logistical or infrastructure dependencies. They are probably more flexible in how they can implement solutions. It’s a great way to go into the cloud and a great way for them to save money upfront over traditional IT infrastructure. So SMBs should have a really good chance to literally, natively take an idea like this and implement it.

Gardner: Sudhir, anything to offer on that in terms of the technology and how it’s applicable both up and down market?

Mehta: Mark is spot on. Unisys Stealth resonates really well for SMBs and the enterprise. SMBs benefit, as Mark mentioned, in their capability to move quickly. And with Stealth, we have an innovative capability that can discover and visualize your users. Thereafter, you can very quickly and automatically virtualize any network into the communities of interest I mentioned earlier. SMBs can get going within a day or two.

If you’re a large enterprise, you can define your journey — whether it’s from on-premises to cloud — depending on what you’re actually trying to migrate or run in the cloud. So I would say absolutely both. And it would also depend on what you’re really looking at managing and deploying, but the opportunities are there for both SMBs and enterprises.

Gardner: As companies large and small are evaluating this and trying to discern their interest, let’s look at some of the benefits. As you pointed out, Sudhir, you’re eating your own dog food at Unisys. And Mark has described how this is also being used internally at Microsoft as well.

Do you have ways that you can look at before and after — measure quantitatively, qualitatively, maybe anecdotally — why this has been beneficial? It’s always hard in security to prove something that didn’t happen and why it didn’t happen. But what do you get when you do Stealth well?

Proof is in the protection 

Mehta: There are a couple of things, Dana. One is that there is certainly a reduction in cost. When we deploy for 20,000 Unisys employees, our Chief Information Security Officer (CISO) obviously has to be a big supporter of Stealth. His read, from a cost perspective, is that we have seen significant reductions in costs.

Prior to implementing Stealth, we had a certain approach to network segmentation. From a network equipment perspective, we have since seen a reduction of over 70 percent. Server infrastructure has been reduced by more than 50 percent, and maintenance and labor costs are down north of 60 percent. Ongoing support labor cost has also seen a significant reduction. So that’s one lens you could look through.

The other lens that has been interesting is the virtual private network (VPN) exposure. As many of us know, VPNs are perhaps the best breach route for hackers today. Since we implemented Stealth internally within Unisys, for a lot of our applications we have done away with the requirement to log into a VPN client. That has made for easier access to a lot of applications — mainly for folks logging in from home or from a Starbucks. Now when they communicate, it is through an encrypted tunnel and it’s very secure. The VPN exposure completely goes away.

Those are the two best lenses I can give on the value proposition: there is cost reduction, and the VPN exposure goes away — at least, that’s what we have found at Unisys from implementing it internally.

Gardner: For those using VPNs, should they move to something like Stealth? Does the way in which VPNs add value change when you bring something like Stealth in? How much do you reevaluate your use of VPNs in general?

Mehta: I would be remiss to say you can completely do away with VPNs. If you go back in time and see why VPNs were created, the overall framework was created for secure access for certain applications. Since then, for whatever reasons, VPNs became the only way people communicate from working at home, for example. So the way we look at this is, for applications that are not extremely limited to a few people, you should look at options wherein you don’t necessarily need a VPN. You could therefore look at a solution like Unisys Stealth.

And then if there are certain applications that are extremely sensitive, limited to only a few folks for whatever reason, that’s where potentially you could consider using an application like a VPN.

Gardner: Let’s look to the future. When you put these Zero Trust services into practice, into a hybrid cloud, then ultimately a fully cloud-native environment, what’s the next shoe to fall? Are there some things you gain when you enter into this level of micro-segmentation, by exploiting these newer technologies?

Can this value be extended to the edge, for example? Does it have a role in Internet of things (IoT)? A role in data transfers from organization to organization? What does this put us in a position to do in the future that we couldn’t have done previously?

Machining the future securely 

McIntyre: You hit on two really important points: devices — IoT devices, for example — and data. On data, you see T-shirts and slogans saying “Data is the new oil,” and such. From a security point of view there is no question this is becoming the case, with something like 44 to 45 zettabytes of data projected to be out there over the next few years.

You can employ traditional security monitoring practices — label-free detection, things like that — but that is just not going to allow you to work quickly, especially in an environment where we are already challenged to find enough security workforce. There are not enough people out there; it’s a global talent shortage.

It’s a fantastic opportunity, forced on us, to rely more on modern authentication frameworks and on machine learning (ML) and artificial intelligence (AI) technologies — to take a lot of that lower-level analysis, the log analysis work, out of human hands and let machines free people up for the higher-level work.
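
As a rough sketch of handing that lower-level log analysis to machines, an unsupervised model can flag unusual sign-in records so analysts only review the outliers. The features and data below are synthetic and do not represent any product’s detection logic.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Columns: failed logins in the last hour, MB downloaded, distinct countries seen
events = np.array([
    [0, 12, 1], [1, 30, 1], [0, 8, 1], [2, 25, 1],
    [40, 900, 3],   # burst of failures, large transfer, impossible travel
])

model = IsolationForest(contamination=0.2, random_state=0).fit(events)
flags = model.predict(events)  # -1 = anomaly, 1 = normal

for row, flag in zip(events, flags):
    if flag == -1:
        print("escalate to analyst:", row.tolist())
```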

For example, we have a really interesting situation within Microsoft, and it applies across the industry as well. We have many organizations going into the cloud, but of course, as we mentioned earlier, the roles and responsibilities are still unclear. We are also seeing big gaps between use of cloud resources and use of the security tools built into those resources.

And so we’re really trying to make sure that as we deliver new services to the marketplace — IoT, for example — those are built in a way that you can configure and monitor them like any other device in the company. With Azure, for example, we have IoT Hub. We can literally, as you build an IoT device, make sure that it is being monitored in the same way as your traditional infrastructure.

There should not be a gap there. You can still apply the same types of logical access controls around them. There shouldn’t be any tradeoffs on security for how you do security — whether it’s IT or IoT.

Gardner: Sudhir, same question: what does the use of Stealth in conjunction with cloud activities get you in the future?

Mehta: Tagging on to what Mark said, AI and ML are becoming interesting. We obviously have a very big digital workplace solutions organization; we are a market leader for help desk services. We are looking at introducing a lot of what you would call AIOps in automation as it leads to robotic process automation (RPA) and voice assistance.

One of the things we are observing is that, as you go down this AI-ML path, there is a larger exposure, because you are focused on operationalizing the automation and AI-ML, and there are areas you may not be able to manage — for instance, the way you get the training done for your bots.

That’s where Stealth is a capability we are implementing right now with digital workplace solutions, as part of the journey to AIOps automation, as an example. The other area where we are working very closely with some of our other partners, as well as Microsoft, is application security and hardening in the cloud.

How do you make sure that when you deploy certain applications in the cloud they are secure, are not being breached, and that intrusions don’t occur when you make changes to those applications?

Those are two areas we are currently working on, the AIOps and MLOps automation and then the application security and hardening in the cloud, working with Microsoft as well.

Gardner: If I want to be as secure as I can, and I know that I’m going to be doing more in the cloud, what should I be doing now in order to make myself in the best position to take advantage of things like micro-segmentation and the technologies behind Stealth and how they apply to a cloud like Azure? How should I get myself ready to take advantage of these things?

Plan ahead to secure success 

McIntyre: The first thing is to remember how you plan and roll out your security estate. It should be no different from your larger IT planning anyway — it’s all digital transformation. Then close the gap between the security teams and everyone else. All the teams — business and IT — should be working together.

We want to make sure that our customers go to the cloud in a secure way, without losing the ability to access their data. We continue to put more effort into very proactive services — architecture guidance, recommendations, things that can help people get started in the cloud. One example is Azure Blueprints: configuration guidance and predefined templates that help an organization launch a resource in the cloud that’s already compliant with FedRAMP, NIST, ISO, or HIPAA standards.
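
The “compliant from the start” idea behind template-based guidance such as Azure Blueprints can be sketched as a pre-deployment check against a baseline. The baseline keys and resource model below are illustrative only and are not the actual Blueprints schema.

```python
# Hypothetical baseline of controls a planned cloud resource must meet.
BASELINE = {
    "storage_encryption": True,
    "public_network_access": False,
    "diagnostic_logging": True,
}

def compliance_gaps(resource_config: dict) -> list:
    """Return the baseline controls the planned resource fails to meet."""
    return [control for control, required in BASELINE.items()
            if resource_config.get(control) != required]

planned = {"storage_encryption": True, "public_network_access": True}
print(compliance_gaps(planned))  # ['public_network_access', 'diagnostic_logging']
```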

We’ll continue to invest in the technologies that help customers securely deploy cloud resources from the get-go, so that we close those configuration gaps and close the gaps in reporting and telemetry as well. And we can’t do it without great partners that provide those customized solutions for each sector.

Gardner: Sudhir, last word to you. What’s your advice for people to prepare themselves to be ready to take advantage of things like Stealth?

Mehta: Look at a couple of things. One is to focus on trusted identity — who you work with and who you give access to. Even within your organization you obviously need to establish that trusted identity, and the way you do it is to make sure it is simple. Second, look at an overlay, network-agnostic framework, which is where Stealth can help you. Make sure it is unique: one individual has one identity. Third, make sure it is irrefutable — undeniable in terms of how you implement it. And fourth, make sure it has the highest level of efficacy, both in how you deploy it and in the way you architect your solution.

So, the net of it is, a) trust no one, b) assume a breach can occur, and then c) respond really fast to limit damage. If you do these three things, you can get to Zero Trust for your organization.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsors: Unisys and Microsoft.

SambaSafety’s mission to reduce risk begins in its own datacenter security partnerships

Security and privacy protection increasingly go hand in hand, especially in sensitive industries like finance and public safety.

For driver risk management software provider SambaSafety protecting their business customers from risk is core to their mission — and that begins with protection of their own IT assets and workers.

Stay with us now as BriefingsDirect explores how SambaSafety adopted Bitdefender GravityZone Advanced Business Security and Full Disk Encryption to improve the end-to-end security of their operations and business processes.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To share their story, please welcome Randy Whitten, Director of IT and Operations at SambaSafety in Albuquerque, New Mexico. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Randy, tell us about SambaSafety, how big it is, and your unique business approach.

Randy Whitten

Whitten

Whitten: SambaSafety currently employs approximately 280 employees across the United States. We have four locations. Corporate headquarters is in Denver, Colorado. Albuquerque, New Mexico is another one of our locations. There’s Rancho Cordova just outside of Sacramento, California, and Portland, Oregon is where our transportation division is.

We also have a handful of remote workers from coast to coast and border to border.

Gardner: And you are all about making communities safer. Tell us how you do that.

Whitten: We work with departments of motor vehicles (DMVs) across the United States, monitoring drivers for companies. We partner with state governments, and third-party information is provided that allows us to process reporting on critical driver information.

We seek to transform that data into action to protect the businesses and our customers from driver and mobility risk. We work to incorporate top-of-the-line security software to ensure that all of our data is protected while we are doing that.

Data-driven driver safety 

Gardner: So, it’s all about getting access to data, recognizing where risks might emerge with certain drivers, and then alerting those people who are looking to hire those drivers to make sure that the right drivers are in the right positions. Is that correct?

Whitten: That is correct. Since 1998, SambaSafety has been the pioneer and leading provider of driver risk management software in North America. SambaSafety has led the charge to protect businesses and improve driver safety, ultimately making communities safer on the road.

Our mission is to guide our customers — including employers, fleet managers, and insurance providers — to make the right decisions at the right time by collecting, correlating, and analyzing motor vehicle records (MVRs) and other data resources. We identify driver risk and enable our customers to modify their drivers’ behaviors, reduce accidents, ensure compliance, and lower costs — ultimately improving driver and community safety.

Gardner: Is this for a cross-section of different customers? You do this for public sector and private sector? Who are the people that need this information most?

Whitten: We do it across both sectors, public and private. We do it across transportation. We do it across drivers such as Lyft drivers, Uber drivers, and transportation drivers — our delivery carriers, FedEx, UPS, etc. — those types of customers.

Gardner: This is such an essential service, because so much of our economy is on four wheels, whether it’s a truck delivering goods and services, transportation directly for people, and public safety vehicles. A huge portion of our economy is behind the wheel, so I think this is a hugely important service you are providing.

Whitten: That’s a good point, Dana. Yes, very much so. Transportation drivers are delivering our commodities every day — the food we consume, the clothes we wear, the parts that keep our vehicles on the road, even those Christmas packages via UPS or FedEx — the essential items of our everyday living.

Gardner: So, this is mission-critical on a macro scale. Now, you also are dealing, of course, with sensitive information. You have to protect the privacy. People are entitled to information that’s regulated, monitored, and provided accordingly. So you have to be across-the-board reducing risk, doing it the right way, and you also have to make your own systems protected because you have that sensitive information going back and forth. Security and privacy are probably among your topmost mission-critical requirements.

Securing the sectors everywhere

Whitten: That is correct. SambaSafety holds a SOC 2 Type II certification. That is just the top layer of the security we use within our company, both for our endpoints and for our external customers.

Gardner: Randy, you described your organization as distributed. You have multiple offices, remote workers, and you are dealing with sensitive private and public sector information. Tell us what your top line thinking, your philosophy, about security is and then how you execute on that.

Whitten: Our top line, essentially, is to make sure that our endpoints are protected and that we are taking care of our employees internally — setting them up for success so they don’t have to worry about security. All of our laptops are encrypted. We have different levels of security within our organization, which puts our employees at ease so they can concentrate on taking care of our end customers.

Gardner: That’s right, security isn’t just a matter of being very aggressive, it also means employee experience. You have to give your people the opportunity to get their work done without hindrance — and the performance of their machine, of course, is a big part of that.

Tell us about the pain points, what were the problems you were having in the past that led you into a new provider when it comes to security software?

Whitten: Some of the things we had to deal with in the IT department here at SambaSafety came in through our tickets. They were typically about memory usage — applications were locking up the computers because it took a lot of resources just to launch them.

We also were seeing threats getting through the previous antivirus solution, and then just the cost, the cost of that solution was increasing month over month. Every time we would add a new license it would seem like the price point would jump.

Gardner: I imagine you weren’t seeing them as a partner as much as a hindrance.

Whitten: Yes, that is correct. It started to seem like it was a monthly call, then it turned into a weekly call to their support center just to be able to see if we could get additional support and help from them. So that brought up, “Okay, what do we do next and what is our next solution going to look like?”

Gardner: Tell me about that process. What did you look at, and how did you make your choices?

Whitten: We did an overall scoping session and brought in three different antivirus solution providers. Any of them could have measured up to be the next vendor we worked with, but Bitdefender came out on top. It was a solution we could put into our cloud-hosted environment, something we could run on our endpoints, and a way to ensure that all of our employees are protected.

Gardner: So you are using GravityZone Advanced Business Security, Full Disk Encryption, and the Cloud Management Console, all from Bitdefender, is that correct?

Whitten: That is correct. The previous disk encryption solution is just about retired. Currently about 90 percent of our endpoints have disk encryption from Bitdefender, and we have had zero issues with it.

Gardner: I have to imagine you are not just protecting your endpoints, but you have servers and networks, and other infrastructure to protect. What does that consist of and how has that been going?

Whitten: That is correct. We have approximately 280 employees, which equals 280 laptops to be protected. We also have a fair amount of additional hardware whose endpoints have to be secured. And then about 30 percent of that additional hardware — that is, the Macs within our organization — is also part of that Bitdefender protection.

Gardner: And everyone knows, of course, that management of operations is essential for making sure that nothing falls between the cracks — and that includes patch management, making sure that you see what’s going on with machines and getting alerts as to what might be your vulnerability.

So tell us about the management, the Cloud Console, particularly as you are trying to do this across a hybrid environment with multiple sites?

See what’s secure to ensure success 

Whitten: The Bitdefender console — being able to log on and see what’s happening — has been vital to the success. I can’t say that enough.

And it extends to information gathering, dashboards, data analytics, network scanning, and vulnerability management — just being able to ensure our assets are protected has been key.

Also, we can watch the alerts, driven by machine intelligence and machine learning (ML), to ensure that behavior is not changing and that our systems do not get infected in any way.

Gardner: And the more administration and automation you get, the more you are able to devote your IT operations people to other assets, other functions. Have you been able to recognize, not only an improvement in security, but perhaps an easing up on the man hours and labor requirements?

Whitten: Sure. Within the first 60 days of our implementation I was able to show a quick return on investment (ROI). We were able to free up team resources to focus on other tickets and other items that came into our department’s work scope.

Bitdefender was already out there, and it was managing itself, it was doing what we were paying for it to do — and it was actually a really good choice for us. The partnership with them is very solid, we are very pleased with it, it is a win-win situation for both of our companies.

Gardner: Randy, I have had people ask me, “Why do I need Full Disk Encryption? What does that provide for me? I am having a hard time deciding whether it’s the right thing for our organization.”

What were your requirements for widespread encryption and why do you think that’s a good idea for other organizations?

Whitten: The most common reason to have Full Disk Encryption is this: you are at the store, someone breaks into your car, they steal your laptop bag or see your computer lying out, and they take it. As the Director of IT and Operations for SambaSafety, my goal is to ensure that our assets are protected. Having Full Disk Encryption on that laptop gives me a chance to sleep a little easier at night.

Gardner: You are not worried about that data leaving the organization because you know it’s got that encryption wrapper.

Whitten: That is correct. It’s protected all the way around.

Gardner: As we start to close out, let’s look to the future. What’s most important for you going forward? What would you like to see improve in terms of security, intelligence and being able to monitor your privacy and your security requirements?

Scope out security needs

Whitten: The big thing right now is to ensure that we — and Bitdefender — stay up to date on the latest intrusions, so that our software stays current and we keep pushing those updates out to our machines.

Also, we just have to continue to stay right on top of the security game. We have enjoyed our partnership with Bitdefender to date, we can’t complain, and it has been a win-win situation all the way around.

Gardner: Any advice for folks that are out there, IT operators like yourself that are grappling with increased requirements? More people are seeing compliance issues, audit issues, paperwork and bureaucracy. Any advice for them in terms of getting the best of all worlds, which is better security and better operations oversight management?

Whitten: Definitely have a good scope of what you are looking for in your organization. Every organization is different. What tends to happen is that you go in looking for a solution without all of the details of what would meet your organization’s needs.

Secondly, get the buy-in from your leadership team. Pitch the case to ensure that you are doing the right thing, that you are bringing the right vendor to the table, so that once that solution is implemented, then they can rest easy as well.

For every company executive across the world right now who has any responsibility for data, security is definitely top of mind. Security is at the top of my mind every single day: protecting our customers, protecting our employees, and making sure our data stays protected and secured so the bad guys can’t have it.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Bitdefender.

Why flexible work and the right technology may just close the talent gap

Companies struggle to find qualified workers in the mature phase of any business cycle. Yet as we enter a new decade in 2020, they have more than a hyper-low unemployment rate to grapple with.

Businesses face a gaping qualitative chasm between the jobs they need to fill and the interest of workers in filling them. As a result, employees have more leverage than ever to insist that jobs cater to their lives, locations, and demands to be creatively challenged.

Accordingly, IDC predicts that by 2021, 60 percent of Global 2000 companies will have adopted a future workspace model — flexible, intelligent, collaborative, virtual, and physical work environments.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Stay with us now as BriefingsDirect explores how businesses must adapt to this new talent landscape and find the innovative means to bring future work and workers together. Our flexible work solutions panel consists of Stephane Kasriel, the former Chief Executive Officer and a member of the board at Upwork, and Tim Minahan, Executive Vice President of Strategy and Chief Marketing Officer at Citrix. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: If flexible work is the next big thing, that means we have been working for the past decade or two in an inflexible manner. What’s wrong with the cubicle-laced office building and the big parking lot next to the freeway model?

Tim Minahan

Minahan

Minahan: Dana, the problem dates back a little further. We fundamentally haven’t changed the world of work since Henry Ford. That was the model where we built big work hubs, big office buildings, call centers, manufacturing facilities — and then did our best to hire as much talent around that.

This model just isn’t working anymore against the backdrop of a global talent shortage, which is fast approaching more than 85 million medium- to high-skilled workers. We are in dire need of more modern skill sets that aren’t always located near the work hubs. And to your earlier point, employees are now in the driver’s seat. They want to work in an environment that gives them flexible work and allows them to do their very best work wherever and whenever they want to get it done.

Gardner: Stephane, when it comes to flexible work, are remote work and freelance work the same? How wide is this spectrum of options when it comes to flexible work?

Kasriel: Almost by definition, most freelance work is done remotely. At this stage, freelancing is growing faster than traditional work, about three times faster, in fact. About 35 percent of US workers are doing some amount of freelancing. And the vast majority of it is skilled work, which is typically done remotely.

Stephane Kasriel

Kasriel

Increasingly what we see is that freelancers become full-time freelancers; meaning it’s their primary source of income. Usually, as a result of that, they tend to move. And when they move it is out of big cities like San Francisco and New York. They tend to move to smaller cities where the cost of living is more affordable. And so that’s true for the freelance workforce, if you will, and that’s pulling the rest of the workforce with it.

What we see increasingly is that companies are struggling to find talent in the top cities where the jobs have been created. Because they already use freelancers anyway, they are also allowing their full-time employees to relocate to other parts of the country, and they are hiring people far from their headquarters — people who essentially work from home as full-time, remote employees.

Gardner: Tim, it sounds like Upwork and its focus on freelance might be a harbinger of what’s required to be a full-fledged, flexible work support organization. How do you view freelancing? Is this the tip of the arrow for where we are headed?

Minahan: Against the backdrop of a global talent shortage and outdated model of hub-and-spoke-based work models, the more innovative companies — the ones securing the best talent — go to where the talent is, whether using contingent or full-time workers.

They are also shifting from the idea of having a full-time employee staff to having pools of talent. These are groups that have the skills and capabilities to address a specific business challenge. They will staff up on a given project.

So, work is becoming much more dynamic. The leading organizations are tapping into that expertise and talent on an as-needed basis, providing them an environment to collaborate around that project, and then dissolving those teams or moving that talent on to other projects once the mission is accomplished.

Gardner: So, it’s about agility and innovation, being able to adapt to whatever happens. That sounds a lot like what digital business transformation is about. Do you see flexible work as supporting the whole digital transformation drive, too?

Minahan: Yes, I certainly do. In fact, what’s interesting is the first move to digital transformation was a shift to transforming customer experience, of creating new ways and new digital channels to engage with customers. It meant looking at existing product lines and digitizing them.

And along the way, companies realized two things. Number one, they needed different skills than they had internally. So the idea of the contingent worker or freelance worker who has that specific expertise becomes increasingly vital.

They also realized they had been asking employees to drive this digital transformation while anchoring them to archaic or legacy technology and a lot of bureaucracy that often comes with traditional work models.

And so there is now an increased focus at the executive C-suite level on driving employee experience and giving employees the right tools, the right work environment, and the flexible work models they need to ensure that they not only secure the best talent, but they can arm them to do their very best work.

Gardner: Stephane, for the freelance workforce, how have they been at adapting to the technologies required to do what corporations need for digital transformation? How does the technology factor into how a freelancer works and how a company can best take advantage of them?

Kasriel: Fundamentally, a talent strategy is a critical part of digital transformation. If you think about digital transformation, it is the what, and the talent strategy is the how. And increasingly, as Tim was saying, as businesses need to move faster, they realize that they don’t have all the skills internally that they need to do digital transformation.

They have to tap into a pool of workers outside of the corporation. And doing this in the traditional way, using staffing firms or trying to find local people that can come in part-time, is extremely inefficient, incredibly slow, and incompatible with the level of agility that companies need to have.

So just as there was a digital transformation of the business firm, there is now also a digital transformation of the talent strategy for the firm. Essentially work is moving from an offline model to an online model. The technology helps with security, collaboration, and matching supply and demand for labor online in real-time, particularly for niche skills in short-duration projects.

Increasingly companies are reassembling themselves away from the traditional Taylorism model of silos, org charts, and people doing the same work every single day. They are changing to much more self-assembled, cross-functional, agile, and team-based work. In that environment, the teams are empowered to figure out what it is that they need to do and what type of talent they need in order to achieve it. That’s when they pull in freelancers through platforms such as Upwork to add skills they don’t have internally — because nobody has those internally.

And on the freelancer side, freelancers are entrepreneurs. They are usually very good at understanding what skills are in demand and acquiring those skills. They tend to train themselves much more frequently than traditional full-time employees because there is a very logical return on investment (ROI) for them to do so.

If I learned the latest Java framework in a few weeks, for example, I can then bill at a much higher rate than I otherwise could if I didn’t have those skills.

Gardner: Stephane, how does Upwork help solve this problem? What is your value-add?

Upwork secures hiring, builds trust 

Kasriel: We essentially provide three pieces of value-add. One is a very large database of freelancers on one side and a very large database of clients and jobs on the other side. With that scale comes the ability to have high liquidity. The median time to fill a job on Upwork right now is less than 24 hours, compared to multiple weeks in the offline world. That’s one big piece of it.

The second is around an end-to-end workflow and processes to make it easy for large companies to engage with independent contractors, freelancers, and consultants. Companies want to make sure that these workers don’t get misclassified, that they only have access to IT systems they are supposed to, that they have signed the right level of agreements with the company, and that they have been background checked or whatever other processes that the company needs.

The third big piece is around trust and safety. Fundamentally, freelancers want to know that they are going to be working with reputable clients and that they are going to get paid. Conversely, companies are engaging with freelancers for things that might be highly strategic, have intellectual property associated with them, and so they want to make sure that the work is going to be done properly and that the freelancer is not going to be selling information from the company, as an example.

So, the three pieces around matching, collaboration and security software, and trust and safety are the things that large companies are using Upwork for to meet the needs of their hiring managers.

Fundamentally, we want to be invisible. We want the platform to look simple so that people can get things done with freelancers — and not have to think about all of the complexities of staying compliant with the various rules that large companies have for engaging with people in general, and with independent contractors in particular.

Mind the gap in talent, skills 

Gardner: Tim, a new study has been conducted by the Center for Business and Economic Research on these subjects. What are some of the findings?

Minahan: At Citrix, we are committed to helping companies drive higher levels of employee experience, using technology to create environments that allow much more flexible work models and empower employees to get their very best work done. We are always examining the direction of overall work models in the market, and we partnered with the Center for Business and Economic Research to better understand how to solve this massive talent crisis.

Consider that there is a gap of close to 90 million medium- to high-skilled workers around the globe, all of these unfilled jobs. There are a couple of ways to solve this. The best way is to expand the talent pool. So, as Stephane said, that can be through tapping into freelance marketplaces, such as Upwork, to find a curated path to the top talent, those who have the finest skills to help drive digital transformation.

But we can couple that with digital workspaces that allow flexible work models by giving the talent access to the tools and information they need to be productive and to collaborate. They can do that in a secure environment that leaves the company confident their information and systems remain secure.

The key findings of the study are that we have an untapped market. Some 69 percent of people who currently are unemployed or economically inactive indicate that they would start working if given more flexible work models and the technology to enable them to work remotely.

Think about the massive shifts in the demographics of the workplace. We talk about millennials coming into the workforce, and new work models, and all of that’s interesting and important. But we have a massive other group of workers at the other end of the spectrum — the baby boomers — who have massive amounts of talent and knowledge and who are beginning to retire.

What if we could re-employ them on their own terms? Maybe a few days a week or a few hours a day, to contribute some of their expertise that is much needed to fill some of the skills gaps that companies have?

We are in a unique position right now and have an incredible opportunity to embrace these new work models, these new freelance marketplaces, and the technology to solve the talent gap.

Kasriel: We run a study every year called Freelancing in America; we have been running it for six years now. One of the highlights of the study is that 46 percent, so almost half of freelancers, say that they cannot take a traditional full-time job. That’s usually driven by health issues, by care duties, or by the fact that they live in a part of the US where there are no jobs for their skills. They tend to be more skilled and educated on average than non-freelancers, and they tend to be completely undercounted in the Bureau of Labor Statistics data every month.

So when we talk about no unemployment in the country, and when we talk about the skills gap, there is this other pool of talent that tends to be very resilient, really hardworking, and highly skilled — but who cannot commit to a traditional full-time job that requires them to be on-site.

So, yes, there is a skills gap overall. If you look at the micro numbers, that is true. But at the macro level, at the business firm level, it’s much more of a gap of flexibility — and a gap of imagination — than anything else. Firms are competing for the same talent in the same way and then wondering why they are struggling to attract new fresh talent and improve their diversity.

I tell them to go online and look at the talent available there. You will find a world of work, of people that are extremely eager to work for you. In fact, they are probably going to be much more loyal to your company than anybody else because you are by far the best employer that they could work with.

Gardner: To be clear, this is not North America or the US only. I have seen similar studies and statistics coming out of Europe and Japan. They differ from market to market, but it’s all about trying to solve the mismatch between employers and available potential talent.

Tim, people have been working remotely for quite a while now. Why is flexible and remote work no longer just an option, but a necessity?

Minahan: It’s the market dynamics we have been talking about. Companies struggle to find the talent they need at scale in the locations where they traditionally have major office hubs. Out of necessity, to advance their business and access the skills they need, they must embrace more flexible work models. They need to be looking for talent in nontraditional ways, such as making freelance workers part of their regular talent strategies, and not an adjunct for when someone is out on sick leave.

And it’s really accelerating quite dramatically. We talk a lot about that talent crunch, but in addition, it’s also a skills gap. As Stephane was saying, so many of these freelance workers have the much-in-demand skills that people need.

When you think about the innovators in the industry, folks like Amazon recently said, “Hey, we can’t find all of the talent we need with the skills that we need, so we are going to spend close to $1 billion to retrain a third of our workforce.”

They are expanding their talent pool. That’s what innovative companies are beginning to do. They are saying: “Okay, we have these constraints. What can we do, how can we work differently, how can we embrace technology differently, and how can we look at the workforce differently in order to expand our talent pool?”

Gardner: If you seek out the best technology to make that flexible workforce innovative, collaborative, and secure, are there other economic paybacks? If you do it right, can you also put money to the bottom line? What is the economic impact?

More remote workers, more revenue

Minahan: From the study that we did around remote workers and tapping into the untapped talent pool, the research found that this could equate to more than $2 trillion in added value per year — or a 10 percent boost to the US GDP. It’s because otherwise businesses are not able to deliver services because they don’t have the talent.

On a micro level, at an individual business level, when workers are engaged in these more flexible work models they are less stressed. They are far more productive. They have more time for doing meaningful work. As a result, companies that embrace these work models are seeing far higher revenue growth, sometimes upward of 2.5 times, along with far higher profitability and far greater worker retention than their peers.

Kasriel: It’s also important to remember that the average American worker spends more time commuting to work than on vacation in a given year. Imagine if all of that time could be reused to be productive at work, spend another couple of hours every day doing work for the company, or doing other things in their lives so they could consume more goods and services, which would drive economic growth.

Right now the amount of waste coming from companies requiring that their workers commute to work is probably the biggest amount of waste that companies are creating in the economy. By the way, it also causes income inequality, congestion, and pollution. So there are countless negative externalities that nobody is even taking into account. Yet the waste of time by forcing workers to commute to work is increasing every year when it doesn’t need to be.

Some 20 years ago, when people were talking about remote work, it felt challenging from a cultural standpoint. We were all used to working face-to-face. It was challenging from a technological standpoint. We didn’t have broadband, secure application environments such as Citrix, and video conferencing. The tools were not in the cloud. A lot of things made it challenging to work remotely — but now that cultural barrier is not nearly as big.

We are all more or less digital natives; we all use these tools. Frankly, even when you are two floors away in the same building, how many times do you take the elevator to go down and meet somebody face-to-face versus chat with them or do a video conference with them?

At this stage, whether you are two floors away or 200 miles away makes almost no difference whatsoever. Where it does make a difference is forcing people to come to work every single day, when that adds a huge amount of constraint in their lives and it’s fundamentally not productive for the economy.

Minahan: Building on what Stephane said, the study we did found that, in addition to unlocking that untapped pool of talent, 95 percent of those who currently have full-time jobs said they would work from home at least twice a week if given the opportunity. To Stephane’s point, for that group alone the time saved adds up to 105 hours of newly free time per year that they no longer spend commuting and doing unproductive things. Most of them said that they would put more hours into work because they didn’t have to deal with all the hassle of getting there.

Flexible work provides creativity 

Gardner: What about the quality of the work? It seems to me that creative work happens in its own ways, even in a state of leisure. I have to tell you some of the best creative thoughts I have occur when I’m in the shower. I don’t know why. So maybe creativity isn’t locked into a 9-to-5 definition.

Is there something in what we’re talking about that caters to the way the human brain works? As we get into the age of robotic process automation (RPA), should we look more to the way that people are intrinsically creative and free that up?

Kasriel: Yes, the World Economic Forum has called attention to such changes in our evolution, the idea that progressively machines are going to be taking over the parts of our jobs that they can do better than we can. This frees us to be the best of ourselves, to be humans. The repetitive, non-cognitive work being done in a lot of offices is progressively going to be automated through RPA and artificial intelligence (AI). That allows us to spend more time on the creative work. The nature of creative work is such that you can’t order it on-demand, you can’t say, “Be creative in the next five minutes.”

It comes when it comes. It’s the inspiration that comes. So putting in artificial boundaries, saying, “You will be creative from 9-to-5, and you will only do this in the office environment,” is unlikely to be successful. Frankly, if you look at workplace management, you see companies increasingly trying to design work environments that are a mix between areas of the office where you can be very productive — by just doing the things that you need to do — and places where you can be creative and think.

And that’s just a band-aid solution. The real solution is to let people work from anywhere and let them figure out the time at which they are the most creative and productive. Hold people accountable for an outcome, as opposed to holding them accountable for the number of fixed-time hours they are giving to the firm. It is, after all, very weakly correlated to the amount of output, of what they actually generate for the company.

Minahan: I fully agree. If you look at the overall productivity and the GDP, productivity advanced consistently with each new massive innovation right up until recently. The advent of mobile devices, mobile apps, and all of the distractions from communications and chat channels that we have at work have reached a crescendo.

On any given day, a typical employee spends nearly 65 percent of their time on such busy work: responding to Slack messages and being distracted by application alerts about tasks that may not be pertinent to their job. They spend another 20 percent of their time just searching for information. By some estimates, that leaves employees with less than two hours a day for the meaningful and rewarding work that they were hired to do.
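As a rough sanity check of that time split (assuming an eight-hour workday, a figure not stated here), the arithmetic is easy to verify:

```python
# Rough sanity check of the time-split claim above.
# Assumption (not from the transcript): an eight-hour workday.
workday_hours = 8

busywork_share = 0.65   # chat messages, app alerts, other busy work
searching_share = 0.20  # hunting for information

remaining_share = 1 - busywork_share - searching_share  # 0.15
meaningful_hours = workday_hours * remaining_share      # about 1.2 hours

print(f"Time left for meaningful work: about {meaningful_hours:.1f} hours per day")
# About 1.2 hours per day, consistent with the "less than two hours" estimate.
```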

If we can free them up from those distractions and give them an environment to work where and how they want, one of the chief benefits is the capability to drive greater innovation and creativity than they can in an interruptive office environment.

Gardner: We have been talking in general terms. Do we have any concrete examples, use cases perhaps, that illustrate what we have been driving at? Why is it good for business and also for workers?

Blended workforce wins 

Kasriel: If you look at tech companies created in the last 15 to 20 years, increasingly you see what people call remote-first companies, where they try to hire people outside of their main headquarters first and only put people in the office if they happen to live nearby. And that leads to a blended workforce, a mix between full-time employees and freelancers.

The most visible companies started in open-source software development. Look at Mozilla, the non-profit behind Firefox; the Wikimedia Foundation, the non-profit behind Wikipedia; Automattic, the for-profit open-source company that builds WordPress; or GitLab. Upwork itself is mostly distributed, with 2,000 people working in 800 different cities. InVision would be another example.

These are very well-known tech companies that build products used by hundreds of millions of people; WordPress alone powers a large share of the web. These companies tend to have well over 100,000 workers between full-time employees and freelancers. They either have no office, or most of their people are not working in an office.

The companies that are a little bit more challenging are the ones that have grown in a world where everybody was a full-time employee. Everybody was on-site. But progressively they have made a shift to more flexible work models.

Probably the company that I’ve seen to be the most publicly vocal about this is Microsoft. Microsoft started using Upwork a few years ago. At this stage, they have thousands of different freelancers working on thousands of different projects. Partly they do it because they struggle to find great talent in Redmond, Wash., just like everybody else. There is a finite talent pool. But partly they are doing it because it’s the right thing to do.

Increasingly we hear companies say, “We can do well, and we can do good at the same time.” That means helping people who may be disabled, people who have care duties, young parents with children at home, people who are retiring but are not willing to completely step out of the workforce, or people who just happen to live in smaller cities in the U.S. where, increasingly, even if you have the skills, there are no local jobs.

And they have spoken about this in both terms: it’s the right thing for their shareholders and their business, but it’s also helping society be more fair and distributed in a way that benefits workers outside of the big tech hubs of San Francisco, Seattle, Boston, New York, and Austin.

Gardner: Tim, any examples that demonstrate why a future workspace model helps encourage this flexible work and why it’s good for both the employees and employers?

May the workforce be with you

Minahan: Stephane did a great job covering the more modern companies built from the ground up on flexible work models. He brought up an interesting point. It’s much more challenging for traditional or established companies to transition to these models. One that stands out and is relevant is eBay.

eBay, as we all know, is one of the largest digital marketplaces in the world. Like many others, they built call centers in major cities and hired a whole bunch of folks to answer support calls from buyers and sellers as they conducted commerce in the marketplace. However, their competition was setting up call centers right down the street, so they were in constant churn — hiring, training, losing people, and needing to rehire. Finally they said, “This can’t go on. We have to figure out a different model.”

They embraced technology and consequently a more flexible work model. They went where the talent is: The stay-at-home parent in Montana, the retiree in Florida, the gig worker in New York or Boston. They armed them with a digital workspace that gave them the information, tools, and knowledge base they needed to answer questions from customers but in far more flexible work models. They could work three hours a day or maybe one day a week. eBay was able to Uberfy the workforce.

They started a year-and-a-half ago, and now they are close to having 4,000 of these call center workers as a remote workforce, and it’s all transparent to the rest of us. They are delivering a higher level of service to customers by going to where the talent is, and it’s completely transparent. We are unaware that they are not sitting in a call center somewhere. They are actually sitting in remote offices in all corners of the country.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Citrix.

As hybrid IT complexity ramps up, operators look to data-driven automation tools

The next edition of the BriefingsDirect Voice of the Innovator podcast series examines the role and impact of automation on IT management strategies.

Growing complexity from the many moving parts in today’s IT deployments is forcing managers to seek new productivity tools. Moving away from manual processes to bring higher levels of automation to data center infrastructure has long been a priority for IT operators, but now new tools and methods are making composability and automation better options than ever.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to help us learn more about the advancing role and impact from IT automation is Frances Guida, Manager of HPE OneView Automation and Ecosystem Product Management at Hewlett Packard Enterprise (HPE). The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What are the top drivers, Frances, for businesses seeking higher levels of automation and simplicity in their IT infrastructure?

Guida: It relates to what’s happening at a business level. It’s a truism that business today is moving faster than it ever has before. That puts pressure on all parts of a business environment — and that includes IT. And so IT needs to deliver things more quickly than they used to. They can’t just use the old techniques; they need to move to much more automated approaches. And that means they need to take work out of their operational environments.

Gardner: What’s driving the complexity that makes such automation beneficial?

IT means business 

Guida: It again starts from the business. IT used to be a support function, to support business processes. So, it could go along on its own time scale. There wasn’t much that the business could or would do about it.

Frances Guida

Guida

In 2020, technology is part of the fabric of most of the products, services, and experiences that businesses offer. So when technology is part of an offering, all of a sudden technology is how a business is differentiated. And when technology differentiates the business, business leaders are not going to take, “Oh, we will get to it in 18 months,” as an answer. If that’s the answer they get from the IT department, they are going to go look for other ways of getting things done.

And with the advances of public cloud technology, there are other ways of getting things done that don’t come from an internal IT department. So IT organizations need to be able to keep up with the pace of business change, because businesses aren’t going to accept their historical time scale.

Gardner: Does accelerating IT via automation require an ecosystem of partners, or is there one tool that rules them all?

Guida: This is not a one-size-fits-all world. I talk to customers in our HPE Executive Briefing Centers regularly. The first thing I ask them is, “Tell me about the toolsets you have in your environment.” I often ask them about what kinds of automation toolsets they have. Do you have Terraform or Ansible or Chef or Puppet or vRealize Orchestrator or something else? It’s not uncommon for the answer to be, “Yes.” They have all of them.

So even within a customer’s environment, they don’t have a single tool. We need to work with all the toolsets that the customers have in their IT environments.

Gardner: It almost sounds like you are trying to automate the automation. Is that fair?

Guida: We definitely are trying to take some of the hard work that has historically gone into automation and make it much simpler.

Complexity is Growing in the Data Center: What’s the Solution?

Gardner: IT operations complexity is probably only going to increase, because we are now talking about pushing compute operations — and even micro data centers — out to the edge in places like factories, vehicles, and medical environments, for example. Should we brace ourselves now for a continuing ramp-up of complexity and diversity when it comes to IT operations?

Guida: Oh, absolutely. You can’t have a single technology that’s going to answer everything. Is the end user going to interface through a short message service (SMS) or are they going to use a smartphone? Are they going to be on a browser? Is it an endpoint that interacts with a system that’s completely independent of any user base technology? All of this means that IT has to be multifaceted.

Even if we look at data center technologies, for the last 15 years virtualization has been pretty much the standard way that IT deploys new systems. Now, increasingly, organizations are looking at a set of applications that don’t run in virtual machines (VMs), but rather are container-based. That brings a whole other set of complexity they have to think about in their environments.

Complexity is like entropy; it just keeps growing. When we started thinking about bringing a lot more flexibility to on-premises data center environments, we looked holistically at the problem. I don’t think the problem can only be addressed through better automation; in fact, it has to be addressed at a deeper level.

And so with our composable infrastructure strategies, we thought architecturally about how we could bring the same kind of flexibility you have in a public cloud environment to on-premises data centers. We realized we needed a way to liberate IT beyond the boundaries of physical infrastructure by being able to group that physical infrastructure into pools of resources that could be much more fluid and where the physical aspects could be changed.

Now, there is some hardware infrastructure technology in that, but a lot of that magic is done through software, using software to configure things that used to be done in a physical manner.

So we defined a layer of software-defined intelligence that captures all of the things you need to know about configuring physical hardware — whether it’s firmware levels, BIOS settings, or connections. We define and calculate all of that in software.

And automation is the icing on that cake. Once you have your infrastructure that can be defined in software, you can program it. That’s where the automation comes in, being able to use everyday automation tools that organizations are already using to automate other parts of their IT environment and apply that to the physical infrastructure without a whole bunch of unnatural acts that were previously required if you wanted to automate physical infrastructure.

Gardner: Are we talking about a fundamental shift in how infrastructure should be conceived or thought of here?

Consolidate complexity via automation 

Guida: There has been a saying in the IT industry for a while about moving from pets to cattle. Now we even talk about herds. You can brute-force that transition by trying to automate against all of the low-level application programming interfaces (APIs) in physical infrastructure today. Most infrastructure today is programmable, with rare exceptions.

But then you as the organization are doing the automation, and you must internalize that and make your automation account for all of the logic. For example, if you then make a change in the storage configuration, what does that mean for the way the network needs to be configured? What does that mean for firmware settings? You would have to maintain all of that in your own automation logic.

How to Simplify and Automate Your Data Center

There are some organizations in the world that have the scale of automation engineering to be able to do that. But the vast majority of enterprises don’t have that capability. And so what we do with composable infrastructure, HPE OneView, and our partner ecosystem is encapsulate all of that in our software-defined intelligence. All you have to do is take that configuration file and apply it to a set of physical hardware. It brings things that used to be extremely complex down to something a standard IT organization is capable of doing today.
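To make that concrete, here is a minimal, hypothetical Python sketch of what “take that configuration file and apply it to a set of physical hardware” can look like against the HPE OneView REST API. The appliance address, credentials, endpoint paths, and hardware URI shown here are illustrative assumptions, not an exact recipe from HPE.

```python
import requests

ONEVIEW = "https://oneview.example.com"  # hypothetical appliance address

# 1. Authenticate (endpoint path and header names assumed for illustration).
session = requests.post(
    f"{ONEVIEW}/rest/login-sessions",
    json={"userName": "administrator", "password": "secret"},
    verify=False,
).json()
headers = {"Auth": session["sessionID"], "X-Api-Version": "1200"}

# 2. Look up the software-defined template that captures the firmware, BIOS,
#    and connection settings for a given server personality.
template = requests.get(
    f"{ONEVIEW}/rest/server-profile-templates",
    params={"filter": "name='ContainerHost'"},
    headers=headers,
    verify=False,
).json()["members"][0]

# 3. Apply that template to a specific piece of physical hardware by creating
#    a server profile from it (the hardware URI is a placeholder).
profile = {
    "name": "container-host-01",
    "serverProfileTemplateUri": template["uri"],
    "serverHardwareUri": "/rest/server-hardware/example-enclosure-bay-3",
}
response = requests.post(
    f"{ONEVIEW}/rest/server-profiles",
    json=profile,
    headers=headers,
    verify=False,
)
print("Profile request accepted:", response.status_code)
```

The Ansible, Terraform, Chef, and Puppet integrations discussed here wrap essentially the same calls, so teams can stay in the automation tools they already use.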

Gardner: And not only is that automation going to appeal to the enterprise IT organizations, it’s also going to appeal to the ecosystem of partners. They now have the means to use the composable infrastructure to create new value-added services.

How does HPE’s composability benefit both the end-user organizations and the development of the partner ecosystem?

Guida: When I began the composable ecosystem program, we actually had two or three partners. This was about four years ago. We have now grown to more than 30 different integrations in place today, with many more partners that we are talking to. And those range from the big, everyday names like VMware and Microsoft to smaller companies that may be present in only a particular geography.

But what gets them excited is that, all of a sudden, they are able to bring better value to their customers. They are able to deliver, for example, an integrated monitoring system. Or maybe they are already doing application monitoring, and all of a sudden they can add infrastructure monitoring. Or they may already be doing facilities management, managing the power and cooling, and all of a sudden they can pull together data that used to be hard to put in one place: data on the thermals, on what’s really going on at the infrastructure level. It’s definitely very exciting for them.

Gardner: What jumps out at you as a good example of taking advantage of what composable infrastructure can do?

Guida: The most frequent conversations I have with customers today begin with basic automation. They have many tools in their environment; I mentioned many of them earlier: Ansible, Terraform, Chef, Puppet, or even just PowerShell or Python; or in the VMware environment, vRealize Orchestrator.

They have these tools and really appreciate what we have been able to do with publishing these integrations on GitHub, for example, of having a community, and having direct support back to our engineers who are doing this work. They are able to pretty straightforwardly add that into their tools environment.

How a Software-Defined Data Center Lets the Smartest People Work for You

And we at HPE have also done some of the work ourselves in the open source tools projects. Pretty much every automation tool that’s out there in mainstream use by IT — we can handle it. That’s where a lot of the conversations we have with customers begin.

If they don’t begin there, they start back in basic IT operations. One of the ways people take advantage of the automation in HPE OneView — but they don’t realize they are taking advantage of automation — is in how OneView helps them integrate their physical infrastructure into a VMware vCenter or a Microsoft System Center environment.

Visualize everything, automatically

For example, in a VMware vCenter environment, an administrator can use our plug-in and it automatically sucks in all of the data from their physical infrastructure that’s relevant to their VMware environment. They can see things in their vCenter environment that they otherwise couldn’t see.

They can see everything from a VM that’s sitting on the VM host that’s connected through the host bus adapters (HBAs) out to the storage array. There is the logical volume. And they can very easily visualize the entire logical as well as physical environment. That’s automation, but you are not necessarily perceiving it as automation. You are perceiving it as simply making an IT operations environment a lot easier to use.
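Purely as an illustration of the logical-to-physical chain being surfaced here, this hypothetical Python sketch models how such inventory relationships might be represented; none of these class or field names come from the actual HPE OneView or vCenter plug-in.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical model of the chain the plug-in lets an administrator trace:
# VM -> host -> host bus adapter (HBA) -> storage array volume.

@dataclass
class LogicalVolume:
    name: str
    array: str            # storage array hosting the volume

@dataclass
class HostBusAdapter:
    wwpn: str             # world wide port name of the HBA
    volumes: List[LogicalVolume] = field(default_factory=list)

@dataclass
class VMHost:
    name: str
    hbas: List[HostBusAdapter] = field(default_factory=list)

@dataclass
class VirtualMachine:
    name: str
    host: VMHost

def storage_path(vm: VirtualMachine) -> List[str]:
    """Walk from a VM down to the physical volumes it ultimately lands on."""
    return [
        f"{vm.name} -> {vm.host.name} -> {hba.wwpn} -> {vol.array}:{vol.name}"
        for hba in vm.host.hbas
        for vol in hba.volumes
    ]

# Example: one VM whose host reaches a single array volume over one HBA.
vol = LogicalVolume(name="datastore-01", array="array-A")
hba = HostBusAdapter(wwpn="10:00:00:00:c9:aa:bb:cc", volumes=[vol])
host = VMHost(name="esxi-prod-07", hbas=[hba])
print(storage_path(VirtualMachine(name="billing-db", host=host)))
```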

For that level of IT operations integration, VMware and Microsoft environments are the poster children. But for other tools, like Micro Focus and some of the capacity planning tools, and event management tools like ServiceNow – those are another big use case category.

The automation benefits – instead of just going down into the IT operations – can also go up to allow more cloud management. Another way IT organizations take advantage of the HPE automation ecosystem means, “Okay, it’s great that you can automate a piece of physical infrastructure, but what I really need to do — and what I really care about — is automating a service. I want to be able to provision my SQL database server that’s in the cloud.”

That not only affects infrastructure pieces, it touches a bunch of application pieces, too. Organizations want it all done through a self-service portal. So we have a number of partners who enable that.

Morpheus comes to mind. We have quite a lot of engagements today with customers who are looking at Morpheus as a cloud management platform and taking advantage of how they can not only provision the logical aspects of their cloud, but also the physical ones through all of the integrations that we have done.

How to Simplify, Automate, and Develop Faster

Gardner: How does HPE and the partner ecosystem automate the automation, given the complexity that comes with the newer hybrid deployment models? Is that what HPE OneView is designed to help do these days?

Automatic, systematic, cost-saving habit 

Guida: I want to talk about a customer who is an online retailer. The retail world is obviously highly dynamic, and technology is at the very forefront of the product that they deliver; in fact, technology is the product that they deliver.

They have a very creative marketing department that is always looking for new ways to connect to their customers. That marketing department has access to a set of application developers who are developing new widgets, new ways of connecting with customers. Some of those developers like to develop in VMs, which is more old school; some of the developers are more new school and they prefer container-based environments.

The challenge the IT department has is that from one week to the next they don’t fully know how much of their capacity needs to be dedicated to a VM versus a container environment. It all depends on which promotions or programs the business decides it wants to run at any time.

So the IT organization needed a way to quickly switch an individual VM host server to be reconfigured as a bare-metal container host. They didn’t want to pay a VM tax on their container host. They identified that if they were going to do that manually, there were dozens and dozens — I think they had 36 or 37 — steps that they needed to do. And they could not figure out a way to automate individually each one of those 37 steps.

When we brought them an HPE Synergy infrastructure — managed by OneView, automated with Ansible — they instantly saw how that was going to help solve their problems. They were going to be able to change their environment from one personality to another personality in a completely automated fashion. And now they are able to do that changeover in just 30 minutes instead of needing dozens of manual steps. They have zero manual steps; everything is fully automated.
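A hedged sketch of what that scripted changeover might look like end to end follows; every helper function below is a hypothetical placeholder standing in for a OneView profile operation, an OS provisioning step, or an Ansible playbook run, not the retailer’s actual automation.

```python
# Hypothetical orchestration of the VM-host -> bare-metal container-host
# switch described above. Each helper stands in for an API or playbook call;
# none of these function names come from HPE's actual tooling.

def drain_vm_host(server: str) -> None:
    """Migrate VMs off the host and put it into maintenance mode."""
    print(f"[1/4] Draining virtual machines from {server}")

def reapply_profile(server: str, template: str) -> None:
    """Apply a different server profile template to the same hardware."""
    print(f"[2/4] Applying '{template}' profile to {server}")

def provision_os(server: str, image: str) -> None:
    """Lay down a bare-metal OS image suited to running containers."""
    print(f"[3/4] Provisioning {image} on {server}")

def join_container_cluster(server: str, cluster: str) -> None:
    """Run the configuration playbook that enrolls the node in the cluster."""
    print(f"[4/4] Joining {server} to cluster {cluster}")

def convert_to_container_host(server: str) -> None:
    """The whole changeover as one repeatable, zero-touch sequence."""
    drain_vm_host(server)
    reapply_profile(server, template="BareMetalContainerHost")
    provision_os(server, image="container-os-latest")
    join_container_cluster(server, cluster="retail-promotions")

if __name__ == "__main__":
    convert_to_container_host("synergy-frame1-bay4")
```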

And that enables them to respond to the business requirements. The business needs to be able to run whatever programs and promotions it is that they want to run — and they can’t be constrained by IT. Maybe that gives a picture of how valuable this is to our customers.

Gardner: Yes, it speaks to the business outcomes, which are agility and speed, and at the same time the IT economics are impacted there as well.

Speaking of IT economics and IT automation, we have been talking in terms of process and technology. But businesses are also seeking to simplify and automate the economics of how they acquire and spend on IT, perhaps more on a pay-per-use basis.

Is there alignment between what you are doing in automation and what HPE is doing with HPE GreenLake? Do the economics and automation reinforce one another?

How to Drive Innovation and Automation in Your Data Center

Guida: Oh, absolutely. We bring physical infrastructure flexibility, and HPE GreenLake brings financial flexibility. Those go hand in hand. In fact, the example that I was just speaking about, the online retailer, they are very, very busy during the Christmas shopping season. They are also busy for Valentine’s Day, Mother’s Day, and back-to-school shopping. But they also have times where they are much less busy.

They have HPE GreenLake integrated into their environment so in addition to having the physical flexibility in their environment, they are financially aligning through a flexible capacity program and paying for technology — in the way that their business model works. So, these things go hand-in-hand.

As I said earlier, I talk to a lot of HPE customers because I am based in the San Francisco Bay Area, where we have our corporate headquarters, and I am in our Executive Briefing Center two to three times a week. There are almost no conversations I am part of that don’t lead eventually to the financial aspects, as well as the technical aspects, of how all the technology works.

Gardner: Because we have opened IT automation up to the programmatic level, a new breed of innovation can be further brought to bear. Once people get their hands on these tools and start to automate, what have you seen on the innovation side? What have people started doing with this that you maybe didn’t even think they would do when you designed the products?

Single infrastructure signals innovation

Guida: Well, I don’t know that we didn’t think about this, but one of the things we have been able to do is make something that the IT industry has been talking about for a while in an on-premises IT environment.

There are lots of organizations that have IT capacity that is only used some of the time. A classic example is an engineering organization that provides a virtual desktop infrastructure (VDI) capability for engineers. These engineers need a bunch of analytics applications — maybe it’s genomic engineering, seismic engineering, or fluid dynamics in the automotive industry. They have multiple needs. Typically they have been running those on different sets of physical infrastructures.

With our automation, we can enable them to collapse that all into one set of infrastructure, which means they can be much more financially efficient. Because they are more financially efficient on the IT side, they are able to then devote more of their dollars to driving innovation — finding new ways of discovering oil and gas under the ground, new ways of making automobiles much more efficient, or uncovering new secrets within our DNA. By spending less on their IT infrastructure, they are able to spend more on what their core business innovation should be.

Gardner: Frances, I have seen other vendors approach automation with a tradeoff. They say, “Well, if you only use our cloud, it’s automated. If you only use our hypervisor, it’s automated. If you only use our database, it’s automated.”

But HPE has taken a different tack. You have looked at heterogeneity as the norm and the complexity as a result of heterogeneity as what automation needs to focus on. How far ahead is HPE on composability and automation? How differentiated are you from others who have put a tradeoff in place when it comes to solving automation?

Guida: We have had composable infrastructure on the market for three-plus years now. Our HPE Synergy platform, for example, now has a more than $1 billion run rate for HPE. We have 3,600 customers and counting around the world. It’s been a tremendously successful business for us.

I find it interesting that we don’t see a lot of activity out there, of people trying to mimic or imitate what we have done. So I expect composability and automation will remain fundamentally differentiating for us from many of our traditional on-premises infrastructure competitors.

It positions us very well to provide an alternative for organizations who like the flexibility of cloud services but prefer to have them in their on-premises environments. It’s been tremendously differentiating for us. I am not seeing anyone else coming on strong with anything comparable.

Gardner: Let’s take a look to the future. Increasingly, not only are companies looking to become data-driven, but IT organizations are also seeking to become data-driven. As we gather more data and inference, we start to be predictive in optimizing IT operations.

I am, of course, speaking of AIOps. What does that bring to the equation around automation and composability? How will AIOps change this in the coming couple of years?

Automation innovation in sight with AIOps

Guida: That’s a real opportunity for further innovation in the industry. We are at the very early stages about how we take advantage in a symptomatic way of all of the insights that we can derive from knowing what is actually happening within our IT environments and mining those insights. Once we have mined those insights, it creates the possibility for us to take automation to another level.

We have been throwing around terms like self-healing for a couple of decades, but a lot of organizations are not yet ready for something like self-healing infrastructure. There is a lot of complexity within our environments. And when you put that into a broader heterogeneous data center environment, there is even more complexity. So there is some trepidation.

How to Accelerate to a Self-Driving Data Center

Over time, for sure, the industry will get there. We will be forced to get there because we are going to be able to do that in other execution venues like the public cloud. So the industry will get there. The whole notion of what we have done with automation of composable infrastructure is absolutely a great foundation for us as we take our customers toward these next journeys around automation.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

How MSP StoredTech brings comprehensive security services to diverse clients using Bitdefender

The choice of bedrock security technology can make or break managed service providers’ (MSPs’) ability to scale, grow rapidly while remaining efficient, and maintain top quality customer service.

The next edition of BriefingsDirect explores how by simultaneously slashing security-related trouble tickets and management costs by more than 75 percent, Stored Technology Solutions, or StoredTech, grew its business and quality of service at the same time.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Stay with us as we now learn how StoredTech adopted Bitdefender Cloud Security for Managed Service Providers to dramatically improve the security of their end users — and develop enhanced customer loyalty.

Here to discuss the role of the latest Bitdefender security technology for making MSPs more like security services providers is Mark Shaw, President of StoredTech in Raleigh, North Carolina. The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Mark, what trends are driving the need for MSPs like yourself to provide security that enhances the customer experience?

Mark Shaw

Shaw

Shaw: A lot of things are different than they were back in the day. Attacks are very easy to implement. For a dollar, you can buy a malware kit on the Dark Web. Anyone with a desire to create havoc with malware, ransomware, or the like, can do it. It’s no longer a technical scenario, it’s simply a financial one.

At the same time, everyone is now a target. Back in the day, obviously, attackers went after big, high-value targets. People would spend a lot of time, effort, and technical ability to hack large enterprises. But now, there is no business that’s too small.

If you have data and you don’t want to lose it, you’re a target. Of course, the worst part for us is that MSPs are now directly being targeted. So no matter what the size, if you are an MSP, they want access to your clients.

China has entire groups dedicated to hacking only MSPs. So the world landscape has dramatically shifted.

Gardner: And, of course, the end user doesn’t know where the pain point is. They simply want all security all the time — and they want to point the finger at you as the MSP if things aren’t right.

Shaw: Oh, absolutely right; that’s what we get paid to do.

Gardner: So what were your pain points? What from past security providers and vendors created the pain that made you seek a higher level of capability?

Just-right layers of security prevent pain 

Shaw: We see a lot of pain points when it comes to too many layers. We talk about security being a layering process, which is fantastic. You want the Internet provider to be doing their part. You want the firewall to do its part.

When it comes to security, a lot of the time we see way too many security applications from different vendors running on a machine. That really decimates the performance. End users really don’t care about the layers; they do care about security — but they aren’t going to sacrifice performance for it.

We also see firms that spend all their time meeting all the industry and government regulations, and they are still completely exposed. What we tell people is, just because you check a box, that doesn’t mean you are in compliance, and compliance doesn’t mean that you are secure.

For small business owners, we see all these pain points in how they handle their compliance and security needs. And, of course, in our world, we are seeing a lot of pain points because cybersecurity insurance is becoming more prevalent and is paying out on cryptovirus and ransomware attacks. So we are seeing a chicken-and-egg effect, with a recent escalation in malware and ransomware attacks [because of those payments].

Gardner: Tell us about StoredTech. What’s your MSP about?

The one throat to choke 

Shaw: We are both an MSP and a master MSP. We refer to ourselves as the “one throat to choke.” Our job is to provide solutions that have depth of scale. For us, it’s all about being able to scale.

We provide the core managed services that most MSPs provide, but we also provide telco services. We help people select and provide Internet services, and we spend a lot of time working with cameras and access control, which require an entirely different level of security and licensing.

If it’s technology-related, we don’t want customers pointing fingers and saying, “Well, that’s the telephone guys’ problem,” or, “That’s the guy with the cameras and the access control, that’s not us.”

We remove all of that finger-pointing. Our job is to delight our customers by finding ways to say, “Yes,” and to solve all of their technology needs.

Gardner: You have been in business for about 10 years. Do you focus on any particular verticals, size of company, or specialize?

Shaw: We really don’t, unlike the trends in the industry today. We are a product of our scars. When I worked for corporate America, we didn’t know we were getting laid off until we read it in the newspaper. So, I don’t want to have any one client. I don’t want to have anybody surprising us.

We have the perfect bell-curve distribution. We have clients who are literally one guy with a PC in his basement running an accounting firm, all the way up to global clients with 30,000 endpoints and employees.

We have diverse geographies as well as technical verticals among our clients — everything from healthcare to manufacturing, retail, other technology companies; you name it. We resell them as well. For us, we are not siloed. We run the gamut. Everybody needs technology.

Gardner: True. So, one throat to choke is your value, and you are able to extend that and scale up to 30,000 employees or scale down to a single seat. You must have been very choosy about improving your security posture. Tell us about your security journey.

Shaw: Our history goes way back. We started with the old GFI LanGuard for Macs product, which was a remote monitoring and management (RMM) product tied to VIPRE. SolarWinds acquired that product and we got our first taste of the Bitdefender engine. We loved what Bitdefender did. When Kaseya was courting us to work with them, we told them, “Guys, we need to bring Bitdefender with us.”

At that point in time, we had no idea that Bitdefender also had an entire GravityZone platform with an MSP focus. So when we were able to get onto the Bitdefender GravityZone platform, it was just amazing for us.

We actually used Bitdefender as a sales tool against other MSPs and their security platforms by saying, “Hey, listen. If we come in, we are going to put in a single agent that’s the security software, right? Your antivirus, your content filtering, your malware detection and prevention – and it’s going to be lighter and faster. We are going to speed up your computers by putting this software on.”

We went from VIPRE software to the Bitdefender engine, which really wasn’t the full Bitdefender, to then the full Bitdefender GravityZone when we finally moved with the Kaseya product. Bitdefender lit up our world. We were able to do deployments faster and quicker. We really just started to scale at that point.

Gardner: And just as you want to be one throat to choke to your customers, I am pretty sure that Bitdefender wants to be one throat to choke for you. How does Bitdefender help you protect yourselves as an MSP?

A single-point solution that’s scalable 

Shaw: For us, it’s really about being able to scale quickly and easily. It’s the ability to have customizable solutions whether we are deploying it on a Mac, SQL Server, or in a Microsoft Azure instance in the cloud, we need scalability. But at the same time, we need customizing, the ability to change and modify exactly what we want out there.

The Bitdefender platform gives us the capability to either ramp up or scale down the solution based on which applications are running and what the end user expects. It’s the best of both worlds. We have this 800-pound gorilla, one single point of security, and at the same time we can get so granular with it that we can solve almost any client’s needs without having to retool and without layering on multiple products.

In the past, we used other antivirus products and layered them on with content filtering products. We just had layer after layer after layer, which for our engineers meant that if you wanted to see what was wrong, you had to log in to one of the four consoles. Today, you log in to this one console and you can see the status of everything.

By making it simple, the old KISS method, we were able to dramatically scale and ramp up — whether that’s 30,000 endpoints or one. We have a template for almost anything.

We have a great hashtag called automate-or-die. The concept is to automate so we can give every customer exactly what they need without having to retool the environment or add layer upon layer of products, all of which have an impact on the end user.

Gardner: You are a sophisticated enough organization that you want automation, but you also want customization. That’s often a difficult balance. What is it about Bitdefender Cloud Security for MSPs that gets that balance?

Shaw: Being able to see everything in one dashboard — to have everything laid out in front of you – and be able to apply different templates and configurations to different types of machines based on a core functionality. That allows us to provide customization without large overhead from manual configuration every single time we have to do it.

To be able to add that value — but not add those additional man hours — really brings it all together. Having that single platform, which we never had before in the old days, gives us that. We can see it, deploy it, understand it, and report on it. Again, it’s exactly what we tell our customers: come to us for one throat to choke.

And we basically demanded that Bitdefender have that same throat to choke for us. We want it all easy, customizable — we want everything. We want the Holy Grail, the golden goose — but we don’t want to put any effort into it.

Gardner: Sounds like the right mix to me. How well has Bitdefender been doing that for you? What are the results? Do you have some metrics to measure this?

The measure of success 

Shaw: We do have some metrics, as you mentioned. We track what we have to do, how much time we spend on support, and how quickly we can implement and deploy.

We have seen malware infections reduced by about 80 percent. We took trouble tickets related to our previous antivirus and security vendors from about 50 a week down to about one a week. We slashed administration costs by about 75 percent. Customer satisfaction has never been higher.

In the old days of multiple layers of security, we got calls, “My computer is running slow.” And we would find that an antivirus agent was scanning or a content filtering app was doing some sort of update.

Now we are able to say, “You know what? This is really easy.” We have one Bitdefender agent to deploy. We go out there, we deploy it, and it’s super simple. We just have an easier time now managing that entire security apparatus versus what we used to do.

Gardner: Mark, you mentioned that you support a great variety of sizes of organizations and types of vertical industries. But nowadays there’s a great variety between on-premises, cloud, and a hybrid continuum. It’s difficult for some vendors to support that continuum.

How has Bitdefender risen to that challenge? Are you able to support your clients whether they are on-premises, in the cloud, or both?

No dark cloud on the horizon 

Shaw: If you look at the complexion of most customers nowadays that’s exactly what you see. You see a bunch of people who say, “I am never, ever taking my software off-premises. It’s going to stay right here. I don’t trust the cloud. I am never going to use it.” You have those “never” people.

You have some people who say, “I’d really love to go to the cloud 100 percent, but these four or five applications aren’t supported. So I still need servers, but I’d love to move everything else to the cloud.”

And then, of course, we have some clients who are literally born in the cloud: “I am starting a new company and everything is going to be cloud-enabled. If you can’t put it up in the cloud, if you can’t put it in Azure or something of this sort, don’t even talk to us about it.”

The nice part about that is, it doesn’t really matter. At the end of the day, we all make jokes. The cloud is just somebody else’s hardware. So, if we are responsible for either those virtual desktop infrastructure (VDI) clients, or those servers, or those physical workstations — whatever the case may be — it doesn’t matter. If it’s an Exchange Server, a SQL Server, an app server, or an Active Directory server, we have a template. We can deploy it. It’s quick and painless.

Knowing that Bitdefender GravityZone is completely cloud-centric means that I don’t have to worry about loading anything on-premises. I don’t have to spin up a server to manage it – it just doesn’t matter. At the end of the day, whatever the complexion of the customer is we can absolutely tailor to their needs with a Bitdefender product without a lot of headaches.

Gardner: We have talked about the one throat and the streamlining from a technology and a security perspective. But, as a business, you also want to streamline operations, billing, licensing, and make sure that people aren’t being overcharged or undercharged. Is there anything about the Bitdefender approach, in the cloud, that’s allowed you to have less complexity when it comes to cost management?

Keep costs clean and simple 

Shaw: The nice part about it, at least for us is, we don’t put a client out there without Bitdefender. For us it’s almost a one-to-one. For every RMM agent deployed, it’s one Bitdefender deployed. It’s clean and simple, there is no fuss. If a client is working with us, they are going to be on our solutions and our processes.

Going back to that old KISS method, we want to just keep it simple and easy. When it comes to the back-office billing, if we have an RMM agent on there, it has a Bitdefender agent. Bitdefender has a great set of application programming interfaces (APIs). Not to get super-technical, but we have a developer on staff who can mine those APIs, pull that stuff out, make sure that we’re synchronized to our RMM product, and just go from there.

As long as we have a simple solution and a simple way of billing on the back end, clients don’t mind. Our accounting department really likes it because if there’s an RMM agent on there, there’s a Bitdefender agent, and it’s as simple as that.
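To make that API-driven back-office reconciliation concrete, here is a minimal, hypothetical sketch of the kind of script Shaw describes: checking that every RMM agent has a matching Bitdefender agent before invoicing. The GravityZone URL, the JSON-RPC method name, the response shape, and the RMM data source are illustrative assumptions, not documentation of either vendor’s actual API.

import requests
from requests.auth import HTTPBasicAuth

# Assumed GravityZone Control Center JSON-RPC endpoint and API key; verify against Bitdefender's docs.
GZ_URL = "https://cloud.gravityzone.bitdefender.com/api/v1.0/jsonrpc/network"
API_KEY = "YOUR_API_KEY"  # GravityZone API keys are typically sent as the basic-auth username

def gravityzone_endpoint_count(company_id):
    """Count protected endpoints for one managed company (illustrative call; method name assumed)."""
    payload = {
        "jsonrpc": "2.0",
        "id": "1",
        "method": "getEndpointsList",  # assumed method name
        "params": {"parentId": company_id, "page": 1, "perPage": 1},
    }
    resp = requests.post(GZ_URL, json=payload, auth=HTTPBasicAuth(API_KEY, ""))
    resp.raise_for_status()
    return resp.json()["result"]["total"]  # assumed response shape

def rmm_agent_count(client):
    """Placeholder for the RMM side; in practice this would query the RMM API or a billing export."""
    return client["agent_count"]

def reconcile(clients):
    """Flag any client whose RMM agent count and Bitdefender endpoint count diverge."""
    for client in clients:
        rmm = rmm_agent_count(client)
        av = gravityzone_endpoint_count(client["gz_company_id"])
        if rmm != av:
            print(f"{client['name']}: RMM={rmm}, Bitdefender={av} -- review before invoicing")

A nightly run of something like this keeps the one-agent-per-RMM-agent billing rule honest without manual spot checks.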

Gardner: Mark, what comes next? Are there other layers of security you are looking at? Maybe full-disk encryption, or looking more at virtualization benefits? How can Bitdefender better support you?

Follow Bitdefender into the future 

Shaw: Bitdefender’s GravityZone Full Disk Encryption is fantastic; it’s exactly what we need. I trust Bitdefender to have our best interests in mind. Bitdefender is a partner of ours. We really mean that; they are not just a vendor.

So when they talk to us about things that they are seeing, we want to make sure that we spend a lot of time and understand that. From our standpoint, encryption, absolutely. Right now we spend a lot of time with clients who have data that is not necessarily personally identifiable information (PII), but it is data that is subject to patents, or is their secret sauce — and it can’t get out. So we use Bitdefender to do a lot of things like locking down universal serial bus (USB) drives and things like that.

I know there is a lot of talk about machine learning (ML) and artificial intelligence (AI) out there. To me they are cool buzzwords, but I don’t know if they are there yet. If they get there, I believe and trust that Bitdefender is going to say, “We are there. We believe it’s the right thing to do.”

I have seen a lot of next-generation antivirus software that says, “We use only AI or we use ML only.” And what I see is they miss apparent threats. They slow the machines into a crawl, and they make the end-user experience miserable.

As Bitdefender looks down these roads of ML and AI, just make sure to be cutting edge here, but don’t be bleeding edge because nobody wants to hemorrhage cash, time, and everything else.

We are vested in the Bitdefender experience. The guys and girls at Bitdefender know what’s coming. They see it all the time. We are happy to play along with that. Typically, by the time it hits an end user or a customer in the enterprise space, it’s old hat. I think the real cutting-edge, bleeding-edge stuff happens well before an MSP tends to play in that space.

But there’s a lot of stuff coming out, a lot of security risk, on mobile devices, the Internet of everything, and televisions. Every day now you see how those are being hacked — whether it’s a microphone, the camera, or whatever. There is a lot of opportunity and a lot of growth out there, and I am sure Bitdefender is on top of it.

Gardner: Before we close out, do you have any advice for organizations on how to manage security better as a culture, as an ongoing, never-ending journey? You mentioned that you peel back the onion, and you always hit another layer. There is something more you have to solve the next day. This is a nonstop affair.

What advice do you have for people so that they don’t lose track of that?

Listen and learn 

Shaw: From an MSP’s standpoint, whether you’re an engineer, in sales, or an account manager — it’s about constant learning. Understand, listen to your clients. Your clients are going to tell you what they need. They are going to tell you what they are concerned about. They are going to tell you their feelings.

If you listen to your clients and you are in tune with them, they are going to help set the direction for your company. They are going to guide you to what’s most important to them, and then that should parlay into what’s most important for you.

In our world, we went from just data storage and MSP services into telco and telephones, structured cabling, cameras, and access control, because our clients asked us to. They kept saying these are pain points, can you help us?

And, for me, that’s the recipe for success. Listen to your clients, understand what they want, especially when it comes to security. We always tell everybody, eat your own dog food. If you are selling a security solution that you are putting out there for your clients, make sure your employees have it on all of their machines. Make sure your employees are using it at home. Get the same experience as your customers. If your clients are going through cyber security training, put your staff through cyber security training, too. Everyone, from the CEO right down to the person managing the warehouse, should go through the same training.

If you put yourself in your customers’ shoes and you listen to them, no matter what it is, security, phones, computers, MSP services, whatever it is, you are going to be in tune with your customers. You’re going to have success.

We just try to find a way to say, “Yes,” and delight our customers. At the end of the day if you are doing that, if you are listening to their needs, that’s all that matters.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Bitdefender.


Healthcare providers define new ways to elevate and improve the digital patient experience

The next BriefingsDirect healthcare insights discussion explores ways to improve the total patient experience — including financial considerations — using digital technology.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To learn more about ways that healthcare providers are seeking to leverage such concepts as customer relationship management (CRM) to improve their services we are joined by Laura Semlies, Vice President of Digital Patient Experience, at Northwell Health in metro New York; Julie Gerdeman, CEO at HealthPay24 in Mechanicsburg, Penn., and Jennifer Erler, Cash Manager in the Treasury Department at Fairview Health Services in Minneapolis. The panel is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Laura, digital patient experiences have come a long way, but we still have a long way to go. It’s not just technology, though. What are the major components needed for improved digital patient experience?

Laura Semlies

Semlies

Semlies: Digital, at the end of the day, is all about knowing who our patients are, understanding what they find valuable, and how they are going to best use tools and assets. For us the primary thing is to figure out where the points of friction are and how digital then has the capability to help solve that.

If you continuously gain knowledge and understanding of where you have an opportunity to provide value and deliberately attack each one of those functions and experiences, that’s how we are going to get the best value out of digital over time.

So for us that was about knowing the patient in every moment of interaction, and how to give them better tools to access our health system — from an appointments perspective, to driving down redundant data collection, to giving them the ability to pay their bills online and to not be surprised by the bill and the amount when it arrives. Those are the things that we focused on, because they were the highest points of friction and value as articulated by our patients.

Where we go next is up to the patients. Frankly, the providers who are struggling with the technology between them and their patients [also struggle] in the relationship itself.

Partner with IT to provide best care

Gardner: Jennie, the financial aspects of a patient’s experience are very important. We have separate systems for financial and experience. Should we increasingly be talking about both the financial and the overall care experience?

Jennie Erler

Erler

Erler: We should. Healthcare organizations have an opportunity to internally partner with IT. IT used to be an afterthought, but it’s coming to the forefront. IT resources are a huge need for us in healthcare to drive that total patient experience.

As Laura said, we have a lot of redundant data. How do we partner with IT in the best way possible where it benefits our customers’ experience? And how do they want that delivered? Looking at the industry today, I’m seeing Amazon and Walmart getting into the healthcare field.

As healthcare organizations, perhaps we didn’t invest heavily in IT, but I think we are trying to catch up now. We need to invest in the relationship with IT — and all the other operational partners — to deliver to the patients in the best way possible.

Gardner: Julie, doesn’t making technology better for the financial aspects of the patient experience also set the stage for creating an environment and the means to accomplish a total digital patient experience?

Julie Gerdeman

Gerdeman

Gerdeman: It does, Dana. We see the patient at the center of all those decisions. So put the patient at the center, and then engage with that patient in the way that they want to engage. The role that technology plays is to personalize digital engagement. There is an opportunity in the financial engagement of the patient to communicate; to communicate clearly, simply, so that they know what their obligation is — and that they have options. Technology enables options, it enables communication, and that then elevates their experience. With the patient at the center, with technology enabling it, that takes it to a whole other level.

Learn to listen; listen to learn 

Semlies: At the end of the day, technology is about giving us the tools to be active listeners. Historically it has been one-directional. We have a transaction to perform, and we go and perform that transaction.

In the tomorrow-state, it becomes much more of a dialogue. The more we learn about an individual, and the more we learn about a behavior, the more we learn what was a truly positive experience — or a negative experience. Then we can take those learnings and activate them in the right moments.

We just don’t have the tools yet to actively listen and understand how to get to a higher level of personalization. Most of our investment is now going to figure out what we need to be actively listening. 

It’s always impressive to me when something pops up on my Amazon cart as a recommendation. They know I want something before I even know I want something. What is the analogy in healthcare? It could be a service that I need and want, or a new option that would be attractive to me, that’s inherently personalized. We just don’t have the tools yet to actively listen and understand how to get to that level of personalization.

Most of our investment is now going to figuring out what we need so that we can be actively listening — and actively talking in the right voice to both our providers and our patients to drive better experiences. Those are the things that other industries, in my opinion, have a leg up on us.

We can do the functions but connecting those functions and getting to where we can design and cultivate simple experiences that people love — and drive loyalty and relationships – that’s the magic sauce.

Gardner: It’s important to know what patients want to know, when they want to know it, and maybe even anticipate that across their experience. What’s the friction in the process right now? What prevents the ultimate patient experience, where you can anticipate their needs and do it in a way that makes them feel comfortable? That also might be a benefit to the payers and providers.

Erler: Historically, when we do patient surveys, we ask about the clinical experience. But maybe we are not asking patients the right questions to get to the bottom of it all. Maybe we are not being as intuitive as we could be with all the data we have in our systems.

It’s been a struggle from a treasury perspective. I have been asking, “Can we get a billing-related question on the survey?” That’s part of their experience, too, and it’s part of their wellness. Will they be stressing about what they owe on my bill and what it is going to cost them? We have to take another look at how we serve our patients.

We need to be more in-the-moment instead of after-the-fact. How was your visit and how can we fix it? How can we get that feedback right then and there when they are having that experience?

Gardner: It’s okay to talk about the finances as part of the overall care, isn’t it?

Erler: Right!

Healthy money, healthy mind 

Gerdeman: Yeah, absolutely. We recently conducted a study with more than 150 providers at HealthPay24. What we found is that a negative billing and financial experience can completely negate a fabulous clinical experience from a healthcare provider. It can leave such a bad impression.

To Jennie’s point, by asking questions — not just around the clinical experience, but around the financial experience, and how things can be improved – allows patients to get back to their options and the flexibility is provided in a personalized way, based on who they are and what they need.

Semlies: The other component of this is that we are very organized around transactional interactions with patients, but when it comes to experience — experience is relationship-based. Odds are you don’t have one bill coming to you, you have multiple bills coming to you, and they come to you with multiple formats, with multiple options to pay, with multiple options to help you with those bills. And that is very, very confusing, and that’s in one interaction with the healthcare system.

If you connect that to a patient who is dealing with something more chronic or more serious, they could have literally 20, 30, 40, or 100 bills coming in. That just creates such exasperation and frustration for our patients.

Our path to solving this needs to be far less around single transactions and far broader. It demands that the healthcare systems think differently about how they approach these problems. Patients don’t experience one bill; they experience a series of bills. If we give them different support numbers, different tools, different options for each and every one of those, it will always be confusing – no matter how sophisticated the tool that you use to pay the bill is.

Gardner: So the idea is to make things simpler for the patient. But there is an awful lot of complexity behind the scenes in order to accomplish that. It’s fundamentally about data and sharing data. So let’s address those two issues, data and complexity. How do we overcome those to provide improved simplicity?

Erler: We have all the information we need on a claim that goes to the payer. The payer knows what they are going to pay us. How do we get more married-up with the payer so that we can create that better experience for our customers? How do we partner better with the payers to deliver that information to the patients?

And then how do we start to individualize our relationships with patients so we know how they are going to behave and how they are going to interact?

I don’t know that patients are aware of the relationship that we as providers have with our payers, and how much we struggle just to get paid. The data is in the claim, the payer has the data, so why is it so difficult for us to do what we need with that data on the backend? We need to make that simpler for everybody involved.

Gardner: Julie … people, process, and technology. We have seen analogs to this in other industries. It is a difficult problem. What technologically and culturally do you think needs to happen in order for these improvements to take place?

Connect to reduce complexity 

Gerdeman: It’s under way and it’s happening. The generations and demographics are changing in our society and in our culture. As the younger generations become patients, they bring with them the expectation that data is at their fingertips and that technology enables their lives, wherever they are and whatever they are doing, because they have a whole other view.

Millennials, the younger generations, have a different perspective and different expectations around wellness. There is a big shift happening — not just care for being sick, but actual wellness to prevent illness. The technology needs to engage with that demographic in a new way and with a new understanding.

Laura used the word connection. Connection and interoperability are truly how we address the complexity you referenced. Through that connection, the technology enables IT to be interoperable with all the different health systems hospitals use. That’s how we are going to solve it.

Gardner: We are also seeing in other industries an interesting relationship between self-help, or self-driven processes, and automation. They complement one another, if it’s done properly.

Do you see that as an opportunity in healthcare, where the digital experience gives the patient the opportunity to drive their own questions and answers, to find their own way should they choose? Is automation a way that makes an improved experience possible?

Semlies: Absolutely. Self-help is one of the first things we went live with using HealthPay24 technology. We knew the top 20 questions that patients were calling in about. We had lots of toolkits inside the organization, but we didn’t expose that information. It lived on our website somewhere, but it didn’t live on our website in a direct, easy-to-read, easy-to-understand way. It was written in our voice, not the patient’s voice, and it wasn’t exposed at the moment that a patient was actually making that transaction.

Part of the reason why we have seen such an increase in our online payments is because we posted, quite simply, frequently asked questions (FAQs) around this. Patients don’t want to call and wait 22 minutes to reach an agent if they can serve themselves. It has really helped us a lot, and there is an analogy for that in lots of different places in the healthcare space.

Gardner: You need to have the right tools and capabilities internally to be able to satisfy the patient requirements. But the systems internally don’t always give you that single view of the patient, like what a customer relationship management (CRM) system does in other industries.

Would you like to have a complement to a CRM system in healthcare so that you have all the information that you need to interact properly?

Healthcare CRM as a way of life

Semlies: CRM is something that we didn’t talk about in healthcare previously. I very much believe that CRM is as much about an ethos and a philosophy as it is about a system. I don’t believe it is exclusively a system. I think it’s a way of life, an understanding of what the patient needs. You can have the information at your fingertips in the moment that you need it and be able to share that.

I think we’re evolving. We want to be customer-obsessed, but there is a big difference between wanting to be customer-obsessed and actually being customer-obsessed.

The other challenge is there are some inherent conflicts when you start talking about customer obsession and what other stakeholders inside the health system want to do with their patients, but it can be really hard to deliver. When a patient wants a real-time answer to something and your service level agreement (SLA) is a day, you can’t meet their expectation.

We’re evolving. We want to be customer-obsessed, but there is a big difference between wanting to be cusomter-obsessed and actually being customer-obsessed. It can be really hard to deliver.

And so how do you rethink your scope of service? How do you rethink the way you provide information to individuals? How do you rethink providing self-help opportunities so they can get what they need? Getting to that place starts with understanding the customer and understanding what their expectations are. Then you can start delivering to them in the way the patients expect us to.

Erler: Within our organization, there’s an internal cultural shift to start thinking about a patient as being a customer. There was a feeling of insensitivity around calling a patient a customer or treating this more as consumerism, but that’s what it’s becoming.

As that culture shifts and we think more about consumerism and CRM, it’s going to enhance the patients’ experience. But we have to think about it differently because there’s the risk when you say “consumerism” that it’s all about the money, and that all we care about is money. That’s not what it is. It’s a component, but it’s about the full patient experience. CRM tools are going to be crucial for us in order to get to that next level.

Gardner: Again, Julie, it seems to me that if you can solve this on the financial side of things, you’ve set up the opportunity — a platform approach, and even a culture – to take on the larger digital experience of the patient. How close are we on the financial side when it comes to a single view approach?

Data to predict patient behavior, experience 

Gerdeman: From a financial perspective, we are down that path. We have definitely made strides in achieving technology and digital access for financial. That is just one component of a broader technology ecosystem that will have a bigger return on investment (ROI) for providers. That ROI then impacts revenue cycles, not just the backend financials but all the way to the post-experience for a patient. I believe financial is one component, and technology is an enabler.

One of the things that we’re really passionate about at HealthPay24 is the predictive capability of understanding the patient. What I mean by that is that predictive analytics and the data that you already have — potentially in a CRM, maybe not — can be an indicator of patient behavior and of what could be offered. And that will further drive ROI by using predictive capabilities, delivering better results, and ultimately a much better patient experience.
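As a purely hypothetical illustration of that predictive capability, the sketch below trains a simple model on made-up historical payment data to estimate whether a patient is likely to pay a balance in full; a low score could prompt a proactive payment-plan or discount offer. The fields, sample values, and model choice are assumptions for illustration, not HealthPay24’s actual method.

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical history: balance owed, count of prior on-time payments, and whether
# the patient ultimately paid in full without needing a plan.
history = pd.DataFrame({
    "balance":       [120, 850, 300, 2400, 75, 1600, 450, 95, 3100, 610],
    "prior_on_time": [5,   1,   3,   0,    6,  1,    2,   4,  0,    2],
    "paid_in_full":  [1,   0,   1,   0,    1,  0,    1,   1,  0,    0],
})

X = history[["balance", "prior_on_time"]]
y = history["paid_in_full"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Scale features, then fit a simple logistic regression classifier
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_train, y_train)

# Score a new encounter: a low probability of full payment could trigger a
# tailored payment-plan or discount offer instead of a standard statement.
new_patient = pd.DataFrame({"balance": [1900], "prior_on_time": [1]})
prob_full_payment = model.predict_proba(new_patient)[0][1]
print(f"Estimated probability of full payment: {prob_full_payment:.2f}")

In practice the real signal would come from the provider’s own claims, demographic, and payment-plan history, and the output would feed the personalized offers Gerdeman describes.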

Gardner: On this question of ROI, Laura, how do you at Northwell make the argument of making investments and getting recurring payoffs? How do you create a virtuous adoption cycle benefit?

Semlies: We first started our digital patient experience improvements about 18 months ago, and that was probably late compared to some of our competitors, and certainly compared to other industries.

But part of the reason we did was because we knew that within the next 2 to 3 years, patients were going to bring their expectations from other industries to healthcare. We knew that that was going to happen. In a competitive market like New York, where I live and work, if we didn’t start to evolve and build sophisticated advanced experiences from a digital perspective, we would not have that differentiation and we would lose to competitors who had focused on that.

The hard part for the industry right now is that in healthcare, relationships with a provider and a patient are not enough anymore. We have to focus on the total experience. That was the first driver, but we also have to be cognizant of what we take in from a reimbursement perspective and what we put out in terms of investment and innovation.

The question of ROI is important. Where does the investment come from? It doesn’t come from digital itself. But it does come from the opportunities that digital creates for us. That can come from access tools that create the capacity to invite patients who wouldn’t ordinarily have selected Northwell to become new patients. It can mean making it easy for existing patients who previously didn’t choose Northwell for their follow-up care to do so, so that we retain them.

It means avoiding leakage into the payment space when we get to things like accelerating cash, because it’s easy. You just click a button at the point of getting a bill and pay the bill. Now I have accelerated the cash flow. Maybe we can help patients pay more than one bill at a time, whereas previously they maybe didn’t even understand why there was more than one bill. So we have actually increased collections and decreased bad debt.

Those are the functions that we are going to see ROI in, not digital itself. And so, the conversation is a tricky one because I run the service line of digital and I have to partner with every one of my business associates and leaders to make sure that they are accounting for and helping give credit to the applications and the tools that we’re building so the ROI and the investment can continue. And so, it makes the conversation a little bit harder, but it certainly has to be there.

Gardner: Let’s take a look to the future. When you have set up the digital systems, have that adoption cycle, and can produce ROI appreciation, you are also setting the stage for having a lot more data to look at, to analyze, and to reapply those insights back into those earlier investments and processes.

What does the future hold and what would you like to see things like analytics provide?

Erler: From a treasury perspective, just taking out how cumbersome it is on the back end to handle all these different payment channels [would be an improvement]. If we could marry all of these systems together on the back end and deliver that to the patient to collect one payment and automate that process – then we are going to see an ROI no matter what.

When it comes to the digital experience, we can make something look really great on the front end, but the key is not burdening our resources on the back end and to make that a true digital experience.

Then we can give customer service to our patients and the tools that they need to get to that data right away. Having all that data in one place and being able to do those analytics [are key]. Right now, we have all these different merchant accounts. How do you pull all of that together and look at the span and how much you are collecting and what your revenue is? It’s virtually impossible now to pull all that together in one place on the back end.

Gardner: Julie, data and analytics are driving more of the strategic thinking about how to do IT systems. Where do you see it going? What will be some of the earlier payoffs from doing analytics properly in a healthcare payer-provider environment?

The analytics advantage

Gerdeman: We are just starting to do this with several of our customers, where we are taking data and analyzing the financials. That can be from the discount programs they are currently offering patients, or for the payment plans they’re tying to collection results.  We’re looking at the demographics behind each of those, and how it could be shifted in a way that they are able to collect more while providing a better experience.

Our vision is this: The provider knows the patient so well that in anticipation they are getting the financial offer that best supports their needs. I think we are in such an interesting time right now in healthcare. What happens now when I take my children to a doctor’s appointment is going to look and feel so different when they take their children to an appointment.

We are seeing just the beginnings of the text reminders and the digital engagement: you have an appointment, have you thought about this? They will be walking around and it’s going to be so incorporated into their lives — like the Instagram they are on all the time.

I can’t wait to see when they are taking their children — or not, right? Maybe they are going to be doing things much more virtually and digitally than we are with our own children. To me there will be broad cultural changes from how more data will be enabling us. It is very exciting.

Gardner: Laura, where do you see the digital experience potential going for healthcare?

Automation assists prevention 

Semlies: Automation is key to the functions that we do. We expend energy in people and resources that we could be using automation for. Data is key to helping us pick the right things to automate. The second is anticipation and being able to understand where the patient is and what the next step should be. Being able to predict and personalize is the next step. Data is obviously a critical component that’s going to help you do that.

The last piece is that prevention over time is going to be the name of the game. Healthcare will look very different tomorrow than today. You will see new models pop up that are very much moving the needle in terms of how we collect information about a person, what’s going on inside of their body, and then being able to predict what is going to happen next.

We will be able to take action to avert or prevent things from happening. Our entire model of how we treat wellness is going to shift. What primary care looks like is going to be different, and analytics is at the core of all of that — whether or not you’re talking about it from an artificial intelligence (AI) perspective, it’s all the same thing.

Did you get the data on the right thing to measure? Are you looking at it? Do you have the tools to be able to signal when something is going off? And is that signal in the right voice to the person who needs to consume that? Is it at the right time so that you can actually avert it?

When I use my Fitbit, it understands that my heart rate is up. It’s anticipating that it’s because I’m exercising. It asks me that, and it asks me in a voice that I understand and I can respond to.

But most doctors aren’t getting that same kind of information today because we don’t have a great way of sharing patient-generated health data yet. It just comes in as a lot of noise. So how do we take all of that data?

We need to package it and bring it to the right person at the right moment and in the right voice. Then it can be used to make things preventable. It can actually drive an outcome. That to me is the magic of where we can go. We are not there yet, but I think that’s where we have to go.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HealthPay24.


How agile Enterprise Architecture builds agile business advantage

The next BriefingsDirect digital business trends discussion explores how Enterprise Architecture (EA) defines and supports more agile business methods and outcomes.

We will now learn how Enterprise Architects embrace agile approaches to build competitive advantages for enterprises and governments, as well as to keep those organizations more secure and compliant.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about attaining agility by the latest EA approaches, we are now joined by our panel, Mats Gejnevall, Enterprise Architect at minnovate and Member of The Open Group Agile Architecture Work Group; Sonia Gonzalez, Forum Director of the Architecture Forum at The Open Group; Walters Obenson, Director of the Agile Architecture Framework at The Open Group, and Łukasz Wrześniewski, Enterprise Architect and Agile Transformation Consultant. The panel is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Mats, what trends are driving the choice and motivation behind a career in EA? What are some of the motivations these days that are driving people into this very important role?

Mats Gejnevall

Gejnevall

Gejnevall: Most people are going into EA because they want to have a holistic view of the problem at hand. I do think that EA is a mindset that you can use to apply to any type of issue or problem you have. You look at an issue from many different perspectives and try to understand the fit between the issue or the problem and potential solutions.

That’s human nature to want to do, to look at things from a holistic point of view. It’s such an interesting area to be in, because you can apply it to just about everything. Particularly, a general EA application, where you look at the business, how it works, and how that will affect the IT part of it. So looking at that holistic view I think is the important part — and that’s the motivation.

Gardner: Łukasz, why do you think agility particularly is well addressed by EA?

Wrześniewski: I agree with Mats that EA provides a holistic view to understand how organizations work and can enable agility. As one of the main enablers for agility, EA changes the organization in terms of value. Nowadays agility is the trend, the new way of working and how the organization transforms itself for scaling the enterprise. EA is one of the critical success factors.

EA’s holistic point of view

Gardner: It’s one thing to be a member of this noble profession; it’s another for organizations to use them well.

Mats, how should organizations leverage architects to better sustain an agile approach and environment? It takes a receptive culture. How do organizations need to adjust?

Gejnevall: First of all, we need to distinguish between being agile doing EA and EA supporting an Agile approach. They are two very different things.

Let’s discuss being agile doing EA. To create a true agile EA, the whole organization needs to be agile, it’s not just the IT part. EA needs to be agile and loosely coupled, one of the key concepts, applied both to the business and the IT side.

But becoming agile doing EA means adopting the agile mindset, too. We talked earlier about EA being a mindset. Agile is also a mindset – how you think about things, how to do things in different ways than you have been doing before, and looking at all the different agile practices out there.

For instance, you have sprints, iterations, demos, and these kinds of things. You need to take them into your EA way of working and create an agile way of working. You also need to connect your EA with the solution development in agile ways. So EA and solution development in an agile way needs to connect in the long-term.

Gardner: Mats, it sounds a little bit like the chicken and the egg. Which comes first, the EA or the agile environment? Where do you begin?

Change your mind for enterprise agility

Łukasz Wrześniewski

Wrześniewski

Wrześniewski: Everything is about achieving agility in the enterprise. It’s not about doing the architecture. Doing the architecture in an agile way is one thing, but our main goal is to achieve enterprise agility. EA is just a means to do that. So we can do the architecture in a really agile way. We can do the sprints and iterations, and apply the different agile methodologies to deliver architecture.

But we can also do architecture in a more traditional way: understanding how complex the system is, treating the organization as a system, and transforming it in a proper way, and we can still achieve agility.

That’s a very important factor when it comes to people’s mentality and how the people work in the organization. That’s a very big challenge to an organization, to change the way of working, to change the mindset, and really the Enterprise Architect has to sometimes take the shoes of the psychologist.

Gonzalez: Like Łukasz said, it’s the mindset and to change your mind. At first, organizations need to be agile based on Agile principles, such as delivering value frequently and aligning with the business strategy. And when you do that, you also have to change your EA capability to become more agile, starting with the process and the way that you do EA.

For example, using sprints, like Łukasz said, and also being aware of how EA governance can support agile. As you know, it’s important to deliver value frequently, but it has to be aligned with the organization’s view and strategy, like Mats said at the beginning: to have the overall view of the organization, but also to be aware of and handle risk, and to address compliance. If you go through an agile effort without considering the whole enterprise, you face the risk of different teams doing things in an agile way, but not connected to each other.

It’s a change of mindset that will automatically make you change the way you are doing EA.

A value stream helps express the value that an organization produces for its stakeholders, the outcomes it produces, and the different stages needed to produce that value. It provides a concept that’s less detailed than looking at your individual business processes.

Gejnevall: As Łukasz was saying, I think it’s very much connected to the entire organization becoming agile. It’s a challenge. If you want to do EA for an agile organization, that’s something that probably needs to be done. You need to plan, but also open up the change process so the organization can change in a correct, if slower, way. You can’t just make an organization agile top-down; it has to come both from the top down and from the bottom up.

Gardner: I also hear people asking, “I have heard of Agile development, and now I am hearing about agile enterprise. Is this something different than DevOps, is it more than DevOps?” My impression is that it is much more than DevOps, but maybe we can address that.

Mats, how does DevOps fit into this for those people that are thinking of agile only in terms of development?

Gejnevall: It depends on the normal way of doing Agile development, doing something in short iterations. And then you have some demos at the end, retrospectives, and some planning for the next iteration. And there is some discussion ongoing right now whether or not the demo needs to be something executable, that it’s used quickly in the organization. Or it could be just an architecture piece, a couple of models that are showing some aspect of things. In my view, it doesn’t have to be something executable.

And also when you look at DevOps as well, there are a lot of discussions now about industrial DevOps, where you actually produce not software but other technical stuff in an agile way, with iterations, and you do it incrementally.

Wrześniewski: EA and architecture work as an enabler that allows us to deal with increasing complexity. We have many distributed teams working on one product in DevOps, not all run on Agile, and the complexity of the product and of the environment will keep growing.

Architecture can put it in a proper direction. And I mean intentional architecture that is not like big upfront design, like in traditional waterfall, but intentional architecture that enables the iterations and drives DevOps into the proper direction to reduce complexity — and reduces the possibility of failure in product development.

Gardner: I have also heard that architecture is about shifting from building to assembly, that it becomes repeatable and crosses organizational boundaries. Does anyone have a response to this idea of shifting from building to assembly and why it’s important?

Strong building blocks bring success

Wrześniewski: The use of microservices, containers, and similar technologies will mean components that you can assemble into entire products. These components are replaceable. It’s like the basic elements of EA, when we talk about the architecture and the building blocks, and the good composition of building blocks to deliver products.

Architecture perfectly addresses this problem and shift. We have already had this concept for years in EA.

Gardner: Anyone else on this topic of moving toward assembly, repeatability, and standardization?

Gejnevall: On the IT side, I think that’s quite common. It’s been common for many years in different ways and then new things happen. We talked about service-orientation for quite a while and then we started talking about microservices. These are all types of loosely coupled systems that become much more agile in certain ways.

The interesting thing is to look at the business side of things. How can you make the business side become more agile? We have done a lot of workshops around service-orienting the business, making it capability-based and sustainable. The business consists of a bunch of services, or capabilities, and you can connect these capabilities to value streams and change the value streams in reaction to changes on the business side. That’s much easier than the old way of having strict boundaries between business units and the business services that are developed.

We are now trying to move the thinking from the IT side up into the business side to enable the business to become much more componentized as you put different business services that the organization produces together in new ways and allow the management to come up with new and innovative ideas.
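As a loose illustration of that componentized view of the business, where capabilities are reusable building blocks that value streams assemble and re-assemble, here is a minimal sketch. The capability names, owners, and value streams are hypothetical, invented only to show the structure, not taken from any real engagement.

from dataclasses import dataclass

@dataclass(frozen=True)
class Capability:
    name: str
    owner: str  # the business unit that provides the capability as a service

# A catalog of business capabilities, independent of any one value stream (hypothetical)
catalog = {
    "onboard_customer":  Capability("Onboard customer", "Sales Ops"),
    "verify_identity":   Capability("Verify identity", "Risk"),
    "provision_service": Capability("Provision service", "Operations"),
    "bill_customer":     Capability("Bill customer", "Finance"),
}

# Value streams are ordered compositions of capabilities; changing the business means
# re-sequencing or swapping building blocks rather than redrawing unit boundaries.
value_streams = {
    "acquire_new_customer": ["onboard_customer", "verify_identity", "provision_service", "bill_customer"],
    "upgrade_existing_customer": ["provision_service", "bill_customer"],
}

def describe(stream_name: str) -> None:
    """Print the stages of a value stream and which unit delivers each capability."""
    for key in value_streams[stream_name]:
        cap = catalog[key]
        print(f"{stream_name}: {cap.name} (delivered by {cap.owner})")

describe("acquire_new_customer")

The point of the structure is that management can propose a new value stream by recombining existing capabilities rather than redrawing business-unit boundaries first.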

Gardner: That gets to the heart of what we are trying to accomplish here. But what are some of the common challenges to attaining such agility, when we move both IT and the business to an agile perspective of being able to react and move, but without being brittle, and with processes that can be extended without chaos and complexity?

Wrześniewski: One of the challenges for the business architecture is the proper partitioning of the architecture to distinguish the capabilities across the organizational silos. That means keeping the proper level of detail that is connected to the organizational strategy, and to be able to understand the system. Another big challenge is also to get the proper sponsorship for such activity and so to proceed with the transformation across the organization.

Gejnevall: Change is always hard for a lot of people. And we are trying to change, and to have people live in a more changeable world than they have been in before. That’s going to be very hard. Because people don’t like change, we are going to have to motivate people much more and have them understand why we need to change.

But change is going to be happening quicker and quicker, and if we create a much more agile enterprise, changes will keep rolling in faster and faster all of the time.

Wrześniewski: One of the areas where I ran into a problem when creating an architecture in an agile way was that if you have lots and lots of agile projects ongoing, or agile teams ongoing, you have to have a lot of stakeholders that come and watch these demos and have relevant opinions about them. From my past experiences of doing EA, it’s always hard to get the correct stakeholders’ involvement. And that’s going to be even harder, because now the stakeholders are looking at hundreds of different agile sprints at the same time. Will there be enough stakeholders for all of that?

Gardner: Right, of course you have to address the people, the process, and the technology, so the people, maybe even the most important part nowadays.

Customer journey from finish to start

Sonia Gonzalez

Gonzalez

Gonzalez: With all of those agile digital trends, what is more important now is to have two things in mind: a product-centric view and the customer journey. In order to do that, the different layers of traditional architecture become blurry, because now it’s not about business and IT anymore — it’s about the organization as a whole that needs to be agile.

And in that regard, for example, like Mats and Łukasz have said, the right stakeholder needs to be in for the whole process. So it’s no longer saying, “I am the business, I am giving this request.” And then the IT people need to solve it. It’s not about that anymore. It’s having in mind that the product has services included, has an IT component, and also a business component.

When you are building your customer journey, just start from the very end, the connection with the customer, and move back all the way to the background and platform that are delivering the IT capabilities.

So it’s about having a more cross view of doing architecture, which is important.

Gardner: How does modeling and a standardized approach to modeling help overcome some of these challenges? What is it about what EA that allows for agility to become a common thread across an organization?

Wrześniewski: When it comes to modeling, the models and the different viewpoints are just tools for EA. Enterprise Architects should choose the proper means to define the architecture that enables the change the organization needs.

The common understanding — or maybe the stereotype of the Enterprise Architect — is that they are the guys who draw the lines and boxes and deliver only big documentation that nobody then uses.

The challenge here is to deliver minimum viable products (MVPs) in terms of modeling that the development teams and the business will consider valuable and that can guide them. It’s not about making nice documentation or repositories in the tools; even a simple sketch on paper that somebody is happy with can be good architecture, because architecture is about enabling change in the organization and supporting the business and IT to deliver value, not only about documenting every component. This is my opinion about the role of the architect and the model.

And, of course, we have many different methods and conventions, and the architect should choose the proper one for the organization.

Model collaborations create solutions

Gejnevall: I don’t think that the architects should sit around and model on their own, it should be a collaboration between the solution architect and the solution developers in some ways. It’s a collaborative effort, where you actually work on the architecture together. So you don’t have to hand over a bunch of papers to the solution developers later on, they already know the whole stuff.

So you work in a continuous way of moving the material over to them, and you send it over in pieces. You start with the most important pieces first, or the slices of the architecture that are the most important and most valuable; that’s the whole Minimum Viable Architecture (MVA) approach. You can create lots of small MVAs, and then work on them together with the solution teams. The architects continuously create new MVAs, and the solution teams continuously develop new MVPs. And that will go on for the entire length of a project, if that’s what you are working on, or for a product.

Gonzalez: In terms of modeling, there are at least two ways to see this. One of them is the fact that you need to model your high-level landscape for the enterprise in order to have this strategic view. You have some tools to identify which items you should prioritize going into your backlog and then into the iteration, and you need to be aligned with that.

Also, for example, you can model high-level value streams, identify key capabilities, and then try to define which one would be the item you deliver. You don’t need to do a lot of modeling for that, just the high-level modeling that depicts it.

On the other hand, we have other models that are more solution-level-oriented, and in that case, one of the challenges that architects now have in relationship to modeling is how to deal with the fact that models are changing – and should change faster now, because trends are changing and the market is changing. So there are different techniques that can be used for that. For example, test-driven design, domain-driven design, refactoring, and some others that support agile modeling.

Also, like Mats mentioned, having lots of corporate architecture allows you to facilitate these different building blocks for change. And there are a lot of tools in the market now that allow you to have automation in the things you are doing. For example, to automate testing, which is something that we should do. It’s actually one of the key components of DevOps to automate the testing, to see how that facilitates continuous integration, development, and finally, delivery.

Gardner: Sonia, you mentioned automation, but a lot of organizations, enterprises and governments are saddled with legacy systems. That can be quite complex, having older back end systems that require a lot of manual support. How do we move past the restraints, if you will, of back-end systems, legacy systems, and still become agile?

Combine old and new 

Gonzalez: That’s a very good question, Dana. That’s precisely one of the stronger things of our EA. Łukasz mentioned that is the fact that you can use it in different ways and adapt it to different uses.

For example, if you have a bank, you usually have a lot of systems, and you have legacy systems that are very difficult and risky to change. So what a company should do is have a combined approach, saying, “Okay, I have a more traditional EA to handle my background systems because they are more stable and perhaps require fewer changes.”

Walters Obenson

Obenson

But on the other hand, if you have your end-user platform, such as online banking or mobile banking, that development should be faster. You can have an agile view on that. So you can have a combined view.

However, we also depend on the background systems. One of the things that companies are doing right now is trying to move toward components and services, microservices, and outsourcing to build a corporate architecture for customer-facing services platforms without having to change all the background systems at once, because that’s very risky.

So it’s some kind of like a combined effort that it can be used in these cases.

Gardner: Anyone else have some insights on how to make agile EA backward compatible?

Wrześniewski: What Sonia said is really important, that we have some sort of combined or hybrid approach for EA. You will always have some projects that run in the agile way, and some projects that take a more traditional approach, that are longer, and where the delivery of architecture takes more time to reduce the risk when we are replacing, for example, a core banking system. The role of the EA is to know how to combine these different approaches and how to find the silver bullet for all the different situations.

So, we wouldn’t be always looking for the organization on the one perspective that we are agile and everything that was before is a batch practice. We try to combine, and this is the evolution of organization’s new approach. So we will have to step by step improve the organization to get the best results if we are completely agile.

Gardner: Walters brought up the important issue of governance. How can agile EA allow organizations to be faster and focused on business outcomes, and also be more secure and more compliant? How do EA and agile EA help an organization attain both a secure and compliant environment?

Gejnevall: You need to have a security architecture, and that has to be set up in a very loosely coupled way so you can select the security features that are needed for your specific project.

You need to have that security architecture as a reference model at the bottom of your architecture. That is something you need to follow. But then the security architecture is not just the IT part of it, it’s also the business side of things, because security has got a lot to do with the processes and the way a company works.

All of that needs to be taken into consideration when we do the architecture, and it needs to be known by all the solution development teams: these are the rules around security. I don’t think you can let go of that early on, but the security architecture needs to be flexible as well, and it needs to adapt continuously, because it has to handle new threats all the time. You can’t do one security architecture and think it’s going to live there forever; it’s going to go through the same type of renewal and refactoring as anything else.

Wrześniewski: I would like to add that, in general, agile approaches are more transparent and the testing of security requirements is often done in an iterative way, so this approach can ensure higher security.

Also, governance should be adapted to agile governance, with a governance body that works in an agile way across the different levels of the enterprise; I mean portfolio management, project management, and teams. So there is also some organizational change that needs to be done.

Gardner: Many times when I speak with business leaders, they are concerned about mounting complexity, and one of the ways they try to combat complexity is by moving toward minimum viable products and minimum viable services. How does the concept of a minimum viable architecture (MVA) help agility while at the same time combating complexity?

MVA moves product from plan to value

Wrześniewski: An MVA is the minimum architecture that enables development of a minimum viable product. It can help you solve complexity issues by focusing, together with the minimum viable product, on the functionality and capabilities that are mandatory for the organization and that deliver the highest percentage of value in the software.

And if the minimum viable product fails, we haven’t invested too much in the entire product development.

Gejnevall: Inherently, organizations are complex. You have to start much higher up than the IT side of it to take away complexity. You need to start at the business level, the organizational level, the process level, at how you actually do work. If that’s complex, the IT solutions for it will still be complex, so you need a good EA, and an MVA can test out new things and new ways of organizing yourself, because not everything has to be an IT project in the end.

You do an MVA, and maybe that’s a process change or an organizational change. You test it out and you ask: did it actually minimize our complexity, or did it increase it? If it didn’t work, at least you can stop the project very quickly and go in another direction instead.

Gonzalez: Handling complexity is challenging, especially for big organizations that have been in the market for a long time. You will need to focus on the minimum viable product for leveraging the MVA, and go by slices, taking smaller pieces, to avoid doing too much modeling.


However, in the end, even though you are not considering things to be only IT, you still have a platform that provides your IT capabilities. In that case, my view is that the use of architecture is important. So you may have a more traditional EA for maintaining your complex landscape that is already there. You cannot avoid or ignore that, but you need to identify which components are there.

So whenever you are tackling a new problem with an MVA, you can also be aware of the dependencies at the platform level, which is where most of the complexity lies. So in my view that is, again, a combined use of both of them.

And the other key thing here is having good integration and automation tooling, because doing things manually is what takes a lot of time. If you automate some of that, it becomes easier to maintain and allows you to handle that complexity without going against an agile view.

Gardner: And before we start to wrap up, I wanted to ask you what an organization will experience when they do leverage agile EA and become more adaptive in their business in total, holistically. What do you get when you do agile EA? What do you recognize as metrics of success if this is going well?

Deliver value and value delivery

Gejnevall: Each of these MVAs and minimum viable products is actually supposed to leave us some business value at the end. If you look at a framework like the TOGAF® standard, a standard of The Open Group, there is a phase at the end where you actually look to see, “Did we really achieve the value that we expected?”

This is a part of most product management frameworks as well. We need to measure before we do something and then measure afterward: did we get the business value that we expected? Just by running a project to the demo stage, we can’t really tell whether we got the value or not. We need to put it out in operations and measure it that way.

So we get that feedback loop much quicker than we did in the past, when it took a couple of years to develop a new product and, at the end of it, the situation had changed and we didn’t get the value, even though we had spent many millions of dollars. Now we might spend a lot less money, and we can actually prove that we are getting some business value out of it and measure it appropriately as well.

Wrześniewski: I agree fully with Mats that the value is quicker delivery. Also, the product quality should be much higher and people should be much more satisfied; I mean the team that delivers the service or product change, the business, the stakeholders, and the direct clients. This really improves both client and team satisfaction. That is one of the important benefits of agile EA as well.

Gejnevall: Just because you have a term called minimum viable product, it doesn’t mean it always has to be IT doing it. You can do a minimum viable product in many other ways; like I was saying before, process changes, organizational changes, and other things. So it doesn’t always have to be IT delivering the minimum viable product that gives you the best business value.

Gardner: How about the role of The Open Group? You have a number of certification programs, standards, workgroups, and you are talking with folks in the EA field all the time. What is it that The Open Group is bringing to the table nowadays to help foster agile EA and therefore better, more secure, more business-oriented companies and governments?

Open Group EA and Agile offerings abound

Gonzalez: We have a series of standards from The Open Group, and one subset of that is the architecture portfolio. We have several activities going on. We have the Agile Architecture Framework snapshot, a product of The Open Group Board Members’ activity, which is already available for testing and comments but is not yet an approved standard. The Agile Architecture Framework™ (O-AAF) covers both Digital Transformation and Agile Transformation of the enterprise, considering concepts such as Lean and DevOps, among others.

On the other hand, we have the agile EA activity at the level of the Architecture Forum, which is the one Mats and Łukasz are involved in, on how to have an agile EA practice. There is a very good white paper published, and other deliverables, such as a guide on how to use the TOGAF framework in agile sprints with the Architecture Development Method (ADM); that paper is under construction, and several others are on the way.

We also have, in the ArchiMate® Forum, an Agile Modeling activity, which deals precisely with the modeling part of this, so the three activities are connected.

And in a separate, though related, working group we have the Digital Practitioners Work Group, aimed at addressing the digital enterprise. There is also a connection with the Agile Architecture Framework, and we have just started looking at harmonization with EA and the TOGAF standard.

In the security space, we recently started the Zero Trust Architecture project, which is aimed precisely at this idea of Zero Trust Architecture: securing the resources instead of securing the network. That’s a joint activity between the Security Forum and the Architecture Forum. So those are some of the things that are going on.

And at the level of the Agile Architecture Framework, there is also a conversation about how to handle security and cloud in an agile environment, so, as you can see, we have several things moving at the moment.

Gejnevall: Long term, I think we need to look into the agile enterprise much more. All of these efforts are converging toward that point: sooner or later we need to look at what an agile enterprise looks like and create reference architectures and ideas for that. I think that will be the end result eventually. We are not there yet, but we are going in that direction with all of these different projects.

Gardner: And, of course, more information is available at The Open Group website. They have many global events and conferences that people can go to and learn about these issues and contribute to these issues as well.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.



Cerner’s lifesaving sepsis control solution shows the potential of bringing more AI-enabled IoT to the healthcare edge

The next BriefingsDirect intelligent edge adoption benefits discussion focuses on how hospitals are gaining proactive alerts on patients at risk for contracting serious sepsis infections.

An all-too-common affliction for patients around the world, sepsis can be controlled when confronted early using a combination of edge computing and artificial intelligence (AI). Edge sensors, Wi-Fi data networks, and AI solutions help identify at-risk situations so that hospital caregivers are rapidly alerted to susceptible patients, head off sepsis episodes, and reduce serious illness and death.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Stay with us now as we hear about this cutting-edge use case that puts AI to good use by outsmarting a deadly infectious scourge with guests Missy Ostendorf, Global Sales and Business Development Practice Manager at Cerner Corp.; Deirdre Stewart, Senior Director and Nursing Executive at Cerner Europe, and Rich Bird, World Wide Industry Marketing Manager for Healthcare and Life Sciences at Hewlett Packard Enterprise (HPE). The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Missy, what are the major trends driving the need to leverage more technology and process improvements in healthcare? When we look at healthcare, what’s driving the need to leverage better technology now?

Missy Ostendorf

Ostendorf

Ostendorf: That’s an easy question to answer. Across all industries, resources always drive the need for technology to make things more efficient and cost-effective — and healthcare is no different.

If we tend to move more slowly with technology in healthcare, it’s because we don’t have mission-critical risk — we have life-critical risk. And the sepsis algorithm is a great example of that. If a patient turns septic, they can die within four hours. So, as you can imagine, that ticking clock is a really big deal in healthcare.

Gardner: And what has changed, Rich, in the nature of the technology that makes it so applicable now to things like this algorithm to intercept sepsis quickly?

Bird: The pace of the change in technology is quite shocking to hospitals. That’s why they can really benefit when two globally recognized organizations such as HPE and Cerner can help them address problems.

When we look at the spike in demand across the healthcare system, we see that people are living longer with complex, long-term conditions. When they come into a hospital, there are points in time when they need the most help.

What [HPE and Cerner] are doing together is understanding how to use this connected technology at the bedside. We can integrate the Internet of Things (IoT) devices that the patients have on them at the bedside: medical devices that traditionally were not connected automatically but only through humans. The caregivers are now able to use the connected technology to take readings from all of the devices and analyze them at the speed of computers.

So we’re certainly relying on the professionalism, expertise, and care of the team on the ground, but we’re also helping them with this new level of intelligence. It offers them and the patients more confidence that their care is being watched over both by the people on the ground and by the technology reading all of the vital signs flowing into the Cerner applications.

Win against sepsis worldwide 

Gardner: Deirdre, what is new and different about the technology and processes that makes it easier to consume intelligence at the healthcare edge? How are nurses and other caregivers reacting to these new opportunities, such as the algorithm for sepsis?

Deirdre Stewart

Stewart

Stewart: I have seen this growing around the world, having spent a number of years in the Middle East and looking at the sepsis algorithm gain traction in countries like Qatar, UAE, and Saudi Arabia. Now we’re seeing it deployed across Europe, in Ireland, and the UK.

At first, nurses and clinicians can feel, “Hang on a second, why is the computer telling me my business? I should know better.” But once they get over that and understand how it all works, they benefit enormously.

But it’s not just the clinicians who benefit, Dana, it’s the patients. We have documented evidence now. We want to stop patients from ever getting to the point of having sepsis. This algorithm and other similar algorithms alert the front-line staff earlier, and that allows us to prevent patients from developing sepsis in the first place.

Some of the most impressive figures show the reduction in the incidence of sepsis and the increase in identification of the early sepsis stages, the systemic inflammatory response stage. When that data is fed back to the doctors and nurses, they understand the importance of such real-time documentation.

I remember in the early days of electronic medical records, nurses might have been inclined not to do such real-time documentation. But when they understand how the algorithms work within the system to identify anything that is out of place or out of kilter, it really increases adoption, and definitely how much they like the system and what it can provide.

Gardner: Let’s dig into what this system does before we look at some of the implications. Missy, what does Cerner’s CareAware platform approach do?

Ostendorf: The St. John Sepsis Surveillance Agent looks for early warning signs so that we can save lives. There are three pieces: monitoring, alerting, and then the prescribed intervention.

It goes to what Deirdre was speaking to about documentation being done in real time, instead of the previous practice, where a nurse in the intensive care unit (ICU) might have had a piece of paper in her pocket on which she would write down, for instance, the patient’s vital signs.


And maybe four hours later she would sit at a computer and put in four hours of vitals from every 15 minutes for that patient. Well, as you can imagine, a lot can happen in four hours in the ICU. By having all of the information flow into the electronic medical record we can now have the sepsis agent algorithm continually monitoring that data.

It surveys the patient’s temperature, heart rate, and glucose level — and if those change and fall outside of safe parameters, it automatically sends alerts to the care team so they can take immediate action. And with that immediate action, they can now change how they are treating that patient. They can give them intravenous antibiotics and fluids, and there is 80 percent to 90 percent improvement in lives saved when you can take that early intervention.

So we’re changing the game by leveraging the data that was already there. We are just taking advantage of it and putting it into the hands of the clinicians so that action can be taken early. That’s the most important part. We have been able to make the data actionable.
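
To make the monitor-alert-intervene loop described here more concrete, below is a minimal, hypothetical sketch of threshold-based surveillance over streaming vital signs. The field names and thresholds are invented assumptions for illustration only; they are not the actual St. John Sepsis Surveillance Agent logic and are not clinical guidance.

```python
# Hypothetical surveillance sketch; thresholds are illustrative only and are
# NOT the actual St. John Sepsis Surveillance Agent rules or clinical guidance.
from dataclasses import dataclass

@dataclass
class Vitals:
    patient_id: str
    temperature_c: float   # body temperature in Celsius
    heart_rate: int        # beats per minute
    glucose_mg_dl: float   # blood glucose in mg/dL

def out_of_range(v: Vitals) -> list:
    """Return the readings that fall outside an (illustrative) safe band."""
    flags = []
    if v.temperature_c < 36.0 or v.temperature_c > 38.3:
        flags.append("temperature")
    if v.heart_rate > 95:
        flags.append("heart rate")
    if v.glucose_mg_dl > 140:
        flags.append("glucose")
    return flags

def surveil(stream, alert):
    """Evaluate each reading as it arrives and alert the care team on any flag."""
    for vitals in stream:
        flags = out_of_range(vitals)
        if flags:
            alert(vitals.patient_id, flags)

if __name__ == "__main__":
    readings = [
        Vitals("A-102", 37.0, 82, 110.0),    # within the band, no alert
        Vitals("A-102", 38.6, 104, 155.0),   # several flags, alert raised
    ]
    surveil(readings, lambda pid, flags: print(f"ALERT {pid}: {', '.join(flags)}"))
```

In a real deployment, the stream would come from connected bedside devices and the alert would page the care team; here a print statement stands in for both.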

Gardner: Rich, this sounds straightforward, but there is a lot going on to make this happen: making the edge, where the patient is, able to deliver data, capture data, protect it, and keep it secure and in compliance. What has had to come together to support what Missy just described in the Cerner solution?

Healthcare tech progresses to next level 

Rich Bird

Bird

Bird: Focusing on the outcomes is very important. It delivers confidence to the clinical team, which is always at the front of mind. But it provides that in a way that is secure, real time, and available, no matter where the care team is. That’s very, very important. And the fact that all of the devices are connected presents great opportunities for the next evolution of healthcare technology.

Until now we have been digitizing the workflows that have always existed, taking paper and turning it into digital information. For me, this represents the next evolution of that: how do we get more value from that data? Having Wi-Fi connectivity across the whole of a site is not easy. It’s something we pride ourselves on making simple for our clients, but a key thing that you mentioned was security around that.

When you have everything speaking to everything else, that also introduces the potential for a bad actor. How do we protect against that? How do we ensure that all of the data is collected, transported, and recorded in a safe way? If a bad actor were to become part of the external or internal network, how do we identify them and shut them out?

Working together with our partners, that’s something that we take great pride in doing. We spoke about mobility, and outside of healthcare, in other industries, mobility usually means people have wide access to things.

But within hospitals, of course, that mobility is about how clinicians can collect and access the data wherever they are. It’s not just one workstation in a corner that the care team uses every now and again. The technology now for the care team gives them the confidence to know the data they are taking action on is collected correctly, protected correctly, and provided to them in a timely manner.

Gardner: Missy, another part of the foundational technology here is that algorithm. How are machine learning (ML) and AI coming to bear? What is it that allowed you to create that algorithm, and why is that a step further than simple reports or alerts?

Ostendorf: This is the most exciting part of what we’re doing today at Cerner and in healthcare. While the St. John Sepsis Algorithm is saving lives in a large-scale way — and it’s getting most of the attention — there are many things we have been able to do around the world.

Deirdre brought up Ireland. Even way back in 2009, one of our clients there, St. James’s Hospital in Dublin, was in the news because they made the decision to take the data and build decision-making questions into the front-end application that the clinicians use to order a CT scan. Unlike ordinary X-rays, CT scans expose the patient to a significant radiation dose, so we don’t want a patient to go through a CT scan unnecessarily. The more they have, the higher their risks go.


By implementing three questions, the computer looks at the trends in why clinicians thought they needed the scan, based on previous patients’ experiences: did that CT scan make a difference in how they were diagnosed? And now with ML, it can tell the clinician up front, “This really isn’t necessary for what you are looking for to treat this patient.”

Clinicians can always override that, they can always call the x-ray department and say, “Look, here’s why I think this one is different.” But in Ireland they were able to lower the number of CT scans that they had always automatically ordered. So with ML they are changing behaviors and making their community healthier. That’s one example.
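
As a purely illustrative sketch of this kind of order-entry decision support, the snippet below turns three screening questions into a simple go/review recommendation. The questions, weights, and threshold are invented assumptions, not St. James’s Hospital’s actual questions or Cerner’s model, and a clinician can always override the suggestion.

```python
# Hypothetical decision support at CT order entry; the questions, weights, and
# threshold are invented for illustration and carry no clinical meaning.
def ct_scan_advised(new_or_worsening_symptoms: bool,
                    prior_ct_within_30_days: bool,
                    prior_scan_changed_treatment: bool) -> bool:
    """Return True if the (toy) score supports proceeding with the order."""
    score = 0
    score += 2 if new_or_worsening_symptoms else 0
    score -= 1 if prior_ct_within_30_days else 0
    score += 1 if prior_scan_changed_treatment else 0
    return score >= 2

# The ordering workflow would surface this as a suggestion, never a block.
if ct_scan_advised(True, True, False):
    print("Proceed with CT order")
else:
    print("Flag for review: a scan may not change this patient's treatment")
```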

Another example of where we are using the data and ML is with the Cerner Opioid Toolkit in the United States (US). We announced that in 2018 to help our healthcare system partners combat the opioid crisis that we’re seeing across America.

Deirdre, you could probably speak to the study as a clinician.

Algorithm-assisted opioid-addiction help

Stewart: Yes, indeed. It’s interesting work being done in the US on what they call Opioid-Induced Respiratory Depression (OIRD). It looks like approximately 1 in 200 hospitalized surgical patients can end up with an opioid-induced ventilatory impairment. This results in a large cost to healthcare; in the US alone, it was estimated in 2011 to cost $2 billion. And The Joint Commission has made recommendations on how the assessment of patients should be personalized.

It’s not just one single standardized form with a score generated from the questions that are answered. Instead, it looks at the patient’s age, demographics, previous conditions, and any history of opioid intake in the previous 24 hours. And according to the patient’s risks, it then recommends limiting the amount of opioids they are given. They also looked at the patients who ended up in respiratory distress and found that a drug agent to reverse that distress was being administered too often and at too high a cost in relation to patient safety.

Now, with the algorithm, they have managed to reduce the number of patients who end up in respiratory distress and to limit the narcotics given according to the specific patient. It’s no longer a generalized rule; it looks at specific patients, alerts, and intervenes.

I also like the willingness of our clients worldwide to share this information across the world. I have been on calls recently where they voiced interest in using this in Europe or the Middle East. So it’s not just one hospital doing this and improving their outcomes — it’s now something that can be looked at and done worldwide. That’s the same whenever our clients devise a particular outcome to improve; we have seen many examples of that around the world.
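
As a rough, invented sketch of the kind of personalized assessment Stewart describes, the function below combines a few patient factors and recent opioid intake into a risk tier that could drive dose-limiting alerts. The factors, weights, and cut-offs are assumptions for illustration only; they are not the actual OIRD model, The Joint Commission’s recommendations, or clinical guidance.

```python
# Hypothetical OIRD-style risk tiering; weights and cut-offs are invented and
# must not be read as clinical guidance.
def oird_risk_tier(age: int,
                   has_sleep_apnea: bool,
                   has_copd: bool,
                   morphine_mg_equivalent_24h: float) -> str:
    """Return a toy risk tier from patient factors and recent opioid intake."""
    score = 0
    score += 2 if age >= 65 else 0
    score += 2 if has_sleep_apnea else 0
    score += 1 if has_copd else 0
    if morphine_mg_equivalent_24h > 50:
        score += 2
    elif morphine_mg_equivalent_24h > 20:
        score += 1

    if score >= 5:
        return "high"       # e.g., alert: limit further opioid orders, monitor closely
    if score >= 3:
        return "moderate"
    return "low"

print(oird_risk_tier(age=72, has_sleep_apnea=True, has_copd=False,
                     morphine_mg_equivalent_24h=30))  # -> "high" in this toy model
```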

Ostendorf: It’s not just collecting data, it’s being able to act on the data. We see how that’s creating not only great experiences for our partners but healthier communities.

Gardner: This is a great example of getting the best of what people can do, their cognitive abilities and capacity to contextualize, alongside the best of what machines can do: automation and orchestration of vast data and analytics. Rich, how do you view this balancing act between the best of what people can do and what machines can do? How do these medical use cases demonstrate that potential?

Machines plus, not instead of, people 

Bird: When I think about AI, I grew up with the science fiction depiction where AI is a threat. If it’s not taking your life, it’s probably going to take your job.

But we want to be clear. We’re not replacing doctors or care teams with this technology. We’re helping them make more informed and better decisions. As Missy said, they are still in control. We are providing data to them in a way that helps them improve the outcomes for their patients and reduce the cost of the care that they deliver.

It’s all about using technology to reduce the amount of time and money that care costs, to improve patient outcomes, and also to enhance the clinicians’ professionalism.

Missy also talked about adding a few questions into the workflow. I used to work with a chief technology officer (CTO) of a hospital who often talked about medicine as eminence-based, which is based on the individuals that deliver it. There are numerous and different healthcare systems based on the individuals delivering them. With this digital technology, we can nudge that a little bit. In essence, it says, “Don’t just do what you’ve always done. Let’s examine what you have done and see if we can do that a little bit better.”


The general topic we’re talking about here is digitization. In this context we’re talking about digitizing the analog human body’s vital signs. Any successful digitization of any industry is driven by the users. So, we see that in the entertainment industry, driven by people choosing Netflix over DVDs from the store, for example.

When we talk about delivering healthcare technology in this context, we know that personal healthcare data cannot simply be shared. It is the most personal data in the world; we cannot share that. But when we can show the value of the data when it is shared in a safe way — highly regulated, but shared in a safe way — the clinical teams can then see the value generated from using the data. It changes the conversation from how much the technology costs to how much we can save by using it.

For me, the really exciting thing about this is technology that helps people provide better care and helps patients be protected while they’re in hospital, and in some cases avoid having to come into the hospital in the first place.

Gardner: Getting back to the sepsis issue as a critical proof-point of life-enhancing and life-saving benefits, Missy, tell us about the scale here. How is this paying huge dividends in terms of saved lives?

Life-saving game changer 

Ostendorf: It really is. The World Health Organization (WHO) statistics from 2018 show that 30 million people worldwide experience a sepsis event each year, and in their classification some six million of those cases can lead to death. In 2018 in the UK, there were 150,000 annual cases, with about 44,000 of those ending in death.

You can see why this sepsis algorithm is a game-changer, not just for a specific client, but for everyone around the world. It gives clinicians the information they need in a timely manner so that they can take immediate action — and they can save lives.

Rich talked about the resources that we save and the cost that’s driven out; all those things are extremely important. When you are the patient or the patient’s family, that translates into a person who actually gets to go home from the hospital. You can’t put a dollar amount or an efficiency on that.

It’s truly saving lives, and that’s just amazing to think about. We’re doing that simply by taking the data that was already being collected, running it through the St. John sepsis algorithm, and alerting the clinicians so that they can take quick action.

Stewart: It was a profound moment for me when Hamad Medical Corp. in Qatar, where I had run the sepsis algorithm across their hospitals for about 11 months, looked at the data and reckoned that they had potentially saved 64 lives.

And at the time when I was reading this, I was standing in a clinic there. I looked out at the clinic, it was a busy clinic, and I reckoned there were 60 to 70 people sitting there. And it just hit me like a bolt of lightning to think that what the sepsis algorithm had done for them could have meant the equivalent of every single person in that room being saved. Or, on the flipside, we could have lost every single person in that room.

Mothers, fathers, husbands, wives, sons, daughters, brothers, sisters — and it just hit me so forcefully and I thought, “Oh, my gosh, we have to keep doing this.” We have to do more and find out all those different additional areas where we can help to make a difference and save lives.

Gardner: We have such a compelling rationale for employing these technologies and processes and getting people and AI to work together. In setting that precedent we’re also setting up the opportunity to gather more data on a historical basis. As we know, the more data, the more opportunity for analysis. The more analysis, the more opportunity for people to use it and leverage it. We get into a virtuous, positive adoption cycle.

Rich, once we’ve established the ability to gather the data, we get a historical base of that data. Where do we go next? What are some of the opportunities to further save lives, improve patient outcomes, enhance patient experience, and reduce costs? What is the potential roadmap for the future?

Personalization improves patient care, policy

Bird: The exciting thing is, if we can take every piece of medical information about an individual and present it in a way that the clinical team can see it, from the beginning of that person’s life right up to the present day, we can provide medicine that’s more personalized: treating people specifically for the conditions that they have.

Missy was talking about evaluating more precisely whether to send a patient for a certain type of scan. There’s also another side of that. Do we give a patient a certain type of medication?

When we have the patient’s whole data profile in front of us, clinical teams can make better decisions. Are they on a certain medication already? Are they allergic to a medication that you might prescribe? What about their DNA, their physiology, and the combination of conditions that they have? Then we start to see that better clinical decisions can be made. We can treat people uniquely for their specific conditions.

At Hewlett Packard Labs, I was recently talking with an individual about how big data will revolutionize healthcare. You have certain types of patients with certain conditions in a cohort, but how can we make better decisions for that cohort of patients with those co-occurring conditions at a specific time in their lives? And then how do we do the same at the level of the individual?


It all sounds very complicated, but my hope is, as we get closer, as the power of computing improves, these insights are going to reveal themselves to the clinical team more so than ever.

There’s also the population health side. Rather than just thinking about patients as individuals, or cohorts of patients, how could policymakers and governments around the world make decisions based on the impact of preventative care, such as incentivizing populations to do more health maintenance? How can we give visibility into that data into the future to make better decisions for populations over longer periods of time?

We want to bring all of this data together in a safe way that protects the security and anonymity of the patients. It could give those making clinical decisions about the people in front of them, as well as policymakers looking across the whole population, the means to make more informed decisions. We see massive potential around prevention. It could have an impact on how much healthcare costs before the patient actually needs treatment.

It’s all very exciting. I don’t think it’s too far away. All of these data points we are collecting are in their own silos right now. There is still work to do in terms of interoperability, but soon everybody’s data could interact with everybody else’s data. Cerner, for example, is making some great strides around the population health element.

Gardner: Missy, where do you see accelerating benefits happening when we combine edge computing, healthcare requirements, and AI?

At the leading edge of disease prevention

Ostendorf: I honestly believe there are no limits. As we continue to take in the data in places like northern England, where the healthcare system is on a peninsula, they’re treating the entire population.

Rich spoke to population health management. Well, they’re now able to look across the data and see how something that affects the population, like diabetes, specifically affects that community. Clinicians can work with their patients and treat them, and then work with the actual communities to reduce the incidence of type 2 diabetes. It reduces the cost of healthcare and reduces the morbidity rate.

That’s the next place where AI is going to make a massive impact. It will no longer be just saving a life with the sepsis algorithm running against those patients who are in the hospital. It will change entire communities and how they approach health as a community, as well as how they fund healthcare initiatives. We’ll be able to see more proactive management of health community by community.

Gardner: Deirdre, what advice do you give to other practitioners to get them to understand the potential and what it takes to act on that now? What should people on the front lines of caregiving be thinking about on how to best utilize and exploit what can be done now with edge computing and AI services?

Stewart: Everybody should have the most basic analytical questions in their heads at all times. How can I make what I am doing better? How can I make what I am doing easier? How can I leverage the wealth of information that is available from people who have walked in my shoes and looked after patients in the same way as I’m looking after them, whether that’s in the hospital or at home in the community? How do I access that in an easier fashion, and how do I make sure that I can help to make improvements in it?

Access to information at your fingertips means not having to remember everything. It’s having it there, and having suggestions made to me. I’m always going back and reviewing those results and analytics to help improve things the next time around.

From bedside to boardroom, everybody should be asking themselves those questions. Have I got access to the information I need? And how can I make things better? What more do I need?

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.



Three generations of Citrix CEOs on enabling a better way to work

For the past 30 years, Citrix has made a successful habit of challenging the status quo. That includes:

  • Delivering applications as streaming services to multiple users

  • Making the entire PC desktop into a secure service

  • Enhancing networks that optimize applications delivery

  • Pioneering infrastructure-as-a-service (IaaS), now known as public cloud, and

  • Supplying a way to take enterprise applications and data to the mobile edge.

Now, Citrix is at it again, by creating digital workspaces and redefining the very nature of applications and business intelligence. How has one company been able to not only reinvent itself again and again, but make major and correct bets on the future direction of global information technology?

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To find out, Dana Gardner, Principal Analyst at Interarbor Solutions, recently sat down to simultaneously interview three of Citrix’s chief executives from the past 30 years, Roger Roberts, Citrix CEO and Chairman from 1990 to 2002; Mark Templeton, CEO of Citrix from 2001 to 2015, and David Henshall, who became the company’s CEO in July of 2017.

Here are some excerpts:

Dana Gardner: So much has changed across the worker productivity environment over the past 30 years. The technology certainly has changed. What hasn’t changed as fast is the human factor, the people.

How do we keep moving the needle forward with technology and also try to attain productivity growth when we have this lump of clay that’s often hard to manage, hard to change?

Mark Templeton

Templeton

Mark Templeton: The human factor “lump of clay” is changing as rapidly as technology because of the changing demographics of the workforce. Today’s baby boomers are being followed by Gen X, millennials, and then Gen Z, who will be making important decisions 20 years from now.

So the human factor clay is changing rapidly and providing great opportunity for innovation and invention of new technology in the workplace.

Gardner: The trick is to be able to create technology that the human factor will adopt. It’s difficult to solve a chicken-and-egg relationship when you don’t know which one is going to drive the other.

What about the past 30 years at Citrix gives you an edge in finding the right formula?

David Henshall: Citrix has always had an amazing ability to stay focused on connecting people and information — and doing it in a way that is secure, managed, and available, so that we can abstract away a lot of the complexity that’s inherent in technology.

Because, at the end of the day, all we are really focused on is driving those outcomes and allowing people to be as productive, successful, and engaged as humanly possible by giving them the tools to — as we frame it up — work in a way that’s most effective for them. That’s really about creating the future of work and allowing people to be unleashed so that they can do their best work.

Gardner: Roger, when you started, so much of the IT world was focused on platforms and applications and how one drives the other. You seem to have elevated yourself above that and focused on services, on delivery of productivity – because, after all, they are supposed to be productivity applications. How were you able to see above and beyond the 1980s platform-application relationship?

Roger Roberts

Roberts

Roger Roberts: We grew up when the personal computer (PC) and local area networks (LANs), such as Novell NetWare, came on the scene. Everybody wanted to use their own PC, driven primarily by things such as the Lotus applications.

So [applications like] spreadsheets, WordPerfect, and dBase made up the tremendous bulk of the market demand at that time. However, with the background that I shared with [Citrix Co-Founder] Ed Iacobucci, we had been in the real world working from mainframes through minicomputers and then to PCs, and so we knew there were applications out there where the existing model – well, it really sucked.

The trick then was to take advantage of the increasing processing power we knew the PC was going to deliver and put it in a robust environment that would have stability so we could target specific customers with specific applications. Those customers were always intrigued with our story.

Our story was not formed to meet the mass market. Things like running ads or trying to search for leads would have been a waste of time and money. It made no sense in those days because the vast majority of the world had no idea of what we were talking about.

Gardner: What turned out to be the killer application for Citrix’s rise? What were the use cases you knew would pay off even before the PC went mainstream?

The personnel touch 

Roberts: The easiest one to relate to is personnel systems. Brown and Root Construction out of Houston, Texas was a worldwide operation. Most of their offices were on construction sites and in temporary buildings. They had a great deal of difficulty managing their personnel files, including salaries, when someone was promoted, reviewed, or there was a new hire.

The only way you could do it in the client-server LAN world was to replicate the database. And let me tell you, nobody wants to replicate their human resources (HR) database across 9,000 or 10,000 sites.


So we came in and said, “We can solve that problem for you, and you can keep all of your data secure at your corporate headquarters. It will always be synchronized because there is only one copy. And we can give you the same capabilities that the LAN-based PC user experiences even over fairly slow telecommunication circuits.”

That really resonated with the people who had those HR problems. I won’t say it was an easy sell. When you are a small company, you are vulnerable. They ask, “How can we trust you to put in a major application using your technology when you don’t have a lot of business?” It was never the technology or the ability to get the job done that they questioned. It was more about whether we had the staying power. That turned out to be the biggest obstacle.

Gardner: David, does it sound a little bit familiar? Today, 30 years later, we’re still dealing with distance, the capability of the network, deciding where the data should reside, how to manage privacy, and secure regulatory compliance. When you listen to Citrix’s use cases and requirements from 30 years ago, does it ring a bell?

Organize, guide, and predict work