Seven secrets to highly effective procurement: How business networks fuel innovation and transformation

The next BriefingsDirect innovation discussion focuses on how technology, data analysis, and digital networks are transforming procurement and the source-to-pay process as we know it. We’ll also discuss what it takes to do procurement well in this new era of business networks.

Far beyond just automating tasks and transactions, procurement today is a strategic function. It demands an integrated, end-to-end approach, built on deep insight and intelligence, that drives informed source-to-pay decisions and enables businesses to adopt a true ecosystem-wide digital strategy.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

And according to the findings of a benchmarking survey conducted by SAP Ariba, there are seven essential traits of modern procurement organizations that are driving this innovation and business transformation.

To learn more about the survey results on procurement best practices, please join me in welcoming Kay Ree Lee, Director of Value Realization at SAP. The discussion is moderated by BriefingsDirect’s Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Procurement seems more complex than ever. Supply chains now stretch around the globe, regulation is on the rise, and risk is heightened on many fronts in terms of supply chain integrity.

Innovative companies, however, have figured out how to overcome these challenges, and so, at the value realization group you have uncovered some of these best practices through your annual benchmarking survey. Tell us about this survey and what you found.

Lee: We have an annual benchmarking program that covers purchasing operations, payables, sourcing, contract management, and working capital. What’s unique about it, Dana, is that it combines a traditional survey with data from our procurement applications and business network.

This past year, we looked at more than 200 customers who participated, covering more than $350 billion in spend. We analyzed their quantitative and qualitative responses and identified the intersection between those responses for top performers compared to average performers. Then, we drew correlations between what top performers did well and the practices that drove those achievements.

Gardner: By making that intersection, it’s an example of the power of business networks, because you’re able to gather intelligence from your business network environment or ecosystem and then apply a survey back into that. It seems to me that there is a whole greater than the sum of the parts between what the Ariba Network can do and what market intelligence is demanding.

Universe of insights

Lee: That’s right. The data from the applications in the Ariba Network contain a universe of insights, intelligence, and transactional data that we’ve amassed over the last 20-plus years. By looking at the data, we’ve found that there are specific patterns and trends that can help a lot of companies improve their procurement performance — either by processing transactions with fewer errors or processing them faster. They can source more effectively by collaborating with more suppliers, having suppliers bid on more events, and working collaboratively with suppliers.

Gardner: And across these 200 companies, you mentioned $350 billion of spend. Do you have any sense of what kind of companies these are, or do they cross a variety of different types of companies in different places doing different vertical industry activities?

Lee: They’re actually cross-industry. We have a lot of companies in the services industry and in the manufacturing industry as well.

Gardner: This sounds like a unique, powerful dataset, indicative of what’s going on not just in one or two places, but across industries. Before we dig into the detail, let’s look at the big picture, a 100,000-foot view. What would you say are some of the major high-level takeaways that define best-in-class procurement and the organizations that can produce it these days, based on your data?

Lee: There are four key takeaways that define what best-in-class procurement organizations do.

The first one is that a lot of these best-in-class organizations, when they look at source-to-pay or procure-to-pay, manage it as an end-to-end process. They don’t just look at a set of discrete tasks; they look at it as a big, broad picture. More often than not, they have an assigned process expert or a process owner that’s accountable for the entire end-to-end process. That’s key takeaway number one.

Key takeaway number two is that a lot of these best-in-class organizations also have an integrated platform from which they manage all of their spend. And through this platform, procurement organizations provide their internal stakeholders with flexibility, based on what they’re trying to purchase.

For example, a company may need to keep track of items that are critical to manufacturing, with inventory visibility and tracking. That’s one requirement.

Another requirement is purchasing manufacturing or machine parts that are not stocked but can be bought through supplier catalogs with pre-negotiated part descriptions and item pricing.

Gardner: Are you saying that this same platform can be used in these companies across all the different types of procurement and source-to-pay activities — internal services, even indirect, perhaps across different parts of a large company, whether manufacturing or transportation? Is one common platform used for all types of purchasing?

Common platform

Lee: That’s right. One common platform for different permutations of what you’re trying to buy. This is important.

The third key takeaway was that best-in-class organizations leverage technology to fuel greater collaboration. They don’t just automate tasks. One example is providing self-service options.

Perhaps a lot of companies think that self-service options are dangerous, because you’re letting the person who is requesting items select on their own, and they could make mistakes. But the way to think about a self-service option is that it provides an alternative for stakeholders: a guided buying experience that is simple, compliant, and available 24/7.

You don’t need someone there supervising them. They can go on the platform and they can pick the items, because they know the items best — and they can do this around the clock. That’s another way of offering flexibility and fueling greater collaboration and ultimately, adoption.

Gardner: We have technologies like mobile these days that allow that democratization of involvement. That sounds like a powerful approach.

Lee: It is. And it ties to the fourth key takeaway, which is that best-in-class organizations connect to networks. Networks have become very prevalent these days, but best-in-class companies connect to networks to access intelligence, not just transact. They go out to the network, they collaborate, and they get intelligence. A network really offers scale that organizations would otherwise have to achieve by developing multiple point-to-point connections for transacting across thousands of different suppliers.

You now go on a network and you have access to thousands of suppliers. Years ago, you would have had to develop point-to-point connectivity, which costs money, takes a long time, and you have to test all those connections, etc.
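The scaling argument here can be sketched with simple arithmetic. The figures below are purely illustrative (they are not from the survey), but they show why point-to-point connectivity grows so much faster than network connectivity:

```python
# Illustrative sketch (hypothetical figures): the integration burden of
# point-to-point connectivity versus a single shared business network.

def point_to_point_links(buyers: int, suppliers: int) -> int:
    """Every buyer builds, tests, and maintains its own link to every supplier."""
    return buyers * suppliers

def network_links(buyers: int, suppliers: int) -> int:
    """Every party maintains just one connection to the shared network."""
    return buyers + suppliers

b, s = 200, 5000  # e.g., 200 buying organizations and 5,000 suppliers
print(point_to_point_links(b, s))  # 1000000 separate integrations
print(network_links(b, s))         # 5200 network connections
```

The product-versus-sum difference is the whole point: each new supplier added to the network costs one connection, not one connection per buyer.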

Gardner: I’m old enough to remember Metcalfe’s Law, which roughly says that the more participants in a network, the more valuable that network becomes, and I think that’s probably the case here. Is there any indication from your data and research that the size and breadth and depth of the business network value works in this same fashion?

Lee: Absolutely. Those three words are key. The size — you want a lot of suppliers transacting on there. And then the breadth — you want your network to contain global suppliers, suppliers that can transact in remote parts of the world, even Nigeria or Angola.

Then, the depth of the network — the types of suppliers that transact on there. You want to have suppliers that can transact across a plethora of different spend categories — suppliers that offer services, suppliers that offer parts, and suppliers that offer more mundane items.

But you hit the nail on the head with the size and breadth of the network.

Pretty straightforward

Gardner: So for industry analysts like myself, these seem pretty straightforward. I see where procurement and business networks are going, and I can certainly agree that these are major and important points.

But I wonder, because we’re in such a dynamic world and because companies — at least in many of the procurement organizations — are still catching up in technology, how are these findings different than if you had done the survey four or five years ago? What’s been a big shift in terms of how this journey is progressing for these large and important companies?

Lee: I don’t think that there’s a big shift. Over the last two to five years, perhaps priorities have changed. So, there are some patterns that we see in the data for sure. For example, while sourcing savings fluctuate from year to year, sourcing continues to be very important to a lot of organizations to deliver cost savings.

The data tells us organizations need to be agile and they need to continue to do more with less.

One of the key takeaways is that the cost structure of procurement organizations has come down. They have fewer people operating certain processes, which means it costs less to run those processes, because they’re leveraging technology even more. They’re also able to deliver higher savings, because they’re including more and different suppliers as they go to market for certain spend categories.

That’s where we’re seeing differences. It’s not really a shift, but there are some patterns in the data.

Gardner: It seems to me, too, that because we’re adding more data and insight through that technology, we can elevate procurement more prominently into the category of spend management. That allows companies to make decisions at a broader level, across entire business units, maybe across the entire company, based on these insights and best practices, and they can save a lot more money.

But then, it seems to me that that elevates procurement to a strategic level, not just a way to save money or to reduce costs, but to actually enable processes and agility, as you pointed out, that haven’t been done before.

Before we get to the traits themselves, is there a sense that your findings illustrate this movement of procurement to a more strategic role?

Front and center

Lee: Absolutely. That’s another one of the key traits that we have found from the study. Top performing organizations do not view procurement as a back-office function. Procurement is front and center. It plays a strategic role within the organization to manage the organization’s spend.

When you talk about managing spend, you could talk about it at the surface level. But we have a lot of organizations that manage spend to a depth that includes performing strategic supplier relationship management, supplier risk management, and deep spend analysis. The ability to manage at this depth distinguishes top performers from average performers.

Gardner: As we know, Kay Ree, many people most trust their cohorts, people in other companies doing the same function they are, for business acumen. So this information is great, because we’re learning from the people that are doing it in the field and doing it well. What are some of the other traits that you uncovered in your research?

Lee: Let me go back to the first trait. The first one that we saw that drove top performing organizations was that top performers play a strategic role within the organization. They manage more spend and they manage that spend at a deep level.

One of the stats I’ll share is that top performers see 36 percent higher spend under management compared to the average organization. And they do this by playing a strategic role in the organization. They’re not just processing transactions. They have a seat at the leadership table. They’re part of the business in making decisions. They’re part of the planning, budgeting, and financial processes.

They also ensure that they’re working collaboratively with their stakeholders to ensure that procurement is viewed as a trusted business adviser, not an administrator or a gatekeeper. That’s really the first trait that we saw that distinguishes top performers.

The second one is that top performers have an integrated platform for all procurement spend, and they conduct regular stakeholder spend reviews — resulting in higher sourcing savings.

And this is key. They conduct quarterly, or even more frequent, meetings with the businesses to review their spend. These reviews serve different purposes. They provide a forum for discussing various sourcing opportunities.

Imagine going to the business unit to talk to them about their spend from the previous year. “Here is who you have spent money with. What is your plan for the upcoming year? What spend categories can we help you source? What’s your priority for the upcoming year? Are there any capital projects that we can help out with?”

Sourcing opportunities

It’s understanding the business and its stakeholders’ requirements that helps procurement identify additional sourcing opportunities. Then it’s collaborating with the businesses, being proactive, and staying responsive and agile to those stakeholder requirements. That’s the second finding that we saw from the survey.

The third one is that top performers manage procure-to-pay as an end-to-end process with a single point of accountability, and this really drives higher purchase order (PO) and invoicing efficiency. This one is quite straightforward. Our quantitative and qualitative research tells us that having a single point of accountability drives a higher transactional efficiency.

Gardner: I can speak to that personally. In too many instances, I work with companies where one hand doesn’t know what the other is doing, and there is finger pointing. Any kind of exception management becomes bogged down, because there isn’t that point of accountability. I think that’s super important.

Lee: We see that as well. Top performers operationalize savings after they have sourced spend categories and captured negotiated savings. The question then becomes how to operationalize negotiated savings so that they become actual savings. The way top performers approach it is to manage compliance for those sourced categories by creating fit-for-purpose purchasing strategies. So, they drive more spend toward contracts and electronic catalogs through a guided buying experience.

You do that by having available to your stakeholders contracts and catalogs that would guide them to the negotiated pricing, so that they don’t have to enter pricing, which would then dilute your savings. Top performers also look at working capital, and they look at it closely, with the ability to analyze historical payment trends and then optimize payment instruments resulting in higher discounts.

Sometimes, working capital is not a priority for procurement because it’s left to the accounts payable (AP) function, but top-performing procurement organizations look at it holistically, as another lever that they manage within the sourcing and procure-to-pay process.

So, it’s another negotiation point during sourcing: take advantage of opportunities to standardize payment terms, take discounts when warranted, and look at historical data to build a strategy, and variations of that strategy, for how to pay strategic suppliers. What’s the payment term for standard suppliers? When do we pay on terms versus take discounts? And when do we pay on a P-Card? They look at working capital holistically as part of their entire procurement process.

Gardner: It really shows where being agile and intelligent can have major benefits in terms of your ability to time and enforce delivery of goods and services — and also get the best price in the market. That’s very cool.

Lee: And having all of that information, plus the ability to transact efficiently, is key. Let’s say you have all the information, but you can’t transact efficiently; you’re slow to make invoice payments, for example. Then, while you have a strategy and approach, you can’t even make a change there related to working capital. So, it’s important to be able to do both, so that you have the options and the flexibility to operationalize that strategy.

Top performers leverage technology and provide self-service to enable around-the-clock business. This really helps organizations drive down cycle time for PO processing.

Within the oil and gas sector, for example, it’s critical for organizations to get the items out to the field, because if they don’t, they may jeopardize operations on a large scale. Offering the ability to perform self-service and to enable that 24×7 gives organizations flexibility and offers the users the ability to maneuver themselves around the system quite easily. Systems nowadays are quite user-friendly. Let the users do their work, trust them in doing their work, so that they can purchase the items they need to, when they want to.

User experience

Gardner: Kay Ree, this really points out the importance of the user experience, and not just your end-user customers, but your internal employee users and how younger folks, millennials in particular, expect that self-service capability.

Lee: That’s right. Purchasing shouldn’t be any different. We should follow the lead of other industries and other mobile apps and allow users to do self-service. If you want to buy something, you go out there, you pick the item, the pricing is out there, it’s negotiated pricing, so you pick the item, and then let’s go.

Gardner: That’s enabling a lot of productivity. That’s great. Okay, last one.

Lee: The last one is that top performers leverage technology to automate PO and invoice processing to increase administrative efficiency. What we see is that best-in-class organizations exploit the various features and functionality within the technology itself to achieve that efficiency.

An example of this is the ability to collaborate with suppliers on the requisitioning process. Perhaps you’re doing three bids and a buy, and during that process you’re not picking up the phone anymore. You list out your requirements for what you’re trying to buy and send them automatically to three suppliers. They provide responses, you pick one, and the system converts the requirements into a PO.

So that flexibility by leveraging technology is key.
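The three-bids-and-a-buy flow described above can be sketched in a few lines. The types and names below are purely illustrative, not the SAP Ariba API:

```python
# Hypothetical sketch of an automated "three bids and a buy" event:
# requirements go out to three suppliers, responses come back, and the
# winning bid is converted into a purchase order without phone calls.
from dataclasses import dataclass

@dataclass
class Bid:
    supplier: str
    price: float

@dataclass
class PurchaseOrder:
    supplier: str
    item: str
    price: float

def award_three_bid_event(item: str, bids: list) -> PurchaseOrder:
    """Pick the lowest-price response and convert it into a PO."""
    winner = min(bids, key=lambda b: b.price)
    return PurchaseOrder(supplier=winner.supplier, item=item, price=winner.price)

bids = [Bid("Supplier A", 1200.0), Bid("Supplier B", 1150.0), Bid("Supplier C", 1300.0)]
po = award_three_bid_event("machine part", bids)
print(po.supplier, po.price)  # Supplier B 1150.0
```

A real event would score on more than price (lead time, quality, risk), but the shape is the same: structured requirements out, structured bids in, an award converted straight into a PO.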

Gardner: Of course, we expect even more technology to become involved with business processes. We hear about the Internet of Things (IoT), more data, more measurement, and more scientific data analysis applied to what may once have been gut-instinct business decision-making; now it’s more empirical. So we should expect even more technology to be brought to bear on many of these processes in the next several years, and that’s important to see elevated to a top trait.

All right. What I really like about this, Kay Ree, is that this information is not just academic theory or prediction; this is what organizations are actually doing. Do we have any way of demonstrating what you get in return? If these are best practices as the marketplace defines them, what does the marketplace see when it adopts these principles? What do they get for this innovation? Brass tacks — money, productivity, and benefits — what are the real paybacks?

Lee: I’ll share stats for top performers. Top performers are able to achieve about 7.8 percent in savings per year as a percent of sourced spend. That’s the key monetary benefit that most organizations look to.

Gardner: And 7.8 percent to someone who’s not familiar with what we’re talking about might not seem large, but this is a huge amount of money for many companies.

Lee: That’s right. Per billion dollars, that’s $78 million.
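The arithmetic behind that figure is easy to verify:

```python
# Checking the figure Lee cites: 7.8 percent savings on $1 billion of
# sourced spend.
sourced_spend = 1_000_000_000              # dollars of sourced spend
savings_rate = 0.078                       # 7.8 percent
print(round(sourced_spend * savings_rate))  # 78000000 -> $78 million per billion
```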

Efficient processing

They also manage more than 80 percent of their spend, and they manage this spend to a greater depth by having the right tools to do it — processing transactions efficiently, managing contracts, and managing compliance. And they have data that lets them run deeper spend analysis. That’s a key business benefit for organizations that are looking to transact over the network and leverage more technology.

Top performers also transact and collaborate electronically with suppliers to achieve a 99 percent-plus electronic PO rate. Best-in-class organizations don’t even attach a PDF to an email anymore. They create a requisition, it gets approved, it becomes a PO, and it is automatically sent to the supplier. No one is involved. So the entire process becomes touchless.

Gardner: These traits promote the automation that then leads to better data, which allows for better processes, and so on. It really is a virtuous cycle that you can get into when you do this.

Lee: That’s right. One leads to another.

Gardner: Are there other ways that we’re seeing paybacks?

Lee: The proof of the pudding is in the eating. I’ll share a couple of examples from my experience looking at data for specific companies. One organization used collaboration and sourcing tools to source transportation lanes, obtain better negotiated rates, and drive higher sourcing savings.

A lot of organizations use collaboration and sourcing tools, but transportation is interesting because there are different ways to source it. Running it through an eSourcing tool and generating a high percentage of savings through collaboration was an eye-opener for me. That’s an example of an organization really using technology to its benefit by sourcing an uncommon spend category.

For another example, I have a customer that was really struggling to get control of their operational costs related to transaction processing while trying to drive a high degree of compliance. Their cost structure was high; they wanted to keep it lower, but still maintain that compliance.

When we looked at their benchmark data, it helped the customer understand how to drive improvements: directing transactions to catalogs and contracts where applicable, driving suppliers to create invoice-based contracts in the Ariba Network, and enabling more suppliers to invoice electronically. This increased administrative efficiency and reduced the invoice errors that had been causing a lot of rework for the AP team.

So, these two examples, in addition to the quantitative benefits, show the tremendous opportunity organizations have to adopt and leverage some of these technologies.

Virtuous cycle

Gardner: So, we’re seeing more technology become available, and more data and analytics, as business networks are built out in size, breadth, and depth, and we’ve identified that the paybacks can lead to a virtuous cycle of improvement.

Where do you see things going now that you’ve had a chance to really dig into this data and see these best practices in actual daily occurrence? What would you see happening in the future? How can we extrapolate from what we’ve learned in the market to what we should expect to see in the market?

Lee: We’re still only just scratching the surface with insights. We have a roadmap of advanced insights that we’re planning for our customers that will allow us to further leverage the insights and intelligence embedded in our network to help our customers increase efficiency in operations and effectiveness of sourcing.

Gardner: It sounds very exciting, and I think we can also consider bringing artificial intelligence and machine learning capabilities into this as we use cloud computing, so that information and insights are shared through a sophisticated infrastructure and services-delivery approach. Who knows where we might start seeing the ability to analyze these processes and add all sorts of new value-added benefits and transactional efficiency? It’s going to be really exciting in the next several years.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: SAP Ariba.

How flash storage provides competitive edge for Canadian music service provider SOCAN

The next BriefingsDirect Voice of the Customer digital business transformation case study examines how Canadian nonprofit SOCAN faced digital disruption and fought back with a successful storage modernizing journey. We’ll learn how adopting storage innovation allows for faster responses to end-user needs and opens the door to new business opportunities.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To describe how SOCAN gained a new competitive capability for its performance rights management business we’re joined by Trevor Jackson, Director of IT Infrastructure for SOCAN, the Society of Composers, Authors and Music Publishers of Canada, based in Toronto. The discussion is moderated by BriefingsDirect’s Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: The music business has changed a lot in the past five years or so. There are lots of interesting things going on with licensing models and people wanting to get access to music, but people also wanting to control their own art.

Tell us about some of the drivers for your organization, and then also about some of your technology decisions.

Jackson: We’ve traditionally been handling performances of music, which is radio stations, television and movies. Over the last 10 or 15 years, with the advent of YouTube, Spotify, Netflix, and digital streaming services, we’re seeing a huge increase in the volume of data that we have to digest and analyze as an organization.

Gardner: And what function do you serve? For those who might not be familiar with your organization, or this type of organization, tell us the role you play in the music and content industries.

Play music ethically

Jackson: At a very high level, what we do is license the use of music in Canada. What that means is that we allow businesses through licensing to ethically play any type of music they want within their environment. Whether it’s a bar, restaurant, television station, or a radio station, we collect the royalties on behalf of the creators of the music and then redistribute that to them.

We’re a not-for-profit organization. Anything that we don’t spend on running the business, which is the collecting, processing, and payment of those royalties, goes back to the creators or the publishers of the music.

Gardner: When you talk about data, tell us about the type of data you collect in order to accomplish that mission?

Jackson: It’s all kinds of data. For the most part, it’s unstructured. We collect it from many different sources, again radio and television stations, and of course, YouTube is another example.

There are some standards, but one of the challenges is that we have to do data transformation to ensure that, once we get the data, we can analyze it and it fits into our databases, so that we can do the processing on information.
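In miniature, the transformation step Jackson mentions might look something like this. The provider layouts and field names are invented for illustration; real feeds from YouTube or broadcasters differ:

```python
# Hypothetical sketch of normalizing provider-specific usage rows into one
# common schema before database load. Field names are invented.

def normalize_row(provider: str, row: dict) -> dict:
    """Map one provider-specific performance row onto a common schema."""
    if provider == "youtube":
        return {"work_id": row["video_ref"],
                "played_at": row["ts"],
                "plays": int(row["view_count"])}
    if provider == "radio":
        return {"work_id": row["song_code"],
                "played_at": row["air_time"],
                "plays": 1}  # one row per on-air spin
    raise ValueError(f"unknown provider: {provider}")

rec = normalize_row("youtube", {"video_ref": "W123", "ts": "2016-06-01", "view_count": "42"})
print(rec)  # {'work_id': 'W123', 'played_at': '2016-06-01', 'plays': 42}
```

The point is that once every source is reshaped into the common schema, the downstream matching and royalty calculations only need to handle one format.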

Gardner: And what sort of data volumes are we talking about here?

Jackson: We’re not talking about petabytes, but the thing about performance information is that it’s very granular. For example, the files that YouTube sends to us may have billions of rows for all the performances that are played as they go through their monthly cycle; it’s the same thing with radio stations.

We don’t store any digital files or copies of music. It’s all performance-related information — the song that was played and when it was played. That’s the type of information that we analyze.

Gardner: So, it’s metadata about what’s been going on in terms of how these performances have been used and played. Where were you two years ago in this journey, and how have things changed for you in terms of what you can do with the data and how performance of your data is benefiting your business?

Jackson: We’ve been on flash for almost two years now. About two and a half years ago, we realized that the storage area network (SAN) that we did have, which was a traditional tiered-storage array, just didn’t have the throughput or the input/output operations per second (IOPS) to handle the explosive amount of data that we were seeing.

With YouTube coming online, as well as Spotify, we knew we had to do something about that. We had to increase our throughput.

Performance requirements

Gardner: Are you generating reports from this data at a certain frequency or is there streaming? How is the output in terms of performance requirements?

Jackson: We ingest a lot of data from the data-source providers. We have to analyze what was played, who owns the works that were played, correlate that with our database, and then ensure that the monies are paid out accordingly.

Gardner: Are these reports for the generation of the money done by the hour, day, or week? How frequently do you have to make that analysis?

Jackson: We do what we call a distribution, which is a payment of royalties, once a quarter. When we’re doing a payment on a distribution, it’s typically on performances that occurred nine months prior to the day of the distribution.

Gardner: What did you do two and a half years ago in terms of moving to flash and solid state disk (SSD) technologies? How did you integrate that into your existing infrastructure, or create the infrastructure to accommodate that, and then what did you get for it?

Jackson: When we started looking at another solution to improve our throughput, we actually started looking at another tiered-storage array. I came to the HPE Discover [conference] about two years ago and saw the presentation on the all-flash [3PAR Storage portfolio] that they were talking about, the benefits of all-flash for the price of spinning disk, which was to me very intriguing.

I met with some of the HPE engineers and had a deep-dive discussion on how they were doing this magic that they were claiming. We had a really good discussion, and when I went back to Toronto, I also met with some HPE engineers in the Toronto offices. I brought my technical team with me to do a bit of a deeper dive and just to kick the tires to understand fully what they were proposing.

We came away from that meeting very intrigued and very happy with what we saw. From then on, we made the leap to purchase the HPE storage. We’ve had it running for about [two years] now, and it’s been running very well for us.

Gardner: What sort of metrics do you have in terms of technology, speeds and feeds, but also metrics in terms of business value and economics?

Jackson: I don’t want to get into too much detail, but as an anecdote, we saw some processes that we were running going from days to hours just by putting it on all-flash. To us, that’s a huge improvement.

Gardner: What other benefits have you gotten? Are there some analytics benefits, backup and recovery benefits, or data lifecycle management benefits?

OPEX perspective

Jackson: Looking at it from an OPEX perspective, because of the IOPS that we have available to us, planning maintenance windows has actually been a lot easier for the team.

Before, we would have to plan something akin to landing the space shuttle. We had to make sure that we weren’t doing it during a certain time, because it could affect the batch processes. Then, we’d potentially be late on our payments, our distributions. Because we have so many IOPS on tap, we’re able to do these maintenance windows within business hours. The guys are happier because they have a greater work-life balance.

The other benefit that we saw was that all-flash uses less power than spinning disk. With less power, there’s less heat, and a need for less floor space. Of course, speed is the number one driving factor for a company to go all-flash.

Gardner: In terms of automation, integration, load-balancing, and some of those other benefits that come with flash storage media environments, were you able to use some of your IT folks for other innovation projects, rather than speeds and feeds projects?

Jackson: When you’re freeing up resources from keeping the lights on, it’s adding more value to the business. IT traditionally is a cost center, but now we can take those resources off of the day-to-day mundane tasks and put them into projects, which is what we’ve been doing. We’re able to add greater benefit to our members.

Gardner: And has your experience with flash in modernizing your storage prompted you to move toward other infrastructure modernization techniques, including virtualization, software-defined composable infrastructure, maybe hyperconverged? Is this an end point for you or maybe a starting point?

Jackson: IT is always changing, always transforming, and we’re definitely looking at other technologies.

Some of the big buzzwords out there, blockchain, machine learning, and whatnot are things that we’re looking at very closely as an organization. We know our business very well and we’re hoping to leverage that knowledge with technology to further drive our business forward.

Gardner: We’re hearing a lot of promising visions these days about how machine learning could be brought to bear on things like data transformation and making that analysis better, faster, and cheaper. So that’s pretty interesting stuff.

Are you now looking to extend what you do? Is the technology an enabler more than a cost center in some ways for your general SOCAN vision and mission?

Jackson: Absolutely. We’re in the music business, but there is no way we can do what we do without technology; technically it’s impossible. We’re constantly looking at ways that we can leverage what we have today, as well as what’s out in the marketplace or coming down the pipe, to ensure that we can definitely add the value to our members to ensure that they’re paid and compensated for their hard work.

Gardner: And user experience and user quality of experience are top-of-mind for everybody these days.

Jackson: Absolutely, that’s very true.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

Strategic DevOps—How advanced testing brings broad benefits to Independent Health

The next BriefingsDirect Voice of the Customer digital business transformation case study highlights how Independent Health in Buffalo, New York has entered into a next phase of “strategic DevOps.”

After a two-year drive to improve software development, speed to value, and improved user experience of customer service applications, Independent Health has further extended advanced testing benefits to ongoing apps production and ongoing performance monitoring.

Learn here how the reuse of proven performance scripts and replaying of synthetic transactions that mimic user experience have cut costs and gained early warning and trending insights into app behaviors and system status.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to describe how to attain such new strategic levels of DevOps benefits are Chris Trimper, Manager of Quality Assurance Engineering at Independent Health in Buffalo, New York, and Todd DeCapua, Senior Director of Technology and Product Innovation at CSC Digital Brand Services Division and former Chief Technology Evangelist at Hewlett Packard Enterprise (HPE). The discussion is moderated by BriefingsDirect’s Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What were the major drivers that led you to increase the way in which you use DevOps, particularly when you’re looking at user-experience issues in the field and in production?

Trimper: We were really hoping to get a better understanding of our users and their experiences. The way I always describe it to folks is that we wanted to have that opportunity to almost look over their shoulder and understand how the system was performing for them.

Whether your user is internal or external, if they don’t have a good user experience, they’re going to be very frustrated. Internally, time is money; if it takes longer for things to happen, you get frustration and potential turnover. It’s an unfortunate barrier.

Gardner: What kind of applications are we talking about? Is this across the spectrum of different types of apps, or did you focus on one particular type of app to start out?

End users important

Trimper: Well, when we started, we knew that the end users, our members, were the most important thing to us, and we started off with the applications that our servicing center uses, specifically our customer relationship management (CRM) tool.

If the member information doesn’t pop fast when a member calls, it can lead to poor call quality, queuing up calls, and it just slows down the whole business. We pride ourselves on our commitment to our members. That goes even as far as, when you call up, making sure that the person on the other end of the phone can service you well. Unfortunately, they can only service you as well as the data that’s provided to them to understand the member and their benefits.

Gardner: It’s one thing to look at user experience through performance, but it’s an additional dimension when you look at user experience in terms of how users utilize the application, how well it suits their particular workflows, or the processes of their line of business. Are you able to take that additional step, where the feedback is about how users behave and react in a business setting in addition to just how the application performs?

Trimper: We’re starting to get to that point. Before, we only had as much information as we were provided about how an application was used or what they were doing. Obviously, you can’t stand there and watch what they’re doing 24×7.

Lately, we’ve been consuming an immense amount of log data from our systems and understanding what they’re doing, so that we can understand their problems and their woes, or make sure that what we’re testing, whether it’s in production monitoring or pre-production testing, is an accurate representation of our user. Again, whether it’s internal or external, they’re both just as valuable to us.

Gardner: Before we go any further, Chris, tell us a little bit about Independent Health. What kind of organization is it, how big is it, and what sort of services do you provide in your communities?

Get the New Book
On Effective
Performance Engineering

Trimper: We’re a healthcare company for the Western New York area. We’re a smaller organization. We’ve defined the RedShirt Treatment, which stands for the best quality care that we can provide our members. We try to be very proactive in everything that we do for our members as well. We work with members and their providers on preventive care, the healthier lifestyle that everybody is striving for.

Gardner: Todd, we’re hearing this interesting progression toward a feedback loop of moving beyond performance monitoring into behaviors and use patterns and improving that user experience. How common is that, or is Independent Health on the bleeding edge?

Ahead of the curve

DeCapua: Independent Health is definitely moving with, or maybe a little bit ahead of, the curve in the way that they’re leveraging some of these capabilities.

If we were to step back and look at where we’ve been from an industry perspective across many different markets, Agile was hot, and now, as you start to use Agile and break all the right internal systems for all the right reasons, you have to start adopting some of these DevOps practices.

Independent Health is moving a little bit ahead on some of those pieces, and they’re probably focusing on a lot of the right things, when you look across other customers I work with. It’s things like speed of time to value. That goes across technology teams, business teams, and they’re really focused on their end customer, because they’re talking about getting these new feature functions to benefit their end customers for all the right reasons.

You heard Chris talking about that improved end-user experience around their customer service applications. This is when people are calling in, and you’re using tools to see what’s going on and what your end users are doing.

There’s another organization that actually recorded what their customers were doing when they were having issues. That was a production-monitoring type thing, but now you’re recording a video of it. If you called within 10 minutes of having that online issue, as you were speaking with that customer service representative, they were able to watch the video and see exactly what you did to trigger the error that caused the phone call. So capturing these different types of user exceptions, and being able to do the type of production monitoring that Independent Health is doing, is fantastic.

Another area that Chris was telling me about is some of the social media aspects and being able to monitor that is another way of getting feedback. Now, I do think that Independent Health is hitting the bleeding edge on that piece. That’s what I’ve observed.

Gardner: Let’s hear some more about that social media aspect, getting additional input, additional data through all the available channels that you can.

Trimper: It would be foolish not to pay attention to all aspects of our members, and we’re very careful to make sure that they’re getting that quality that we try to aim for. Whether it happens to be Facebook, Twitter, or some other mechanism that they give us feedback on, we take all that feedback very seriously.

I remember an instance or two where there might have been some negative feedback. That went right to the product-management team to try to figure out how to make that person’s experience better. It’s interesting, from a healthcare perspective, thinking about that. Normally, you think about a member’s copay or their experience in the hospital. Now, it’s their experience with this application or this web app, but those are all just as important to us.

Broadened out?

Gardner: You started this with those customer-care applications. Has this broadened out into other application development? How do you plan to take the benefits that you’ve enjoyed early and extend them into more and more aspects of your overall IT organization?

Trimper: We started off with the customer service applications and we’ve grown it into observing our provider portals, where a provider can come in and look at a member’s benefits, as well as the member portal that members actually log in to. So, we’re actually doing production monitoring of pretty much all of our key areas.

We also do pre-production monitoring of it. So, as we are doing a release, we don’t have to wait until it gets to production to understand how it went. We’re going a little bit beyond normal performance testing. We’re running the same exact types of continuous monitoring in both our pre-production region and our production regions to ensure that quality that we love to provide.

Gardner: And how are the operations people taking this? Has this been building bridges? Has this been something that struck them as a foreign entity in their domain? How has that gone?

Trimper: At first, it was a little interesting. It felt like to them it was just another thing that they had to check out and had to look at, but I took a unique approach with it. I sat down and talked to them personally and said, “You hear about all these problems that people have, and it’s impossible for you to be an expert on all these applications and understand how it works. Luckily, coming from the quality organization, we test them all the time and we know the business processes.”

The way I sold it to them is, when you see an alert, when you look at the statistics, it’s for these key business processes that you hear about, but you may not necessarily want to know all the details about them or have the time to do that. So, we really gave them insight into the applications.

As far as the alerting, there was a little bit of an adoption practice for that, but overall we’ve noticed a decrease in the number of support tickets for applications, because we’re allowing them to be more proactive, whether it’s proactive of an unfortunately blown service-level agreement (SLA), or it’s a degradation in quality of the performance. We can observe both of those, and then they can react appropriately.

Gardner: Todd, he actually sat down and talked to the production people. Is this something novel? Are we seeing more of that these days?

DeCapua: We’re definitely seeing more of it, and I know it’s not unique for Chris. I know there was some push back at the beginning from the operations teams.

There was another thing that was interesting. I was waiting for Chris to hit on it, and maybe he can go into it a little bit more. It was the way that he rolled this out. When you’re bringing a monitoring solution in, it’s often the ops team that’s bringing in this solution.

Making it visible

What’s changing now is that you have these application-development testing teams that are saying, “We also want to be able to get access to these types of monitoring, so that our teams can see it and we can improve what we are doing and improve the quality of what we deliver to you, the ops teams. We are going to do instrumenting and everything else that we want to get this type of detail to make it visible.”

Chris was sharing with me how he made this available first to the directors, and not just one group of directors, but all the directors, making this very plain-sight visible, and helping to drive some of the support for the change that needed to happen across the entire organization.

As we think about that as a proven practice, maybe Chris is one of the people blazing the trail there. It was a big way of improving and helping to illuminate for all parties, this is what’s happening, and again, we want to work to deliver better quality.

Gardner: Anything to add to that, Chris?

Trimper: There were several folks in the development area who weren’t necessarily the happiest when they learned that what they originally thought was there and what was really there, in terms of performance, didn’t match.

One of the directors shared an experience with me. He would go into our utilities and look at the dashboards before he was heading to a meeting in our customer service center. He would understand what kind of looks he was going to be given when he walked in, because he was directly responsible for the functionality and performance of all this stuff.

He was pleased that, as they went through different releases and were able to continually make things better, he started seeing everything is green, everything is great today. So, when I walk in, it’s going to be sunshine and happiness, and it was sunshine and happiness, as opposed to potentially a little bit doomy and gloomy. It’s been a really great experience for everyone to have. There’s a little bit of pain going through it, but eventually, it has been seen as a very positive thing.

Gardner: What about the tools that you have in place? What allows you to provide these organizational and cultural benefits? It seems to me that you need to have data in your hands. You need to have some ability to execute once you have got that data. What’s the technology side of this; we’ve heard quite a bit about the people and the process?

Trimper: This whole thing came about because our CIO came to me and said, “We need to know more about our production systems. I know that your team is doing all the performance testing in pre-production. Some of the folks at HPE told me about this new tool called Performance Anywhere. Here it is, check it out, and get back to me.”

We were doing all the pre-production testing, and we learned that the scripts we had, which were already tried and true and continuously updated as we get new releases, could just be turned into these production monitors. Through our trial, and now more than two years of working with the tool, we’ve found it to be a fairly easy process.

Difficult point

The most difficult point was understanding how to get production data that we could work with, but you could literally take a tested VUGen script and turn it into a production monitor in 5-10 minutes, and that was pretty invaluable to us.

That means that every time we get a release, we don’t have to modify two sets of scripts and we don’t have two different teams working on everything. We have one team that is involved in the full life cycle of these releases and that can very knowledgeably make the change to those production monitors.

Gardner: HPE Performance Anywhere. Todd, are lot of people using it in the same fashion where they’re getting this dual benefit from pre-production and also in deployment and operations?

DeCapua: Yes, it’s definitely something that more and more people are becoming aware of. It’s a capability that’s been around for a little while. You’ll also hear about things like IT4IT, but I don’t want to open up that whole can of worms unless we want to dive into it. But as that starts to happen, people like Chris and his CIO want to be able to get better visibility into all systems that are in production, and an easy way to do that. Being able to provide that easy way for all of your stakeholders and all of your customers is a capability that we’re definitely seeing people adopt.

Gardner: Can you provide a bit more detail in terms of the actual products and services that made this possible for you, Chris?

Trimper: We started with our HPE LoadRunner scripts, specifically the VUGen scripts, that we were able to turn into the production monitors. Using the AppPulse Active tool from the AppPulse suite of tools, we were able to build our scripts using their SaaS infrastructure and have these monitors built for us and available to test our systems.

Gardner: So what do you see in your call center? Are you able to analyze it in any way and say, “We can point to these improvements, these benefits, from our ability to tie the loop back between production and quality assurance?”

Trimper: We can do a lot of trend analysis. To be perfectly honest, we didn’t think that the report would run, but we did a year-to-date trend analysis and it actually was able to compile all of our statistics. We saw two really neat things.

When we had open enrollment, we saw a little spike that shot up, which we would expect to see, and hopefully we can be more prepared for it as time goes on. But we also saw a gradual decrease. Due to the ability to monitor, react, and plan for a better-performing system through the course of the year, for the one key process of pulling member data we went from an average of about 12-14 seconds down to 4 seconds, and that trend is continuing to go down.

I don’t know if it’s now 3 or less today, but if you think about that 12 or 14 down to about 4, that was a really big improvement, and it spoke volumes to our capabilities of really understanding that whole picture and being able to see all of that in one place was really helpful to us.
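As a back-of-the-envelope check on the numbers Trimper cites (an illustration only, not Independent Health’s actual reporting), going from roughly 13 seconds (the midpoint of 12-14) to 4 seconds is about a 69 percent reduction:

```python
def improvement_pct(before_s: float, after_s: float) -> float:
    """Percentage reduction in response time."""
    return (before_s - after_s) / before_s * 100

# Member-data lookups: roughly 13 s (midpoint of 12-14) down to 4 s.
print(f"{improvement_pct(13.0, 4.0):.0f}% faster")  # 69% faster
```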

Where next?

Gardner: Looking to the future, now that you’ve made feedback loops demonstrate important business benefits and even move into a performance benefit for the business at large, where can you go next? Perhaps you’re looking at security and privacy issues, given that you’re dealing with compliance and regulatory requirements like most other healthcare organizations. Can you start to employ these methods and these tools to improve other aspects of your SLAs?

Trimper: Definitely, in terms of the SLAs and making sure that we’re keeping everything alive and well. As for some of the security aspects, those are channels we haven’t necessarily gone down yet. But we’ve started to realize that there are an awful lot of places where we can either tie back or really start closing the gaps in our understanding of everything that makes up our systems.

Gardner: Todd, last word, what should people be thinking about when they look at their tooling for quality assurance and extending those benefits into full production and maybe doing some cultural bonding at the same time?

DeCapua: The culture is a huge piece. No matter what we talk about nowadays, it starts with that. When I look at somebody like Independent Health, the focus of that culture and the organization is on their end user, on their customer.

When you look at what Chris and his team have been able to do, at a minimum, it’s reducing the number of production incidents. And while you’re reducing production incidents, you’re doing a number of things. There are actual hard costs that you’re saving. There are opportunity costs now that you can have these resources working on other things to benefit that end customer.

We’ve talked a lot about DevOps, we’ve talked a lot about monitoring, we’ve mentioned now culture, but where is that focus for your organization? How is it that you can start small and incrementally show that value? Because now, what you’re going to do is be able to illustrate that in maybe two or three slides, two or three pages.

But some of the things that Chris has been doing, and other organizations are also doing, is showing, “We did this, we made this investment, this is the return we got, and here’s the value.” For Independent Health, their customers have a choice, and if you’re able to move their experience from 12-14 seconds to 4 seconds, that’s going to help. That’s going to be something that Independent Health wants to be able to share with their potential new customers.

As far as acquiring new customers and retaining their existing customers, this is the real value. That’s probably my ending point. It’s a culture, there are tools that are involved, but what is the value to the organization around that culture and how is it that you can then take that and use that to gain further support as you move forward?

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

How always-available data forms the digital lifeblood for a university medical center

The next BriefingsDirect Voice of the Customer digital business transformation case study examines how the Nebraska Medical Center in Omaha consolidated and unified its data-protection capacities.

We’ll explore how adopting storage innovation protects the state’s largest hospital from data disruption and adds operational simplicity to complex data lifecycle management.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To describe how more than 150 terabytes of data remain safe and sound, we’re joined by Jeff Bergholz, Manager of Technical Systems at The Nebraska Medical Center in Omaha. The discussion is moderated by BriefingsDirect’s Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us about the major drivers that led you to seek a new backup strategy as a way to keep your data sound and available no matter what.

Bergholz: At Nebraska Medicine, we consist of three hospitals with multiple data centers. We try to keep an active-active data center going. Epic is our electronic medical record (EMR) system, and with that, we have a challenge of making sure that we protect patient data as well as keeping it highly available and redundant.

We were on HPE storage for that, and with it, were really only able to do a clone-type process between data centers and keep retention of that data, but it was a very traditional approach.

A couple of years ago, we did a beta program with HPE on the P6200 platform, creating a tertiary replica of our patient data. Building on that, this past year we augmented our data protection suite. We went from license-based to capacity-based, and we introduced some new disk-to-disk (D2D) dedupe devices, StoreOnce as well. What that affords us is the ability to easily replicate that data over to another StoreOnce appliance with minimal disruption.

Part of our goal is to keep backups available for potential recovery. With all the cyber threats going on in today’s world, we’ve recently increased our retention cycle from 7 weeks to 52 weeks. We heard from the analysts that the average vulnerability sits in your system for 205 to 210 days. So, we had to come up with a plan for what it would take to provide recovery in case something were to happen.
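The arithmetic behind that retention decision is simple but worth making explicit. This is a hypothetical sketch (not Nebraska Medicine’s actual tooling): the old 7-week window reaches back only 49 days, far short of a 205-210-day attacker dwell time, while 52 weeks reaches back 364 days.

```python
def retention_covers_dwell(retention_weeks: int, dwell_days: int = 210) -> bool:
    """True if the backup retention window reaches back past the
    average time a vulnerability sits undetected in a system."""
    return retention_weeks * 7 >= dwell_days

print(retention_covers_dwell(7))    # False: only 49 days of retention
print(retention_covers_dwell(52))   # True: 364 days of retention
```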

We came up with a long-term solution and we’re enacting it now. Combining HPE 3PAR storage with StoreOnce, we’re able to more easily move data throughout our system. What’s important is that our backup windows have been greatly improved. What used to take us 24 hours now takes us 12, and we’re able to guarantee that we have multiple copies of the EMR in multiple locations.

We demonstrate it, because we’re tested at least quarterly by Epic as to whether we can restore back to where we were before. Not only are we backing it up, we’re also testing and ensuring that we’re able to reproduce that data.

More intelligent approach

Gardner: So it sounds like a much more intelligent approach to backup and recovery with the dedupe, a lower cost in storage, and the ability to do more with that data now that it’s parsed in such a way that it’s available for the right reason at the right time.

Bergholz: Resource-wise, we always have to do more with less. With our main EMR, we’re looking at potentially 150 terabytes of data, which dedupe shrinks down greatly, and our overall storage footprint across all other systems was approaching 4 petabytes.

We’ve seen some 30:1 deduplication ratios, which has really allowed my staff and other engineers to be more efficient and frees up some of their time for other things, as opposed to having to manage the normal backup and retention.
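Those ratios translate directly into physical capacity. As an illustrative calculation (figures rounded from the interview; not an actual sizing tool), 150 TB of logical EMR backup data at 30:1 needs only about 5 TB on disk:

```python
def physical_tb(logical_tb: float, dedupe_ratio: float) -> float:
    """Physical capacity required after deduplication."""
    return logical_tb / dedupe_ratio

print(physical_tb(150, 30))  # 5.0
```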

HPE Data Protector:
Backup with Brains
Learn More Here

We’re always challenged to do more and more. We grow 20-30 percent annually, but we’re not going to get 20 to 30 percent more resources every year. So, we have to work smarter with less and leverage the technologies that we have.

Gardner: Many organizations these days are using hybrid media across their storage requirements. The old adage was that for backup and recovery, use the cheaper, slower media. Do you have a different approach to that and have you gone in a different direction?

Bergholz: We do, and backup is as important to us as the data that exists out there. Time and time again, we’ve had to demonstrate the ability to restore in different scenarios within the accepted time to restore and bring the service back. Clinicians and caregivers taking care of patients aren’t going to wait; they want that data as quickly as possible. While it may not be the EMR, it may be some ancillary documents that they need in order to provide better care.

We’re able, upon request, to enact a restore in 5 to 10 minutes. In many cases, once we receive a ticket or a notification, we have full data restoration within 15 minutes.

Gardner: Is that to say that you’re all flash, all SSD, or some combination? How did you accomplish that very impressive recovery rate?

Bergholz: We’re pretty much all dedupe-type devices. It’s not necessarily SSD, but it’s good spinning disk, and we have the technology in place to replicate that data and have it highly available on spinning disk, versus having to go to tape to do the restoration. We deal with bunches of restorations on a daily basis. It’s something we’re accustomed to and our customers require quick restoration.

In a consolidated strategic approach, we put the technology behind it. We didn’t buy the cheapest; we did what was best. Having an active-active data center and backing up across both data centers enables us to do it. So, we did spend money on the backup portion, because it’s important to our organization.

Gardner: You mentioned capacity-based pricing. For those of our listeners and readers who might not be familiar with that, what is that and why was that a benefit to you?

Bit of a struggle

Bergholz: It was a little bit of a struggle for us. We were always traditionally client-based or application-based in the backup. If we needed Microsoft Exchange mailboxes, we had to have an Exchange plug-in. If we had Oracle, we had to have an Oracle plug-in, a SQL plug-in.

While that was great and enabled us to do a lot, we were always having to get another plug-in to do it. When we saw the dedupe compression ratios we were getting, going to a capacity-based license allowed us to strategically and tactically plan for any increase in our environment. So now, we can buy in chunklets and keep ahead of the game, making sure that we’re effective there.
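Given the 20-30 percent annual growth Bergholz mentions, planning those chunklet purchases reduces to a simple compound-growth projection. The sketch below is hypothetical (not an SAP or HPE licensing tool):

```python
def forecast_capacity(current_tb: float, growth_rate: float, years: int) -> list[float]:
    """Project protected capacity per year under compound growth,
    to plan capacity-based license purchases ahead of demand."""
    sizes = []
    size = current_tb
    for _ in range(years):
        size *= 1 + growth_rate
        sizes.append(round(size, 1))
    return sizes

# At 25% annual growth, 100 TB of protected data nearly doubles in three years.
print(forecast_capacity(100, 0.25, 3))  # [125.0, 156.2, 195.3]
```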

We’re in the throes of enacting an archive solution through a product called QStar, which I believe HPE is OEM-ing, and we’re looking at that as a long-term archive process. It goes to a linear tape file system (LTFS), utilizing the management tools that the product brings to afford long-term archiving of patient information.

Our biggest challenge is that we never delete anything. It’s always hard with any application. Because of the age of the patient, many cases are required to be kept for 21 years; some, 7 years; some, 9 years. And we’re a teaching hospital and research is done on some of that data. So we delete almost nothing.

HPE Data Protector:
Backup with Brains
Learn More Here

In the case of our radiology system, we’re approaching 250 terabytes right now. Trying to back up and restore that amount of data with traditional tools is very ineffective, but we need to keep it forever.
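Some back-of-envelope math shows why traditional tools struggle at the ~250 TB scale mentioned above. The throughput figure below is an illustrative assumption, not a measured number from the interview:

```python
# Rough full-backup window for a large archive at a single stream rate.
def backup_hours(total_tb: float, throughput_mb_s: float) -> float:
    total_mb = total_tb * 1024 * 1024          # TB -> MB (binary units)
    return total_mb / throughput_mb_s / 3600   # seconds -> hours

# At an assumed sustained 500 MB/s, a full pass over 250 TB takes roughly
# 146 hours (about six days), which is why dedupe plus array replication
# beats repeated traditional full backups for data of this size.
print(round(backup_hours(250, 500)))
```

Doubling the throughput halves the window, but no realistic single-stream rate makes a traditional full backup of this archive routine, hence the source, replicated, and tertiary LTFS copies described next.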

By going to a tertiary-type copy, which this technology brings us, we have our source array, our replicated array, and now a tertiary array to take that data to, which is our LTFS solution.

Gardner: And with your backup and recovery infrastructure in place and a sense of confidence that comes with that, has that translated back into how you do the larger data lifecycle management equation? That is to say, are there some benefits of knowledge of quality assurance in backup that then allows people to do things they may not have done or not worried about, and therefore have a better business transformation outcome for your patients and your clinicians?

Bergholz: From a leadership perspective, there’s nothing really sexy about backup. It doesn’t get oohs and aahs out of people, but when you need data to be restored, you get the oohs and aahs, the thank-yous, and the praise. Being able to demonstrate solutions time and time again builds confidence with leadership throughout the organization, and it makes those people sleep safer at night.

Recently, we achieved HIMSS Stage 7. One of the remarks from that group was that, a) we hadn’t had any sort of production outage, and b) when they asked a physician on the floor what he does when things go down or when he loses something, he said the awesome part is that we haven’t gone down and, when we lose something, we’re able to restore it in a very timely manner. That was noted on our award.

Gardner: Of course, many healthcare organizations have been using thin clients and keeping everything at the server level for a lot of reasons, an edge-to-core integration benefit. Do you feel more enabled to go into mobile and virtualization knowing that everything kept on the data-center side is secure and backed up, without worrying about data residing on the client? Is that factored into any of your architectural decisions about client computing?

Desktop virtualization

Bergholz: We have been in the throes of desktop virtualization. We do a lot of Citrix XenApp presentation of applications, which keeps the data in the data center, and a lot of our desktop devices connect to that environment.

The next natural progression for us is full desktop virtualization (VDI), ensuring that we’re keeping that data safe in the data center, backing it up, and protecting the patient information on it. It’s an interesting thought and philosophy. We tried to sell it as an ROI-type initiative to start with, but by the time you put all the pieces of the puzzle together, the ROI really doesn’t pan out. At least that’s what we’ve seen in two different iterations.

Although it can be somewhat cheaper, it’s not significant enough to make a huge launch in that route. The main play there, and the main support we have organizationally, is from a data-security perspective. There’s also the ease of managing the virtual desktop environment. It frees up our desktop engineers from being feet on the ground, so to speak, to being application engineers, able to layer in the applications to be provisioned through the virtual desktop environment.

And one important thing in the healthcare industry is that when a workstation has an issue and requires replacement or re-imaging, that’s an invasive step. If it’s in a patient room or a clinical-care area, you actually have to go in, disrupt that workflow, put a different system in, re-image it, and make sure you get everything you need. It can be anywhere from a one- to three-hour process.

We do have a smattering of thin devices out there. When there are issues, it’s merely a matter of redeploying a gold image. The great part about thin devices versus thick devices is that in a lot of cases they’re operating in a sterile environment. With traditional desktops, the fans are pulling air through, which matters for infection control; there’s noise; perhaps they’re blowing dust within a room if it’s not entirely clean. Solid-state thin devices are a perfect play there. It’s really a drop-off, unplug, and re-plug sort of technology.

We’re excited about that for what it will bring to the overall experience. Our guiding principle is that you have the same experience no matter where you’re working. Getting there from Step A to Step Z is a journey. So, you do that a little bit a time and you learn as you go along, but we’re going to get there and we’ll see the benefit of that.


Gardner: And ensuring the recovery and veracity of that data is a huge part of being able to make those other improvements.

Bergholz: Absolutely. What we’ve seen from time to time is that users, while fairly knowledgeable, save their documents wherever they happen to save them. Policy is to make sure you put them within the data center, but that may not always be adhered to. By going to desktop virtualization, they won’t have any other choice.

A thin client takes that a step further and ensures that nothing gets saved back to a device, where that device could potentially disappear and cause a situation.

We do encrypt all of our stuff. Any device that’s out there is covered by encryption, but still there’s information on there. It’s well-protected, but this just takes away that potential.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Loyalty management innovator Aimia’s transformation journey to modernized IT

The next BriefingsDirect Voice of the Customer digital business transformation case study examines how loyalty management innovator Aimia is modernizing, consolidating, and standardizing its global IT infrastructure.

As a result of rapid growth and myriad acquisitions, Montreal-based Aimia is in a leapfrog mode — modernizing applications, consolidating data centers, and adopting industry standard platforms. We’ll now learn how improving end-user experiences and leveraging big data analytics helps IT organizations head off digital disruption and improve core operations and processes.

 

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To describe how Aimia is entering a new era of strategic IT innovation, we’re joined by André Hébert, Senior Vice President of Technology at Aimia in Montreal. The discussion is moderated by BriefingsDirect’s Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What are some of the major drivers that have made you seek a common IT strategy?

Hébert: If you go back in time, Aimia grew through a bunch of acquisitions. We started as Aeroplan, Air Canada’s frequent flyer program and decided to go in the loyalty space. That was the corporate strategy all along. We acquired two major companies, one in the UK and one that was US-based, which gave us a global footprint. As a result of these acquisitions, we ended up with quite a large IT footprint worldwide and wanted to look at ways of globalizing and also consolidating our IT footprint.


Gardner: For many people, when they think of a loyalty program, it’s frequent flyer miles, perhaps points at a specific retail outlet, but this varies quite a bit market to market around the globe. How do you take something that’s rather fractured as a business and make it a global enterprise?

Hébert: We’ve split the business into two different business units. The first one is around coalition loyalty. This is where Aimia actually runs the program. Good examples are Aeroplan in Canada or Nectar in the UK, where we own the currency, we operate the program, and basically manage all of the coalition partners. That’s one side.

The other side is what we call our global loyalty solutions. This is where we run loyalty programs for other companies. Through our standard technology, we set up a technology footprint within the customer site or preferably in one of our data centers and we run the technology, but the program is often white-labeled, so Aimia’s name doesn’t appear anywhere. We run it for banks, retailers and many industry verticals.

Almost like money

Gardner: You mentioned the word currency, and as I think about it, loyalty points are almost like money — it is currency — it can be traded, and it can be put into other programs. Tell us about this idea. Are you operating almost like a bank or a virtual currency trader of some sort?

Hébert: You could say that the currency is like money. It is accumulated, and if you look at our systems, they’re very similar to bank-account systems. So our systems are like a bank’s: debit and credit transactions mimic the accumulation and redemption transactions that our members do.
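The bank-account analogy above maps directly onto a ledger data structure: accumulations post like credits, redemptions like debits, and the balance is the sum of postings. A minimal sketch (class and method names are invented for illustration, not Aimia’s actual system):

```python
# Toy loyalty account modeled as a bank-style ledger of postings.
class LoyaltyAccount:
    def __init__(self):
        self.ledger = []            # list of (posting_type, signed_points)

    def accumulate(self, points: int):
        """Member earns points -- analogous to a credit posting."""
        self.ledger.append(("accumulate", points))

    def redeem(self, points: int):
        """Member spends points -- analogous to a debit posting."""
        if points > self.balance():
            raise ValueError("insufficient points")
        self.ledger.append(("redeem", -points))

    def balance(self) -> int:
        return sum(amount for _, amount in self.ledger)

acct = LoyaltyAccount()
acct.accumulate(5000)   # e.g. flight activity
acct.redeem(1200)       # e.g. a reward redemption
print(acct.balance())   # 3800
```

Keeping the full posting history, rather than a single balance field, is what makes the analogy to banking systems apt: every movement of the currency is auditable.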

Gardner: What’s been your challenge from an IT perspective to allow your company to thrive in this digital economy?

Hébert: Our biggest challenge was how large the technology footprint was. We still operate many dozens of data centers across the globe. The project with HPE is to consolidate all of our technology footprint into four Tier 3 data centers that are scattered across the globe to better serve our customers. Those will benefit from the best security standards and extremely robust data-center infrastructure.

On the infrastructure side, it’s all about simplifying, consolidating, virtualizing, using the cloud, leveraging the cloud, but in a virtual private way, so that we also keep our data very secured. That’s on the infra side.

On the application side, we probably have more applications than we have customers. One of the big drivers there is that we have a global product strategy. Several loyalty products have now been developed. We’re slowly migrating all of our customers over to our new loyalty systems that we’ve created to simplify our application portfolios. We have a large number of applications today, and the plan is to try to consolidate all these applications into key products that we’ve been developing over the last few years.

Gardner: That’s quite a challenge. You’re modernizing and consolidating applications. At the same time, you’re consolidating and modernizing your infrastructure. It reminds me of what HPE did just a few years ago when it decided to split and to consolidate many data centers. Was that something that attracted you to HPE, that they have themselves gone through a similar activity?

Hébert: Yes, that is one of the reasons. We shopped around for a partner that could help us in that space, and we thought that HPE had the best credentials and the best offer for us going forward.

Virtual Private Cloud (VPC), a solution they offer, is innovative, and it is both virtual and private. So, we feel that our customers’ data will be significantly more secure than it would be in any public cloud.

Gardner: How is consolidating applications and modernizing infrastructure at the same time helping you to manage these compliance and data-protection issues?

Raising the bar

Hébert: The modernization and infrastructure consolidation is, in fact, helping greatly in continuing to secure data and meet ever-more-difficult security standards, such as PCI DSS 3.0. Through this process, we’re going to raise the bar significantly on data privacy.

Gardner: André, a lot of organizations don’t necessarily know how to start. There’s so much to do when it comes to apps, data, infrastructure modernization and, in your case, moving to VPC. Do you have any thoughts about how to chunk that out, how to prioritize, or are you making this sort of a big bang approach, where you are going to do it all at once and try to do it as rapidly as possible? Do you have a philosophy about how to go about something so complex?

Hébert: We’ve actually scheduled the whole project. It’s a three-year journey into the new HPE world. We decided to attack it by region, starting with North America (Canada and the US). Then we move on to Asia-Pacific, and the last phase of the project is Europe. We decided to go geographically.

The program is run centrally from Canada, but we have boots on the ground in all of those regions. HPE has taken the lead into the actual technical work. Aimia does the support work, providing documentation, helping with all of the intricacies of our systems and the infrastructure, but it’s a co-led project, with HPE doing the heavy lifting.

Gardner: Something about costs comes to mind when you go standard. Sometimes there are upfront costs, a hurdle you have to leapfrog, but your long-term operating costs can be significantly lower. What is it about the cost structure? Is it the standardized infrastructure platforms? Are you using cheaper hardware, or open-source software? All of the above? How do you factor this into a return on investment (ROI) type of equation?

Hébert: It’s all of the above. Because we’re right in the middle of this project, it will allow us to standardize and evergreen a lot of our technology that was getting older. A lot of our servers were getting old. So, we’re giving the infrastructure a shot in the arm as far as modernization.

From a VPC point of view, we’re going to leverage this internal cloud much more significantly. From a CPU point of view, and from an infrastructure point of view, we’re going to have significantly fewer physical servers than what we have today. It’s all operated and run by HPE. So, all of the management, all of the ITO work is done by HPE, which means that we can focus on apps, because our secret sauce is in apps, not in infrastructure. Infrastructure is a necessary evil.

Gardner: That brings up another topic, DevOps. When you’re developing, modernizing, or having a continuous-development process for your applications, if you have that cloud and infrastructure in place and it’s modern, that can allow you to do more with the development phase. Is that something you’ve been able to measure at all in terms of the ability to generate or update apps more rapidly?

Hébert: We’re just dipping our toe into advanced DevOps, but definitely there are some benefits around that. We’re currently focused on trying to get more value from that.

Gardner: When you think about ROI, there are, of course, those direct costs on infrastructure, but there are ancillary benefits in terms of agility, business innovation, and being able to come to market faster with new products and services. Is that something that is a big motivator for you and do you have anything to demonstrate yet in terms of how that could factor?

Relationship 2.0

Hébert: We’re very much focused right now on what I would say is Relationship 1.0, but HPE was selected as a partner for their ability to innovate. They also are in a transition phase, as we all know, so while we’re focused on getting the heavy lifting done, we’re focusing on innovation and focusing on new projects with HPE. We actually call that Relationship 2.0.

Gardner: For others who are looking at similar issues — consolidation, modernization, reducing costs over time, leveraging cloud models — any words of advice now that you are into this journey as to how to best go about it or maybe things to avoid?

Hébert: When we first looked at this, we thought that we could do a lot of that consolidation work ourselves. Consolidating 42 data centers into 4 is a big job, and where HPE helps in that regard is that they bring the experience, they bring the teams, and they bring the focus to this.

We probably could have done it ourselves. It probably would have cost more, and it probably would have taken longer. One of the benefits that I also see is that HPE manages thousands and thousands of servers. With their ability to automate all of that server management, they’ve taken it to another level. As a small company, we couldn’t afford to do all of the automation that they can afford to do across those thousands of servers.

Gardner: Before we close out, André, looking to the future — two, three, four years out — when you’ve gone through this process, when you’ve gotten those modern apps and they are running on virtual private clouds and you can take advantage of cloud models, where do you see this going next?

Do you have some ideas about mobile applications, about different types of transactional capabilities, maybe getting more into the retail sector? How does this enable you to have even greater growth strategically as a company in a few years?

Hébert: If you start with the cloud, the world is about to see a very different cloud model. If you fast forward five years, there will be mega clouds, and everybody will be leveraging these clouds. Companies that actually purchase servers will be a thing of the past.

When it comes to mobile, clearly Aimia’s strategy around mobile is very focused. The world is going mobile. Most apps will require mobile support. If you look at analytics, we have a whole other business that focuses on analytics. Clearly, loyalty is all about making all this data make sense, and there’s a ton of data out there. We have got a business unit that specializes in big data, in advanced analytics, as it pertains to the consumers, and clearly for us it is a very strategic area that we’re investing in significantly.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Big data and cloud combo spark momentous genomic medicine advances at HudsonAlpha

The next BriefingsDirect Voice of the Customer IT innovation case study explores how the HudsonAlpha Institute for Biotechnology engages in digital transformation for genomic research and healthcare paybacks.

We’ll learn how HudsonAlpha leverages modern IT infrastructure and big-data analytics to power a pioneering research project incubator and genomic medicine innovator.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To describe new possibilities for exploiting cutting-edge IT infrastructure and big data analytics for potentially unprecedented healthcare benefits, we’re joined by Dr. Liz Worthey, Director of Software Development and Informatics at the HudsonAlpha Institute for Biotechnology in Huntsville, Alabama. The discussion is moderated by BriefingsDirect’s Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: It seems to me that genomics research and IT have a lot in common. There’s not much daylight between them — two different types of technology, but highly interdependent. Have I got that right?

Worthey: Absolutely. It used to be that the IT infrastructure was fairly far away from the clinic or the research, but now they’re so deeply intertwined that it necessitates many meetings a week between the leadership of both in order to make sure we get it right.

Gardner: And you have background in both.

Worthey: My background is primarily on the biology side. I’m Director of Informatics, and I’ve spent about 20 years working on the software-development and informatics side. I’m not an IT director, but I’m pretty IT-savvy, because I’ve had to develop that skill set over the years. My undergraduate degree was in immunology, and since then my focus has really been on genetic informatics and bioinformatics.

Gardner: Please describe what genetic informatics or genomic informatics is for our audience.

Worthey: Since 2003, when we received the first version of a human reference genome, there’s been a large field involved in the task of extracting knowledge that can be used for society and health from genomic data.


A [human] genome is 3.2 billion nucleotides in length, and in there, there’s a lot of really useful information. There’s information about which diseases that individual may be more likely to get and which diseases they will get.

There’s also information about which drugs they should and shouldn’t take, and about which procedures, such as surveillance colonoscopies, they should have. And so, the clinical aspect of genomics is really developing the analytical capabilities to extract that data in real time so that we can use it to help an individual patient.

On top of that, there’s also a lot of research. A lot of that is in large-scale studies across hundreds of thousands of individuals to look for signals that are more difficult to extract from a single genome. Genomics, clinical genomics, is all of that together.

Parallel trajectory

Gardner: Where is the societal change potential in terms of what we can do with this information and these technologies?

Worthey: Genomics has existed for maybe 20 years, but the vast majority of that was the first step. Over the last six years, we’ve taken maybe the second or third step in a journey that’s thousands of steps long.

We’re right on the edge. We didn’t used to be able to do this, because we didn’t have any data. We didn’t have the capability to sequence a genome cheaply enough to sequence lots. We also didn’t have the storage capabilities to store that data, even if we could produce it, and we certainly didn’t have enough compute to do the analysis, infrastructure-wise. On top of that, we didn’t actually have the analytical know-how or capabilities either. All of that is really coalescing at the same time.

Start Your HPE Vertica

Community Edition Trial

As we’ve done genomics, and as sequencing technology has come up, compute and computing technologies have come up at the same time. They’re feeding each other, and genomics is now driving IT to think about things in a very different way.

Gardner: Let’s dive into that a little bit. What are the hurdles technologically for getting to where you want to be, and how do you customize that or need to customize that, for your particular requirements?

Worthey: There are a number of hurdles. Certainly, there are simpler hurdles that we have to get past, like storage, tied with compression: how do you compress that data to where you can store millions of genomes at an affordable price?

A bigger hurdle is the ability to query information across a lot of disparate sites. When we think about genomic medicine, one of the things that we really want to do is share data between institutions that are geographically diverse. And the data that we want to share is millions of data points, each of which has hundreds or thousands of annotations or curations.

Those are fairly complex queries, even when you’re doing it in one site, but in order to really change the practice of medicine, we have to be able to do that regionally, nationally, and globally. So, the analytics questions there are large.

We have 3.2 billion data points for each individual. The data is quite broad, but it’s also pretty deep. One of the big problems is that we don’t have all the data that we need to do genomic medicine. There’s going to be data mining — generate the data, form a hypothesis, look at the data, see what you get, come back with a new hypothesis, and so on.

Finally, one of the problems we have is that a lot of the algorithms you might use only exist in the brains of MDs, other clinical folks, or researchers. There is really a lot of human-computer interaction work to be done so that we can extract that knowledge.

There are lots of problems. Another big one is that we really want to put this knowledge in the hands of the doctor during the seven minutes they have to see the patient. So, it’s also about delivering answers at that point in time, and about the ability to query the data by the person doing the analysis, which ideally will be an MD.

Cloud technology

Gardner: Interestingly, the emergence of cloud methods and technology over the past five or 10 years would address some of those issues about distributing the data effectively — and also perhaps getting actionable intelligence to a physician in an actual critical-care environment. How important is cloud to this process and what sort of infrastructure would be optimal for the types of tasks that you have in mind?

Worthey: If you had asked me that question two years ago, on the genomic-medicine side I would have said that cloud wasn’t really part of the picture, for business rather than technical reasons. There were a lot of questions around privacy and the sharing of healthcare information, and hospitals didn’t like the idea.

They were very reluctant to move to the cloud. Over the last two years, that has started to change. Enough of them had to decide to do it before everybody would view it as permissible.

Cloud is absolutely necessary in many ways, because we have periods where lots of data has to be computed and analytics have to be run, and then periods where new information is coming off the sequencer. So, it’s that perfect crest and trough.

If you don’t have the ability to deal with that sort of fluctuation, if you buy a certain amount of hardware and you only have it available in-house, your pipeline becomes impacted by the crests and then often sits idle for a long time.
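The crest-and-trough pattern above suggests a simple placement rule: fill in-house capacity first and burst the overflow to cloud. A toy sketch with invented numbers, purely to illustrate the decision, not HudsonAlpha’s actual scheduler:

```python
# Toy cloud-bursting decision for a fluctuating analysis workload.
def placement(queued_jobs: int, onprem_slots: int) -> dict:
    """Split queued jobs between fixed in-house capacity and cloud burst."""
    on_prem = min(queued_jobs, onprem_slots)  # in-house hardware fills first
    return {"on_prem": on_prem, "cloud_burst": queued_jobs - on_prem}

print(placement(40, 25))  # crest: 25 jobs in-house, 15 burst to cloud
print(placement(10, 25))  # trough: everything fits in-house, cloud idle costs nothing
```

The trough case is the economic argument: capacity bought for the crest would sit idle between sequencer runs, whereas cloud capacity is only paid for while the crest lasts.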


But it’s also important to have stuff in-house, because sometimes, you want to do things in a different way. Sometimes, you want to do things in a more secure manner.

Genomics is kind of a poster child for many of the new technologies coming out that address both of those needs, allowing you to run things in-house and also to run the same jobs on the same data in the cloud. So, it’s key.

Gardner: That brings me to the next question about this concept of genomics as a service or a platform to support genomics as a service. How do you envision that and how might that come about?

Worthey: When we think about the infrastructure to support that, it has to be something flexible, and it has to be provided by organizations that are able to move rapidly, because the field is moving really quickly.

It has to be infrastructure that supports this hypothesis-driven research, and it has to be infrastructure that can deal with these huge datasets. Much of the data is ordered, organized, and well-structured, but because it’s healthcare, a lot of the information that we use as part of the interpretation phase of genomic medicine is completely unstructured. There needs to be support for extraction of data from silos.

My dream is that the people who provide these technologies will also help us deal with some of these boundaries, the policy boundaries, to sharing data, because that’s what we need to do for this to become routine.

Data and policy

Gardner: We’ve seen some of that when it comes to other forms of data, perhaps in the financial sector. More and more, we’re seeing tokenization, authentication, and encryption, where data can exist for a period of time with a certain policy attached to it, and then something will happen if the data is a result for that policy. Is that what you’re referring to?

Worthey: Absolutely. It’s really interesting to come to a meeting like HPE Discover, because you get to see what everybody else is doing in different fields. Many of the things that people in my field have regarded as very difficult are actually not that hard at all; they happen all the time in other industries.

A lot of this — the encryption, the encrypted data sharing, the ability to set those access controls in a particular way that only lasts for a certain amount of time for a particular set of users — seems complex, but it happens all the time in other fields. A big part of this is talking to people who have a lot of experience in a regulated environment (just not this regulated environment), learning the language they use to talk to the people who set policy there, transferring that to our policy makers, and ideally getting them together to talk to one another.

Gardner: Liz, you mentioned the interest in getting your requirements to the technology vendors, cloud providers, and network providers. Is that under way, or is that something yet to happen? Where is the synergy between the genomic-research community and the technology-platform provider community?

Worthey: This is happening fast. For genomics, there’s been a shift in the volume of genomic data that we can produce with some new sequencing technology that’s coming. If you’re a provider of hardware or software solutions for big data, you should be looking at genomics, because we’re probably going to overtake many of those other industries in terms of the volume and complexity of the data that we have.

The reason that’s really interesting is that you then get invited to speak at forums where there are lots of technology companies, and you can make them aware of the work that has to be done in the field of medicine and in genomic research, and then you can start having those discussions.

A lot of the things that those companies are already doing, the use cases, are similar and maybe need some refinement, but a lot of that capability is already there.

Gardner: It’s interesting that you’ve become sort of the “New York” of use cases. If you can make it there, you can make it anywhere. In other words, if we can solve this genomic data issue and use the cloud fruitfully to distribute and gather — and then control and monitor the data as to where it should be under what circumstances — we can do just about anything.

Correct me if I’m wrong, though. We’re using data in the genomic sense for population groups, winnowing those groups down to particular diseases. How far-fetched is it to think about individuals having their own genomic database that would follow them, like an authenticated human design? Is that completely out of bounds? How far off would that possibly be?

Technology is there

Worthey: I’ve had my genome sequenced, and it’s accessible. I could pick it up and look at it on the tools that I developed through my phone sitting here on the table. In terms of the ability to do that, a lot of that technology is already here.

The number of people that are being sequenced is increasing rapidly. We’re already using genomics to make diagnosis in patients and to understand their drug interactions. So, we are here.

One of the things that we’re talking about just now is at what point in a person’s life you should sequence their genome. I, and a number of other people in the field, believe that it’s earlier rather than later, before they get sick. Then, we have that information to use when they get those first symptoms. You’re not waiting until they’re really ill before you do that.

I can’t imagine a future where that’s not what’s going to happen, and I don’t think that future is too far away. We’re going to see it in our lifetimes, and our children are definitely going to see it in theirs.

Gardner: The inhibitors, though, would be more of an ethical nature, not a technological nature.

Worthey: And policy, and society; the society impact of this is huge.

The data that we already have, clinical information, is really for that one person, but your genome is shared among your family, even distant relatives that you’ve never met. So, when we think about this, there are many very hard ethical questions that we have to think about. There are lots of experts that are working on that, but we can’t let that get in the way of progress. We have to do it. We just have to make sure we do it right.

Gardner: To come back down a little bit toward the technology side of things, seeing as so much progress has been made and that there is the tight relationship between information technology and some of the fantastic things that can happen with the proper knowledge around genomic information, can you describe the infrastructure you have in place? What’s working? What do you use for big-data infrastructure, and cloud or hybrid cloud as well?

Worthey: I’m not on the IT side, but I can tell you about the other side, and a little about the IT side as well. In terms of the technologies that we use to store all of that variant information, we’re currently using Hadoop and MongoDB. We finished our proof of concept with HPE, looking at their Vertica solution.

We have to work out what the next steps might be beyond our proof of concept. Certainly, we’re very interested in looking at the solutions that they have here; they fit our needs. The issue being addressed on that side is lots of variants and complex queries that you need to answer really fast.
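For illustration, the kind of fast, complex variant query described here might look like the following minimal sketch. The schema, field names, and thresholds are assumptions for demonstration only; a production store such as MongoDB or Vertica would run the same query shape over millions of indexed rows.

```python
# Hypothetical variant records; a real store holds millions of these.
# Fields (gene, allele frequency "af", predicted impact, zygosity) are
# invented for this example.
variants = [
    {"gene": "CFTR",  "af": 0.0001,  "impact": "HIGH",     "zygosity": "hom"},
    {"gene": "BRCA1", "af": 0.0200,  "impact": "MODERATE", "zygosity": "het"},
    {"gene": "TTN",   "af": 0.1500,  "impact": "LOW",      "zygosity": "het"},
    {"gene": "MECP2", "af": 0.00005, "impact": "HIGH",     "zygosity": "het"},
]

def rare_damaging(records, max_af=0.001, impacts=("HIGH",)):
    """Return variants that are rare in the population and predicted damaging."""
    return [v for v in records if v["af"] <= max_af and v["impact"] in impacts]

hits = rare_damaging(variants)
# hits contains the CFTR and MECP2 records
```

In a document database the same filter would be expressed as a query (for example, a MongoDB find on `af` and `impact`); the point is that interpretation pipelines repeatedly run this kind of multi-condition filter and need answers fast.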

On the other side, one of the technological hurdles that we have to clear is unstructured data. We have electronic health record (EHR) information coming in. We want to hook up to those EHRs and use systems to process that data and organize it, so that we can use it for the interpretation part.

In-house solution

We developed in-house solutions that we’re using right now that allow humans to come in, look at that data, and select terms from it. So, you’d select disease terms. Then we have in-house solutions to map them to the genomic side. We’re looking at things like HPE’s IDOL as a proof-of-concept (POC) on that side. We’re talking to some EHR companies about hooking the EHR up to those solutions and to our software, to make it a seamless product that would give us all of that.
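The term-to-genome mapping step described above can be sketched roughly as follows. The term names and gene sets here are invented for illustration; real pipelines draw on curated resources such as the Human Phenotype Ontology rather than a hand-written table.

```python
# Hypothetical mapping from clinician-selected phenotype terms to
# candidate genes. In practice this table comes from curated
# ontologies and disease databases, not from code.
term_to_genes = {
    "seizures":            {"SCN1A", "MECP2", "CDKL5"},
    "developmental delay": {"MECP2", "ARID1B"},
    "ataxia":              {"ATM", "SCN1A"},
}

def candidate_genes(selected_terms):
    """Rank genes by how many of the selected terms implicate them."""
    counts = {}
    for term in selected_terms:
        for gene in term_to_genes.get(term, ()):
            counts[gene] = counts.get(gene, 0) + 1
    # Most-implicated genes first; ties broken alphabetically.
    return sorted(counts, key=lambda g: (-counts[g], g))

ranked = candidate_genes(["seizures", "developmental delay"])
# MECP2 matches both terms, so it ranks first
```

The ranked gene list is then intersected with the patient’s filtered variants, which is what lets the interpretation step focus on a handful of candidates instead of the whole genome.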

In terms of hardware, we do have HPE hardware in-house; I think we have 12 petabytes of their storage. We also have DataDirect Networks hardware and a general parallel file system solution. We even have graphics processing units (GPUs) for some of the analysis that we do. We have a large bank of GPUs, because in some cases they’re much faster for certain types of problems that we have to solve. So we are pretty IT-rich, with a lot of heavy investment on the IT side.

Gardner: And cloud — any preference to the topology that works for you architecturally for cloud, or is that still something you are toying with?

Worthey: We’re currently looking at three different solutions that are all cloud solutions. We not only do the research and the clinical work, but we also have a lab that produces genomic data as a service, generating lots of data for other customers.

They have the challenge of getting that amount of data returned to customers in a timely fashion. So, there are solutions we’re looking at there. There are also, as we talked about at the start, solutions to help us with the inflow of data coming off the sequencers and the compute, and so we’re looking at a number of different cloud-based solutions to solve some of those challenges.

Gardner: Before we close, we’ve talked about healthcare and population impacts, but I should think there’s also a commercial aspect to this. That kind of information will lend itself to entrepreneurial activities, products, and services in great demand in the marketplace. Is that something you’re involved with as well, and wouldn’t that help foot the bill for some of these costly IT infrastructure investments?

Worthey: HudsonAlpha Institute was set up on just that model. We have a research, not-for-profit side, but we also have a number of affiliate companies that are for-profit, where intellectual property and ideas can go across to that side and be used to generate revenue that funds the research, keeps us moving, and keeps us on the cutting edge.

We do have a services lab that does genomic sequencing and analytics; you can order that from them. We also serve a lot of people who have government contracts for this type of work. And then, we have an entity called Envision Genomics. For disclosure, I’m one of the founders of that entity. It’s focused on empowering people to do genomic medicine, and on working with lots of different solution providers to get genomic medicine done everywhere it’s applicable.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Cybersecurity crosses the chasm: How IT now looks to the cloud for best security

The next BriefingsDirect cybersecurity innovation and transformation panel discussion explores how cloud security is rapidly advancing, and how enterprises can begin to innovate and prevail over digital disruption by increasingly using cloud-defined security.

We’ll examine how a secure content collaboration services provider removes the notion of organizational boundaries so that businesses can better extend processes. And we’ll hear how fewer boundaries and cloud-based security together support transformative business benefits.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To share how security technology leads to business innovations, we’re joined by Daren Glenister, Chief Technology Officer at Intralinks in Houston, and Chris Steffen, Chief Evangelist for Cloud Security at HPE. The discussion is moderated by BriefingsDirect’s Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Daren, what are the top three trends driving your need to extend security and thereby preserve trust with your customers?

Glenister: The top thing for us is speed of business, people being able to do business beyond boundaries, and how they can enable the business rather than just protect it. In the past, security has always been about how we shut things down and stop data. But now it’s about how we do it all securely, and how we perform business outside of the organization. So, it’s enabling business.

The second thing we’ve seen is compliance. Compliance is a huge issue for most of the major corporations. You have to be able to understand where the data is and who has access to it, and to know who’s using it and make sure that they can be completely compliant.

The third thing is primarily the shift between security inside and outside of the organization. It’s been a fundamental shift for us. We’ve seen security move from people trusting their own infrastructure to using a third party who can provide that security to a far higher standard, because that’s what they do all day, every day. That security shift from on-premises to the cloud is the third big driver for us, and we’ve seen that in the market.

Gardner: You’re in a unique position to be able to comment on this. Tell us about Intralinks, what the company does, and why security at the edge is part of your core competency.

Secure collaboration

Glenister: We’re a software-as-a-service (SaaS) provider and we provide secure collaboration for data wherever that data is, whether it’s inside a corporation or shared outside. Typically, once people share data outside, whether through email or some of the other commercial tools out there, they have lost control of that data.

We have the ability to lock that data down, control it, and put the governance and compliance around it to secure that data, know where the high-value intellectual property (IP) is and who has access to it, and then still be able to share. And, if need be, revoke access for someone who has left the organization.

Gardner: And these are industries that have security as a paramount concern. So, we’re talking about finance and insurance. Give us a little bit more indication of the type of data we’re talking about.

Glenister: It’s anybody with high-value IP or compliance requirements — banking, finance, healthcare, life sciences, for example, and manufacturing. Even when you’re looking at manufacturing overseas and you have IP going over to China to manufacture your product, your plans are also being shared overseas. We’ve seen a lot of companies now asking how to protect those plans and therefore, protect IP.

Gardner: Chris, Intralinks seems to be ahead of the curve, recognizing how cloud can be an enabler for security. We’re surely seeing a shift in the market, at least I certainly am. In the last six months or so, companies that were saying that security was a reason not to go to the cloud are now saying that security is a reason they’re going to the cloud. They can attain security better. What’s happened that has made that perspective flip?

Steffen: I don’t know exactly what’s happened, but you’re absolutely right; that flip is going on. We’ve done a lot of research recently and shown that when you’re looking at inherent barriers going to a cloud solution, security and compliance considerations are always right there at the top. We commissioned the study through 451 Research, and we kind of knew that’s what was going on, but they sure nailed it down, one and two, security and compliance, right there. [Get a copy of the report.]

The reality, though, is that the C-table, executives, IT managers, those types, are starting to look at the massive burden of security and hoping to find help somewhere. They can look at a provider like Intralinks, or a provider like HPE, and ask, “How can they help us meet our security requirements?”

They can’t just third-party their security requirements away. That’s not going to cut it with all the regulators that are out there, but we have solutions. HPE has a solution, Intralinks has solutions, a lot of third-party providers have solutions that will help the customer address some of those concerns, so those guys can actually sleep at night.

Gardner: We’re hearing so much about digital disruption in so many industries, and we’re hearing about why IT can’t wait, IT needs to be agile and have change in the business model to appeal to customers to improve their user experience.

It seems that security concerns have been a governor on that. “We can’t do this because ‘blank’ security issue arises.” It seems to me that it’s a huge benefit when you can come to them and say, “We’re going to allow you to be agile. We’re going to allow you to fight back against disruption, because security can, in fact, be managed.” How far along are we in converting security from an inhibitor into an enabler when you go to the cloud?

Very difficult

Glenister: The biggest thing for most organizations is that they’re large, and it’s very difficult to transform the legacy systems and processes that are in place. It’s very difficult for organizations to change quickly. To drive that change, they have to look at alternatives, and that’s why a lot of people move to the cloud. Driving the move to the cloud is, “Can we quickly enable the business? Can we quickly provide those solutions, rather than spending 18 months trying to change our processes and millions of dollars doing it?”

Enablement of the business is actually driving the need to go to the cloud, and obviously will drive security around that. To Chris’s point a few minutes ago, not all vendors are the same. Some vendors are in the cloud and they’re not as secure as others. People are looking for trusted partners like HPE and Intralinks, and they are putting their trust and their crown jewels, in effect, with us because of that security. That’s why we work with HPE, because they have a similar philosophy around security as we do, and that’s important.

Steffen: The only thing I would add to that is that security is not only a concern of the big business or the small business; it’s everybody’s concern. It’s one of those things where you need to find a trusted provider. You need to find that provider that will not only understand the requirements that you’re looking for, but the requirements that you have.

This is my opinion, but when you’re kicking tires and looking at your overall compliance infrastructure, there’s a pretty good chance you had to have that compliance for more than a day or two. It’s something that has been iterative; it may change, it may grow, whatever.

So, when you’re looking at a partner, a lot of different providers will start to at least try to ensure that you don’t start at square-one again. You don’t want to migrate to a cloud solution and then have all the compliance work that you’ve done previously just wiped away. You want a partner that will map those controls and that really understands those controls.

Perfect examples are in the financial services industry. There are 10 or 11 regulatory bodies that some of the biggest banks in the world all have to be compliant with. It’s extremely complicated. You can’t really expect that Big Bank 123 is going to just throw away all that effort, move to whatever provider, and hope for the best. Obviously, it can’t be that way. So the key is to map those controls, understand them, and then map them to your new environment.

Gardner: Let’s get into a little bit of the how … How this happens. What is it that we can do with security technology, with methodologies, with organizations that allow us to go into cloud, remove this notion of a boundary around your organization and do it securely? What’s the secret sauce, Daren?

Glenister: One of the things for us, being a cloud vendor, is that we can protect data outside. We have the ability to embed the security into documents wherever those documents go. Instead of just having control of data at rest within the organization, we have the ability to control it in motion, inside and outside the perimeter.

You have the ability to control that data, and if you think about sharing with third parties, quite often people say, “We can’t share with a third party because we don’t have the compliance, we don’t have the security around it.” Now they can share, and they can guarantee that the information is secure at rest and in motion.

Typically, if you look at most organizations, they have data at rest covered. Those systems and procedures are relative child’s play, and they’ve been covered for many years. The challenge is the data in motion. How do you extend that to working with third parties and outside organizations?
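The revocable-access pattern Glenister describes, where a document travels encrypted and the key stays with a service that can deny it later, can be sketched as below. To be clear, this is not Intralinks’ implementation; the key-service API is invented for illustration, and the SHA-256 keystream is a toy cipher for demonstration, not production cryptography.

```python
import hashlib
import secrets

class KeyService:
    """Holds per-document keys; revoking a user cuts off future reads."""
    def __init__(self):
        self._keys = {}      # doc_id -> key bytes
        self._allowed = {}   # doc_id -> set of permitted user ids

    def register(self, doc_id, users):
        self._keys[doc_id] = secrets.token_bytes(32)
        self._allowed[doc_id] = set(users)
        return self._keys[doc_id]

    def key_for(self, doc_id, user):
        if user not in self._allowed.get(doc_id, ()):
            raise PermissionError(f"{user} has no access to {doc_id}")
        return self._keys[doc_id]

    def revoke(self, doc_id, user):
        self._allowed[doc_id].discard(user)

def _keystream(key, n):
    # Toy keystream: chained SHA-256 blocks. Illustrative only.
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key, plaintext):
    return bytes(a ^ b for a, b in zip(plaintext, _keystream(key, len(plaintext))))

decrypt = encrypt  # XOR with the same keystream is symmetric

svc = KeyService()
key = svc.register("plans.pdf", users={"alice", "bob"})
blob = encrypt(key, b"factory schematics")

# bob can read the shared document until access is revoked
assert decrypt(svc.key_for("plans.pdf", "bob"), blob) == b"factory schematics"
svc.revoke("plans.pdf", "bob")
```

After the `revoke` call, `svc.key_for("plans.pdf", "bob")` raises `PermissionError`, so the ciphertext bob still holds is useless; that is the sense in which control extends to data in motion outside the perimeter.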

Innovative activities

Gardner: It strikes me that we’re looking at these capabilities through the lens of security, but isn’t it also the case that this enables entirely new innovative activities. When you can control your data, when you can extend where it goes, for how long, to certain people, under certain circumstances, we’re applying policies, bringing intelligence to a document, to a piece of data, not just securing it but getting control over it and extending its usefulness. So why would companies not recognize that security-first brings larger business benefits that extend for years?

Glenister: Historically, security has always been, “No, you can’t do this, let’s stop.” If you look at a finance environment, it’s stop using thumb drives, stop using email, stop using anything, rather than easing the way to a solution. We’ve seen a transition. Over the last six months, you’re starting to see people say, “How do we enable? How do we give people control?” As a result, you see new solutions coming out of organizations, and how they can impact the bottom line.

Gardner: Behavior modification has always been a big part of technology adoption. Chris, what can we do in the industry to show people that being secure, and extending that security to wherever the data goes, gives us much more opportunity for innovation? To me this is a huge enticing carrot that I don’t think people have fully grokked.

Steffen: Absolutely. And the reality of it is that it’s an educational process. One of the things that I’ve been doing for quite some time now is trying to educate people. I can talk with a fellow CISSP about Diffie-Hellman key exchange, and I promise that your CEO does not care, and he shouldn’t. He shouldn’t ever have to care. That’s not something he needs to care about, but he does need to understand total cost of ownership (TCO) and return on investment (ROI). He needs to be able to go to bed at night understanding that his company is going to be okay when he wakes up in the morning, and that his company is secure.

It’s an iterative process; it’s something that they have to understand. What is cloud security? What does it mean to have defense in depth? What does it mean to have a matured security policy vision? Those are things that really change the attitudinal barriers that you have at a C-table that you then have to get past.

Security practitioners, those tinfoil hat types — I classify myself as one of those people, too — truly believe that they understand how data security works and how the cloud can be secured, and they already sleep well at night. Unfortunately, they’re not the ones who are writing the checks.

It’s really about shifting that paradigm of education from the practitioner level, where they get it, up to the CIO, the CISO who hopefully understands, and then up to the C-table and the CFO making certain that they can understand and write that check to ensure that going to a cloud solution will allow them to sleep at night and allow the company to innovate. They’ll take any security as an enabler to move the business forward.

Gardner: So, perhaps it’s incumbent upon IT and security personnel to start to evangelize inside their companies as to the business benefits of extended security, rather than the glass is always half empty.

Steffen: I couldn’t agree more. It’s a unique situation. Having your — again, I’ll use the term — tinfoil-hat people talking to your C-table about security can be big and scary, and so on. But the reality is that it’s critically important that executives understand the value that security brings to an organization.

Going back to our original conversations, in the last 6 to 12 months, you’re starting to see that paradigm shift a little bit, where C-table executives aren’t satisfied with check-box compliance. They want to understand what it takes to be secure, and so they have experts in-house and they want to understand that. If they don’t have experts in-house, there are third-party partners out there that can provide that education.

Gardner: I think it’s important for us to establish that the more secure and expert you are at security the more of a differentiator you have against your competition. You’re going to clean up in your market if you can do it better than they can.

Step back

Steffen: Absolutely, and even bring that a step further back. People have been talking for two decades now about technology as a differentiator and how you can make a technical decision or embrace and exploit technology to be the differentiator in your vertical, in your segment, so on.

The credit reporting agency that I worked for a long time ago was one of those innovators, and people thought we were nuts for doing some of the stuff that we are doing. Years later, everybody is doing the same thing now.

It really can set up those things. Security is that new frontier. If you can prove that you’re more secure than the next guy, that your customer data is more secured than the next guy, and that you’re willing to protect your customers more than the next guy, maybe it’s not something you put on a billboard, but people know.

Would you go to retailer A, which has had a credit-card breach, or do you decide to go to retailer B? It’s not a straw man. Talk to Target, talk to Home Depot, talk to some of these big-box stores that have had breaches, and ask how their numbers looked after they had to announce that they had a breach.

Gardner: Daren, let’s go to some examples. Can you think of an example of an Intralinks security capability that became a business differentiator or enabler?

Glenister: Think about banks at the moment, where they’re working with customers. There’s a drive for security. Security people have always known about security and how they can enable and protect the business.

But what’s happening is that customers are now more demanding, because the media is blowing up all of the cybercrimes, threats, and hacks. Consumers are now saying they need their data to be protected.

A perfect example is my daughter, who was applying for a credit card recently. She’s going off to college. They asked her to send a copy of her passport, Social Security card, and driver’s license to them by email. She looked at me and said, “What do you think?” It’s like, “No. Why would you?”

People have actually voted with their feet, saying they’re not going to do business with that organization. If you look at finance organizations now, banks and credit-card companies are looking at how to engage with the customer and show that they have been securing and protecting their data, to enable new capabilities like loan or credit-card applications, because customers can vote with their feet and choose not to do business with you.

So, it’s become a business-enabler to say we’re protecting your data and we have your concerns at heart.

Gardner: And it’s not to say that that information shouldn’t be made available to a credit card or an agency that’s ascertaining credit, but you certainly wouldn’t do it through email.

Insecure tool

Glenister: Absolutely, because email is the biggest sharing tool on the planet, but it’s also one of the most insecure tools on the planet. So, why would you trust your data to it?

Steffen: We’ve talked about security awareness, the security-awareness culture, and security-awareness programs. If you have a vendor-management program, and you’re subject to vendor management from some other entity, one of the things they also would request is that you have a security awareness program.

Even five to seven years ago, people looked at that as drudgery. It was the same thing as all the other nonsensical HR training that you have to look at. Maybe, to some extent, it still is, but the reality is that when I’ve given those programs before, people are actually excited. It’s not only because you get the opportunity to understand security from a business perspective, but a good security professional will then apply that to, “By the way, your email is not secured here, but your email is not secured at home, too. Don’t be stupid here, but don’t be stupid there either.”

We’re going to fix the router passwords here; you don’t need to worry about that. But if you have a home router, change the default password. Those sound like very simple, straightforward things, but when you share that with your employees and you build that culture, not only do you have more secure employees, but the culture of your business and the culture of security changes.

In effect, what’s happening is that you’ll finally be getting to see that translate into stuff going on outside of corporate America. People are expecting to have information security parameters around the businesses that they do business with. Whether it’s from the big-box store, to the banks, to the hospitals, to everybody, it really is starting to translate.

Glenister: Security is a culture. I look at a lot of companies for whom we do once-a-year certification or attestation, an online test. People click through it, and some may have a test at the end and they answer the questions and that’s it, they’re done. It’s nice, but it has to be a year-round, day-to-day culture with every organization understanding the implications of security and the risk associated with that.

If you don’t do that, if you don’t embed that culture, then it becomes a one-time event, and you’re only secure once a year.

Steffen: We were talking about this before we started. I’m a firm believer in security awareness. One of the things that I’ve always done is take advantage of these pretend Hallmark holidays. The latest one was Star Wars Day. Nearly everybody has seen Star Wars or certainly heard of Star Wars at some point or another, and you can’t even go into a store these days without hearing about it.

For Star Wars Day, I created a blog to talk about how information-security failures led to the downfall of the Galactic Empire.

It was a fun blog. It wasn’t supposed to be deadly serious, but the kicker is that we talked about key information-security points. You use that holiday to get people engaged with what’s going on and to educate them on some key concepts of information security, and accidentally, they’re learning. That learning carries over to the next blog that you do, and maybe they pay a little bit more attention to it. Maybe they pay attention to someone piggybacking through the door, and maybe they pay attention to not putting something in an email, and so on.

It’s still a little iterative thing; it’s not going to happen overnight. It sounds silly talking about information security failures in Star Wars, but those are the kind of things that engage people and make people understand more about information security topics.

Looking to the future

Gardner: Before we sign off, let’s put on our little tinfoil hat with a crystal ball in front. If we’ve flipped in the last six months or so, people now see the cloud as inherently more secure, and they want to partner with their cloud provider to do security better. Let’s go out a year or two, how impactful will this flip be? What are the implications when we think about this, and we take into consideration what it really means when people think that cloud is the way to go to be secure on the internet?

Steffen: The one that immediately comes to mind for me — Intralinks is actually starting to do some of this — is you’re going to see niche cloud. Here’s what I mean by niche cloud. Let’s just take some random regulatory body that’s applicable to a certain segment of business. Maybe they can’t go to a general public cloud because they’re regulated in a way that it’s not really possible.

What you’re going to see is a cloud service that basically says, “We get it, we love your type, and we’re going to create a cloud. Maybe it will cost you a little bit more to do it, but we understand from a compliance perspective the hell that you are going through. We want to help you, and our cloud is designed specifically to address your concerns.”

When you have niche cloud, all of a sudden, it knocks down your biggest inherent barriers. We’ve already talked about security. Compliance is another one, and compliance is a big, fat, ugly one. So, if you have a cloud provider that’s willing to maybe even assume some of the liability that comes with moving to its cloud, they’re the winners. So let’s talk 24 months from now. I’m telling you that that’s going to be happening.

Gardner: All right, we’ll check back on that. Daren, your prediction?

Glenister: You’re going to see a shift that we’re already seeing, and Chris will probably see this as well. It’s a shift from discussions around security to transformation. You definitely see security now transforming business, enabling businesses to do things and interact with their customers in ways they’ve never done before.

You’ll see that impacting things two ways. One is going to be new business opportunities, so revenue coming in, but it’s also going to streamline internal processes, making things easier to do internally. And you’ll see a transformation of the business inside and outside. That’s going to drive a lot of new opportunities, new capabilities, and innovations we’ve never seen before.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.
