Five ways to make identity management work best across hybrid computing environments

Any modern business has been dealing with identity and access management (IAM) from day one. But now, with more critical elements of business extending beyond the enterprise, access control complexity has been ramping up due to cloud, mobile, bring your own device (BYOD), and hybrid computing.

Such added complexity is a major obstacle to secure, governed, and managed control over who and what can access your data and services, and under what circumstances. The next BriefingsDirect thought leader discussion centers on learning new best practices for managing the rapidly changing needs around IAM.

While cloud computing gets a lot of attention, those of us working with enterprises daily know that the vast majority of businesses are, and will remain, IT hybrids, a changing mixture of software as a service (SaaS), cloud, mobile, managed hosting models, and of course, on-premises IT systems.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

We’re here with a Chief Technology Officer for a top IAM technology provider to gain a deeper understanding of the various ways to best deploy and control access management in this ongoing age of hybrid business.

Here to explore five critical tenets of best managing the rapidly changing needs around identity and access management is Darran Rolls, Chief Technology Officer at SailPoint Technologies in Austin, Texas. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: There must be some basic, bedrock principles that we can look to that will guide us as we’re trying to better manage access and identity.

Rolls: Absolutely, there are, and I think that will be a consistent topic of our conversation today. It’s something that we like to think of as the core tenets of IAM. As you very eloquently pointed out in your introduction, this isn’t anything new. We’ve been struggling with managing identity and security for some time. The changing IT environment is introducing new challenges, but the underlying principles of what we’re trying to achieve have remained the same.

The idea of holistic management for identity is key. There’s no question about that, and something that we’ll come back to is this idea of the weakest link — a very commonly understood security principle. As our environment expands with cloud, mobile, on-prem, and managed hosting, the idea of a weak point in any part of that environment is obviously a strategic flaw.

As we like to say at SailPoint, it’s an “anywhere identity” principle. That means all people — employees, contractors, partners, customers — from any device, whether desktop, cloud, or mobile, to anywhere, including on-prem enterprise apps, SaaS apps, and mobile. It’s certainly our belief that for any IAM technology to be truly effective, it has to span all access, all accounts, and all users, wherever they live in that hybrid runtime.

Gardner: So we’re in an environment now where we have to maintain those bedrock principles for true enterprise-caliber governance, security, and control, but we have a lot more moving parts. And we have a cavalcade of additional things you need to support, which to me, almost begs for those weak links to crop up.

So how do you combine the two? How do you justify and reconcile these two realities — secure and complex?

Addressing the challenge

Rolls: One way comes from how you address the problem and the challenge. Quite often, I’m asked if there’s a compromise here: If I move my IAM to the cloud, will I still be able to sustain my controls and management and do risk mitigation? That, after all, is what we’re trying to get to.

My advice is if you’re looking at an identity-as-a-service (IDaaS) solution that doesn’t operate in terms of sustainable controls and risk mitigation, then stop, because controls and risk mitigation really are the core tenets of identity management. It’s really important to start a conversation around IDaaS by quite clearly understanding what identity governance really is.

This isn’t an occasional, office-use application. This is critical security infrastructure. We very much have to remember that identity sits at the center of that security-management lifecycle, and at the center of the users’ experience. So it’s super important that we get it right.

So in this respect, I like to think that IDaaS is more of a deployment option than any form of a compromise. There are a minimum set of table stakes that have to be in place. And, whether you’re choosing to deploy an IDaaS solution or an on-prem offering, there should be no compromise in it.

We have to respect the principles of global visibility and control, of consistency, and of user experience. Those things remain true for cloud and on-prem, so the song remains the same, so to speak. The IT environment has changed, and the IAM solutions are changing, but the principles remain the same.

Gardner: I was speaking with some folks leading up to the recent Cloud Identity Summit, and more and more, people seem to be thinking that IAM is the true extended-enterprise management. It’s more than just identity and access; it extends across services and is essential for extended enterprise processes.

Also, to your point, being more inclusive means that you need to have the best of all worlds. You need to be able to be doing IAM well on-premises, as well as in the cloud — and not either/or.

Rolls: Most of the organizations that I speak to these days are trying to manage a balance between being enterprise-ready — so supporting controls and automation and access management for all applications, while being very forward looking, so also deploying that solution from the cloud for cost and agility reasons.

For these organizations, choosing an IDaaS solution is not a compromise in risk mitigation, it’s a conscious direction toward a more off-the-shelf approach to managing identity. Look, everyone has to address security and user access controls, and making a choice to do that as a service can’t compromise your position on controls and risk mitigation.

Gardner: I suppose the risk of going hybrid is that if you have somewhat of a distributed approach to your IAM capabilities, you’ll lose that all-important single view of management. I’d like to hear more, as we get into these tenets, of how you can maintain that common control.

You have put in some serious thought into making a logical set of five tenets that help people understand and deal with these changeable markets. So let’s start going through those. Tell me about the first tenet, and then we can dive in and maybe even hear an example of where someone has done this right.

Focusing on identity

Rolls: Obviously it would be easy to draw 10 or 20, but we like to try and compress it. So there’s probably always the potential for more. I wouldn’t necessarily say these are in any specific order, but the first one is the idea of focusing on the identity and not the account.

This one is pretty simple. Identities are people, not accounts in an online system. And something we learned early in the evolution of IAM was that in order to gain control, you have to understand the relationships between people (identities) and their accounts, and between those accounts and the entitlements and data they give access to.

So this tenet really sits at the heart of the IAM value proposition — it’s all about understanding who has access to what, and what it really means to have that access. By focusing on the identity — and capturing all of the relationships it has to accounts, to systems, and to data — that helps map out the user security landscape and get a complete picture of how things are configured.
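To make that identity-centric model concrete, here is a minimal sketch in which one person correlates to many accounts and their entitlements. The class and field names are illustrative only, not SailPoint’s actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class Account:
    """One account in one system (on-prem app, SaaS app, directory, ...)."""
    system: str
    account_id: str
    entitlements: set = field(default_factory=set)

@dataclass
class Identity:
    """A person, correlated to every account they hold."""
    name: str
    accounts: list = field(default_factory=list)

    def effective_access(self):
        """Answer 'who has access to what': entitlements grouped by system."""
        access = {}
        for acct in self.accounts:
            access.setdefault(acct.system, set()).update(acct.entitlements)
        return access

alice = Identity("Alice", [
    Account("ActiveDirectory", "alice.w", {"Domain Users"}),
    Account("Salesforce", "awong@example.com", {"Sales Rep", "Export Reports"}),
])
print(alice.effective_access())
```

Reporting and certification then start from the identity (Alice), rather than from any single account, which is exactly the shift this tenet describes.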

Gardner: If I understand this correctly, all of us now have multiple accounts. Some of them overlap. Some of them are private. Some of them are more business-centric. As we get into the Internet of Things, we’re going to have another end-point tier associated with a user, or an identity, and that might be sensors or machines. So it’s important to maintain the identity focus, rather than the account focus. Did I get that right?

Rolls: We see this today in classic on-prem infrastructure with system-shared and -privileged accounts. They are accounts that are operated by the system and not necessarily by an individual. What we advocate here, and what leads into the second tenet as well, is this idea of visibility. You have to have ownership and responsibility. You assign and align the system and functional accounts with people that can have responsibility.

With the Internet of Things, I wouldn’t say there’s nothing new, because if nothing else it’s potentially a new order of scale. But functionally it’s the same thing: understanding the relationships.

For example, I want to tie my Nest account back to myself or to some other individual, and I want to understand what it means to have that ownership. It really is just more of the same, and those principles that we have learned in enterprise IAM are going to play out big time when everything has an identity in the Internet of Things.

Gardner: Any quick examples of tenet one, where we can identify that we’re having that focus on the user, rather than the account, and it has benefited them?

Rolls: For sure. The consequences of not understanding and accurately managing those identity and account relationships can be pretty significant. Unused and untracked accounts, something that we commonly refer to in the industry as “orphan accounts,” often lead to security breaches. That’s why, if you look at the average identity audit practice, it’s very focused on controls for those orphan accounts.

We also know for a fact, based on the network forensic analysis that happens post-breach, that in many of the high-profile, large-scale security breaches we’ve seen over the last two to five years, the back door was left open by an account that nobody owns or manages. It’s just there. And if you go over to the dark side and look at how the bad guys construct vulnerabilities, the first things they look for are these unmanaged accounts.

So it’s low-hanging fruit for IAM to better manage these accounts because the consequences can be fairly significant.
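As a rough illustration of how an IAM tool might surface orphan accounts, the sketch below correlates aggregated accounts against known identities and flags anything unowned. The data shapes and names here are hypothetical:

```python
def find_orphan_accounts(accounts, identities):
    """Return accounts that correlate to no known identity ('orphans')."""
    owned = {acct_id for ident in identities for acct_id in ident["accounts"]}
    return [a for a in accounts if a["account_id"] not in owned]

# Accounts aggregated (read-only) from a target system:
accounts = [
    {"system": "unix-prod", "account_id": "jsmith"},
    {"system": "unix-prod", "account_id": "backup_svc"},  # nobody owns this one
]
identities = [{"name": "John Smith", "accounts": ["jsmith"]}]

for orphan in find_orphan_accounts(accounts, identities):
    print("orphan:", orphan["system"], orphan["account_id"])
# prints: orphan: unix-prod backup_svc
```

An audit control for orphan accounts is essentially this correlation, run continuously across every connected system.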

Tenet two

Gardner: Okay, tenet two. What’s next on your priority list?

Rolls: The next is twofold: visibility is king, and silos are bad. These are really two closely related thoughts.

The first part is the idea that visibility is king, and this comes from the realization that you have to be able to capture, model, and visualize identity data before you have any chance of managing it. It’s like the old saying that you can’t manage what you can’t measure.

It’s the same thing for identity. You can’t manage the access and security you don’t see, and what you don’t see is often what bites you. So this tenet is the idea that your IAM system absolutely must support rapid, read-only aggregation of account and entitlement information as a first step, so you can understand the landscape.

The second part is the idea that silos of identity management can be really, really bad. A silo here is a standalone IAM application, or what one might think of as a domain-specific IAM solution. These are things like an IDaaS offering that only does cloud apps, or an Active Directory-only management solution: basically, any IAM tool that creates a silo of process and data. This isolation goes against the idea of visibility and control that we just covered in the first tenet.

You can’t see the data if it’s hidden in a siloed system. It’s isolated and doesn’t give you the global view you need to manage all identity for all users. As a vendor, we see some real-world examples of this. SailPoint just replaced a legacy provisioning solution at a large US-based bank, for example, because the old system was only touching 12 of their core systems.

The legacy IAM system the bank had was a silo managing just the Unix farm. It wasn’t integrated, and its data and use cases weren’t shared. The customer needed a single place for their users to go to get access, and a single point of password control for their on-prem Unix farm and for their cloud-based, front-end applications. So today SailPoint’s IdentityNow provides that single view for them, and things are working much better.

Gardner: It also reminds me that we need to be conscious of supporting the legacy in the older systems, recognizing that they weren’t designed necessarily for the reality we’re in now. We also need to be flexible in the sense of being future-proof. So it’s having visibility across your models that are shifting in terms of hybrid and cloud, but also visibility across the other application sets and platforms that were never created with this mixture of models that we are now supporting.

Rolls: Exactly right. In education, we say “no child left behind.” In identity, we say “no account left behind, and no system left behind.” We also shouldn’t forget there is a cost associated with maintaining those siloed IAM tools, too. If the system only supports cloud, or only supports on-prem, or managing identity for mobile, SaaS, or just one area of the enterprise — there’s cost. There’s a real dollar cost for buying and maintaining the software, and probably more importantly, a soft cost in the end-user experience for the people that have to manage across those silos. So these IAM silos are not only preventing visibility and controls, but there is big cost here, a real dollar cost to the business, as well.

Gardner: This gets closer to the idea of a common comprehensive view of all the data and all the different elements of what we are trying to manage. I think that’s also important.

Okay, number three. What are we looking at for your next tenet, and what are the ways that we can prevent any of that downside from it?

Complete lifecycle

Rolls: This tenet comes from the school of identity hard knocks, and is something I’ve learned from being in the IAM space for the past 20 or so years — you have to manage the complete lifecycle for both the identity, and every account that the identity has access to.

Our job in identity management, our “place” if you will in the security ecosystem, is to provide cradle-to-grave management for corporate account assets. It’s our job to manage and govern the full lifecycle of the identity — a lifecycle that you’ll often hear referred to as JML, meaning Joiners, Movers and Leavers.

As you might expect, when gaps appear in that JML lifecycle, really bad things start to happen. Users don’t get the system access they need to get their jobs done, the wrong people get access to the wrong data and critical things get left behind when people leave.

Each phase has its risks: in the Mover phase, people accumulate access to data they should no longer have, and in the Leaver phase, accounts get left behind. You have to track every account through that JML lifecycle, cradle to grave.

That’s a very big issue for most of the companies we talk to, and it’s all captured in that lifecycle.
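The JML idea can be sketched as a simple event handler: each Joiner, Mover, or Leaver event re-derives a user’s access so nothing lingers. The role names and birthright entitlements below are invented for illustration:

```python
BIRTHRIGHT = {"email", "intranet"}            # access every employee gets
ROLE_ACCESS = {"sales": {"crm"}, "finance": {"ledger"}}

def handle_jml_event(event, identity, role, access):
    """Drive provisioning from Joiner/Mover/Leaver events."""
    if event in ("joiner", "mover"):
        # Re-derive from the current role: new access is granted and
        # anything that no longer applies is revoked in one step.
        access[identity] = BIRTHRIGHT | ROLE_ACCESS.get(role, set())
    elif event == "leaver":
        access.pop(identity, None)            # de-provision everything

access = {}
handle_jml_event("joiner", "dana", "sales", access)   # grants email, intranet, crm
handle_jml_event("mover", "dana", "finance", access)  # revokes crm, grants ledger
handle_jml_event("leaver", "dana", None, access)      # nothing left behind
print(access)  # prints: {}
```

The gaps the discussion warns about appear precisely when one of these events never reaches the identity system.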

Gardner: So it’s not just orphan accounts, but it’s inaccurate or outdated accounts that don’t have the right and up-to-date information. Those can become back doors. Those can become weak links.

It appears to me, Darran, that there’s another element here in how our workplace is changing. We’re seeing more and more of what they call “contingent workforces,” where people will come in as contractors or third-party suppliers for a brief period of time, do a job, and get out.

It’s this lean, agile approach to business. This also requires a greater degree of granularity and fine control. Do you have any thoughts about how this new dynamic workforce is impacting this particular tenet?

Rolls: It’s certainly increasing the pressure on IT to understand and manage all of its population of users, whether they’re short-term contractors or long-term employees. If they have access to an asset that the business owns, it’s the business’s fiduciary duty to manage the lifecycle for that worker.

In general, worker populations are becoming more transient and work groups more dynamic. Even if it’s not a new person joining the organization, we’re creating and using more dynamic groups of people that need more dynamic systems access.

It’s becoming increasingly important for businesses today to be able to put together the access that people need quickly when a new project starts and then accurately take it away when the project finishes. And if we manage that dynamic access without a high degree of assured governance, the wrong people get to the wrong stuff, and valued things get left behind.
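One way to keep that dynamic, project-based access governed is to attach an explicit expiry to every grant and sweep for expired ones. This is a hedged sketch of the pattern, with an invented grant shape, not a product feature:

```python
from datetime import date

def grant(grants, user, entitlement, expires):
    """Grant project access with an explicit end date."""
    grants.append({"user": user, "entitlement": entitlement, "expires": expires})

def revoke_expired(grants, today):
    """Governance sweep: revoke (and report) every grant whose window closed."""
    expired = [g for g in grants if g["expires"] < today]
    grants[:] = [g for g in grants if g["expires"] >= today]
    return expired

grants = []
grant(grants, "contractor-42", "repo:project-x", date(2015, 6, 30))
grant(grants, "alice", "repo:project-x", date(2015, 12, 31))

removed = revoke_expired(grants, date(2015, 7, 1))
print([g["user"] for g in removed])  # prints: ['contractor-42']
```

The design point is that revocation is scheduled at grant time, so a project wrapping up cannot silently leave access behind.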

Old account

Quite often, people ask me if it really matters when the odd account gets left behind, and my answer usually is: it certainly can. A textbook example of this is when a sales guy leaves his old company, goes to join a competitor, and no one takes away his salesforce.com account. He then spends the next six months dipping into his old company’s contacts and leads, because he still has access to the application in the cloud.

This kind of stuff happens all the time. In fact, we recently replaced another IDaaS provider at a client on the West Coast, specifically because “the other vendor” — who shall remain nameless — only did just-in-time SAML provisioning, with no leaver-based de-provisioning. So customers really do understand this stuff and recognize the value. You have to support the full lifecycle for identity, or bad things happen for the customer and the vendor.

Gardner: All right. We were working our way through our tenets. We’re now on number four. Is there a logical segue between three and four? How does four fit in?

Rolls: Number four, for me, is all about consistency. It talks to the fact that we have to think of identity management in terms of consistency for all users, as we just said, from all devices and accessing all of our applications.

Practically speaking, this means that whether you sit with your Windows desktop in the office, or you are working from an Android tablet back at the house, or maybe on your smartphone in a Starbucks drive-through, you can always access the applications that you need. And you can consistently and securely do something like a password reset, or maybe complete a quarterly user access certification task, before hitting the road back to the office.

Consistency here means that you get the same basic user experience, and I use the term user experience here very deliberately, and the same level of identity service, wherever you are. It has become very, very important, particularly as we have introduced a variety of incoming devices, that we keep our IAM services consistent.

Gardner: It strikes me that this consistency has to be implemented and enforced from the back-end infrastructure, rather than the device, because the devices are so changeable. We’re even thinking about a whole new generation of devices soon, and perhaps even more biometrics, where the device becomes an entry point to services.

Tell me a bit about the means by which consistency can take place. This isn’t something you build into the device necessarily.

Rolls: Yes, that consistency has to be implemented in the underlying service, as you’ve highlighted. It’s very easy to think of consistency as just being in the IAM UI, or just in the device display, but it really extends to the identity API as well. A very good example of API-level consistency is to think like a corporate application developer and consider how they look at consistency for IAM, too.

Assume our corporate application developer is developing an app that needs to carry out a password reset, or maybe it needs to do something with an identity profile. Does that developer write a provisioning connector themselves? Or should they implement a password reset in their own custom code?

The answer is, no, they don’t roll their own. Instead, they should make use of the consistent API-level services that the IAM platform provides — they make calls to the IDaaS service. The IDaaS service is then responsible for doing the actual password reset using consistent policies, consistent controls, and a consistent level of business service. So, as I say, it’s about consistency for all use cases, from all devices, accessing all applications.
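In code, the developer’s side of that contract might look like the thin client below. The endpoint path, payload, and `IdentityServiceClient` name are hypothetical, invented here to illustrate calling one consistent IDaaS API rather than rolling your own reset logic:

```python
import json
import urllib.request

class IdentityServiceClient:
    """Thin wrapper over a hypothetical IDaaS REST API.

    The application never touches target-system credentials or writes its
    own provisioning connector; the IDaaS service applies the consistent
    policies and controls behind this one call."""

    def __init__(self, base_url, token, opener=urllib.request.urlopen):
        self.base_url = base_url
        self.token = token
        self._open = opener  # injectable for testing

    def reset_password(self, user_id):
        # Endpoint and payload are illustrative, not a real product API.
        req = urllib.request.Request(
            f"{self.base_url}/v1/users/{user_id}/password-reset",
            data=json.dumps({"notify": True}).encode(),
            headers={"Authorization": f"Bearer {self.token}",
                     "Content-Type": "application/json"},
            method="POST",
        )
        return self._open(req)

# In application code, a reset then becomes one governed call:
#   IdentityServiceClient("https://idaas.example.com", token).reset_password("jsmith")
```

Because every app funnels through the same service, policy changes (complexity rules, notifications, audit logging) land in one place rather than in each application’s custom code.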

Thinking about consistency

Gardner: And even as we think about the back-end services support, that itself also needs to extend to on-prem legacy, and also to cloud and SaaS. So we’re really thinking about consistency deep and wide.

Rolls: Precisely, and if we don’t think about consistency for identity as a service, we’re never going to have control. And importantly, we’re never going to reduce the cost of managing all this stuff, and we’re never going to lower the true risk profile for the business.

Gardner: We’re coming up on our last tenet, number five. We haven’t talked too much about behavior and buy-in. You can lead a horse to water, but you can’t make him drink. This, of course, has an impact on how we enforce consistency across all these devices, as well as the service model. So what do we need to do to get user buy-in? How does number five affect that?

Rolls: Number five, for me, is the idea that the end-user experience for identity is everything. Once upon a time, the only user for identity management was IT itself, and identity was an IT tool for IT practitioners. It was mainly used by the help desk and by IT pros to automate identity and access controls. Fortunately, things have changed a lot since then, both in the identity infrastructure and, very importantly, in end users’ expectations.

Today, IAM really sits front and center in the business user’s IT experience. When we think of something like single sign-on (SSO), it literally is the front door to the applications and the services that the business is running. When a line-of-business person sits down at an application, they’re just expecting seamless access via secure single sign-on. The expectation is that they can just quickly and easily get access to the things they need to get their job done.

They also expect identity-management services, like password management, access request, and provisioning to be integrated, intuitive, and easy to use. So the way these identity services are delivered in the user experience is very important.

Pretty much everything is self-service these days. The expectation is to move the business user to self-service for pretty much everything, and that very much includes Identity Management as a Service (IDaaS) as well. So the UI just has to be done right and the overall users’ experience has to be consistent, seamless, intuitive, and just easy to deal with. That’s how we get buy-in for identity today, by making the identity management services themselves easy to use, intuitive, and accessible to all.

Gardner: And isn’t this the same as saying making the governance infrastructure invisible to the end user? In order to do that, you need to extend across all the devices, all the deployment models, and the APIs, as well as the legacy systems. Do you agree that we’re talking about making it invisible, but we can’t do that unless you’re following the previous four tenets?

Rolls: Exactly. There’s been a lot of industry conversation around this idea of identity being part of the application and the user’s flow, and that’s very true. Some large enterprises do have their own user-access portals, specific places that you go to carry out identity-related activities, so we need integration there. On the other hand, if I’m sitting here talking to you and I want to reset my Active Directory password, I just want to pick up my iPhone and do it right there, and that means secure identity APIs.

We talked a good amount about the business user experience. It is very important to realize that it’s not just about the end user and the UI. It also affects how the IDaaS service itself is configured, deployed, and managed over time. This means the user experience for the system owner, be that someone in IT or in the line of business — it doesn’t really matter who — has to be consistent and easy to use, and has to lead to easier configuration, faster deployment, and faster time-to-value. We do that by making sure that the administration interface and the APIs that support it are consistent and generally well thought out, too.

Intersect between tenets

Gardner: I can tell, Darran, that you’ve put an awful lot of thought into these tenets. You’ve created them with some order, even though they’re equally important. This must be also part of how you set about your requirements for your own products at SailPoint.

Tell me about the intersection between these tenets, the marketplace, and what SailPoint is bringing to bear: the problems these tenets identify, but also the solutions for doing things well.

Rolls: You would expect every business to say these words, but they have great meaning for us. We’re very, very customer-focused at SailPoint. We’re very engaged with our customers and our prospects. We’re continually listening to the market and to what the buying customer wants. That’s the outside-in part of the product-requirements story: basically, building solutions to real customer problems.

Internally, we have a long history in identity management at SailPoint. That shows itself in how we construct the products and how we think about the architecture and the integration between pieces of the product. That’s the inside-out part of the product-requirements process: building innovative products and solutions that work well over time.

So I guess that all really comes down to good internal product management practices. Our product team has worked together for a considerable time across several companies. So that’s to be expected. It’s fair to say that SailPoint is considered by many in the industry as the thought leader on identity governance and administration. We now work with some of the largest and most trusted brand names in the world, helping them provide the right IAM infrastructure. So I think we’re getting it right.

As SailPoint has strategically moved into the IDaaS space, we’ve brought with us a level of trust, a breadth of experience, and a depth of IAM knowledge that shows itself in how we use and apply these tenets of identity in the products and the solutions that we put together for our customers.

Gardner: Now, we talked about the importance of being legacy-sensitive, focusing on what the enterprise is and has been and not just what it might be, but I’d like to think a little bit about the future-proofing aspects of what we have been discussing.

Things are still changing and, as we said, there are new generations of mobile devices, more biometrics perhaps doing away with passwords and identifying ourselves through the device that then needs to filter back throughout the entire lifecycle of IAM implications and end points.

So when you do this well, if you follow the five tenets, if you think about them and employ the right infrastructure to support governance in IAM for both the old and the new, how does that set you up to take advantage of some of the newer things? Maybe it’s big data, maybe it’s hybrid cloud, or maybe it’s agile business.

It seems to me that there’s a virtuous adoption benefit when you do IAM well.

Changes in technologies

Rolls: As you’ve highlighted, there are lots of new technologies out there that are effecting change in corporate infrastructure. In itself, that change isn’t new. I came into IT with the advent of distributed systems. We were going to replace every mainframe. Mainframes were supposed to be dead, and it’s kind of interesting that they’re still here.

So infrastructure change is most definitely accelerating, and the options available for the average IT business these days — cloud, SaaS and on-prem — are all blending together. That said, when you look below the applications, and look at the identity infrastructure, many things remain the same. Consider a SaaS app like Salesforce.com. Yes, it’s a 100 percent SaaS cloud application, but it still has an account for every user.

I can provide you with SSO to your account using SAML, but your account still has fine-grained entitlements that need to be provisioned and governed. That hasn’t changed. All of the new generation of cloud and SaaS applications require IAM. Identity is at the center of the application, and it has to be managed. If you adopt a mature and holistic approach to that management, you’re in good stead.

Another great example is the mobile device management (MDM) platforms out there — a new piece of management infrastructure that has come about to manage mobile endpoints. The MDM platforms themselves have identity control interfaces. It’s our job in IAM to connect with these platforms and provide control over what’s happening to identity on the endpoint device, too.

Our job in identity is to manage identity lifecycles wherever they sit in the infrastructure. If you’re not on board, you’d better get on board, because the challenges for identity are certainly not going away.

Interestingly, I’m sometimes challenged when I make a statement like that. I’ll often get the reply that “with SAML single sign-on, the passwords go away, so the account management problem goes away, right?” The answer is no, they don’t. There are still accounts in the application infrastructure. So good, best-practice identity and access management will remain key as we keep moving forward.

Gardner: And of course as you pointed out earlier, we can expect the scale of what’s going to be involved here to only get much greater.

Rolls: Yes, 100 percent. Scale is key to architectural thinking when you build a solution today, and we’re really only just starting to touch where scale is going to go.

It’s very important to us at SailPoint, when we build our solutions, that the product we deliver understands the scale of business today and the scale that is to come. That affects how we design and integrate the solutions, it affects how they are configured and how they are deployed. It’s imperative to think scale — that’s certainly something we do.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: SailPoint Technologies.


Large Russian bank, Otkritie Bank, turns to big data analysis to provide real-time financial insights

The next BriefingsDirect deep-dive big data benefits case study interview explores how Moscow-based Otkritie Bank, one of the largest private financial services groups in Russia, has built out a business intelligence (BI) capability for wholly new business activity monitoring (BAM) benefits.

The use of HP Vertica as a big data core to the BAM infrastructure provides Otkritie Bank improved nationwide analytics and a competitive advantage through better decision-making based on commonly accepted best information that’s updated in near real-time.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about Otkritie Bank’s drive for improved instant analytics, BriefingsDirect sat down with Alexei Blagirev, Chief Data Officer at Otkritie Bank, at the recent HP Big Data 2014 Conference in Boston. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us about your choice for BI platforms.

Blagirev: Otkritie Bank is a member of the Open Financial Corporation (now Otkritie Financial Corporation Bank), which is one of the largest private financial services groups in Russia. The reason we selected HP Vertica was that we tried to establish a data warehouse that could provide operational data storage and could also be an analytical OLAP solution.


It was a very hard decision. We drew on the past experience of our team, from my side, and so on. Everyone had some negative experience with solutions like Oracle, because there was a big constraint.

We could not integrate operational data storage and an OLAP solution. Why? Because the data warehouse (DWH) has to take a high transactional load, and that was, in every case, usually the biggest constraint in building high-transactional data storage.

Vertica was a very good solution that removed this constraint. While selecting Vertica, we were also evaluating different solutions like IBM. We identified advantages of Vertica against IBM from two different perspectives.

One was performance. The second was that Vertica is cost-efficient. Since we were comparing with Netezza (now part of IBM), we were comparing not only software, but software plus hardware. You can’t build a custom-size Netezza cluster; you can only build it in fixed blocks of 32 terabytes, and so on.

Very efficient

We were also limited by the logistics of those building blocks, the so-called big green box of Netezza. Vertica, in contrast, is really efficient, because we can use any hardware.

So we calculated our total cost of ownership (TCO) over a five-year horizon, and it was lower than if we had built the data warehouse with different solutions. That was the reason we selected Vertica.

Fully experience the HP Vertica analytics platform

Get the free HP Vertica Community Edition

Become a member of myVertica.

From the technical perspective and from the cost-efficient perspective, there was a big difference in the business case. Our bank is not a classical bank in the Russian market, because in our bank the technology team leads the innovation, and the technology team is actually the influence-maker inside the business.

So, the business was with us when we proposed the new data warehouse. We proposed to build the new solution to collect all data from the whole of Russia and to organize a so-called continuous load. This means that within the day, we can show all the data, what’s going on with the business operations, from all lines of business across all of Russia. It sounds great.

When we selected HP Vertica, we selected not only Vertica, but a whole technical bundle. We also needed to host a replicator, and we chose Oracle GoldenGate.

We selected the appropriate ETL tool and the BI front end. So all together, it was a technical bundle, with Vertica as the middleware technical solution. So far, we have built a near-real-time DWH, but we don’t call it near-real-time; we call it “just-in-time,” because we want to be congruent with the decision-making process. We want to influence the business to think more about their decisions and about their business processes.

As of now, I can show all the data collected and put inside the DWH within 15 minutes, and I can show the first general process in the bank, the loan-application process. I can show the number of created applications, plus online scoring, and I can show how many customers we have at that moment in each region, the amounts, the average check, the approval rate, and the booking rate. I can show it to management the same day, which is absolutely amazing.
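As a rough illustration of the intra-day funnel reporting Blagirev describes, the sketch below aggregates per-region application counts, average check, approval rate, and booking rate from a stream of application events. The event schema and the sample numbers are invented for illustration; they are not Otkritie Bank’s actual data model.

```python
from collections import Counter

def funnel_metrics(events):
    """Aggregate intra-day loan-funnel metrics per region.

    Each event is a dict with 'region', 'status' (one of
    'created', 'approved', 'booked'), and 'amount'. A booked
    application is assumed to have been approved first.
    """
    created, approved, booked, amounts = Counter(), Counter(), Counter(), Counter()
    for e in events:
        region = e["region"]
        created[region] += 1
        amounts[region] += e["amount"]
        if e["status"] in ("approved", "booked"):
            approved[region] += 1
        if e["status"] == "booked":
            booked[region] += 1
    return {
        region: {
            "applications": n,
            "avg_check": amounts[region] / n,
            "approval_rate": approved[region] / n,
            "booking_rate": booked[region] / n,
        }
        for region, n in created.items()
    }

# Hypothetical sample events for one 15-minute load window.
events = [
    {"region": "Moscow", "status": "created", "amount": 100_000},
    {"region": "Moscow", "status": "approved", "amount": 200_000},
    {"region": "Moscow", "status": "booked", "amount": 300_000},
    {"region": "Siberia", "status": "booked", "amount": 150_000},
]
print(funnel_metrics(events)["Moscow"])  # 3 applications, 2 approved, 1 booked
```

In the real system these aggregates would be computed inside the DWH (for example, SQL over Vertica) on the continuously loaded data; the Python here only mirrors the arithmetic behind the metrics.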

The tricky part is what the business will do with this data. It’s tricky, because the business was not ready for this. The business was actually expecting that they could run a script, go to the kitchen, make a coffee, and then come back.

But, boom, everything appears really quickly, and it’s actually influencing the business to make decisions, to think more, and to think fast. This, I believe, is the biggest challenge, to grow business analytics inside the business for those who will be able to use this data.

As of now, we are in the pilot stage, the pilot phase of what we call business activity monitoring (BAM). This is actually a funny story, because in Russia the same abbreviation refers to the Baikal-Amur Mainline (BAM), a huge railroad across the whole country that connects all the cities. It’s kind of our story, too; we connect all departments and show the data in near real time.

Next phase

In this case, we’re actually working on the next phase of BAM, and we’re trying to synchronize the methodology across all products and all departments, which is very hard. For example, approval rates could be calculated differently for credit cards than for cash loans because of the process.

Since we’re trying to establish a BI function almost from ground zero, HP Vertica is only the technical side. We need to think more about the educational side, and we need to think about the framework side. The general framework we’re trying to follow, since we’re building a BI function, is, first of all, a united Business Glossary (or accepted services directory).

It seems obvious to use a Business Glossary, with a single term referring to the same entity everywhere. But that is not happening as of now, because business units are still using different definitions. I think it’s a common problem everywhere in business.

The second is to explain that there are two different types of BI tools. One is BI for the data mart, the so-called regular report. The other is a data discovery tool: the tool for the data lab (i.e., a data-mining tool).


So we differentiate the data lab from the data mart. Why? Because we’re trying to build a service-oriented model, which in the end produces analytical services based on a functional map.

Here is the tricky part: when you try to answer a question with analytics, it is usually a recurring question. All the questions raised by the business, by any business analyst, are recurring questions; they are fundamental.

The correct way to develop an analytical service is to collect all these questions into a kind of question library. You can call it a functional map or whatever you like, but these questions define the analytical service for those functions.

For example, if you’re trying to do cost control, what kind of business questions do you want to answer? What kind of business analytics or metrics do you want to bring to the end users? Are those really mapped to the questions raised, or are you trying to present different analytics? As of now, we find it difficult to convey this approach. And this is the first part.
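One minimal way to picture a question library is a mapping from business function to its recurring questions and the metrics that answer them. The functions, questions, and metric names below are hypothetical, invented only to illustrate the idea:

```python
# A hypothetical question library: each business function maps its
# recurring business questions to the metrics that answer them.
QUESTION_LIBRARY = {
    "cost_control": {
        "How much are we spending per channel?": ["spend_by_channel"],
        "Which costs grew fastest this quarter?": ["cost_growth_rate"],
    },
    "loan_origination": {
        "How many applications were approved today?": ["applications", "approval_rate"],
    },
}

def metrics_for(function):
    """Return the deduplicated, sorted set of metrics an analytical
    service must provide to answer a function's recurring questions."""
    metrics = set()
    for metric_names in QUESTION_LIBRARY.get(function, {}).values():
        metrics.update(metric_names)
    return sorted(metrics)

print(metrics_for("cost_control"))  # ['cost_growth_rate', 'spend_by_channel']
```

Deriving the metric set of a service from its questions keeps the analytics mapped to what the business actually asks, which is the discipline Blagirev is describing.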

The second part is a data lab for ad hoc data discovery. When, for example, you’re trying to produce a marketing campaign for customers, trying to produce customer segments, trying to analyze a scoring methodology, or trying to validate scientific expectations, you need to do some research.

It’s not a regular activity. It’s more ad hoc analysis, and it will use different tools for BI. You can’t combine all the tools and call it a universal BI tool, because it doesn’t work this way. You need to have a different tool for this.

Creating a constraint

This creates a constraint for the business users, because they need some education. In the end, they need to know many different BI tools.

This is a key constraint we have now, because end users are most comfortable working with Excel, which is great. I think it’s the most popular BI data discovery tool in the world, but it has its own constraints.

I love Microsoft. Everyone loves Microsoft, but there are other beautiful tools, like TIBCO Spotfire, for example, which integrates MATLAB, R, and so on. You can import models from SAS and so on, and you can also write scripts inside it. It is a brilliant data discovery tool.

But try to teach this tool to your business analysts. In the beginning it’s hard, because it’s like a J curve. They have to work through the valley of despair, criticizing it: “Oh my God, what are you trying to create? This is a mess from my perspective.” And I agree with them in the beginning, but they need to go through that valley of despair, because at the end of it there will be really good stuff. This is a matter of cultural influence.

Gardner: Tell me, Alexei, what sort of benefits have you been able to demonstrate to your banking officials, since you’ve been able to get this near real-time, or just-in-time analytics — other than the fact that you’re giving them reports? Are there other paybacks in terms of business metrics of success?

Blagirev: First of all, we differentiate our stakeholders. We have top-management stakeholders, which is the board. Then there are the middle-level stakeholders, who are our regional directors.

I’ll start from the bottom, with the regional directors. They just open the dashboard. They don’t click anything or refresh. They just see that they have data and analytics on what’s going on in their region.

They don’t care about the methodology, because there is BAM, and they just use the figures for decision-making. You don’t think about how the data got there; you think about what to do with these figures. You focus more on your decision, which is good.

They start to think more about their decisions, and they start to think more about the process side. We may show, for example, that at 12 o’clock our stream of cash-loan applications went down. Why? I have no idea. Maybe they all went out for dinner. I don’t know.

But nobody says that. They say, “Alexei, something is happening.” They see true figures and they know they are true figures. They have instruments to exercise operational excellence. This is the first benefit.

Top management

The second is top management. We had management board meetings where everyone came and showed different figures. We’d spend 30 minutes, or maybe an hour, just debating which figures were true. I think this is a common situation in Russian banks, and maybe not only banks.

Now, we can just open the report, and I say, “This is the single report, because it shows intra-day figures, and these metrics were calculated according to the methodology.” We actually attach the time of calculation, which shows that this KPI, for example, was calculated at 12 o’clock. You can take the figures at 12 o’clock, and if you don’t believe them, you can ask the auditors to repeat the calculation, and it will come out the same.

Nobody argues anymore about how to calculate the figures. So they have started to think about what methodology to apply to the business process. This actually refocuses attention on what’s going on with our business process. This is the second benefit.

Gardner: Any other advice that you would give to organizations that are beginning a process toward BI?

Blagirev: First of all, don’t be afraid to make mistakes. It’s a big thing, and we all forget that, but don’t be afraid. Second, try to create your own vision of strategy for at least one year.

Third, try to look at the whole company, not just the software vision, because HP Vertica or other BI tools are only a part. Try to see all the company’s lines of business, all the information, because this is important. You need to understand where the value is: where shareholder value is being lost, and whether you are creating value for the shareholder. If the answer is yes, don’t be afraid to defend your decision and your strategy, because otherwise there will be problems in the end. Believe me.

As Gandhi mentioned, in the beginning everyone laughs, then they begin hating you, and in the end, you win.

Gardner: With your business activity monitoring, you’ve been able to change business processes, influence the operations, and maybe even the culture of the organization, focusing on the now and then the next set of processes. Doesn’t this give you a competitive advantage over organizations that don’t do this?

Blagirev: For sure. It actually gives a competitive advantage, but that advantage depends on the decisions that you’re making. This actually depends on everyone in the organization.

Understanding this brings new value to the business, but it depends on the final decisions of the people who sit in those positions. Now those people understand. They’re actually running the business, and they can see how they’re running it.


I can compare this solution to those at other banks. I have worked for Société Générale and for Alfa-Bank, which is the largest private bank in Russia, and I’ve been an auditor of financial services at PwC. I’ve seen different reporting and different processes, and I can say that this solution is actually unique in the market.

Why? Because it shows congruent information in near real time, within the day, for all the data, for the whole of Russia. Of course it brings benefit, but you need to understand how to use it. If you don’t understand how to use this benefit, it’s going to be just a technical thing.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


A practical guide to rapid IT Service Management as a foundation for overall business agility

The next BriefingsDirect thought leadership panel discussion centers on how rapidly advancing IT service management (ITSM) capabilities form a bedrock business necessity, not just an IT imperative.

Businesses of all stripes rate the need to move faster as a top priority, and many times, that translates into the need for better and faster IT projects. But traditional IT processes and disjointed project management don’t easily afford rapid, agile, and adaptive IT innovation.

The good news is that a new wave of ITSM technologies and methods allow for a more rapid ITSM adoption — and that means better rapid support of agile business processes.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To deeply explore a practical guide to fast ITSM adoption as a foundation for overall business agility, the panel consists of John Stagaman, Principal Consultant at Advanced MarketPlace, based in Tampa, Florida; Philipp Koch, Managing Director of InovaPrime, Denmark; and Erik Engstrom, CEO of Effectual Systems in Berkeley, California. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: John Stagaman, let me start with you. We hear a lot, of course, about the faster pace of business, and cloud and software as a service (SaaS) are part of that. What, in your mind, are the underlying trend or trends that are forcing IT’s hand to think differently, behave differently, and to be more responsive?

Stagaman: If we think back to the typical IT management project historically, what happened was that, very often, you would buy a product. You would have your requirements and you would spend a year or more tailoring and customizing that product to meet your internal vision of how it should work. At the end of that, it may not have resembled the product you bought. It may not have worked that well, but it met all the stakeholders’ requirements and roles, and it took a long time to deploy.


That level of customization and tailoring resulted in a system that was hard to maintain, hard to support, and especially hard to upgrade, if you had to move to a new version of that product down the line. So when you came to a point where you had to upgrade, because your current version was being retired or for some other reason, the cost of maintenance and upgrade was also huge.

It was a lesson learned by IT organizations. Today, saying that an upgrade will take a year, or even six months, really gets a response. Why should it? The approach has changed with most of the customers we go on-site to now. Customers say they want to use out of box. It used to be that they would say, “We want to use out of box, and here are all the things we want that are not out of box,” and sometimes that still happens.

But they’ve gotten much better at saying they want to start from out of box, leverage that, and then fill in the gaps, so that they can deploy more quickly. They’re not opening the box, throwing it away, and building something new. By working on that application foundation and extending where necessary, it makes support easier and it makes the upgrade path to future versions easier.

Moving faster

Gardner: It sounds like moving toward things like commodity hardware and open-source projects and using what you can get as is, is part of this ability to move faster. But is it the need to move faster that’s driving this or the ability to reduce customization? Is it a chicken and egg? How does that shape up?


Engstrom: I think the old use case of “design, customize, and implement” is being forced out as an acceptable approach, because SaaS, platform as a service (PaaS), and the cloud are raising stakeholders’ expectations. Stakeholders are retiring, and fresher sets of technologies and experiences are coming in. These two- and three-year standup projects are no longer acceptable.


If you’re not able to deliver fast time-to-value, you’re not going to get funding. Funding isn’t in $8 million and $10 million tranches anymore; it’s in $200,000 and $300,000 tranches. This is having a direct effect on on-premises tools, the way customers are planning, and OPEX versus CAPEX.

Gardner: Philipp, how do you come down on this? Is this about doing less customization or doing customization later in the process and, therefore, more quickly?

Koch: I don’t think it’s about the customization element in itself. It’s more that, in the past, customers went too far. They said they wanted to tailor the tool, but then they said they wanted this and that, took the software off the shelf, and started to rebuild it.

Now, with SaaS offerings coming into play, you can’t do that anymore. You can’t build your ITSM solution from scratch. You want to be able to take it, apply it to the use case, and adjust it with customization or configuration. You can no longer tailor it from the ground up.


But customization happens while you deploy the project, and that has to happen in a faster way. I can only concur with the other things that have already been said. We don’t have huge budgets anymore. IT as such never had huge budgets, but in the past it was accepted that a project like this took a long time. Nowadays, we want implementations of weeks, not months.

Gardner: Let’s just unpack a little bit the relationship between ITSM and IT agility. Obviously, we want things to move quickly and be more predictable, but what is it about moving to ITSM rapidly that benefits? And I know this is rather basic, but I think we need to do it just for all the types of listeners we have.

Back to you, John. Explain and unpack what we mean by rapid ITSM as a means to better IT performance and rapid management of projects.

Best practices

Stagaman: For an organization that is new to ITSM processes, starting with a foundational approach and moving in with an out-of-box build helps them align with best practice and can be a lot faster than if they try to develop from scratch. SaaS is a model for that, because with SaaS you’re essentially saying you’re going to use this standard package.

The standard package is strong, and there’s more leverage to use that. We had a federal customer that, based on best practice, reorganized how they did all their service levels. Those service levels were aligned with services that allowed them, for the first time, to report to their consuming bureaus the service levels per application that those bureaus subscribed to. They were able to provide much more meaningful reporting.

They wouldn’t have done that necessarily if the model didn’t point in that direction. Previously, they hadn’t organized their infrastructure along the lines to say, “We provide these application services to our customer.”

Gardner: Erik, how do you see the relationship between rapid, better ITSM and better overall IT performance? Do many people struggle with this relationship?

Engstrom: Our approach at Effectual, what we focus on, is the accountability of data and the ability for an organization to reduce waste through using good data. We’re not service [process] management experts in the sense that we would define a best practice; we’re strictly about “here is the best piece of data everyone on your team is working [with] across all tools.” In that way, what our customers get is transparency: data from one system is available in another system.

What that means is that far fewer of the wrong servers get taken offline. We had a customer bring down their [whole] retail zone of systems, systems the same team had just stood up the week before. Because the data was good, and because they were using out-of-the-box features, they were able to reduce mistakes and business impact they otherwise would not have seen.

Had they stayed with one tool or one silo of data, it’s only one source of opinion. Those kinds of mistakes are reduced when you share across tools. So that’s our focus and that’s where we’re seeing benefit.

Gardner: Philipp, can you tell us why rapid ITSM has a powerful effect here in the market? But, before we get into that and how to do it, why is rapid ITSM so important now?

Koch: What we’re seeing in our market is that customers are demanding the kind of service they’re getting at home. This sounds a little bit cliché, but they would like to order something on the Internet, have it delivered 10 minutes later, and have it working half an hour after that.

If we’re talking about a classical waterfall approach to projects, as was done 5 or 10 years ago, we’re talking about months, and that’s not what the customer wants.

And IT isn’t delivering that. In a lot of organizations, IT is still fairly slow in delivering bigger projects, and ITSM is considered a bigger project. We’re seeing a lot of shadow IT appearing, where business units that demand that agility aren’t getting it from IT, so they do it themselves, and then we have a big problem.

Counter the trend

With rapid ITSM, we can actually counter that trend. We can go in and give our customers what’s needed to meet the business demand of getting something fast. By fast, we’re now talking about weeks. We’re of course not talking about 10 minutes for a project the size of an ITSM implementation, but we can deploy a SaaS solution.

We can have it ready for production after a week or two and get it into use. Before, when we deployed on-premises or tailored from scratch, we were talking months. That’s a huge business advantage: being able to deliver what the business units are asking for.

Gardner: John Stagaman, what holds back a successful rapid ITSM approach? What hinders speed? Why has it been months rather than days, typically?

Stagaman: Erik referenced one thing already: the quality of source data when you go to build a system. One thing I’ve run into numerous times is the assumption that all the canonical sources of data, for just the general information you need to drive your IT system, are already available and easy to populate. By that I mean things like: What are our locations? What are our departments? Who are our people?

I’m not even getting to the point of asking what our configuration items are and how they are related. A lot of times, the company doesn’t have a good way to identify who a person is uniquely over time, because they use something based on the person’s name. The person gets married, the name changes, and all of a sudden that’s not a persistent ID.

One thing we address early is making sure that we identify those gold sources of data for who and what, for all the factual data that has to be loaded to support the process.
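Stagaman’s point about persistent identity can be sketched in a few lines: key people on an immutable identifier rather than on mutable attributes such as a surname. The registry class and the employee-number scheme here are invented for illustration, not any particular ITSM product’s data model:

```python
class PersonRegistry:
    """Toy identity registry: people are keyed on an immutable
    employee number, so mutable attributes such as a surname can
    change without breaking identity over time."""

    def __init__(self):
        self._people = {}

    def register(self, employee_id, name, department):
        self._people[employee_id] = {"name": name, "department": department}
        return employee_id

    def rename(self, employee_id, new_name):
        # The key (the identity) never changes; only the attribute does.
        self._people[employee_id]["name"] = new_name

    def lookup(self, employee_id):
        return self._people[employee_id]

registry = PersonRegistry()
pid = registry.register("E1001", "Jane Smith", "Finance")
registry.rename(pid, "Jane Jones")  # a marriage changes the name, not the ID
print(registry.lookup(pid)["name"])  # Jane Jones
```

Any system that instead keyed on the name string would silently fork the person’s history at the rename, which is exactly the gold-source problem described above.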

The other major thing that I run into that introduces risks into a project is when requirements aren’t really requirements. A lot of times, when we get requirements, it’s a bunch of design statements. Those design statements are about how they want to do this in the tool. Very often, it’s based on how the tool we’re replacing worked.

If you don’t go through those and say, “This is a statement of design, not a statement of functional requirement,” and ask what it is they actually need to do, it becomes very hard to look at the new tool you’re deploying and say how it meets that need. It can lead to excess customization, because you’re trying to meet a goal that isn’t consistent with how your new product works.

Those are two things we usually do very early on, where we have to quality-check the requirements; they are also the two things that most often cause a project to extend or derail.

Gardner: Philipp, any thoughts on these problems and hurdles, such as poor data quality or incomplete configuration management data? What is it, from your perspective, that holds things back?

Old approach

Koch: I agree with what John says. That’s definitely something that we see when we meet customers.

Other areas I see relate more to the execution of the projects themselves. Quite often, customers know what agile is, but they don’t understand it. They say they’re doing something in an agile way, then they show us a drawing that has a circle on it, and they think they are agile.

When you start to actually work with them, they’re still in the old waterfall approach of stage gates and milestones.

So you’re trying to do a rapid ITSM implementation that follows agile principles, but you get stuck on internal unawareness or misunderstanding of what this really means. You struggle to do an agile implementation, and it becomes non-agile. That, of course, delays projects.

We see that quite often. So at the beginning of a project, we try to run a workshop to get people to understand what it really means to do an agile implementation of an ITSM project. That’s one angle.

The other angle, which I also see quite often, concerns the requirements, in the way John described them. Quite often, those requirements are really features: hidden features the customer wants, turned into some sort of requirement. But very seldom do we see something that actually addresses the business problem.

They shouldn’t really care whether you can right-click in the background and add a new field to a form. That’s not what they should be asking for. They should be asking whether it’s easy to tailor the solution; it doesn’t really matter how. So quite often we spend a lot of time reading those requirements and then readjusting them to match what we really should be talking about. That, of course, delays projects.

In a nutshell, we technology people, who work with this on a daily basis, could deliver projects faster if we could get customers to accept the speed at which we deliver. I see that as a problem.

Gardner: So being real about agile, having better data, knowing more about what your services are and responding to them are all part of overcoming the inertia and the old traditional approaches. Let’s look more deeply into what makes a big difference as a solution in practice.

Erik Engstrom, what helps get agile into practice? How are we able to overcome the drawbacks of over-customization and the more linear approach? Do you have any thoughts about moving towards a solution?

Maturity and integration

Engstrom: Our approach is to provide as much maturity, and as complete an integration, as possible on day one. We’ve developed a huge library of packages that do things such as advance the tuning of a part of a tool, or advance the integration between tools. Those represent thousands of hours that can be saved for the customer. So we start a project with capabilities that most projects would only arrive at.

This allows the customer to be agile from day one. But it requires that mentality that both Philipp and John were speaking about, which is, if there’s a holdout in the room that says “this is the way you want things,” you can’t really work with the tools the way that they [actually] do work. These tools have a lot of money and history behind them, but one person’s vision of how the tools should work can derail everything.

We ask customers to take a look at an interoperable functioning matured system once we have turned the lights on, and have the data moving through the system. Then they can start to see what they can really do.

It’s a shift in thinking that we have covered well over the last few minutes, so I won’t go into it. But it’s really a position of strength for them to say, “We’ve implemented, we’ve integrated. Now, where do we really want to go with this amazing solution?”

Gardner: What is it about the new toolset that’s allowing this improvement, the pre-customization approach? How does the technology come to bear on what’s really a very process-centric endeavor?

Engstrom: There are certain implementation steps that every customer, every project, must undergo. It’s that repetition that we’re trying to remove from the picture. It’s the struggle of how to help an organization start to understand what the tools can do. What does it really look like when people, party, location, and configuration information is on hand? Customers can’t visualize it.

So the faster we can help customers start to see a working system with their data, the easier it is to start to move and maintain an agile approach. You start to say, “Let’s keep this down to a couple of weeks of work. Let us show it to you. Let’s visit it.”

If we’re faster as consultancies, if we’re not taking six months, if we’re not taking two months and we can solve these things, they’ll start to follow our lead. That’s essential. That momentum has to be maintained through the whole project to really deliver fast.

Gardner: John Stagaman, thoughts about moving fast, first as consultants, but then also leveraging the toolsets? What’s better about the technology now that, in a sense, changes this game too?

Very different

Stagaman: In the ITSM space, the maturity of the product out of the box, versus 10 years ago, is very different. Ten or 15 years ago, the expectation was that you were going to customize the whole thing.

There would be all these options that were there so you could demo them, but they weren’t necessarily built in a cohesive way. Today, the tools are built in different ways so that it’s much closer to usable and deployable right out of the box.

The newest versions of those tools very often have done a much better job of creating broadly applicable process flow, so that you can use that same out of the box workflow if you’re a retailer, a utility, or want to do some things for the HR call center without significant change to the core workflow. You might need to have the specific data fields related to your organization.

And there’s more: we can start from this embedded ITSM framework and extend it where we need to.

Gardner: Philipp, thoughts about what’s new and interesting about tools, and even the SaaS approach to ITSM, that drives, from the technology perspective, better results in ITSM?

Koch: I’ll concur with John and Erik that the tools have changed drastically. When I started in this business 10 or 15 years ago, ITSM solutions looked almost like the green screens of old terminals.

If you’re looking at ITSM solutions today, they’re web based. They’re Web 2.0 technology, HTML5, and responsive UIs. It doesn’t really matter which device you use anymore, mobile phone, tablet, desktop, or laptop. You have one solution that looks the same across all devices. A few years ago, you had to install a new server to be able to run a mobile client, if it even existed.

So, the demand has been huge for vendors to deliver on what the need is today. That has drastically changed with regard to technology, because technology nowadays allows us, and allows the vendors, to deliver on how it should be.

We want Facebook. We want to Tweet. We want an Amazon- or a Google-like behavior, because that’s what we get everywhere else. We want that in our IT tools as well, and we’re starting to see that coming into our IT tools.

In the past we had rule sets, objects, and conditions on objects, but there wasn't really a workflow engine. Nowadays, SaaS solutions, as well as on-premises solutions, have workflow engines that can be adjusted and tailored to the business needs.

No difference

You’re relying on a best practice. An incident management process flow is an incident management process flow. There really is no difference no matter which vendor you go to, they all look the same, because they should. There is a best practice out there or a good practice out there. So they should look the same.

The only adjustment customers have to make is to add on that 10-20 percent that is customer-specific, with a new field or a specific approval that needs to be put in between. That can be done with minimal effort when you have a workflow engine.
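As a toy illustration of why a workflow engine makes that kind of tailoring cheap (the state names and the inserted approval step are invented for this sketch, not any vendor's actual model):

```python
# Hypothetical out-of-the-box incident workflow, modeled as an ordered state list.
base_flow = ["logged", "categorized", "assigned", "resolved", "closed"]

def insert_step(flow, new_step, before):
    """Tailor the standard flow by inserting a customer-specific step."""
    i = flow.index(before)
    return flow[:i] + [new_step] + flow[i:]

# The 10-20 percent customer-specific adjustment: a security approval before resolution.
custom_flow = insert_step(base_flow, "security_approval", before="resolved")
print(custom_flow)
```

The point is that the customer-specific piece is a small insertion into a standard flow, not a rewrite of it.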

Looking at this from a SaaS perspective, you want this off the shelf. You want to be able to subscribe to this on the Internet and adjust it in the evening, so when you come back the next day and go to work, it’s already embedded in the production environment. That’s what customers want.

Gardner: Now if we’ve gotten a better UI and we’re more ubiquitous with who can access the ITSM and how, maybe we’ve also muddied the waters about that data, having it in a single place or easily consolidated. Let’s go back to Erik, given that you are having emphasis on the data.


When we look at a new-generation ITSM solution and practice, how do we assure that the data integrity remains strong and that we don’t lose control, given that we’re going across peers of devices and across a cloud and SaaS implementations? How do we keep that data whole and central and then leverage it for better outcomes?

Engstrom: Service management is really built around services. If we think about ITIL and the structure of ITIL [without getting into too many acronyms], the Services, Assets, and Configuration Management information all needs to be consistent; it needs to be the same data.

A platform that doesn’t have really good bidirectional working data integrations with things like your asset tool or your DCIM tool or your UCMDB tool or your – wherever it is your data is coming from– the data needs to be a primary focus for the future.

Because we’re talking about a system [UCMDB] that can not only discover things and manage computers, but what about the Internet of Things? What about cloud scenarios, where things are moving so quickly that traditional methods of managing information whether it would be a spreadsheet or even a daily automated discovery, will not support the service-management mission?

It’s very important, first of all, that all of the data be represented. Historically, we’ve not been able to do that because of performance. We’ve not been able to do that because of complexities. So that’s the implementation gap that we focus on, dropping in and making all of the stuff work seamlessly.

Same information

The benefit of that is that you're operating as an organization on the same piece of information, no matter how or where it's consumed. Your asset management folks open HP IT Asset Manager and see the same information that is shown downstream in Service Manager. When you model an application or service, it's the same information, the same CI managed in UCMDB, and that keeps the entire organization accountable. You can see the entire workflow through it.

Having the ability to bridge data, with multiple tools taking the best of that information and making it an inherent, automated part of service management, means that you can do things like Incident and Change, and Service Asset and Configuration Management (SACM), roll up the costs of these tickets, and really get to the core of being efficient in service management.

Gardner: John Stagaman, if we have rapid ITSM and multiple-device ease of interface, but we also now have more of this drive toward common data shared across these different systems, it seems to me that that leads to even greater paybacks. Perhaps it's in the form of security. Perhaps it's in a policy-driven approach to service management and service delivery.

Any thoughts about ancillary or future benefits you get when you do ITSM well and then you have that quality of data in mind that is extended and kept consistent across these different approaches?

Stagaman: Part of it comes to the central role of CMDB and the universality of that data. CMDB drives asset management. It can drive ITSM and the ability to start defining models and standards and compare your live infrastructure to those models for compliance along with discovery.

The ability to know what’s connected to your network can identify failure points and chokepoints or risks of failure in that infrastructure. Rather than being reactive, “Oh, this node went down. We have to address this,” you can start anticipating potential failures and build redundancy. Your possibility of outage can be significantly reduced, and you can build that CMDB and build the intelligence in, so that you can simulate what would happen if these nodes or these components went down. What’s the impact of that?

You can see that when you go to do a change, that level of integration with CMDB data lets you see what the impact on the end user would be if we have an outage for these servers, due to the cascading effect of those outages through the related devices and services. So you can say: if we bring this down, we're good; but at the same time we have another change modifying this, and with those two coming down together we may interrupt service to online banking, so we need to schedule those changes at different times.

The latest update we're seeing is the ability to put really strict controls around the fact that a change will potentially impact a given system or service. Based on business rules that say this service can only be down during these times, or may not be down at that time, we can identify that time-period conflict in an automated way and require additional process approvals for the change to go forward at that time, or require a reschedule.
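A minimal sketch of that automated conflict check, with invented service names, change windows, and blackout rules:

```python
from datetime import datetime

# Hypothetical proposed change window and per-service blackout rules.
change_window = (datetime(2014, 9, 6, 22, 0), datetime(2014, 9, 7, 2, 0))
blackouts = {
    "online-banking": [(datetime(2014, 9, 7, 0, 0), datetime(2014, 9, 7, 6, 0))],
    "customer-portal": [(datetime(2014, 9, 8, 0, 0), datetime(2014, 9, 8, 6, 0))],
}

def overlaps(a, b):
    """Two half-open intervals overlap iff each starts before the other ends."""
    return a[0] < b[1] and b[0] < a[1]

def conflicting_services(window, rules):
    """Flag services whose blackout windows overlap the proposed change window."""
    return [svc for svc, wins in rules.items()
            if any(overlaps(window, w) for w in wins)]

print(conflicting_services(change_window, blackouts))
```

A conflict result like this is what would trigger the extra approval or the forced reschedule described above.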

Gardner: Philipp, any thoughts on this notion of predictive benefits from a good ITSM and good data, and perhaps even this notion of an algorithmic approach to services, delivery, and management?

Federation approach

Koch: It actually fits nicely into one of our reference installations, with that integration Erik also talked about: having the data and utilizing it in a kind of on-the-fly federation approach. You can no longer wait for a daily batch job to run. You need to have it at your fingertips. I can take an example from an Active Directory integration, where we utilized all the data from Active Directory to allocate roles, rights, and access inside HP Service Manager.

We’ve made a high-level analysis of how much we actually save by doing this. By doing that integration and utilizing that information, we say that we have an 80 percent reduction of manual labor done inside service manager for user administration.

Instead of having a technician go into Service Manager to allocate the role, or the rights, to a new employee who needs access to HP Service Manager, you get it automatically from Active Directory when the user logs in. The only thing that has to be done is for HR to say where this user sits, and that happens no matter what.
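In outline, that integration is a lookup from AD group membership to application roles at login time. The group and role names below are hypothetical, not the actual HP Service Manager role model:

```python
# Hypothetical mapping from Active Directory groups to Service Manager roles.
AD_GROUP_TO_ROLE = {
    "SM-Operators": "incident.operator",
    "SM-ChangeManagers": "change.manager",
    "HR-AllStaff": "self.service",
}

def roles_for_user(ad_groups):
    """Derive application roles from the user's AD group membership at login."""
    return sorted({AD_GROUP_TO_ROLE[g] for g in ad_groups if g in AD_GROUP_TO_ROLE})

# A new employee's groups, as placed by HR; unmapped groups are simply ignored.
print(roles_for_user(["HR-AllStaff", "SM-Operators", "Domain Users"]))
```

Because the mapping is data, not per-user clicks, the technician's manual allocation step disappears entirely.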

We’ve drastically reduced the amount of time spent there. There’s a tangible angle there, where you can save a lot of time and a lot of money, mainly with regards to human effort.

The second angle that you touched on is smart analytics, as we can call it as well, in the new solutions that we now have. It’s cool to see, and we now need to see where it’s going in the future and see how much further we can go with this. We can do smart analytics on utilizing all the data of the solutions. So you’re using the buzzword big data.

If we go in and analyze everything that's happening in the change-management area, we now have KPIs that can tell me — this is an old KPI as such — that 48 percent of your change records have an element of automation inside the change execution. You have a KPI for how much you're automating in change management.

With smart analytics on top of that, you can get feedback in your KPI dashboard that says you have 48 percent. That’s nice, but below that you see if you enhance those two change models as well and automate them, you’ll get an additional 10 percent of automation on your KPI.

With big-data analytics, you'll be able to see that a manual change model is used often and could be easily automated. That kind of analytics is so underutilized: focusing on the areas that actually make a difference, and making that visible on a dashboard for a change manager or somebody who is responsible for the process.
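The automation KPI and the candidate ranking he describes amount to simple aggregation over change records. A sketch, with fabricated sample data standing in for the real change history:

```python
from collections import Counter

# Hypothetical change records: (change model used, whether execution was automated).
records = [
    ("patch-linux", True), ("patch-linux", True),
    ("firewall-rule", False), ("firewall-rule", False), ("firewall-rule", False),
    ("db-refresh", False), ("new-vm", True),
]

# The KPI: what share of change records had an element of automation?
automated = sum(1 for _, auto in records if auto)
kpi = 100 * automated / len(records)

# Rank manual change models by frequency: the best automation candidates.
manual_usage = Counter(model for model, auto in records if not auto)

print(f"automation KPI: {kpi:.0f}%")
print(manual_usage.most_common(1))
```

The second number is the dashboard hint described above: automate the most-used manual model first and the KPI moves the most.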

That really catches your eye and says, "Well, if I spend half an hour here, making this change model better, then I'm going to save a lot more time, because I'm automating 10 percent more." That is extremely powerful. Now just extrapolate that to the rest of the processes; that's the future.

Gardner: Well Erik, we’ve heard both John and Philipp describe intelligent ITSM. Do you have any examples where some of your customers are also exploring this new level of benefit?

Success story

Engstrom: Absolutely. Health Shared Services British Columbia (HSSBC) will be releasing a success story through HP shortly, probably in the next few weeks. In that case, it was a five-week implementation where we dropped in our packages for Asset Management (ITAM), Service Management (ITSM), and Executive Scorecard, which are all HP products.

We even used Business Service Management (BSM), but the thinking behind this was that this is a service-management project. It’s all about uniting different health agencies in British Columbia under one shared service.

The configuration information is there. The asset information is there, right down to purchase orders, maintenance contracts, all of the parties, all of the organizations. The customer was able to identify all of their business services. This was all built in, normalized in CMDB, and then pushed into ITSM.

With this capability, they're able to see, across these various organizations that roll up into the shared service, who the parties are, because the people opening tickets don't work with those folks. They're in different organizations. They don't have relevant information about what services are impacted, or about who the actual cost center or budget owner is. All of that becomes important in a shared service.

This customer, from week six to their go-live day, had the ability to see what is allocated in assets, what is allocated in terms of maintenance and support, and which service the ticket, incident, or change is being created upon.

They understood the impact for the organization as a result of having what we call a Configuration Management System (CMS), having all of these things working together. So it is possible. It gives you very high-level control, particularly when you put it into something like Executive Scorecard, to see where things are taking longer, how they’re taking longer, and what’s costing more.

More importantly, in a highly virtual environment, they can see whether they’re oversubscribed, whether they have their budgeted amount of ESX servers, or whether they have the right number of assets that are playing a part in service delivery. They can see the cost of every task, because it’s tied to a person, a business service, and an organization.

They started with a capability to do SACM, and this is what this case is really about. It plays into everything that we’ve talked about in this call. It’s agile and it is out-of-the-box. They’re using features from all of these tools that are out-of-the-box, and they’re using a solution to help them implement faster.

They can see what we call “total efficiency of cost.” What am I spending, but really how is it being spent and how efficient is it? They can see across the whole lifecycle of service management. It’s beautiful.

Future trends

Gardner: It’s impressive. What is it about the future trends that we can now see or have a good sense of how the events fold that makes rapid ITSM adoption, this common data, and this intelligent ITSM approach, all so important?

I’m thinking perhaps the addition of mobile tier and extensibility out through new networks. I’m thinking about DevOps and trying to coordinate a rapid-development approach with operations and making that seamless.

We’re hearing a lot about containers these days as well. I’m also thinking about hybrid cloud, where there’s a mixture of services, a mixture of hosting options, and not just static but dynamic, moving across these boundaries.

So, let’s go down the list, as this would be our last question for today. John Stagaman, what is it about some of these future trends that will make ITSM even more impactful, even more important?

Stagaman: One of the big shifts that we’re starting to see in self-service is the idea that you want to enable a customer to resolve their own issue in as many cases as possible. What you can see in the newest release of that product is the ability for them to search for a solution and start a chat.

When they ask a question, it can check your entire knowledge base and history to surface the proposed solutions. If that doesn't resolve it, they can provide additional information and then initialize a chat with the service desk, if needed.

Very often, if they say they’re unable to open this file or their headset is broken, someone can immediately tell them how to procure a replacement headset. It allows that person to complete that activity or resolve their issue in a guided way. It doesn’t require them to walk through a level of menus to find what they need. And it makes it much more approachable than finding a headset on the procurement system.

The other thing that we’re seeing is the ability to bridge between on-premises system and SaaS solution. We have some customers for whom certain data is required to be onsite  for compliance or policy reasons. They need an on-premise system, but they may have some business units that want to use a SaaS solution.

Then, when they have a system supported by central IT, that SaaS system can exchange a case with the primary system and have bidirectional updates. So we're getting the ability to link the SaaS world and the on-premises world more effectively.

Gardner: Philipp, thoughts from you on future trends that are driving the need for ITSM that will make it even more valuable, make it more important.

Connected intelligence

Koch: Definitely. Just to add on to what John said, it goes in the direction of connected intelligence, utilizing that big-data example we've just gone through. It all points toward a solution that is connected across the board and brings intelligence back to the end user, just as much as to the operator, through that integration.

Another angle, more from the technology side, is that now, with the SaaS offerings that we have today, the new way forward as I see it happening — and the way I think HP has made a good decision with HP Service Anywhere — is continuous delivery. You're losing the notion of version numbers for software. You no longer need to do big upgrades to move from version 9 to version 10, because you're doing continuous delivery.

Every time new code is ready to be deployed, it is actually deployed. You don't wait and bundle it up in a yearly cycle into a huge package that means months of upgrading. You're doing this on the fly. Service Anywhere and Agile Manager are good examples of where HP is applying that. That is the future, because the customer doesn't want to do upgrade projects anymore. Upgrades are of the past, if we really want to believe that. We hope we can actually go there.

You touched on mobile. Mobile and bring your own device were buzzwords — now they're already here. We don't really need to talk about them anymore, because they already exist. That's now the standard. You have to do this; otherwise you're not really a player in the market.

To close off with a paradigm statement, future solutions need to be implemented — and we consultants need to deliver solutions — that solve end-user problems, compared to the past, where we deployed solutions to manage tickets.

We’re no longer in the business of helping them and giving them features to more easily manage tickets and save money on quicker resolution. This is of the past. What we need to do today is to make it possible for organizations to empower end users to solve their problems themselves to become a ticket-less IT — this is ideal world of course — where we reduce the cost of an IT organization by giving as much as possible back to the end user to enable him to do self service.

Gardner: Last word to you, Erik. Any thoughts about future trends to drive ITSM and why it will be even more important to do it fast and do it well?

Engstrom: Absolutely. And in my worldview it’s SACM. It’s essentially using vendor strengths, the portfolio, the entire portfolio, such as HP’s Service and Portfolio Management (SPM), where you have all of these combined silos that normally operate completely independently of each other.

There are a couple of truths in IT: data is expensive to re-create, knowledge has value, and so does a tool. The next step in the new style of IT is going to require that these tools work together as one suite, one offering, so that your best data is coming from the best source and being used to make the best decisions.

Actionable information

It’s about making big data a reality. But in the use of UCMDB and the HP portfolio, data is very small, it’s actionable information, because it’s a set of tools. This whole portfolio helps customers save money, be more efficient with where they spend, and do more with “yes.”


So the idea that you have all of this data out there, what can it mean? It can mean, for example, that you can look and see that a business service is spending 90 percent more on licensing or ESX servers or hardware, anything that it might need. You have transparency across the board.

Smarter service management means doing more with the information you already have and making informed decisions that really help you drive efficiencies. It's doing more with "yes," and being efficient. To me, that's SACM. The requirement for a portfolio, no matter how small or how large it is, is that it must provide the means by which this data can be shared, so that information becomes intelligence.

Organizations that have these tools will beat the competition at an SG&A (Selling, General and Administrative) level. They will wipe them out, because they're so efficient and so informed. Waste is reduced. Things get done faster. Good decisions are made ahead of time. You have the data and you can act appropriately. That's the future. That's why we support HP software: the strength of the portfolio.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Journey to SAP quality — Home Trust builds center of excellence with HP ALM tools

The next BriefingsDirect deep-dive IT operations case study interview details how Home Trust Company in Toronto has created a center of excellence to improve quality assurance for top performance of their critical SAP applications.

How do you properly structure your testing assets in quality control that makes sense for SAP?  What’s your proper defect flow? How do you design a configuration that fits all from the toolset? And where does automation best come into play?

These are some of the essential questions to answer for not only making apps perform well, but to allow for rapid deployment and refinement of new applications, as well as enhance ongoing security and compliance for both systems and data.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about building a center of excellence for business applications, BriefingsDirect sat down at the recent HP Discover 2014 Conference in Las Vegas with Cindy Shen, SAP QA Manager at Home Trust. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Shen: Home Trust is one of the leading trust companies in Toronto, Canada. There are two main businesses we deal with. The first bucket is mortgages: we deal with a lot of residential mortgages.

The other bucket is that we're a deposit-taking institution. People deposit their money with us, and they can invest in a registered retirement savings plan (RRSP), along with other options for their investment, which is the equivalent of the US 401(k) plan.

We’re also Canada Deposit Insurance Corporation (CDIC)-compliant. If a customer has money with us and if anything happens with the company, the customer can get back up to a certain amount of money.

We’re regulated under the Office of the Superintendent of Financial Institutions (OSFI), and they regulate the Banks and Trust Companies, including us.

Some of the hurdles

Gardner: So obviously it’s important for you to have your applications running properly. There’s a lot of auditing and a lot of oversight. Tell us what some of the hurdles were, some of the challenges you had as you began to improve your quality-assurance efforts.

Shen: We’re primarily an SAP shop. I was an SAP consultant for a couple of years. I’ve worked in North America, Europe, and Asia. I’ve been through many industries, not just the financial industry. I’ve touched on consumer packaged goods SAP projects, retail SAP projects, manufacturing SAP projects, and banking SAP projects. I usually deal with global projects, 100 million-plus, and 100-300 people.

What I noticed is that, regardless of the industry or the functional solutions a project has, there is always a common set of QA challenges when it comes to SAP testing, and it's very complicated. It took me a couple of years to figure out the tools, where each tool fits into the whole picture, and how the pieces fit together.

For example, one of the common challenges that I'm going to talk about in my session (here at HP Discover) is, first of all, what tools you should be using. HP ALM, the test management tool, is, in my opinion, the market leader. That's what pretty much all the Fortune 500 companies, and even smaller companies, are using as their primary test management tool. But testing SAP is unique.


What are the additional tools on the SAP side that you need in order to integrate back to the ALM test suite, and to have the system of record for development plus the system of record for testing all integrated together, in a flow that makes sense for SAP applications? That's unique.

One is toolset and the other one is methodology. If you parachute me into any project, however large or small, complex or simple, local or global, I can guarantee you that the standards are not clear, or there is no standard in place.

For example, how do you properly write a test case to test SAP? You have to go into granular detail that actually specifies the action words you use for different application areas, so that you can enable automation very easily in the future. How do you parameterize?

What’s the appropriate level of parameterization to enable that flexibility for automation? What’s the naming convention for your input parameter and output parameters to make it flow through from the very first test case, all the way to the end, when you test end to end application?

Most errors and defects happen in the integration area. So, how do you make sure your test coverage covers all your key integration points? SAP is very complex. If you change one thing, I can guarantee you that there’s something else in some other areas of the application or in the interface that’s going to change without your knowing it, and that’s going to cause problems for you sooner or later.

So, how do you get those standards and that methodology consistently enforced for every person who's writing test cases or executing tests, at the same quality and in the same format, so that you can generate the same reports across all different projects, maintain executive oversight, and minimize the duplicate work you have to do on the manual test cases in order to automate in the future?

Testing assets

The other big part is how to maintain such testing assets so that they're repeatable, reusable, and flexible, so that you can shorten your project delivery time in the future through automation and consistently written manual test cases, accelerate new projects as they come up, and also improve your quality in post-production support, so you can catch critical errors fast.

Those are all very common SAP testing QA themes, challenges, or problems that practitioners like me see in any SAP environment.

Gardner: So when you arrived at Home Trust, and you understood this unique situation, and how important SAP applications are, what did you do to create a center of excellence and an ability to solve these issues?

Shen: I was fortunate to have been the lead on the SAP area for a lot of global projects. I’ve seen the worst of it. I’ve also seen a fraction of the clients that actually do it much better than other companies. So, I’m fortunate to know the best practices I want to implement, what will work, and what won’t work, what are the critical things you have to get in place in the beginning, and what are the pieces you can wait for down the road.

Coming from an SAP background, I’m fortunate to have that knowledge. So, from the start, I had a very clear vision as to how I wanted to drive this. First, you need to conduct an analysis of the current state, and what I saw was very common in the industry as well.

When I started, there were only two people in the QA space. It was a brand new group. And there was an overall software development lifecycle (SDLC) methodology in the company. But the company had just gone live with SAP application. So it was basically a great opportunity to set up a methodology, because it was a green field. That was very exciting.

One of the things you have to have is an overarching methodology. Are you using Business Process Testing (BPT), or are you using some other methodology? We also had to comply with, or fit in with, SAP's methodology, ASAP, which is primarily the industry standard in the SAP space as well. So, we had to assess the current status and come up with a methodology that made sense for Home Trust Company.

Two, you had to get all the right tools in place. Home Trust is very good at getting industry-leading toolsets. When I joined, they already had HP QC. At that time, it was called QC; now it's ALM. Solution Manager was part of the SAP purchase, so it was free. We just had to configure and implement it.

We also had QTP, which now is called UFT, and we also had LoadRunner. All the right toolsets were already in place. So I didn’t have to go through the hassle of procuring all those tools.

Assessing the landscape

When we assessed the landscape of tools, we realized that, like any other company, they were not maximizing the return on investment (ROI) on the toolsets. The toolsets were not leveraged as much, because in a typical SAP environment, the demand of time to market is very high for project delivery and new product introduction.

When you have a new product, you have to configure the system fast, so it’s not too late to bring the product to the market. You have a lot of time pressure. You also have resource constraints, just like any other company. We started with two people, and we didn’t have a dedicated testing team. That was also something we felt we had to resolve.

We had to tackle it from a methodology and a toolset perspective, and we had to tackle it from a personnel perspective, how to properly structure the team and ramp the resource up. We had to tackle it through those three perspectives. Then, after all the strategic things are in place, you figure out your execution pieces.

From a methodology perspective, what are the authoring standards, what are the action words, and what are the naming conventions? I can't emphasize this enough, because I see it done so differently on each project. People don't know the implications down the road.

How do you properly structure your testing assets in QC in a way that makes sense for SAP? That is a key area. You can't structure them at too high a level, meaning a mega scenario of everything in one test case or just a few test cases. And something will change in the application, I can guarantee you that, because you have to redevelop it or modify it for another feature.

If you structure your testing assets at such a high level, you have to rewrite every single asset. You don’t know where it’s changing something somewhere else, because you probably hard-coded everything.

If you put it at too granular a level, maintenance becomes a nightmare. It really has to be at the right level to enable flexibility and get ready for automation. It also has to be easy to maintain, because maintenance usually costs more than the initial creation. So, those are all the standards we're setting up.

What’s your proper defect flow? It’s different from company to company. You have to figure out the minimum effort required, but what makes sense. You also have to have the right control in place for this company. You have to figure out naming conventions, the relevant test cases, and all that. That’s the methodology part of it.

The toolset is a lot more technical. If you’re talking about the HP ALM Suite, what’s the standard configuration you need to enable for all your projects? I can guarantee you that every company has concurrent projects going on after post-production.

Even when they’re implementing their initial SAP, there are many concurrent streams going on at the same time. How do you make sure its configuration accommodates all the different types of projects? However, with the same set of configuration — this is a key point — you cannot, let me repeat, you cannot, have very different configurations for HP ALM  across different projects.

Sharing assets

Very different configurations will prevent you from sharing test assets across projects, prevent you from automating them in the same manner now or in the near future, and prevent you from delivering projects consistently, with consistent quality and a consistent reporting format across the company. That would generate maintenance nightmares. Having standards in place is key. I can’t emphasize that enough.

So from the toolset side, how do you design one configuration that fits all? That’s the mandate. The rule of thumb is: do not customize. Use out-of-the-box functionality. Do not code. If you really have to write a query, minimize it.

The good thing about HP ALM is that it’s flexible enough to accommodate all the critical requests. If you find you have to write something for it, or you need a custom field or custom label, you should probably consider changing your process first, because ALM is a pretty mature toolset.

Reduce post-production issues by 80% by building better apps.
Learn Seven Best Practices for Business-Ready Applications
with a free white paper

I’ve been on very complex global projects in different countries. HP ALM is able to accommodate all the key metrics and all the key deliverables you’re looking to deliver. It has the capacity.

When I see other companies do a lot of customization, it’s because their process isn’t correct. They’re fixing the tool to accommodate processes that don’t make sense. People really have to keep an open mind and seek out the best practices and expertise in the industry to understand what out-of-the-box functionality to configure in HP ALM to manage their SAP projects, instead of weakening the tool to fit how they do SAP projects.

Sometimes, it involves a lot of change management, and for any company, that’s hard. You really have to keep that open mind, stick with the best practice, and think hard about whether your process makes sense or whether you really need to tweak the tool.

Gardner: It’s fascinating that when you do due diligence on process and methodology, leverage the tools, and recognize the unique characteristics of this particular application set, and you do that correctly, you’re going to improve the quality of that particular rollout or application delivery into production, and of whatever modifications you need to make over time.

It’s also going to set you up to be in a much better position to modernize and be aggressive with those applications, whether that’s delivering them out to a mobile tier, for example, or integrating them with different data. So when you do this well, there are multiple levels of payback. Right?

Shen: I love this question, because this is really the million-dollar view, the million-dollar understanding, that anybody can take away from this podcast or my session at HP Discover. This is the vision you should seriously consider and understand.

From an SAP, HP ALM, and Center of Excellence perspective, the vision is this (I’m going to go slowly, so you get all the components and all the pieces):

Work closely

SAP and HP work very closely together, so your account rep can help you greatly with the toolsets in that area. It starts with Solution Manager from SAP, which should be your system of record for development. The best part is that when you implement SAP, you use Solution Manager to input your entire Business Process Hierarchy (BPH). The BPH is the key ingredient in Solution Manager that lays out all the processes in your environment.

Tied in with it, you should input all the transaction codes (T-codes). T-codes are the DNA of SAP: if you go anywhere in SAP, most likely you have to enter a T-code, and that brings you to the right area. When we scope out an SAP project, the key starts with the list of T-codes. The key is to build out that BPH and associate all the T-codes with the different areas.

With that T-code, you have the functional specification, the technical specification, and all of the documentation and mapping associated at each level of your BPH. Not only that, you should have all your security IDs and metrics associated with each level of the BPH and the T-codes, and all the flows and requirements tied together, and of course the development, the code.

So, your Solution Manager should be the system of record for development. The best practice is to always do your initial SAP implementation with Solution Manager, so that by the time you go live, you’ve already done all of that. That’s the first bucket.

The second bucket is the HP tool suite. We’ll start with the HP ALM test management tool. It allows you to input your testing requirements, and they flow from requirement to test. If you’re using Business Process Testing (BPT), they flow through to the components in BPT and through the test case module. Then they flow through to the test plan and test lab, and through to the defects. Everything is well integrated and connected.

And then there is something called an adapter, the Solution Manager and HP ALM adapter, which enables Solution Manager and HP ALM to talk. You have to configure that adapter between Solution Manager and ALM. It is able to bring your hierarchy, the BPH in Solution Manager, and all the related assets, including the T-codes, over to the Requirements module in HP ALM.

So if you have your Solution Manager straightened out, whatever you bring over to ALM is already your scope. It tells you which T-codes are in scope to test. By the way, in SAP it’s often a headache that each T-code can do many, many things, especially if you’re heavily customized.

So a T-code is not enough. You have to go down to a granular level and get the variants: what are the typical scenarios or typical testing variants it has? Then you can create those variants in the BPH in Solution Manager, and they will flow through to the Requirements module in HP ALM and list out all of your T-codes’ possible variants.

Then, based on that, you start scoping out your testing assets: the components, test cases, or whatever you have to write. You put them in BPT or in your test case module, and then you link the requirements over, so you already have your test coverage. Then you flow through to a test case, through your execution in the test lab, through to defects, and it all ties back together.
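The traceability chain described above (requirement, meaning a T-code variant, linked to a test case, linked to execution and defects) is what makes coverage measurable. A hypothetical sketch of the coverage check that linkage enables; the T-code variants and test IDs are invented, and this deliberately models the data rather than calling the real ALM API:

```python
# Illustrative coverage computation over requirement-to-test links,
# as established by the Solution Manager / ALM adapter flow.
# All identifiers below are made up for the example.

requirements = ["VA01-standard", "VA01-rush", "MIGO-receipt", "FB60-invoice"]

# Which test cases cover which requirement (hypothetical link data):
coverage_links = {
    "VA01-standard": ["TC_001"],
    "VA01-rush":     ["TC_002"],
    "MIGO-receipt":  [],           # no test linked yet: a coverage gap
    "FB60-invoice":  ["TC_010"],
}

uncovered = [r for r in requirements if not coverage_links.get(r)]
covered_pct = 100 * (len(requirements) - len(uncovered)) / len(requirements)
print(uncovered, covered_pct)  # -> ['MIGO-receipt'] 75.0
```

This is the report the adapter-driven setup gives you essentially for free: every requirement imported from the BPH either has linked tests or shows up as scope still to be written.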

And where does automation come into play? That’s the bucket after HP ALM. UFT today is still the primary tool people use to automate. In the SAP space, SAP actually has its own, called Test Acceleration and Optimization (TAO), which also leverages UFT. That’s the foundation for creating SAP-specific automation, but either is fine. If you already have UFT, you really could start today.

Back and forth

So, this is where the automation comes into play, and this is very interesting; this is how it goes back and forth. For example, you’ve already transported something to production and you want to check whether anything slipped through the cracks. Is all the testing coverage there?

There’s something called Solution Documentation Assistant. From the Solution Manager side, you can read from EarlyWatch reports to see which T-codes are actually being used in your production system today. After something is transported into production, you can re-run it to see which T-codes are net new in the production system, and then compare. So there’s a process.

Then you can see which are the net-new ones from the BPH, flow that through to your HP QC or HP ALM, and see whether you have coverage for them. If not, that’s your scope for net-new manual and automated testing.
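The comparison just described is essentially a set difference. A minimal sketch, assuming invented T-code lists (this is not EarlyWatch output, just the shape of the logic): usage before and after a transport, minus what the regression library already covers, yields the net-new testing scope.

```python
# Hypothetical before/after production usage (e.g., from EarlyWatch data)
# and current regression coverage. All T-code sets are illustrative.

tcodes_before = {"VA01", "VL02N", "MIGO", "FB60"}
tcodes_after  = {"VA01", "VL02N", "MIGO", "FB60", "ME21N", "VA02"}
covered_by_regression = {"VA01", "VL02N", "MIGO", "FB60", "VA02"}

net_new  = tcodes_after - tcodes_before       # newly used in production
test_gap = net_new - covered_by_regression    # scope for net-new manual/automated tests

print(sorted(net_new))   # -> ['ME21N', 'VA02']
print(sorted(test_gap))  # -> ['ME21N']
```

Each run of this comparison shrinks the gap, which is exactly how the regression library grows into the full suite described next.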

Then you keep building that regression suite, and you eventually get a library. That’s how you flow back and forth. There is also something called the Business Process Change Analyzer (BPCA), which comes free with Solution Manager. You just have to configure it.

It allows you to load whatever you want to change in production into a buffer, so before you actually transport the code into production, you’re able to know what areas it impacts. It goes down to the code level, so it allows you to do targeted regression as well. We’ve talked about Solution Manager, ALM, and UFT. Then there are LoadRunner and Performance Center for load testing, performance testing, stress testing, and so on, and this all goes into the same picture.

The ideal solution is that you can flow your content from Solution Manager to HP ALM, enable automation for all tests together, including all the performance and stress testing, in one end-to-end flow, and build that regression library. You’re able to build that technical testing library and maintain it in both ALM and Solution Manager at the same time.

Gardner: So the technology is really powerful, but it’s incumbent on the users to go through those steps of configuring, integrating, creating the diligence of the libraries and then building on that.

I’d like to go up to the business-level discussion. When you go to your boss’s boss, can you explain what they’re going to get as value for having gone through this? It’s one thing to do it because it’s the right thing to do and it has efficiency benefits, but that needs to translate into dollars and cents and business metrics. So what do you tell them they get at the business level when they do this properly?

Business takes notice

Shen: Very good question, because this exercise we did can be applied to any other company, and it’s at a level where the business really takes notice. One common challenge is that when you onboard somebody, do they have the proper documentation to ramp up?

I have yet to see a company that’s very good with documentation, especially with SAP. Where is the list of all the T-codes we use in production today? What are the functional specs? What are the technical specs? Where are the field mappings? Where are the flows? You have to have that documentation in order to ramp somebody up. What typically ends up happening is that you hire somebody and have to pull other team members away for a few weeks to ramp the person up.

Instead of putting them on a project to deliver right away, to start writing code or configuring SAP or whatever, they can’t start until a few months later. How do you accelerate that process? You build everything up in Solution Manager, in HP ALM, and in your QTP or UFT assets.

This way, a new person can go to Solution Manager and look at all the T-codes and scope, the updated business areas, and the updated functional specs, and understand what the company’s application does, what the logic is, and what the configuration is. Then the person can easily go to HP ALM and figure out the testing scenarios, how people test, how they use the application, and what the expected behavior of the application should be.

Point one is that you can really speed up the hiring process and the knowledge-transfer process for your new personnel. A more important application of this is on projects. Whether SAP or not, companies have to constantly roll out new applications, new releases, and new features based on market conditions and business needs.

When a project starts, a very common challenge is documentation of the existing functionality. How can you identify what to build? If you have nothing, I can guarantee you that the entire project team will spend a few weeks trying to figure out the current status.

Again, with the library in Solution Manager, the regression testing suite, the automated suite in HP ALM and UFT, and all of that, you can get that on day one. It’s going to shorten the project time and accelerate delivery with good quality.

The other thing is that time on a project is so valuable that anything that saves it really matters. Once you actually figure out your status quo, you can start building.

Testing is the most labor-intensive and painstaking process, and probably one of the most expensive areas in any project delivery. How do you accelerate that? Without an existing regression library, documented test scenarios, and automated regression suites, you have to invent everything from scratch.

By the way, that involves figuring out the testing scope, writing the test cases from scratch, and building all the parameters and all the data. That takes a lot of time. If you already have an existing library, that’s going to shorten your lifecycle a lot.

So all of this translates into dollar savings, plus better coverage and faster delivery, which is key for the business. By the way, when you have all this in place, you’re able to catch a lot more defects before they go to production. I saw a study that said a defect is about 10 times more expensive to fix if you catch it in production. So the earlier you catch it, the better.

Security confidence

Gardner: Right, of course. It also strikes me that doing this gives you better security confidence; governance, risk, and compliance benefits; and auditability when that kicks in. In a banking environment, of course, that’s really important.

Shen: Absolutely. The HP ALM tool allows a complete audit trail for the testing aspect. Not at my current company, but on other projects, an auditor usually comes in and asks for access to HP QC. They look at HP ALM: the test cases, who executed them, the recorded results, and the defects. That’s what auditors look for.


Gardner: Cindy, what is of interest to you here at HP Discover in terms of what comes next in HP’s tools, seeing as they’re quite important to you? Also, are you looking for anything in the HP-SAP relationship moving forward?

Shen: I love that question. Sometimes, I feel very lonely in this niche field. SAP is a big beast. HP-SAP integration is part of what they do, but it’s not what they market. The good thing is that most SAP clients have HP ALM. It’s a very necessary toolset for both HP and SAP to continue to evolve and support.

It’s a niche market. There are only a handful of people in the world who can do this end to end properly. HP has many other products, so you’re looking at a small circle of SAP end clients who are using HP toolsets and need to know how to configure and run them efficiently and properly. Sometimes I feel very lonely in the overlap of the HP and SAP circles.

That’s why Discover is very important to me. It feels like a homecoming, because here I can actually speak to the project managers and experts on HP ALM, Sprinter, the integration, and the HP adapter. So I know what the future releases are, I know what’s coming down the line, and I know what configuration I might have to change in the future.

The other really good part, which I’m passionate about after having done enough projects, is that I’ve helped clients, and there’s always a common set of questions and challenges. It took me a couple of years to figure these out. There are many, many people out there in the same boat I was in years back, and I love to share my experience, expertise, and knowledge with the end clients.

They’re the ones managing and creating their end-to-end testing. They’re the ones facing all these challenges. I love to share with them what the best practices are, how to structure things correctly, so that you don’t have to suffer down the road. It really takes expertise to make it right. That’s what I love to share.

As far as the ecosystem of HP and SAP goes, I’d like to see them integrate more tightly, and I’d like to see them engage more with the end-user community, so that we can share the lessons and the experience with end users more.

Also, I know all the vendors in the space. The vendors in this space are very niche, and most of them come from SAP and HP backgrounds. So I keep running into people I know, my vendors keep running into people they know, and it’s that community that’s very critical to enabling success for the end user and for the business.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


ITSM adoption forces a streamlined IT operations culture at Desjardins, paves the way to cloud

Our next innovation case study interview highlights how Desjardins Group in Montréal is improving their IT operations through an advanced IT services management (ITSM) approach.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more, BriefingsDirect sat down with Trung Quach, ITSM Manager at Desjardins in Québec, at the recent HP Discover conference in Las Vegas. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: First, tell us a little bit about your organization. You have a large network of credit unions.

Quach: It’s more like cooperative banking. We are around 50,000 people across Québec, and we’ve started moving into both Canada and the US.

Gain better control over help desk quality and impact
Learn how to make your help desk more relevant
with a free white paper

Gardner: Tell us a little bit about your IT organization, the size, how many people, how many datacenters? What sort of IT organization do you have?

Quach: We’re around 2,500 and counting. We’re mainly based in Montréal and Lévis, which is near Québec City. Most of them are in Montréal, but some technical people are in Lévis.

Gardner: Tell us about your role. What are you doing there as ITSM manager?

The ITIL process

Quach: I joined Desjardins last year in the ITSM leader position. It’s more about the process, the ITIL process, and everything that’s involved with the tool, as well as supporting those overall processes.

Gardner: Tell us why ITSM has become important to you. What were some of the challenges, some of the requirements? What was the environment you were in that required you to adopt better ITSM principles?


Quach: A couple of years ago, when Desjardins merged 10-plus silos of IT into one big group, it needed to centralize the process and put best practices in place to be more efficient and competitive, and to give higher value to the business.

Gardner: What, in particular, were issues that cropped up as a result of that decentralization? Was this poor performance, too much cost, too many manual processes, all of the above?

Quach: We had a lot of manual processes, and a lot of tools. To be able to measure the performance of a team, you need to use the same process and the same tools, and then measure yourself on it. You need to optimize the way you do it, so that you can provide better IT services.

Gardner: What have been some of the results of your movement toward ITSM? What sort of benefits have you realized as a result?

Quach: We had many. Some were financial, but the most important thing, I think, is service quality and the availability of those services. One indicator is a 30 percent reduction in major incidents over the last two years.

Gardner: What is it about your use of ITSM that has led to that significant reduction in incidents? How does that translate?

Quach: We put our new problem-management approach to work, along with the problem and incident processes. When we open tickets, we can take care of incidents in a coordinated way at an enterprise level, so the impact is visible everywhere. We can now advise the lines of business, follow up on the incident, and close it rapidly. We follow up on any problems, and then we fix the real issues so that they don’t come back.

Gardner: Have you used this to translate back to any applications development, or custom development in your organization? Or is this more on the operations side strictly?

Better support

Quach: We started all of this on the operations side, but last year we started on the development side, too. They’re being brought into our process slowly, and that’s going to get better soon, so we can better support the full IT lifecycle.

Gardner: Tell us about HP Discover. What’s of interest to you? Have you been looking at what HP has been doing with their tools? What’s of most importance to you in terms of what they do with their technology?

Quach: I can tell you how important it is for us. Last year, we didn’t go to HP Discover. This year, around eight people from my team and the architecture team are here. That shows you how important it is.

Now we spread out. A lot of my team members went to explore the tools and everything else HP has to offer, and HP has a lot to offer. We went to learn about the cloud, as well as big data. It all works together. That’s why it was important for us to come here. ITSM is the main reason we’re here, but I want to make sure that everything works together, because the IT processes touch everything.


Gardner: I’ve talked to a number of organizations, Trung, and they’ve mentioned that before they feel comfortable moving into more cloud activities, and before they feel comfortable adopting big data, analytics platforms, they want to make sure they have everything else in order. So ITSM is an important step for them to then go to larger, more complex undertakings. Is that your philosophy as well?

Quach: Yes. There are two ways to do this. You use that technology to force yourself to be disciplined, or you discipline yourself. ITSM is one way to do it. You force yourself to work in a certain manner, a streamlined manner, and then you can go to the cloud. It’s easier that way.

Gardner: Then, of course, you also have standardization in culture, in organization, not just technology, but the people and the process, and that can be very powerful.

Quach: If you asked me about cloud, and I have done this at another company, in a 30-minute interview about cloud I would use 29 minutes to talk about the relationship among technology, people, and process.

Gardner: How about the future of IT? Any thoughts about the big picture of where technology is going? Even as we face larger data volumes, perhaps more complexity, and mobile applications, what are your thoughts on how we solve some of those issues?

Time to market

Quach: More and more, IT is going to be challenged to meet the speed demanded for improved time to market. To do that, you need processes, technology, and of course, people. The client, the business, is going to ask us to be faster. That’s why we’ll need to go to the cloud. But to go to the cloud, we need to master our IT services first. If not, we would not have that agility, and we would not be competitive.


Gardner: Looking back, now that you have gone through an ITSM advancement, for those who are just beginning, what are some thoughts that you could share with them?

Quach: In an ITSM project, it’s very hard to manage change. I’m talking about the people change, not the technology change-management process. Most of the time, you put it in place and say that everybody has to work with it. If I were to redo it, I would bring in more people to understand the latest ITSM science and processes, and explain why, in five or 10 years, it’s going to really help us.

After that, we’d put in the project, but we’d follow them and train them every year. ITSM is a never-ending story. You always have to be close to your clients. Even if they are IT, they are your clients or partners. You need to coach them to make sure they understand why they’re doing this. Sometimes it takes a bit longer to get it right at the beginning, but it’s all worth it in the end.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


MIT Media Lab computing director details the virtues of cloud for agility and disaster recovery

The next BriefingsDirect innovator case study interview focuses on the MIT Media Lab in Cambridge, Mass., and how it is exploring the use of cloud and hybrid cloud for such benefits as IT speed, agility, and robust three-tier disaster recovery (DR).

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about how the MIT Media Lab is exploiting cloud computing, we’re joined by Michail Bletsas, research scientist and Director of Computing at the MIT Media Lab. The discussion, at the recent VMworld 2014 Conference in San Francisco, is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us about the MIT Media Lab and how it manages its own compute requirements.

Bletsas: The organization is one of the many independent research labs within MIT. MIT is organized in departments, which do the academic teaching, and research labs, which carry out the research.


The Media Lab is a unique place within MIT. We deviate from the normal academic research lab in the sense that a lot of our funding comes from member companies, and it comes in a non-direct fashion. Companies become members of the lab, and then we get the freedom to do whatever we think is best.

We try to explore the future. We try to look at what our digital life will look like 10 years out, or more. We’re not an applied research lab in the sense that we’re not looking at what’s going to happen two or three years from now. We’re not looking at short-term future products. We’re looking at major changes 15 years out.

I run the group that takes care of the computing infrastructure for the lab, and, unlike a normal IT department, we’re kind of heavy on computing. We use computers as our medium. The Media Lab is all about human expression, which is the reason for the name, and computers are one of the main means of expression right now. We’re much heavier than other departments in how many devices you’ll see. We’re on a pretty complex network, and we run a very dynamic environment.

Major piece

A lot has changed in our environment in recent years. I’ve been there for almost 20 years. We started with very exotic stuff. These days, you still build exotic stuff, but you’re using commodity components. VMware, for us, is a major piece of this strategy because it allows us a more efficient utilization of our resources and allows us to control a little bit the server proliferation that we experienced and that everybody has experienced.

We normally have about 350 people in the lab, distributed among staff, faculty members, graduate students, and undergraduate students, as well as affiliates from the various member companies. There is usually a one-to-five correspondence between virtual machines (VMs), physical computers, and devices, but there are at least 5 to 10 IPs per person on our network. You can imagine that having a platform that allows us to easily deploy resources in a very dynamic and quick fashion is very important to us.

We run a relatively small operation for the size of the scope of our domain. What’s very important to us is to have tools that allow us to perform advanced functions with a relatively short learning curve. We don’t like long learning curves, because we just don’t have the resources and we just do too many things.

You are going to see functionality in our group that is usually only present in groups that are 10 times our size. Each person has to do too many things, and we like to focus on technologies that allow us to perform very advanced functions with little learning. I think we’ve been pretty successful with that.

Gardner: How have you created a data center that’s responsive, but also protects your property?

Bletsas: Unlike most people, we tend to have our resources concentrated close to us. We really need to interact with our infrastructure on a much shorter cycle than the average operation. We’ve been fortunate enough to have multiple small data centers close to where our researchers are. Having something on the other side of the city, the state, or the country doesn’t really work in an environment as dynamic as ours.

We also have to support a much larger community that consists of our alumni and collaborators. If you look at our user database right now, it’s something on the order of 3,500, as opposed to 350. It’s very dynamic in that it changes month to month. An important attribute of an environment like this is that we can’t have too many restrictions. We don’t have an approved list of equipment like you see in a normal corporate IT environment.

Our modus operandi is that if you bring it to us, we’ll make it work. If you need to use a specific piece of equipment in your research, we’ll try to figure out how to integrate it into your workflow and into what we have in there. We don’t tell people what to use. We just help them use whatever they bring to us.

In that respect, we need a flexible virtualization platform that doesn’t impose too many restrictions on what operating systems you use or what the configuration of the VMs are. That’s why we find that solutions, like general public cloud, for us are only applicable to a small part of our research. Pretty much every VM that we run is different than the one next to it.

Flexibility is very important to us. Having a robust platform is very, very important, because you have too many parameters changing and very little control of what’s going on. Most importantly, we need a very solid, consistent management interface to that. For us, that’s one of the main benefits of the vSphere VMware environment that we’re on.

Public or hybrid

Gardner: What about taking advantage of cloud, public cloud, and hybrid cloud to some degree, perhaps for disaster recovery (DR) or for backup failover. What’s the rationale, even in your unique situation, for using a public or hybrid cloud?

Bletsas: We use hybrid cloud right now that’s three-tiered. MIT has a very large campus. It has extensive digital infrastructure running our operations across the board. We also have facilities that are either all the way across campus or across the river in a large co-location facility in downtown Boston and we take advantage of that for first-level DR.

A solution like the vCloud Air allows us to look at a real disaster scenario, where something really catastrophic happens at the campus, and we use it to keep certain critical databases, including all the access tools around them, in a farther-away location.

It’s a second level for us. We have our own VMware infrastructure, and then we can migrate loads to our central organization, a much larger organization that takes care of all the administrative computing and general infrastructure at MIT in its own data centers across campus. We can also go a few states away to vCloud Air and migrate our workloads there in an emergency.

So it’s a very seamless transition using the same tools. The important attribute here is that, if you have an operation that small, 10 people having to deal with such a complex set of resources, you can’t do that unless you have a consistent user interface that allows you to migrate those workloads using tools that you already know and you’re familiar with.

We couldn’t do it with another solution, because the learning curve would be too hard. We know that remote events are remote, until they happen, and sometimes they do. This gives us, with minimum effort, the ability to deal with that eventuality without having to invest too much in learning a whole set of tools, a whole set of new APIs to be able to migrate.

We use public cloud services also. We use spot instances if we need a high compute load and for very specialized projects. But usually we don’t put persistent loads or critical loads on resources over which we don’t have much control. We like to exert as much control as possible.

Gardner: It sounds like you’re essentially taking metadata and configuration data, the things that will be important to spin back up an operation should there be some unfortunate occurrence, and putting that into that public cloud, the vCloud Air public cloud. Perhaps it’s DR-as-a-service, but only a slice of DR, not the entire data. Is that correct?

Small set of databases

Bletsas: Yes. Not the entire organization. We run our operations out of a small set of databases that drive a lot of our websites and internal systems. They drive our CRM operation and our events management, and there is a lot of knowledge embedded in those databases.

It’s lucky for us, because we’re not such a big operation. We’re relatively small, so you can include everything, including all the methods and the programs that you need to access and manipulate that data within a small set of VMs. You don’t normally use them out of those VMs, but you can keep them packaged in a way that in a DR scenario, you can easily get access to them.

Fortunately, we’ve been doing that for a very long time because we started having them as complete containers. As the systems scaled out, we tended to migrate certain functions, but we kept the basic functionality together just in case we have to recover from something.

In the older days, we didn’t have that multi-tiered cloud in place. All we had was backups in remote data centers. If something happened, you had to go in there, find some unused hardware that was similar to what you had, restore your backup, and so on.

Now, because most of MIT’s administrative systems run under VMware virtualization, finding that capacity is a very simple proposition in a data center across campus. With vCloud Air, we can find that capacity in a data center across the state or somewhere else.

Gardner: For organizations that are intrigued by this tiered approach to DR, did you decide which part of those tiers would go in which place? Did you do that manually? Is there a part of the management infrastructure in the VMware suite that allowed you to do that? How did you slice and dice the tiers for this proposition of vCloud Air holding a certain part of the data?

Bletsas: We are fortunate enough to have a very good, intimate knowledge of our environment. We know where each piece lies. That’s the benefit of running a small organization. We occasionally use vSphere’s monitoring infrastructure. Sometimes it reveals to us certain usage patterns that we were not aware of. That’s one of the main benefits that we found there.

We realized that certain databases were used more than we thought. Just looking at those access patterns told us, “Look, maybe you should replicate this.” It doesn’t cost much to replicate this across campus and then maybe we should look into pushing it even further out.

It’s a combination of having visibility and nice dashboards that reveal patterns of activity that you might not be aware of, even in an environment that’s not as large as ours.
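That decision process, using observed access patterns to pick how widely a database gets replicated, can be sketched in a few lines. This is purely illustrative: the thresholds, tier names, and workload names below are invented for the example, not taken from MIT’s environment or from vSphere.

```python
# Hypothetical sketch: turn raw access counts from monitoring into a
# replication recommendation. Thresholds and tier names are invented.

def replication_tier(daily_reads: int) -> str:
    """Map observed database activity to a DR placement suggestion."""
    if daily_reads >= 10_000:       # heavily used: replicate off-campus too
        return "campus + co-location + vCloud Air"
    if daily_reads >= 1_000:        # moderately used: cross-campus copy
        return "campus + co-location"
    return "local backups only"     # quiet database: cheapest option

usage = {"crm": 25_000, "events": 3_200, "archive": 120}
plan = {db: replication_tier(n) for db, n in usage.items()}
print(plan)
```

The point of the sketch is simply that the dashboards supply the numbers; the placement rule itself can stay trivially simple.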

Gardner: At VMworld 2014, there was quite a bit of news, particularly in the vCloud Air arena. What intrigues you?

Standard building blocks

Bletsas: We like the move toward standardization of building blocks. That’s a good thing overall, because it allows you to scale out relatively quickly with a minor investment in learning a new system. That’s the most important trend out there for us. As I’ve said, we’re a small operation. We need to standardize as much as possible, while at the same time, expanding the spectrum of services. So how do you do that? It’s not a very clear proposition.

The other thing that is of great interest to us is network virtualization. MIT is in a very peculiar situation compared to the rest of the world, in the sense that we have no shortage of IP addresses. Unlike most corporations where they expose a very small sliver of their systems to the outside world and everything happens on the back-end, our systems are mostly exposed out there to the public internet.

We don’t run very extensive firewalls. We’re a knowledge dissemination and distribution organization, and we don’t have many things to hide. We operate in a different way than most corporations, and that shows in our networking. Our network looks nothing like what you see in the corporate world. The ability to move whole sets of IPs around our domain, which is rather large and which we fully control, is very important for us.

It allows for much faster DR. We can do DR using the same IPs across town right now, because our domain of control is large enough. That is very powerful, because you can do very quick and simple DR without having to reprogram IP addresses, DNS servers, load balancers, and things like that. That is important.
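The "same IPs across town" condition can be expressed as a simple containment check: failover keeps the original addresses only if the DR site can announce a prefix that contains them. A minimal sketch using Python’s standard `ipaddress` module; the prefixes are documentation ranges, not MIT’s real allocations.

```python
import ipaddress

# Sketch: can these VMs fail over without re-mapping their addresses?
# True only when the DR site's routed prefix contains every VM IP.

def can_keep_ips(vm_ips, dr_prefix):
    net = ipaddress.ip_network(dr_prefix)
    return all(ipaddress.ip_address(ip) in net for ip in vm_ips)

vms = ["198.51.100.10", "198.51.100.42"]
print(can_keep_ips(vms, "198.51.100.0/24"))  # True: no DNS or load-balancer changes
print(can_keep_ips(vms, "203.0.113.0/24"))   # False: addresses must be re-mapped
```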

The other trend that is also important is storage virtualization and storage tiering, and you see that with all the vendors down in the exhibit space. Again, it allows you to match the application profile much more easily to the resources you have. For a rather small group like ours, which can’t afford to keep all of its disk storage on very high-end systems, having a little bit of expensive flash storage and a lot of cheap storage is the way to go.
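The "a little flash, a lot of cheap disk" approach comes down to putting the most I/O-hungry volumes on the limited fast tier first. A rough greedy sketch, with invented volume names, sizes, and IOPS figures:

```python
# Illustrative tiering sketch: given a small flash budget, place the most
# I/O-intensive volumes on flash first; everything else lands on cheap disk.

def tier_volumes(volumes, flash_capacity_gb):
    """volumes: list of (name, size_gb, iops). Returns {name: tier}."""
    placement = {}
    remaining = flash_capacity_gb
    for name, size, _ in sorted(volumes, key=lambda v: v[2], reverse=True):
        if size <= remaining:
            placement[name] = "flash"
            remaining -= size
        else:
            placement[name] = "hdd"
    return placement

vols = [("db", 200, 9000), ("scratch", 800, 500), ("media", 300, 2000)]
print(tier_volumes(vols, 500))  # db and media fit on flash; scratch goes to hdd
```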

The layers that have been recently added to VMware, both on the network side and the storage side help us achieve that in a very cost-efficient way.

For us, experimentation is the most important thing. Spinning out a large number of VMs to do a specific experiment is very valuable and being able to commandeer resources across campus and across data centers is a necessary requirement for something like an environment like this. Flexibility is what we get out of that and agility and speed of operations.

In the older days, you had to go and procure hardware and switch hardware around. Now, we rarely go into our data centers. We used to live in our data centers. We go there from time to time but not as often as we used to do, and that’s very liberating. It’s also very liberating for people like me because it allows me to do my work anywhere.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: VMware.


Cloud services brokerages add needed elements of trust and oversight to complex cloud deals

Our BriefingsDirect discussion today focuses on an essential aspect of helping businesses make the best use of cloud computing.

We’re examining the role and value of cloud services brokerages, with an emphasis on small to medium-sized businesses (SMBs), regional businesses, and government, and looking at how to attain the best results from a specialist cloud services brokerage role within these different types of organizations.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

No two businesses have identical needs, and so specialized requirements need to be factored into the use of often commodity-type cloud services. An intermediary brokerage can help companies and government agencies make the best use of commodity and targeted IaaS clouds, and not fall prey to replacing an on-premises integration problem with a cloud complexity problem.

To learn more about the role and value of the specialist cloud services brokerage, we’re joined by Todd Lyle, President of Duncan, LLC, a cloud services brokerage in Ohio, and Kevin Jackson, the Founder and CEO of GovCloud Network in Northern Virginia. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: How do we get regular companies to effectively start using these new cloud services?

Lyle: Through education. That’s our first step. The technology is clearly here, the three of us will agree. It’s been here for quite some time now. The beauty of it is that we’re able to extract bits and pieces for bundles, much like you get from your cell phone or your cable TV folks. You can pull those together through a cloud services brokerage.


So brokerage firms will go out and deal with the cloud services providers like Amazon, Rackspace, Dell, and those types of organizations. They bring the strengths of each of those organizations together and bundle them. Then, the consumer gets that on a monthly basis. It’s non-CAPEX, meaning there is no capital expenditure.

You’re renting these services, so you can expand and contract as necessary. To liken this to a utility environment: with organizations that provide electricity and water, you flip the switch or turn the faucet on and off. It’s a metered service.
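The utility analogy can be made concrete with a toy metered bill: you pay unit rates on what you consumed that month, and an unused service costs nothing. The rates and service names below are made up for illustration.

```python
# Minimal metered-billing sketch: pay only for the month's measured usage.
# Unit rates are invented, not any provider's real pricing.

RATES = {"vm_hours": 0.05, "storage_gb_months": 0.02, "email_seats": 4.00}

def monthly_bill(usage: dict) -> float:
    """Sum metered usage times unit rate; unused services cost nothing."""
    return round(sum(RATES[item] * qty for item, qty in usage.items()), 2)

print(monthly_bill({"vm_hours": 720, "email_seats": 10}))  # a busy month
print(monthly_bill({"vm_hours": 200}))                     # contracted the next month
```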

 Learn more about Todd D. Lyle’s book,
“Grounding the Cloud: Basics and Brokerages,”
at groundingthecloud.org

That’s where you’re going to get the largest return on your collective investment when you switch from a traditional IT environment on-premises, or even a private cloud, to the public cloud and the utility that this brings.

Government agencies

Gardner: Kevin, you’re involved more with government agencies. They’ve been using IT for an awfully long time. How is the adjustment to cloud models for them? Is it easier, is it better, or is it just a different type of approach, and therefore requires only adjustment?

Jackson: Thank you for bringing that up. Yes, I’ve been focused on providing advanced IT to the federal market and Fortune 500 businesses for quite a while. The advent of cloud computing and cloud services brokerages is a double-edged sword. On the one hand, it provides much greater agility with respect to the ability to leverage information technology.


But, at the same time, it brings a much greater amount of responsibility, because cloud service providers have a broad range of capabilities. That broad range has to be matched against the range of requirements within an enterprise, and that drives a change in the management style of IT professionals.

You’re going more from your implementation skills to a management of IT skills. This is a great transition across IT, and is something that cloud services brokerages can really aid. [See Jackson's recent blog on brokerages.]

Gardner: Todd, it sounds as if we’re moving this from an implementation and technology skill set into more of a procurement and governance skill set: negotiating contracts and creating the right service-level agreements (SLAs). These are, I think, new skills for many businesses. How is that coaching aspect of a cloud services brokerage coming out in the market? Is that something you’re seeing a lot of demand for?

Lyle: It’s customer service, plain and simple. We hear about it all the time, but we also pass it off all the time. You have to be accessible. If you’re working with a 69-year-old business owner who is embracing a technology, the approach you take with that person is going to be different than with someone who is 23 years old.

As we all get more tenured, we’ll see more adaptability to new technologies in a workplace, but that’s a while out. That’s the 35-and-younger crowd. If you go to 35-and-above, it’s what Kevin mentioned — changing the culture, changing the way things are procured within those cultures, and also centralizing command. That’s where the brokerage or the exchange comes into place for this. [See Lyle's video on cloud brokerages.]

Gardner: One of the things that’s interesting to me is that a lot of companies are now looking at this as not just as a way of switching from one type of IT, say a server under a desk, to another type of IT, a server in a cloud.

It’s forcing companies to reevaluate how they do business and think of themselves as a new process-management function, regardless of where the services reside. This also requires more than just how to write a contract. It’s really how to do business transformation.

Does that play into the cloud services brokerage? Do you find yourselves coaching companies on business management?

Jackson: Absolutely. One of the things cloud services is bringing to the forefront is the rapidity of change. We’re going from an environment where organizations expect a homogenous IT platform to where hybrid IT is really the norm. Change management is a key aspect of being able to have an organization take on change as a normal aspect of their business.

This is also driving business models. The more effective business models today are taking advantage of the parallel and global nature of cloud computing. This requires experience, and cloud services brokerages have the experience of dealing with different providers, different technologies, and different business models. This is where they provide a tremendous amount of value.

Different types of services

Gardner: Todd, this notion of being a change agent also raises the notion that we’re not just talking about one type of cloud service. We’re talking about software as a service (SaaS), bringing communications applications like e-mail and calendar into a web or mobile environment. We’re talking about platform as a service (PaaS), if you’re doing development and DevOps. We’re talking about even some analytics nowadays, as people try to think about how to use big data and business intelligence (BI) in the cloud.

Tell me a bit more about why being a change agent across these different models — and not just a cloud implementer or integrator — raises the value of this cloud service brokerage role?

Lyle: It’s a holistic approach. I’ve been talking to my team lately about being the Dale Carnegie of the cloud, hence the specialist cloud services brokerage, because it really does come down to personalities.

In a book that I’ve recently written, called Grounding the Cloud: Basics and Brokerages, I talk about the human element. That’s the personalities, expectations, and abilities of your workforce, not only your present workforce but your future workforce, which we discussed just a moment ago as far as demographics are concerned.

It’s constant change. Kevin said it, using a different term, but that’s the world we live in. Some schools are doing this, where they’re adding this to their MBA programs. It is a common set of skills that you must have, and it’s managing personalities more than you’re managing technology, in my opinion.

Gardner: Tell me a bit more about this book, Todd. It’s called Grounding the Cloud. When is it available, and how can people learn more about it?

Lyle: It’s available now on Amazon, and they can find out more at www.groundingthecloud.org. This is a layman’s introduction to cloud computing, and so it helps business men and women get a better understanding of the cloud — and how they can best maximize their time and their money as it relates to their IT needs.

Gardner: Does the book get into this concept of the specialist cloud services brokerage (SCSB), as opposed to just a general brokerage, and getting at what’s the difference?

Lyle: That’s an excellent question, Dana. There are a lot of perceptions, you have one as well, of what a cloud services brokerage is. But, at the end of the day — and we’ve been talking about this in the entire discussion — it’s about the human element, our personalities, and how to make these changes so that the companies actually can speed up.

We discuss it here in the “flyover country,” in Ohio. We meet in the book with Cleveland State University. We meet with Allen Black Enterprises, and then even with a small landscaping company to demonstrate how the cloud is being applied from six and seven users, all the way up to 25,000 users. And we’re doing it here in the Midwest, where things tend to take a couple of years to change.

User advocate

Gardner: How is a cloud services brokerage different from a systems integrator? It seems there’s some commonality. But you’re not just a channel or reseller; you’re really as much an advocate for the user.

Lyle: A specialist cloud services brokerage is going to be more like Underwriters Laboratories (UL). It’s going to go out, fielding all the different cloud flavors that are available, pick what they feel is best, and bring it together in a bundle. Then, the SCSB works with the entity to adapt to the culture and the change that’s going to have to occur and the education within their particular businesses, as opposed to a very high-level vertical, where some things are just pushed out at an enterprise level.

Jackson: I see this cloud services brokerage and specialist cloud services brokerage as the new-age system integrator, because there are additional capabilities that are offered.

For example, you need a trusted third-party to monitor and report on adherence to SLAs. The provider is not going to do that. That’s a role for your cloud services brokerage. Also you need to maintain viable options for alternative cloud-service providers. The cloud services brokerage will identify your options and give you choices, should you need the change. A specialist cloud services brokerage also helps to ensure portability of your business process and data from one cloud service provider to another.
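The SLA-monitoring role Jackson describes reduces, at its simplest, to comparing measured availability against the contracted target. A rough sketch, assuming a 30-day month and a 99.9 percent target as an example figure, not a quoted contract:

```python
# Sketch of third-party SLA monitoring: did measured uptime meet the target?

def sla_met(downtime_minutes: float, days_in_month: int = 30,
            target_pct: float = 99.9) -> bool:
    total = days_in_month * 24 * 60  # minutes in the billing month
    availability = 100.0 * (total - downtime_minutes) / total
    return availability >= target_pct

print(sla_met(20))   # ~99.95% available: within a 99.9% SLA
print(sla_met(60))   # ~99.86% available: breached, the broker flags it
```

In practice the broker also tracks credits and remedies, but the independent measurement is the part the provider won’t do for you.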

Management of change is more than a single aspect within the organization. It’s how to adapt with constant change and make sure that your enterprise has options and doesn’t get locked into a single vendor.

Lyle: It comes to the point, Kevin, of building for constant change. You’re exactly right.


Gardner: You raise an interesting point too, Kevin, that one shouldn’t get lulled into thinking that they can just make a move to the cloud, and it will all be done. This is going to be a constant set of moves, a journey, and you’re going to want to avail yourself of the cloud services marketplace that’s emerging.

We’re seeing prices driven down. We’re seeing competition among commodity-level cloud services. I expect we’ll see other kinds of market forces at work. You want to be agile and be able to take advantage of that in your total cost of computing.

Jackson: There’s a broad range of providers in the marketplace, and that range expands daily. Similarly, there’s a large range of requirements within any enterprise of any size. Brokers act as matchmakers, avoiding common mistakes, and also help the organizations, the SMBs in particular, implement best practices in their adoption of this new model.

Gardner: Also, when you have a brokerage as your advocate, they’re keeping their eye on the cloud marketplace so that you can keep your eye on your business and your vertical. Therefore, you’re going to have somebody to tip you off when things change, and they will be on the vanguard for deals. Is that something that comes up in your book, Todd: the cloud services brokerage being an educated expert in a field where the business really wants to stick to its knitting?

Primary goal

Lyle: Absolutely. That’s the primary goal, both at a strategic level, when you’re deciding what products to use — the Rackspaces, the Microsofts, the RightSignatures, etc. — all the way down to the tactical one of the daily operation. When I leave the company, how soon can we lock Todd out? How soon can we lock him down or lock him out? It becomes a security issue at a very granular level. Because it’s metered, you turn it off, you turn Todd off, you save his data, and put it someplace else.

That’s a role that requires command and control and oversight, and that’s a responsibility. You’re part butler: you’re looking out for the day-to-day, the minute issues. Then you get up to a very high level, where you’re like UL. You’re keeping an eye on everything that’s occurring. UL comes to mind because they certify both things that are tactile and things that you can’t touch, and the cloud is definitely something you can’t touch.

Jackson: Actually, I believe it represents the embracing of a cooperative model by consumers of information technology, but embracing it with open eyes. This is particularly of interest within the federal marketplace, because federal procurement executives have to stop their adversarial attitude toward industry. Cloud services brokerages and specialist cloud services brokerages sit at the same table with these consumers.

Lyle: Kevin, your point is very well taken. I’ll go one step further. We were talking up and down the scale, from the strategic level down to daily operations. One of the challenges that we have to overcome is the signatories, the senior executives, who make these decisions. They’re in a different age group, and they’re used to doing things a certain way.

That being said, getting legislation to be changed at the federal level, directives being pushed down, will make the difference, because they do know how to take orders. I know I’m speaking frankly, but what’s going to have to occur for us to see some significant change within the next five years is being told how the procurement process is going to happen.

You’re taking the feather; I’m taking the stick, but it’s going to take both of those to accomplish that task at the federal level.

Gardner: We know that Duncan, LLC is a specialized cloud services brokerage. Kevin, tell us a little bit about the GovCloud Network. What is your organization, and how do you align with cloud brokerages?

Jackson: GovCloud Network is a specialty consultancy that helps organizations modify or change their mission and business processes in order to take advantage of this new style of system integrator.

Earlier, I said that the key to transitioning to cloud is adopting and adapting to the parallel and global nature of cloud computing. This requires a second look at your existing business processes and your existing mission processes to do things in different ways. That’s what GovCloud Network enables: it helps you redesign your business and mission processes for this constant change and this new model.

Notion of governance

Gardner: I’d like to go back to this notion of governance. It seems to me, Todd, that when you have different parts of your company procuring cloud services, sometimes this is referred to as shadow IT. They’re not doing it in concert, through a gatekeeper like a cloud broker. Not only is there a potential redundancy of efforts in labor and work in process, but there is this governance and security risk, because one hand doesn’t know what the other hand is doing.

Let’s address this issue about better security from better governance by having a common brokerage gatekeeper, rather than having different aspects of your company out buying and using cloud services independently.

Lyle: We’re your trusted adviser. We’re also very much a trusted member of your team when you bring us into the fold. We provide oversight. We’re big brother, if you want to look at it that way, but big brother is important when you are dealing with your business and your business resources. You don’t want to leave a window open at night. You certainly don’t want to leave your network open.

There’s a lot going on in today’s world, a lot of transition, the NSA and everything we worry about. It’s important to have somebody providing command and control. We don’t sit there and stare at a monitor all day. We use systems that watch this, but we can tell when there’s an increase or decrease out of the norm of activities within your organization.

It really doesn’t matter how big or how small you are; there are systems that allow us to monitor this and give a heads-up. If you’re part of the leadership team, you’d be notified that, again, Todd Lyle has left a window open. But if you don’t even know that Todd has the window, that’s an even bigger concern. That comes down to leadership again, and how you want to manage your entity.
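The "increase or decrease out of the norm" alerting Lyle describes can be illustrated with a toy baseline check: flag any day whose activity is far outside the recent average. The data, the metric, and the three-sigma threshold are invented for the example; real monitoring systems are far more sophisticated.

```python
import statistics

# Toy "out of the norm" alert: flag activity more than three standard
# deviations from the recent baseline. Data and threshold are invented.

def out_of_norm(history, today, threshold=3.0):
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(today - mean) > threshold * stdev

baseline = [100, 104, 98, 101, 97, 103, 99]  # e.g. daily logins last week
print(out_of_norm(baseline, 102))  # normal day, no alert
print(out_of_norm(baseline, 240))  # spike worth a heads-up
```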

We all want to feel free to make decisions, but there are too many benefits available to us from using the cloud, transparent benefits, as Kevin put it: hiding in plain sight, scaling email to 100,000-plus users. Those are all good things, but they require oversight.

It’s almost like an aviation model, where you have your ground control and your flight crew. Everybody on that team is providing oversight to the other. Ultimately, you have your control tower that’s watching that, and the control tower, both in the air and on the ground, is your cloud services brokerage.

Jackson: It’s important to understand that cloud computing is the industrialization of information technology. You’re going from an age where the IT infrastructure is a hand-designed and built work of art to where your IT infrastructure is a highly automated assembly-line platform that requires real-time monitoring and metering. Your specialist cloud services brokerage actually helps you in that transition and operations within this highly automated environment.

Gardner: Todd, we spoke earlier about how we’re moving from implementation to procurement. We’ve also talked about governance being important, SLAs, and managing a contract across a variety of different organizations that are providing cloud-type services. It seems to me that we’re talking about financial types of relationships.

How does the cloud services brokerage help the financial people in a company? Maybe it’s an individual who wears many hats, but you could think of them as akin to a chief financial officer, even though that might not be their title.

What is it that we are doing with the cloud services brokerage that is of a special interest and value to the financial people? Is it unified billing or is it one throat to choke? How does that work?

Lyle: Both, and then some. Ultimately, it’s unified billing and unified management of daily operations. It’s helping people understand that we’re moving away from capitalized expenses: the server, the software, things that are tactile that we’re used to touching. We’re used to being able to count them, and we like to see our stuff.

So it’s transitioning and letting go, especially for the people who watch the money. We have a fiduciary responsibility to the organizations that we work for. Part of that is communicating, educating, and helping the CFO-type person understand the transition not only from the CAPEX to the OPEX, because they get that, but also how you’re going to correlate it to productivity.

It’s letting them know to be patient. It’s going to take a couple of months for your metering to level off. We have some statistics, and we can read into that. It’s holding their hand, helping them out. That’s a very big deal as far as that’s concerned.
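The CAPEX-to-OPEX conversation often comes down to a back-of-the-envelope comparison: amortize the owned hardware over its useful life and set that against the metered monthly bill once usage levels off. All figures below are illustrative assumptions, not market prices.

```python
# Back-of-the-envelope CAPEX vs. OPEX comparison a CFO might ask for.
# Every number here is an invented assumption for illustration.

def capex_monthly(purchase_price, lifetime_months, upkeep_per_month):
    """Amortize an owned server over its useful life, plus monthly upkeep."""
    return purchase_price / lifetime_months + upkeep_per_month

owned = capex_monthly(purchase_price=9000, lifetime_months=36, upkeep_per_month=150)
rented = 320.0  # hypothetical metered cloud bill after usage levels off

print(round(owned, 2))  # 400.0 per month for the owned box
print(rented < owned)   # True under these assumed figures
```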

Gardner: Let’s start to think about how to get started. Obviously, every company is different. They’re going to be at a different place in terms of maturity in their own IT, never mind the transition to cloud types of activities. Would you recommend the book as a starting point? Do you have some other materials or references? How do you help that education process get going? I’m thinking about organizations that are really at the very beginning.

Gateway cloud

Lyle: We’ve created a gateway cloud in our book, not to confuse the cloud story. Ultimately, we have to take into consideration our economy, the world economy today. We’re still very slow to move forward.

There are some activities occurring that are forcing us to make change. Our contracts may be running out. Software like Windows XP is no longer supported. So we may be forced into making a change. That’s when it’s time to engage a cloud services brokerage or a specialist cloud services brokerage.

Go out and buy the book; it’s available on Amazon. It gives you a breakdown, you can do an assessment of your organization as it currently is, and it will help you map your network. Then it will help you reach out to a cloud services brokerage, if you are so inclined, with points of interest for a request for proposal or a request for information.

The fun part is that it gives you a recipe using Rackspace, Jungle Disk, and gotomeeting.com, where you get to build a baby cloud. Then you can go out and play with it.

You want to begin with three points: file sharing, remote access, and email. You can be a lighthouse or you can be a dry cleaner, but every organization needs file sharing, remote access, and email. We open-sourced this recipe, or what we call the industrial bundle, for small businesses.
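That three-point baseline lends itself to a trivial sanity check: does a proposed bundle actually cover file sharing, remote access, and email? A hypothetical sketch; the bundle contents are example pairings, not drawn from the book’s recipe.

```python
# Sketch: validate a proposed starter bundle against the three baseline
# needs named above. Bundle contents are hypothetical examples.

BASELINE = {"file sharing", "remote access", "email"}

def missing_services(bundle: dict) -> set:
    """Return the baseline needs a proposed bundle fails to cover."""
    return BASELINE - set(bundle.values())

bundle = {"Jungle Disk": "file sharing", "GoToMeeting": "remote access"}
print(missing_services(bundle))  # {'email'}: the bundle needs one more piece
```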

It’s not daunting. We’ve got some time yet, but I would encourage you to get a handle on where your infrastructure is today, digest that information, go out and play with the gateway cloud that we’ve created, and reach out to us if you are so inclined.


We’d love for you to use one of our organizations, but ultimately know that there are people out there to help you. This book was written for us, not for the technical person. It is not in geek speak. This is written for the layperson. I’ve been told it’s entertaining, which is the most important part, because you’re going to read it then.

Jackson: I would urge SMBs to take the plunge. Cloud can be scary to some, but there is very little risk and much to gain for any SMB. Using and leveraging the cloud gateway that Todd mentioned is a very good, low-risk, high-reward path to the cloud.

Gardner: I would agree with what you both said: the notion of a proof of concept and dipping your toe in. You don’t have to buy it all at once. Find an area of your company where you’re going to be forced to make a change anyway, and then, to your point, Kevin, do it now. Take the plunge earlier rather than later.

Jackson: Before you’re forced.

Large changes

Gardner: Before you’re forced. You want to look at a tactical benefit and work toward a strategic benefit, because there are going to be some really large changes happening in what these cloud providers can do in a fairly short amount of time.

We’re moving from discrete apps to the entire desktop, a full PC experience as a service. That’s going to be very attractive to people, and they’re going to need to make some changes to get there. But rather than thinking about services discretely, more and more of what they’re looking for is going to come as an entire IT services experience, with more analytics capabilities mixed in. So I’m glad to hear you both explaining how to do it, managed at a proof-of-concept level. But I would say do it sooner rather than later.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Duncan, LLC.
