Full 360 takes big data analysis cloud services to new business heights

The latest BriefingsDirect cloud innovation case study interview highlights how Full 360 uses big data and analytics to improve its application support services for the financial industry — and beyond.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy.

To learn how Full 360 uses HP Vertica in the Amazon cloud to provide data warehouse and BI applications and services to its customers from Wall Street to the local airport, BriefingsDirect sat down with Eric Valenzuela, Director of Business Development at Full 360, based in New York. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us about Full 360.

Valenzuela: Full 360 is a consulting and services firm, and we focus purely on data warehousing, business intelligence (BI), and hosted solutions. We build and consult, and then we do managed services for hosting those complex, sophisticated solutions in the cloud, specifically the Amazon cloud.

Gardner: And why is cloud a big differentiator for this type of service in the financial sector?

Valenzuela: It’s not necessarily just for finance. It seems to be beneficial for any company that has a large initiative around data warehouse and BI. For us, specifically, the cloud is a platform that we can develop our scripts and processes around. That way, we can guarantee 100 percent that we’re providing the same exact service to all of our customers.

We have quite a bit of intellectual property (IP) that’s wrapped up inside our scripts and processes. The cloud platform itself is a good starting point for a lot of people, but it also has elasticity for those companies that continue to grow and add to their data warehousing and BI solutions. [Register for the upcoming HP Big Data Conference in Boston on Aug. 10-13.]

Gardner: Eric, it sounds as if you’ve built your own platform as a service (PaaS) for your specific activities and development and analytics on top of a public cloud infrastructure. Is that fair to say?

Valenzuela: That’s a fair assumption.

Primary requirements

Gardner: So as you are doing this cloud-based analytic service, what is it that your customers are demanding of you? What are the primary requirements you fulfill for them with this technology and approach?

Valenzuela: With data warehousing, and Vertica specifically, being rather new, there is a lack of knowledge out there in terms of how to manage it, keep it up and running, tune it, analyze queries, and make sure that they’re returning information efficiently, that kind of thing. What we try to do is supplement that lack of expertise.

Gardner: Leave the driving to us, more or less. You’re the plumbers and you let them deal with the proper running water and other application-level intelligence?

Valenzuela: We’re like an insurance policy. We do all the heavy lifting, the maintenance, and the management. We ensure that your solution is going to run the way that you expect it to run. We take the mundane out, and then give the companies the time to focus on building intelligent applications, as opposed to worrying about how to keep the thing up and running, tuned, and efficient.

Gardner: Given that Wall Street has been crunching numbers for an awfully long time, and I know that they have, in many ways, almost unlimited resources to go at things like BI — what’s different now than say 5 or 10 years ago? Is there more of a benefit to speed and agility versus just raw power? How has the economics or dynamics of Wall Street analytics changed over the past few years?

Valenzuela: First, it’s definitely the level of data. Just 5 or 10 years ago, either you had disparate pieces of data or you didn’t have a whole lot of data. Now it seems like we are just managing massive amounts of data from different feeds, different sources. As that grows, there has to be a vehicle to carry all of that, where it’s limitless in a sense.

Early on, it was really just a lack of the volume that we have today. In addition to that, 8 or 10 years ago BI was still rather new in what it could actually do for a company in terms of making agile decisions and informed decisions, decisions with intent.

So fast forward, and it’s widely accepted and adopted now. It’s like the cloud. When cloud first came out, everybody was concerned about security. How are we going to get the data in there? How are we going to stand this thing up? How are we going to manage it? Those questions come up a lot less now than they did even two years ago. [Register for the upcoming HP Big Data Conference in Boston on Aug. 10-13.]

Gardner: While you may have cut your teeth on Wall Street, you seem to be branching out into other verticals — gaming, travel, logistics. What are some of the other areas now to which you’re taking your services, your data warehouse, and your BI tools?

Following the trends

Valenzuela: It seems like we’re following the trends. Recently it’s been gaming. We have quite a few gaming customers that are just producing massive amounts of data.

There’s also the airline industry. The customers that we have in airlines, now that they have a way to — I hate this term — slice and dice their data, are building really informed, intelligent applications to serve their customers: customer appreciation, that kind of thing. Airlines are now starting to see what their competition is doing. So they’re getting on board and starting to build similar applications so they are not left behind.

Banking was pretty much the first to go full force and adopt BI as a basis for their practice. Finance has always been there. They’ve been doing it for quite a long time.

Gardner: So as the director of business development, I imagine you’re out there saying, “We can do things that couldn’t have been done before at prices that weren’t available before.” That must give you almost an unlimited addressable market. How do you know where to go next to sell this?

Valenzuela: It’s kind of an open field. From my perspective, I look at the different companies out there that come to me. At first, we were doing a lot of education. Now, it’s just, “Yes, we can do this,” because these things are proven. We’re not proving any concepts anymore. Everything has already been done, and we know that we can do it.

It is an open field, but we focus purely on the cloud. We expect all of our customers will be in the Amazon cloud. It seems that now I am teaching people a little bit more — just because it’s cloud, it’s not magic. You still have to do a lot of work. It’s still an infrastructure.

But we come from that approach and we make sure that the customer is properly aligned with the vision that this is not just a one- or two-month type commitment. We’re not just going to build a solution, put it in our pocket, and walk away. We want to know that they’re fully committed for 6-12 months.

Otherwise, you’re not going to get the benefits of it. You’re just going to spend the money and the effort, and you’re not really going to get any benefits out of it if you’re not going to be committed for the longer period of time. There still are some challenges with the sales and business development.

Gardner: Given this emphasis on selling the cloud model as much as the BI value, you needed to choose an analytics platform that was cloud-friendly and that was also Amazon AWS cloud-friendly. Tell me how Vertica and Amazon — and your requirements — came together.

Good timing

Valenzuela: I think it was purely a timing thing. Our CTO, Rohit Amarnath, attended a session at MIT, where Vertica was first announced. So he developed a relationship there.

This was right around the time when Amazon announced its public cloud platform, EC2. So it made a lot of sense to look at the cloud as a vision, looking at the cloud as a platform, looking at column databases as a future way of managing BI and analytics, and then putting the two together.

It was more or less a timing thing. Amazon was there. It was new technology, and we saw the future in that. Analytics was newly adopted. So now you have the column database that we can leverage as well. So blend the two together and start building some platform that hadn’t been done yet.

Gardner: What about lessons learned along the way? Are there some areas to avoid or places that you think are more valuable that people might appreciate? If someone were to begin a journey toward a combination of BI, cloud, and vertical industry tool function, what might you tell them to be wary of, or to double-down on?

Valenzuela: We forged our own way. We couldn’t learn from our competitors’ mistakes because we were the ones that were creating the mistakes. We had to clear those up and learn from our own mistakes as we moved forward.

Gardner: So perhaps a lesson is to be bold and not to be confined by the old models of IT?

Valenzuela: Definitely that. Definitely thinking outside the box and seeing what the cloud can do: forgetting about old IT and looking at the cloud as a new form of IT. Understand what it cannot do as a baseline, but really open up your mind and think about what it can actually do, from an elasticity perspective.

There are a lot of Vertica customers out there that are going to reach a limitation. That may require procuring more hardware, more IT staff. The cloud aspect removes all of that.

Gardner: I suppose it allows you as a director of business development to go downstream. You can find smaller companies, medium-sized enterprises, and say, “Listen, you don’t have to build a data warehouse at your own expense. You can start doing BI based on a warehouse-as-a-service model, pay as you go, grow as you learn, and so forth.”

Money concept

Valenzuela: Exactly. Small or large, those IT departments are spending that money anyway. They’re spending it on servers. If they are on-premises, the cost of that server in the cloud should be equal or less. That’s the concept. [Register for the upcoming HP Big Data Conference in Boston on Aug. 10-13.]

If you’re already spending the money, why not just migrate it and then partner with a firm like us that knows how to operate it? Then, we become your augmented experts, or that insurance policy, to make sure that those things are going to run the way you want them to, as if it were your own IT department.

Gardner: What are the types of applications that people have been building and that you’ve been helping them with at Full 360? We’re talking about not just financial, but enterprise performance management. What are the other kinds of BI apps? What are some of the killer apps that people have been using your services to do?

Valenzuela: Specifically, with one of our large airlines, it’s customer appreciation. The level of detail on their customers that they’re able to bring to the plane, to the flight attendants, in a handheld device is powerful. It’s powerful to the point where you remember that treatment that you got on the plane. So that’s one thing.

That’s the kind of detail and treatment you just don’t get on other airlines, even if you fly a lot. I don’t know how that could be delivered if it weren’t for analytics and for technology like Vertica being able to provide that information.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy. Sponsor: HP.

How big data technologies Hadoop and Vertica drive business results at Snagajob

The next BriefingsDirect analytics innovation case study interview explores how Snagajob in Richmond, Virginia – one of the largest hourly employment networks for job seekers and employers – uses big data to finally understand their systems’ performance in action. The result is vast improvement in how they provide rapid and richer services to their customers.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy.

Snagajob recently delivered 4 million new job applications in a single month through their systems. To learn how they’re managing such impressive scale, BriefingsDirect sat down with Robert Fehrmann, Data Architect at Snagajob in Richmond, Virginia. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us about your jobs matching organization. You’ve been doing this successfully since 2000. Let’s understand the role you play in the employment market.

Fehrmann: Snagajob, as you mentioned, is America’s largest hourly network for employees and employers. The hourly market means we have, relatively speaking, high turnover.

Another aspect, in comparison to some of our competitors, is that we provide an inexpensive service. So our subscriptions are on the low end, compared to our competitors.

Gardner: Tell us how you use big data to improve your operations. I believe that among the first ways that you’ve done that is to try to better analyze your performance metrics. What were you facing as a problem when it came to performance? [Register for the upcoming HP Big Data Conference in Boston on Aug. 10-13.]

Signs of stress

Fehrmann: A couple of years ago, we started looking at our environment, and it became obvious that our traditional technology was showing some signs of stress. As you mentioned, we really have data at scale here. We have 20,000 to 25,000 postings per day, and we have about 700,000 unique visitors on a daily basis. So data is coming in very, very quickly.

We also realized that we were sitting on a gold mine and we were able to ingest data pretty well. But we had problems getting information and innovation out of our big data lake.

Gardner: And of course, near real time is important. You want to catch degradation in any fashion from your systems right away. How do you then go about getting this in real time? How do you do the analysis?

Fehrmann: We started using Hadoop. I’ll use a lot of technical terms here. From our website, we’re getting events. Events are routed via Flume directly into Hadoop. We’re collecting about 600 million key-value pairs on a daily basis. It’s a massive amount of data, 25 gigabytes on a daily basis.

The second piece in this journey to big data was analyzing these events, and that’s where we’re using HP Vertica. Our original use case was to analyze a funnel. A funnel is where people come to our site. They’re searching for jobs, maybe by keyword, maybe by zip code. A subset of that is an interest in a job, and they click on a posting. A subset of that is applying for the job via an application. A subset is interest in an employer, and so on. We had never been able to analyze this funnel.

The dataset is about 300 to 400 million rows, and 30 to 40 gigabytes. We wanted to make this data available, not just to our internal users, but all external users. Therefore, we set ourselves a goal of a five-second response time. No query on this dataset should run for more than five seconds — and Vertica and Hadoop gave us a solution for this.
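
To make the funnel idea concrete, here is a minimal, self-contained Python sketch of the kind of stage-by-stage conversion count Fehrmann describes. The event names and the in-memory records are hypothetical stand-ins; in Snagajob’s actual pipeline the events arrive via Flume into Hadoop and the funnel queries run against Vertica at far larger scale.

```python
from collections import defaultdict

# Hypothetical event records: (visitor_id, event_type).
# Stand-ins for the website events described in the interview.
events = [
    ("v1", "search"), ("v1", "posting_click"), ("v1", "application"),
    ("v2", "search"), ("v2", "posting_click"),
    ("v3", "search"),
]

# Funnel stages in order: search -> click a posting -> apply -> view employer.
FUNNEL = ["search", "posting_click", "application", "employer_view"]

def funnel_counts(events, stages):
    """Count distinct visitors who reached each successive funnel stage."""
    reached = defaultdict(set)            # stage -> set of visitor ids
    for visitor, event_type in events:
        if event_type in stages:
            reached[event_type].add(visitor)

    counts = []
    survivors = None                      # visitors still in the funnel so far
    for stage in stages:
        stage_visitors = reached[stage]
        survivors = stage_visitors if survivors is None else survivors & stage_visitors
        counts.append((stage, len(survivors)))
    return counts

for stage, n in funnel_counts(events, FUNNEL):
    print(f"{stage:15s} {n}")
```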

Gardner: How have you been able to improve performance and meet your key performance indicators (KPIs) and service-level agreements (SLAs)? How has this benefited you?

Fehrmann: Another application that we were able to implement is a recommendation engine. A recommendation engine addresses the case where jobseekers who apply for a specific job may not know about all the other jobs that are very similar to that job, or that other people have also applied to.

We started analyzing the search results that we were getting and implemented a recommendation engine. Sometimes it’s very difficult to have real comparison between before and after. Here, we were able to see that we got an 11 percent increase in application flow. Application flow is how many applications a customer is getting from us. By implementing this recommendation engine, we saw an immediate 11 percent increase in application flow, one of our key metrics.
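
Fehrmann doesn’t detail how the recommendation engine works, but a common, simple approach for this kind of “people who applied to this job also applied to” feature is item-to-item co-occurrence counting. The sketch below illustrates the idea with hypothetical data; it is not Snagajob’s implementation.

```python
from collections import Counter, defaultdict
from itertools import combinations

# Hypothetical application history: job seeker -> set of job ids applied to.
applications = {
    "seeker_a": {"job_1", "job_2", "job_3"},
    "seeker_b": {"job_1", "job_2"},
    "seeker_c": {"job_2", "job_3"},
}

# Count how often each pair of jobs is applied to by the same seeker.
co_applied = defaultdict(Counter)
for jobs in applications.values():
    for job_x, job_y in combinations(sorted(jobs), 2):
        co_applied[job_x][job_y] += 1
        co_applied[job_y][job_x] += 1

def recommend(job_id, top_n=3):
    """Return jobs most often co-applied with the given job."""
    return [job for job, _ in co_applied[job_id].most_common(top_n)]

print(recommend("job_1"))   # e.g. ['job_2', 'job_3']
```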

Gardner: So you took the success from your big-data implementation and analysis capabilities from this performance task to some other areas. Are there other business areas, search yield, for example, where you can apply this to get other benefits?

Brand-new applications

Fehrmann: When we started, we had the idea that we were looking for a solution for migrating our existing environment to a better-performing new environment. But what we’ve seen is that most of the applications we’ve developed so far are brand-new applications that we hadn’t been able to build before.

You mentioned search yield. Search yield is a very interesting aspect. It’s a massive dataset: about 2.5 billion rows and about 100 gigabytes of data as of right now, and it’s continuously increasing. So for all of the applications, as well as all of the search requests that we have collected since we started this environment, we’re able to analyze the search yield.

For example, that’s how many applications we get for a specific search keyword in real time. By real time, I mean that somebody can run a query against this massive dataset and get results in a couple of seconds. We can analyze specific jobs in specific areas, or specific keywords searched in a specific time period or in a specific location of the country.

Gardner: And once again, now that you’ve been able to do something you couldn’t do before, what have been the results? How has that changed your business? [Register for the upcoming HP Big Data Conference in Boston on Aug. 10-13.]

Fehrmann: It really allows our salespeople to provide great information during the prospecting phase. If we’re prospecting with a new client, we can tell them very specifically that if they’re in this industry, in this area, they can expect an application flow, depending on how big the company is, of, let’s say, a hundred applications per day.

Gardner: How has this been a benefit to your end users, those people seeking jobs and those people seeking to fill jobs?

Fehrmann: There are certainly some jobs that people are more interested in than others. On the flip side, if a particular job gets 100 or 500 applications, it’s just a fact that only a small number are going to get that particular job. Now if you apply for a job that isn’t as interesting, you have a much, much higher probability of getting the job.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy. Sponsor: HP.

Beyond look and feel: The new role that total user experience plays in business apps

The next BriefingsDirect business innovation thought leadership discussion focuses on the heightened role and impact of total user experience improvements for both online and mobile applications and services.

We’ll explore how user expectations and rethinking of business productivity are having a profound impact on how business applications are used, designed, and leveraged to help buyers, sellers, and employees do their jobs better.

We’ll learn about the advantages of new advances in bringing instant collaboration, actionable analytics, and contextual support capabilities into the application interface to create a total user experience.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy. See a demo.

To examine why applications must have more than a pretty face to improve the modern user experience, we’re joined by Chris Haydon, Senior Vice President of Solutions Management for Procurement, Finance and Network at Ariba, an SAP company. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Chris, what sort of confluence of factors has come together to make this concept of user experience so important, so powerful? What has changed, and why must we think more about experience than interface?

Haydon: Dana, it’s a great question. There is the movement of hyper-collaboration, and things are moving faster and faster than ever before.

We’re seeing major shifts in how users view themselves and how they want to interact with their applications, their business applications in particular, and, more and more, they’re drawing parallels from their consumer lives and bringing that simple consumer-based experience into their daily work. Those are some of the mega trends.

Then, as we step down a little bit, within that is obviously this collaboration aspect and how people prefer to collaborate online at work more than they did through traditional mechanisms, certainly phone or fax.

Then, there’s mobility. If someone doesn’t really have a smartphone in this day and age, certainly they’re behind the eight ball.

Last but not least, there’s the changing demographic of our workforce. In 2015, there are some stats out there that showed that millennials will become the single largest percentage of the workforce.

All of these macro trends and figures are going into how we need to think about our total user experience in our applications.

Something more?

Gardner: For those of us who have been using applications for years and years and have sort of bopped around — whether we’re on a mobile device or a PC — from application to application, are we just integrating apps so that we don’t have to change apps, or is it something more? Is this a whole greater than the existing sum of the parts?

Haydon: It’s certainly something more. It’s more the one plus one equals three concept here. The intersection of the connectivity powered by business networks, as well as the utility of mobile devices not tied to your desktop can fundamentally change the way people think about their interactions and think about the business processes and how they think about the work that needs to be done throughout the course of their work environment. That really is the difference.

This is not just about tweaking things up as you open a user interface. This is really about thinking about persona-based interactions in the context of mobility and network-oriented, network-centric collaboration.

Gardner: When we think about collaboration, traditionally that’s been among people, but it seems to me that this heightened total user experience means we’re actually collaborating increasingly with services. They could be services that recognize that we’re at a particular juncture in a business process. They could be services that recognize that we might need help or support in a situation where we’ve run out of runway and don’t know what to do, or even instances where intelligence and analytics are being brought to us as we need it, rather than our calling out to it.

Tell me about this expanding definition of collaboration. Am I right that we’re collaborating with more than just other people here?

Haydon: That’s right. It’s putting information in the context of the business process right at the point of demand. Whether that’s predictive intelligence or third-party data, the best user interfaces and the best total user experiences bring that context to the end user, managing that complexity but contextualizing it to bring it to their attention as they work through it.

So whether that’s a budget check and whether there is some gaming on budget, it’s saying, “You’re under budget; that’s great.” That’s an internal metric. Maybe in the future, you start thinking about  how others are performing in other segments of the business. If you want to take it even further, how are other potential suppliers doing on their response rate to their customers?

There is a whole new dimension on giving people contextualized information at the point where they need to make a decision, or even recommending the type of decisions they need to make. It could be from third-party sources that can come from a business network outside your firewall, or from smarter analysis and predictive analysis, from the transactions that are happening within the four walls of your firewall, or in your current application or your other business applications.

Gardner: It seems pretty clear that this is the way things are going. The logic behind why the user experience has expanded in its power and its utility makes perfect sense. I’m really enthused about this notion of contextual intelligence being brought to a business process, but it’s more than just a vision here.

Pulling this off must be quite difficult. I know that many people have been thinking about doing this, but there just isn’t that much of it actually going on yet. So we’re at the vanguard.

What are the problems? What are the challenges that it takes to pull this off to make it happen? It seems to me there are a lot back-end services, and while we focus on the user experience and user interface, we’re really talking about sophisticated technology in the data center providing these services.

Cloud as enabler

Haydon: There are a couple of enablers to this. I think the number one enabler here is cloud versus on-premise. When you can see the behavior in real time in a community aspect, you can actually build infrastructure services around that. In traditional on-premise models, when that’s locked in, all that burden is actually being pushed back to the corporate IT to be able to do that.

The second point is when you’re in the cloud and you think about applications that are network-aware, you’re able to bring in third-party, validated, trusted information to help make that difference. So there are those challenges.

I also think that it comes down to technology, but the focus of technology is moving to building applications for the end user. When you start thinking about the interactions with the end user and the focus on them, it really drives you to think about how you give them that different contextualized information.

You can have that level of granularity in saying, “I’m logging on as an invoice-processing assistant,” or “I’m logging on as just a casual ad-hoc requisitioner.” When the system knows you have done that, it’s actually able to be smart and pick up and contextualize that. That’s where we really see the future and the vision of how this is all coming together.

Gardner: When we spoke a while back, we noted that the traditional way people got productivity was by switching manually from application to application, whether an on-premise application or a software-as-a-service (SaaS) based application. If they lose the benefit of a common back-end intelligence capability, of network services that are aware, or of coordinated identity management, we still don’t get that total user experience, even though the cloud is an important factor here and SaaS is a big part of it.

What brings together the best of cloud, but also the best of that coordinated, integrated total experience when you know the user and all of their policies and information can be brought to bear on this experience and productivity demand?

Haydon: There are a couple of ways of doing that. You could talk here about the concept of hybrid cloud. The reality is that in most companies for the foreseeable future there will be some degree of on-premise applications that continue to drive businesses, and then there will be side-by-side cloud businesses.

So it’s the job of leading practice technology providers, SaaS and on-premise providers, to enable that to happen. There definitely is this notion of having a very robust platform that underpins the cloud product and can be seamlessly integrated to the on-premise product.

Again, from a technology and a trend perspective, that’s where it’s going. So if the provider doesn’t have a solid platform approach to be able to link the disparate cloud services to disparate on-premise solutions, then you can’t give that full context to the end user.

One thing too is thinking about the user interface. The user interface manages that complexity for the end user. The end user really shouldn’t need to know the mode of deployment, nor should they need to know where things are running. That’s what the new leading user interfaces and the total experience are about: guiding you through your workflow, or the work that needs to be done, irrespective of the deployment location of that service.

Ariba roadmap

Gardner: Chris, we spoke last at Ariba Live, the user conference back in the springtime, and you were describing the roadmap for Ariba and other applications coming through 2015 into 2016.

What’s been happening recently? This week, I think, you’ve gone to general availability (July 2015) with some of these services. Maybe you could quickly describe that. Are we talking about the on-premise apps, the SaaS apps, the mobile apps, all of the above? What’s happening?

Haydon: We’re really excited about that. For our current releases that came out this week (see a demo), we launched our total user experience approach: working anywhere, embracing the most modern user design interactions in our user interface and in mobility, and, within that, enabling our end users to learn the processes in context. All of this has been launched in Ariba within the last 14 days.

Specifically, it’s about embracing modern user design principles. We have a great design principle here within SAP called Fiori. So we’ve taken that design principle and brought it into the world of procurement, layered on top of our leading-practice capabilities today, and we’re bringing this new, updated user experience design.

But we haven’t stopped there. We’re embracing, as you mentioned, this mobility aspect and how we can design new interactions between our common user interface on mobile and our common user interface on our cloud deployment as one. That’s a given, but what we are doing differently here is embracing the power and the capability of mobile devices with cloud and the work that needs to be done.

One example of that is our process continuity feature, where you can look at your mobile application and see some activities that you might want to track later on. You can click or pin that activity on your mobile device, and when you come to your desktop to do some work, that pinned activity is visible for you to keep tracking and get your job done.

Similarly, if you’re on the go to a meeting, you’re able to push some reports down to your mobile tablet or your smartphone to look at and review that work on the go.

We’re really looking at that full, total user experience, whether you’re on the desktop or whether you are on the go on your mobile device, all underpinned by a common user design imperative based upon Fiori.

Gardner: Just to be clear, we’re talking about not only this capability across those network services for on-prem, cloud, and mobile, but we’re taking this across more than a handful of apps. Tell us a bit about how these Ariba applications and the Ariba Network also involve other travel and expense capabilities. What other apps are involved in terms of line-of-business platform that SAP is providing?

Leading practice

Haydon: From a procurement perspective, obviously we have Ariba’s leading practice procurement. As context, we have another fantastic solution for contingent labor, statement-of-work labor and other services, and that’s called Fieldglass. We’ve been working closely with the Fieldglass team to ensure that our user interface that we are rolling out on our Ariba procurement applications is consistent with Fieldglass, and it’s based again on the Fiori style of design construct.

We’re moving toward a point where an end user, whether they want to enter detailed time sheets or service entries, or do requisitioning for materials and inventory on the Ariba side, finds a seamless experience.

We’re progressively moving toward that same style of construct for the Concur applications for our travel and expense, and even the larger SAP cloud and S/4HANA approaches as well.

Gardner: You mentioned SAP HANA. Tell us how we’re not only dealing with this user experience across devices, work modes, and across application types, but now we have a core platform approach that allows for those analytics to leverage and exploit the data that’s available, depending on the type of applications any specific organization is using.

It strikes me that we have a possibility of a virtuous adoption cycle; that is to say, the more data used in conjunction with more apps begets more data, begets more insights, begets more productivity. How is HANA and analytics coming to bear on this?

Haydon: We’ve had HANA running on analytics on the Ariba side for more than 12 months now. The most important thing that we see with HANA is that it’s not about HANA in itself. It’s a wonderful technology, but what we are really seeing is that the customer interactions change because they’re actually able to do different and faster types of iterations.

To us, that’s the real power of what HANA gives us from a technology and platform aspect to build on. When you can have real time analytics across massive amounts of information put into the context of what an end user does, that to us is where the true business and customer and end-user benefit will come from leveraging the HANA technology.

So we have it running in our analytics stack, progressively moving that through the rest of our analytics on the Ariba platform. Quite honestly, the sky’s the limit as it relates to what that technology can enable us to do. The main focus though is how we give different business interactions, and HANA is just a great engine that enables us to do that.

Gardner: It’s a fascinating time if you’re a developer, because previously you had to go through a requirements process with the users. Using these analytics, you can measure and see what those users are actually doing and how they’re progressing and modernizing their processes, and then take that analytics capability back into the next iteration of the application.

So it’s interesting that we’re talking about total user experience. We could be talking about total developer experience, or even total IT operator experience when it comes to delivering security and compliance. Expand a little bit about how what you are doing on that user side actually benefits the entire life cycle of these applications.

Thinking company

Haydon: It’s really exciting. There are other great companies that do this, and SAP is really investing in this as well as Ariba, making sure we’re really a data-driven, real-time, thinking company.

And you’re right. In the simplest way, we’re rolling out our total user experience in the simplest model. We’re providing a toggle, meaning we’re enabling our end users to road test the user experience and then switch back. We don’t think anyone will want to switch back, but it’s great.

That’s the same type of experience that you experience in your personal life. When someone is trialing a new feature on an e-marketplace or in a consumer store, you’re able to try this experience and come back. What’s great about that is we’re getting real-time insight. We know which customers are doing this. We know which personas are doing this. We know how long they are doing this for their session time.

We’re able to bring that back to our developers, to our product managers, to our user design center experts, and just as importantly, back to our customers and also back to our partners, to be able to say, “Here is some information: users are doing these types of things, they are not on this page, and they have been looking for this type of information when they do a query or request.”

These types of information we’re feeding into our roadmap, but we are also feeding back into our customers so they understand how their employees are working with our applications. As we step forward, we’re exposing this in the right way to our partners to help them potentially build applications on top of what we already have on the Ariba platform.

Gardner: So obviously you can look at this at the general level of productivity, but now we can get specific with partners into verticals, geographies, and all the details that come along with business applications, company to company, region to region.

Let’s think about how this comes to market. You’ve announced the general availability in July 2015 on Ariba, and because this is SaaS, there are no forklifts, there are no downloads, no install, and no worries about configuration data. Tell us how this rolls out and how people can experience it if they’ve become intrigued about this concept of total user experience. How easy is it for them to then now start taking part in it?

Haydon: First and foremost, and it’s important, our customers entrust their business processes to us, and so it’s about zero business disruption, and no downtime is our number one goal.

When we rolled out our global network release to 1.8 million suppliers two weeks ago (see a demo), we had zero downtime on the world’s largest business network. Similarly, as we rolled out our total user experience, zero downtime as well. So that’s the first thing. The number one thing is about business continuity.

The second thing really is a concept that we think about. It’s called agile adoption. This is again how we let end users and companies of end users adopt our solutions.

Educating customers

We have done an awful lot of work, before go live, on educating our customers, providing frequently asked questions, where required, training materials and updates, all those types of support aspects. But we really believe our work starts day plus one, not day minus one.

How are we working with our customers after this is turned on? By monitoring, so we know exactly what they are doing, and by giving them proactive support and communications when we need to, when we see them either switching back or we see a particular distribution within a specific customer group or end-user group within their company. We’ll be actively monitoring them and pushing that forward.

That’s what we really think it’s about. We’re taking this end user customer-centric view to roll out our applications, but letting our own customers find their own pathways.

Organic path

Gardner: If users want to go more mobile in how they do their business processes, want to get those reports and analytics delivered to them in the context of their activity, is there an organic path for them or do they have to wait for their IT department?

What do you recommend for people that maybe don’t even have Ariba in their organization? What are some steps they can take to either learn more or from a grassroots perspective encourage adoption of this business revolution really around total user experience emphasis?

Haydon: We have plenty of material from an Ariba perspective, not just about our solutions, but exactly what you’re mentioning, Dana, about what is going on there. My first recommendation to everyone would be to educate yourselves and have a look at your business — how many millennials are in your business, what are the new working paradigms that need to happen from a mobile approach — and go and embrace it.

The second lesson is that if businesses think that this is not already happening outside of the control of their IT departments, they’re probably mistaken. These things are already going on. So I think those are the kind of macro things to go and have a look at.

But, of course, we have a lot more information about Ariba’s total user experience thinking on thought leadership and then how we go about and implement that in our solutions for our customers, and I would just encourage anyone to go and have a look at ariba.com. You’ll be able to see more about our total user experience, and like I said, some of the leading practice thoughts that we have about implementations (see a demo).

Gardner: I’d also encourage people to listen to or read the conversation you and I had just a month or two ago about the roadmap. There’s an awful lot that you’re working on that people will be able to exploit further for the total user experience.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy. See a demo. Sponsor: Ariba, an SAP company.

Zynga builds big data innovation culture by making analytics open to all developers

The next BriefingsDirect analytics innovation case study interview explores how Zynga in San Francisco exploits big-data analytics to improve its business via a culture of pervasive, rapid analytics and experimentation.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy.

To learn more about how big data impacts Zynga in the fast-changing and highly competitive mobile gaming industry, BriefingsDirect sat down with Joanne Ho, Senior Engineering Manager at Zynga, and Yuko Yamazaki, Head of Analytics at Zynga. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: How important is big data analytics to you as an organization?

Ho: To Zynga, big data is very important. It’s a main piece of the company, and as part of the analytics department, big data serves the entire company as a source for understanding our users’ behavior: our players, what they like, and what they don’t like about our games. We use this data to analyze user behavior, and we also personalize a lot of different game models to fit each user’s play pattern.

Gardner: What’s interesting to me about games is that people will not only download them, but that they’re upgradable and changeable. People can easily move. So the feedback loop between the inferences, information, and analysis you gain from your users’ actions is rather compressed, compared to many other industries.

What is it that you’re able to do in this rapid-fire development-and-release process? How is that responsiveness important to you?

Real-time analysis

Ho: Real-time analysis, of course, is critical, and we have our streaming system that can do it. We have our monitoring and alerting system that can alert us whenever we see any drops in the user install rate, or in daily active users (DAU). The game studio will be alerted and will take appropriate action on that.

Gardner: Yuko, what sort of datasets are we talking about? When we get into the social realm, we can get some very large datasets. What’s the volume and scale we’re talking about here?

Yamazaki: We get data on everything that happens in our games. Almost every single play gets tracked into our system. We’re talking about 40 billion to 60 billion rows a day, and that’s the data our game product managers and development engineers have decided they want to analyze later. So it’s already being structured and compressed as it comes into our database.

Gardner: That’s very impressive scale. It’s one thing to have a lot of data, but it’s another to be able to make that actionable. What do you do once that data is assembled?

Yamazaki: The biggest success story that I normally tell about Zynga is that we make data available to all employees. From day one, as soon as you join Zynga, you get to see all the data through whatever visualization we have. Even if you’re a FarmVille product manager, you get to see what Poker is doing, making it more transparent. There is an account report that you can just click to see how many people have done a particular game action, for example. That’s how we were able to create this data-driven culture at Zynga.

Gardner: And Zynga is not all that old. Is this data capability something that you’ve had right from the start, or did you come into it over time?

Yamazaki: Since we began Poker and Words With Friends, our cluster scaled 70 times.

Ho: It started off with three nodes, and we’ve grown to 230-node clusters.

Gardner: So you’re performing the gathering of the data and analysis in your own data centers?

Yamazaki: Yes.

Gardner: When you realized the scale and the nature of your task, what were some of the top requirements you had for your cluster, your database, and your analytics engine? How did you make some technology choices?

Biggest points

Yamazaki: When Zynga was growing, our main focus was to build something that was going to be able to scale and provide the data as fast as possible. Those were the two biggest points that we had in mind when we decided to create our analytics infrastructure.

Gardner: And any other more detailed requirements in terms of the type of database or the type of analytics engine?

Yamazaki: Those are the two big ones. As I mentioned, we wanted everyone to be able to access the data, so SQL was a great technology to have; it’s much easier to train PMs on than engineering-oriented approaches such as MapReduce for Hadoop. Those were the three key points as we selected our database.

Gardner: What are the future directions and requirements that you have? Are there things that you’d like to see from HP, for example, in order to continue to be able do what you do at increasing scale?

Ho: We’re interested in real-time analytics. There’s a feature called aggregate projections that we’re interested in trying. Flex Tables [in HP Vertica] also sound like a very interesting feature that we will attempt to try. And cloud analytics is the third thing we’re interested in. We hope HP matures it, so that we can test it out in the future.

Gardner: Your analytics capability has been with you right from the start, and you were early in using Vertica?

Ho: Yes.

Gardner: Now that we’ve established how important it is, do you have any metrics for what this is able to do for you? Other organizations might say they don’t have as much of a data-driven culture as Zynga but would like to, and they realize that the technology can now ramp up to such incredible volume and velocity. What do you get back? How do you measure success when you do big-data analytics correctly?

Yamazaki: Internally, we look at adoption of systems. We have 2,000 employees, and at least 1,000 are using our visualization tool on a daily basis. This is the way we measure adoption of our systems internally.

Externally, the biggest metric is retention. Are players coming back and, if so, was that through the data that we collect? Were we able to do personalization so that they’re coming back because of the experience they’ve had?

Gardner: These are very important to your business, obviously, and that buy-in is interesting. As the saying goes, you can lead a horse to water, but you can’t make him drink. You can provide data analysis and visualization to the employees, but if they don’t find it useful and impactful, they won’t use it. So it’s interesting to have that as a key performance indicator.

Any words of advice for other organizations who are trying to become more data-driven, to use analytics more strategically? Is this about people, process, culture, technology, all the above? What advice might you have for those seeking to better avail themselves of big data analytics?

Visualization

Yamazaki: A couple of things. One is to provide end-to-end: not just data storage, but also visualization. We also have an experimentation system, where I think we have about 400-600 experiments running as we speak. We have a report that shows that you ran this experiment, these metrics moved because of your experiment, and A is better than B.

We run this other experiment, and there’s a visualization you can use to see that data. So providing that end-to-end data and analytics to all employees is one of the biggest pieces of advice I would provide to any companies.
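
Yamazaki doesn’t describe the statistics behind the “A is better than B” report, but a standard way to make that call for a conversion-style metric is a two-proportion z-test. Here is a minimal sketch with made-up numbers; it is not Zynga’s actual experimentation system.

```python
from math import sqrt, erf

def z_test_two_proportions(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                 # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))   # standard error
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))    # two-sided normal tail
    return p_b - p_a, z, p_value

# Made-up experiment: variant B converts 5.5% of 40,000 users vs. 5.0% for A.
lift, z, p = z_test_two_proportions(conv_a=2000, n_a=40000, conv_b=2200, n_b=40000)
print(f"lift={lift:.4f}  z={z:.2f}  p={p:.4f}  significant={p < 0.05}")
```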

One more thing is to try to get one good win. If you focus too much on technology or scalability, you might be building a battleship when you actually don’t need it yet. Incremental improvement is probably going to take you to the place you need to get to. Just try to get a good, big win of increasing installs or active users in one particular game or product and see where it goes.

Gardner: And just to revisit the idea that you’ve got so many employees and so many innovations going on, how do you encourage your employees to interact with the data? Do you give them total flexibility in terms of experiments? How do they start the process of some of those proof-of-concept type of activities?

Yamazaki: It’s all freestyle. They can log whatever they want. They can see whatever they want, except revenue-type data, and they can create any experiments they want. Joanne’s team owns this part, but we also make the data available. Some of the games can hit real time. We can do that real-time personalization using the data that you logged. It’s almost 360-degree data availability for our product teams.

Gardner: It’s really impressive that there’s so much of this data mentality ingrained in the company, from the start and also across all the employees, so that’s very interesting. How do you see that in terms of your competitive edge? Do you think the other gaming companies are doing the same thing? Do you have an advantage that you’ve created a data culture?

Yamazaki: Definitely. In online gaming you have to have big data to succeed. A lot of companies, though, are just getting whatever they can, then structuring it and making it analyzable. One of the things that we’ve done well was to impose structure to start with. So the data is already structured.

Product managers are already thinking about what they want to analyze beforehand. It’s not like they just get everything in and then see what happens. They think right away, “Is this analyzable? Is this something we want to store?” We’re a lot smarter about what we want to store. Cost-wise, it’s a lot more optimized.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy. Sponsor: HP.

A tale of two IT departments, or how cloud governance proves essential in the Bimodal IT era

Welcome to a special BriefingsDirect panel discussion in conjunction with The Open Group’s upcoming conference on July 20, 2015 in Baltimore. Our panel of experts examines how cloud governance and enterprise architecture can prove essential in the Bimodal IT era, a period of increasingly fragmented IT.

Not only are IT organizations dealing with so-called shadow IT and myriad proof-of-concept affairs, there is now a strong rationale for fostering what Gartner calls Bimodal IT. There’s a strong case to be made for exploiting the strengths of several different flavors of IT, except that — at the same time — businesses are asking IT in total to be faster, better, and cheaper.

The topic before us then is how to allow for the benefits of Bimodal IT — or even multimodal IT — but without IT fragmentation leading to fractured and even broken businesses.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy.

Here to update us on the work of The Open Group Cloud Governance initiatives and working groups and to further explore the ways that companies can better manage and thrive with hybrid IT are our guests:

  • Dr. Chris Harding, Director for Interoperability and Cloud Computing Forum Director at The Open Group.
  • David Janson, Executive IT Architect and Business Solutions Professional with the IBM Industry Solutions Team for Central and Eastern Europe and a leading contributor to The Open Group Cloud Governance Project.
  • Nadhan, HP Distinguished Technologist and Cloud Adviser and Co-Chairman of The Open Group Cloud Governance Project.

The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Before we get into an update on The Open Group Cloud Governance initiatives, in many ways over the past decades IT has always been somewhat fragmented. Very few companies have been able to keep all their IT oars rowing in the same direction, if you will. But today things seem to be changing so rapidly that some degree of disparate IT methods are necessary. We might even think of old IT and new IT, and this may even be desirable.

But what are the trends that are driving this need for a multimodal IT? What’s accelerating the need for different types of IT, and how can we think about retaining a common governance, and even a frameworks-driven enterprise architecture umbrella, over these IT elements?

Nadhan: Basically, the change that we’re going through is really driven by the business. Business today has much more rapid access to the services that IT has traditionally provided. Business has a need to react to its own customers in a much more agile manner than they were traditionally used to.

We now have to react to demands where we’re talking days and weeks instead of months and years. Businesses today have a choice. Business units are no longer dependent on the traditional IT to avail themselves of the services provided. Instead, they can go out and use the services that are available external to the enterprise.

To a great extent, the advent of social media has also resulted in direct customer feedback on the sentiment from the external customer that businesses need to react to. That is actually changing the timelines. It is requiring IT to be delivered at the pace of business. And the very definition of IT is undergoing a change, where we need to have the right paradigm, the right technology, and the right solution for the right business function and therefore the right application.

Since the choices have increased with the new style of IT, the manner in which you pair them up, the solutions with the problems, also has significantly changed. With more choices, come more such pairs on which solution is right for which problem. That’s really what has caused the change that we’re going through.

A change of this magnitude requires governance that builds on the traditional governance that was always in play, and requires elements like cloud to have governance that is more specific to cloud solutions, across the whole lifecycle of cloud solution deployment.

Gardner: David, do you agree that this seems to be a natural evolution, based on business requirements, that we basically spin out different types of IT within the same organization to address some of these issues around agility? Or is this perhaps a bad thing, something that’s unnatural and should be avoided?

Janson: In many ways, this follows a repeating pattern we’ve seen with other kinds of transformations in business and IT. Not to diminish the specifics about what we’re looking at today, but I think there are some repeating patterns here.

There are new disruptive events that compete with the status quo. Those things that have been optimized, proven, and settled into sort of a consistent groove can compete with each other. Excitement about the new value that can be produced by new approaches generates momentum, and so far this actually sounds like a healthy state of vitality.

Good governance

However, one of the challenges is that the excitement potentially can lead to overlooking other important factors, and that’s where I think good governance practices can help.

For example, governance helps remind people about important durable principles that should be guiding their decisions, important considerations that we don’t want to forget or under-appreciate as we roll through stages of change and transformation.

Attend The Open Group Baltimore 2015
July 20-23, 2015
Register Here

At the same time, governance practices need to evolve so that they can adapt to new things that fit into the governance framework. What are those things and how do we govern them? So governance needs to evolve at the same time.

There is a pattern here with some specific things that are new today, but there is a repeating pattern as well, something we can learn from.

Gardner: Chris Harding, is there a built-in capability with cloud governance that anticipates some of these issues around different styles or flavors or even velocity of IT innovation that can then allow for that innovation and experimentation, but then keep it all under the same umbrella with a common management and visibility?

Harding: There are a number of forces at play here, and there are three separate trends that we’ve seen, or at least that I have observed, in discussions with members within The Open Group that relate to this.

Harding

The first is one that Nadhan mentioned, the possibility of outsourcing IT. I remember a member’s meeting a few years ago, when one of our members who worked for a company that was starting a cloud brokerage activity happened to mention that two major clients were going to do away with their IT departments completely and just go for cloud brokerage. You could see the jaws drop around the table, particularly with the representatives who were from company corporate IT departments.

Of course, cloud brokers haven’t taken over from corporate IT, but there has been that trend toward things moving out of the enterprise to bring in IT services from elsewhere.

That’s all very well, but from a governance perspective, while you may have an easy life if you outsource all of your IT to a broker somewhere, if you fail to comply with regulations, the broker won’t go to jail; you will go to jail.

So you need to make sure that you retain control at the governance level over what is happening from the point of view of compliance. You probably also want to make sure that your architecture principles are followed and retain governance control to enable that to happen. That’s the first trend and the governance implication of it.

In response to that, a second trend that we see is that IT departments have reacted often by becoming quite like brokers themselves — providing services, maybe providing hybrid cloud services or private cloud services within the enterprise, or maybe sourcing cloud services from outside. So that’s a way that IT has moved in the past and maybe still is moving.

Third trend

The third trend that we’re seeing in some cases is that multi-discipline teams within line of business divisions, including both business people and technical people, address the business problems. This is the way that some companies are addressing the need to be on top of the technology in order to innovate at a business level. That is an interesting and, I think, a very healthy development.

So maybe, yes, we are seeing a bimodal splitting in IT between the traditional IT and the more flexible and agile IT, but maybe you could say that that second part belongs really in the line of business departments — rather than in the IT departments. That’s at least how I see it.

Nadhan: I’d like to build on a point that David made earlier about repeating patterns. I can relate to that very well within The Open Group, speaking about the Cloud Governance Project. Truth be told, as we continue to evolve the content in cloud governance, some of the seeding content actually came from the SOA Governance Project that The Open Group worked on a few years back. So the point David made about the repeating patterns resonates very well with that particular case in mind.

Gardner: So we’ve been through this before. When there is change and disruption, sometimes it’s required for a new version of methodologies and best practices to emerge, perhaps even associated with specific technologies. Then, over time, we see that folded back in to IT in general, or maybe it’s pushed back out into the business, as Chris alluded to.

My question, though, is how we make sure that these don’t become disruptive and negative influences over time. Maybe governance and enterprise architecture principles can prevent that. So is there something about the cloud governance, which I think really anticipates a hybrid model, particularly a cloud hybrid model, that would be germane and appropriate for a hybrid IT environment?

David Janson, is there a cloud governance benefit in managing hybrid IT?

Janson: There most definitely is. I tend to think that hybrid IT is probably where we’re headed; my editorial comment is that it’s an unavoidable direction we’re going in. Part of the reason I say that is that there’s a repeating pattern here of new approaches, new ways of doing things, coming into the picture.

Janson

And then a balancing act goes on, where people look at more traditional ways versus the new approaches people are talking about, and eventually they look at the strengths and weaknesses of both.

There’s going to be some disruption, but that’s not necessarily bad. That’s how we drive change and transformation. What we’re really talking about is making sure the amount of disruption is not so counterproductive that it actually moves things backward instead of forward.

I don’t mind a little bit of disruption. The governance processes that we’re talking about, good governance practices, have an overall life cycle that things move through. If there is a way to apply governance, as you work through that life cycle, at each point, you’re looking at the particular decision points and actions that are going to happen, and make sure that those decisions and actions are well-informed.

We sometimes say that governance helps us do the right things right. So governance helps people know what the right things are, and then the right way to do those things.

Bimodal IT

Also, we can measure how well people are actually adapting to those “right things” to do. What’s “right” can vary over time, because we have disruptive change. What we’re talking about with Bimodal IT is one example.

Within a narrower time frame in the process lifecycle, there are points across that time frame that involve particular decisions and actions. Governance makes sure that, as people roll through them, they are well informed about important things they shouldn’t forget. It’s very easy to forget key things and optimize for only one factor, and governance helps people remember that.

Also, governance means checking to see whether we’re getting the benefits that people expected, coming back around afterward to see whether we accomplished what we thought we would or got off in the wrong direction. So it’s a bit like a steering or feedback mechanism, in that it helps keep the car on the road rather than drifting onto the soft shoulder. Did we overlook something important? Governance is key to making this all successful.

Gardner: Let’s return to The Open Group’s upcoming conference on July 20 in Baltimore and also learn a bit more about what the Cloud Governance Project has been up to. I think that will help us better understand how cloud governance relates to these hybrid IT issues that we’ve been discussing.

Nadhan, you are the co-chairman of the Cloud Governance Project. Tell us about what to expect in Baltimore with the concepts of Boundaryless Information Flow, and then also perhaps an update on what the Cloud Governance Project has been up to.

Attend The Open Group Baltimore 2015
July 20-23, 2015
Register Here

Nadhan: When the Cloud Governance Project started, the first question we challenged ourselves with was, what is it and why do we need it, especially given that SOA governance, architecture governance, IT governance, enterprise governance, in general are all out there with frameworks. We actually detailed out the landscape with different standards and then identified the niche or the domain that cloud governance addresses.

After that, we went through and identified the top five principles that matter for cloud governance to be done right. One of the obvious ones is that cloud is a business decision, and the governance exercise should keep in mind whether going to the cloud is the right business decision, rather than just jumping on the bandwagon. Those are just some examples of the foundational principles that drive how cloud governance must be established and exercised.

Subsequent to that, we have a lifecycle for cloud governance defined and then we have gone through the process of detailing it out by identifying and decoupling the governance process and the process that is actually governed.

So there is this concept of process pairs that we have going, where we’ve identified key processes, key process pairs, whether it be the planning, the architecture, reusing cloud service, subscribing to it, unsubscribing, retiring, and so on. These are some of the defining milestones in the life cycle.

We’ve actually put together a template for identifying and detailing these process pairs, and the template has an outline of the process that is being governed, the key phases that the governance goes through, the desirable business outcomes that we would expect because of the cloud governance, as well as the associated metrics and the key roles.
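To make the shape of that template concrete, here is a minimal sketch of one process pair expressed as structured data. The field names simply mirror the elements Nadhan lists (the governed process, governance phases, business outcomes, metrics, and roles); the example values are hypothetical illustrations, not content from the actual Cloud Governance Framework.

```python
# A minimal, hypothetical sketch of a cloud-governance "process pair" record.
# Field names follow the template elements described above; the values are
# invented for illustration and are not taken from the Cloud Governance
# Framework itself.

from dataclasses import dataclass

@dataclass
class ProcessPair:
    governed_process: str          # the cloud lifecycle process being governed
    governance_phases: list[str]   # key phases the governance activity goes through
    business_outcomes: list[str]   # desirable outcomes expected from governing it
    metrics: list[str]             # how success of the governance is measured
    key_roles: list[str]           # who participates in the governance

subscribe_pair = ProcessPair(
    governed_process="Subscribing to a cloud service",
    governance_phases=["evaluate", "approve", "monitor", "review"],
    business_outcomes=[
        "subscription aligned with the business case",
        "service meets compliance obligations",
    ],
    metrics=["time to approval", "percentage of compliant subscriptions"],
    key_roles=["business owner", "enterprise architect", "governance board"],
)
```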

Real-life solution

The Cloud Governance Framework is actually detailing each one. Where we are right now is looking at a real-life solution. The hypothetical could be an actual business scenario, but the idea is to help the reader digest the concepts outlined in the context of a scenario where such governance is exercised. That’s where we are on the Cloud Governance Project.

Let me take the opportunity to invite everyone to be part of the project to continue it by subscribing to the right mailing list for cloud governance within The Open Group.

Gardner: Just for the benefit of our readers and listeners who might not be that familiar with The Open Group, perhaps you could give us a very quick overview — its mission, its charter, what we could expect at the Baltimore conference, and why people should get involved, either directly by attending, or following it on social media or the other avenues that The Open Group provides on its website?

Harding: The Open Group is a vendor-neutral consortium whose vision is Boundaryless Information Flow. That is to say the idea that information should be available to people within an enterprise, or indeed within an ecosystem of enterprises, as and when needed, not locked away into silos.

We hold main conferences, quarterly conferences, four times a year and also regional conferences in various parts of the world in between those, and we discuss a variety of topics.

In fact, the main topics for the conference that we will be holding in July in Baltimore are enterprise architecture and risk and security. Architecture and security are two of the key things for which The Open Group is known. Enterprise architecture, particularly with its TOGAF Framework, is perhaps what The Open Group is best known for.

We’ve been active in a number of other areas, and risk and security is one. We also have started a new vertical activity on healthcare, and there will be a track on that at the Baltimore conference.

There will be tracks on other topics too, including four sessions on Open Platform 3.0. Open Platform 3.0 is The Open Group initiative to address how enterprises can gain value from new technologies, including cloud computing, social computing, mobile computing, big data analysis, and the Internet of Things.

We’ll have a number of presentations related to that. These will include, in fact, a perspective on cloud governance, although that will not necessarily reflect what is happening in the Cloud Governance Project. Until an Open Group standard is published, there is no official Open Group position on the topic, and members will present their views at conferences. So we’re including a presentation on that.

Lifecycle governance

There is also a presentation on another interesting governance topic, which is on Information Lifecycle Governance. We have a panel session on the business context for Open Platform 3.0 and a number of other presentations on particular topics, for example, relating to the new technologies that Open Platform 3.0 will help enterprises to use.

There’s always a lot going on at Open Group conferences, and that’s a brief flavor of what will happen at this one.

Gardner: Thank you. And I’d just add that there is more available at The Open Group website, opengroup.org.

Going to one thing you mentioned about a standard and publishing that standard, is there a roadmap that we could look to in order to anticipate the next steps or milestones in the Cloud Governance Project? When would such a standard emerge and when might we expect it?

Nadhan: As I said earlier, the next step is to identify the business scenario and apply it. I’m expecting, with the right level of participation, that it will take another quarter, after which it would go through the internal review with The Open Group and the company reviews for the publication of the standard. Assuming we have that in another quarter, Chris, could you please weigh in on what it usually takes, on average, for those reviews before it gets published?

Harding: You could add on another quarter. It shouldn’t actually take that long, but we do have a thorough review process. All members of The Open Group are invited to participate. The document is posted for comment for, I would think, four weeks, after which we review the comments and decide what action actually needs to be taken.

Certainly, it could take only two months to complete the overall publication of the standard from the draft being completed, but it’s safer to say about a quarter.

Gardner: So a real important working document could be available in the second half of 2015. Let’s now go back to why a cloud governance document and approach is important when we consider the implications of Bimodal or multimodal IT.

One of the things that Gartner says is that Bimodal IT projects require new project management styles. They didn’t say project management products. They didn’t say downloads or services from a cloud provider. We’re talking about styles.

So it seems to me that, in order to prevent the good aspects of Bimodal IT from being overridden by the negative impacts of chaos and a lack of coordination, we’re talking not about a product or a download, but about something that a working group and a standards approach like the Cloud Governance Project can accommodate.

David, why is it that you can’t buy this in a box or download it as a product? What is it that we need to look at in terms of governance across Bimodal IT, and why is a style the appropriate frame? Maybe the IT people need to think differently, rather than trying to accomplish this through technology alone?

First question

Janson: When I think of anything like a tool or a piece of software, the first question I tend to have is what is that helping me do, because the tool itself generally is not the be-all and end-all of this. What process is this going to help me carry out?

So, before I would think about tools, I want to step back and think about what are the changes to project-related processes that new approaches require. Then secondly, think about how can tools help me speed up, automate, or make those a little bit more reliable?

It’s easy to think of a tool that has some process-related aspects embedded in it as some kind of magic wand that’s going to automatically make everything work well, but it’s the processes that the tool enables that are really the important decision. The tools simply help to carry those out more effectively, more reliably, and more consistently.

We’ve always seen an evolution in the processes we use to develop solutions, as well as in the tools. Changing technology requires tools to adapt. As our processes get more agile, we want to be more incremental and see rapid turnarounds in how we’re developing things, and the tools need to evolve with that.

But I’d really start out from a governance standpoint, thinking about challenging the idea that if we’re going to make a change, how do we know that it’s really an appropriate one and asking some questions about how we differentiate this change from just reinventing the wheel. Is this an innovation that really makes a difference and isn’t just change for the sake of change?

Governance helps people challenge their thinking and make sure that it’s actually a worthwhile step to take to make those adaptations in project-related processes.

Once you’ve settled on some decisions about evolving those processes, then we’ll start looking for tools that help you automate, accelerate, and make consistent and more reliable what those processes are.

I tend to start with the process and think of the technology second, rather than the other way around. That’s where governance can help remind people of the principles we want to think about. Are you putting the cart before the horse? It helps people challenge their thinking a little bit to be sure they’re really going in the right direction.

Gardner: Of course, a lot of what you just mentioned pertains to enterprise architecture generally as well.

Nadhan, when we think about Bimodal or multimodal IT, this to me is going to be very variable from company to company, given their legacy, given their existing style, the rate of adoption of cloud or other software as a service (SaaS), agile, or DevOps types of methods. So this isn’t something that’s going to be a cookie-cutter. It really needs to be looked at company by company and timeline by timeline.

Is this a vehicle for professional services, for management consulting, more than for IT and product? What is the relationship between cloud governance, Bimodal IT, and professional services?

Delineating systems

Nadhan: It’s a great question, Dana. Let me characterize Bimodal IT slightly differently before answering the question. Another way to look at Bimodal IT, where we are today, is to delineate systems of record and systems of engagement.

In traditional IT, typically, we’re looking at the systems of record, while the systems of engagement are the live interactions with social media and so on. Those continuously evolving, growing-by-the-second systems of engagement result in the need for big data, security, and definitely the cloud.

The coexistence of both of these paradigms requires the right move to the cloud for the right reason. So even among the systems of record, some, if not most, do need to be transformed to the cloud, but that doesn’t mean all systems of engagement eventually get transformed to the cloud.

There are good reasons why you may actually want to leave certain systems of engagement the way they are. The art really is in combining the historical data that the systems of record have with the continual influx of data that we get through the live channels of social media, and then, using the right level of predictive analytics to get information.

I said a lot in there just to characterize the Bimodal IT slightly differently, making the point that what really is at play, Dana, is a new style of thinking. It’s a new style of addressing the problems that have been around for a while.

But it is a new way to address the same problems: new solutions and a new way of coming up with the solution models that address the business problems at hand. That requires an external perspective. It requires service providers and consulting professionals who have worked with multiple customers, perhaps other customers in the same industry or in other industries, and who bring a healthy dose of innovation.

That’s where there is a new opportunity for professional services to work with the CxOs, the enterprise architects, and the CIOs to exercise the right business decision with the right level of governance.

Because of the challenges with the coexistence of both systems of record and systems of engagement and harvesting the right information to make the right business decision, there is a significant opportunity for consulting services to be provided to enterprises today.

Drilling down

Gardner: Before we close off I wanted to just drill down on one thing, Nadhan, that you brought up, which is that ability to measure and know and then analyze and compare.

One of the things that we’ve seen with IT developing over the past several years as well is that the big data capabilities have been applied to all the information coming out of IT systems so that we can develop a steady state and understand those systems of record, how they are performing, and compare and contrast in ways that we couldn’t have before.

So on our last topic for today, David Janson, how important is that measuring capability in a governance context for organizations that want to pursue Bimodal IT but keep it governed and keep it from spinning out of control? What should they be thinking about putting in place in terms of the proper big data, analytics, measurement, and visibility apparatus and capabilities?

Janson: That’s a really good question. One aspect of this is that, when I talk with people about the ideas around governance, it’s not unusual that the first idea people have about governance is the compliance or policing aspect that governance can play. That sounds like interference, sand in the gears, but it really should be the other way around.

A governance framework should actually make it very clear how people should be doing things, what’s expected as the result at the end, and how things are checked and measured across time at early stages and later stages, so that people are very clear about how things are carried out and what they are expected to do. So, if someone does use a governance-compliance process to see if things are working right, there is no surprise, there is no slowdown. They actually know how to quickly move through that.

Good governance has communicated that well enough, so that people should actually move faster rather than slower. In other words, there should be no surprises.

Measuring things is very important, because if you haven’t established the objectives that you’re after and some metrics to help you determine whether you’re meeting those, then it’s kind of an empty suit, so to speak, with governance. You express some ideas that you want to achieve, but you have no way of knowing or answering the question of how we know if this is doing what we want to do. Metrics are very important around this.

We capture metrics within processes. Then, for the end result, is it actually producing the effects people want? That’s pretty important.

One of the things that we have built into the Cloud Governance Framework is some idea about what are the outcomes and the metrics that each of these process pairs should have in mind. It helps to answer the question, how do we know? How do we know if something is doing what we expect? That’s very, very essential.

Gardner: I am afraid we’ll have to leave it there. We’ve been examining the role of cloud governance and enterprise architecture and how they work together in the era of increasingly fragmented IT. And we’ve seen how The Open Group Cloud Governance Initiatives and Working Groups can help allow for the benefits of Bimodal IT without IT fragmentation necessarily leading to a fractured or broken business process around technology and innovation.

Attend The Open Group Baltimore 2015
July 20-23, 2015
Register Here

This special BriefingsDirect thought leadership panel discussion comes to you in conjunction with The Open Group’s upcoming conference on July 20, 2015 in Baltimore. And it’s not too late to register on The Open Group’s website or to follow the proceedings online and via social media such as Twitter and LinkedIn.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy. Sponsor: The Open Group.


Data-driven apps performance monitoring spurs broad business benefits for Swiss insurer and Turkish mobile carrier

The next BriefingsDirect applications performance management case study panel discussion explores how improved business service management and data access improvements at a Swiss insurance company and a Turkish mobile phone carrier lead to new business and IT benefits.

By more deeply examining how applications are performing via total performance monitoring and metrics, these enterprises slashed mean time to resolution and later significantly reduced the number of IT incidents.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy.

To learn how to build high confidence that IT disruptions can be managed, and even headed off in advance, BriefingsDirect sat down with Thomas Baumann, IT Performance Architect at Swiss Mobiliar in Bern, and Merve Duran, Management System Specialist at Avea in Istanbul. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Thomas, tell us about Swiss Mobiliar and what you’re doing to increase the performance of your applications.

Baumann: Swiss Mobiliar is Switzerland’s largest personal insurance company. It’s also the oldest insurance company, founded in 1826, and every third Swiss household is insured at Mobiliar. We are number one in household contents insurance.

Track your app’s user experience with Fundex
Download the infographic
Fix poor performing mobile apps

My role at Swiss Mobiliar is Minister of Performance. I’m responsible for optimizing all the applications, infrastructure, etc.

Gardner: Tell us about the challenges that you’ve had in terms of keeping all of your applications performing well. How many end users do you support with those applications?

Baumann: There are about 4,500 internal users, but we also deliver applications directly to our customers. So that makes a total group of about 2.5 million users.

Gardner: What requirements have you been trying to meet in terms of better performance and how do you go about achieving that?

Baumann

Baumann: About three years ago, we had a very reactive service model. We were only listening to customers or users complaining about bad response times, unresolved tickets, and things like that. Additionally, our event console was at the end of its life. So we were thinking about a more comprehensive solution.

We chose HP’s Real User Monitoring (RUM) and HP’s Operations Management i (OMi) to help us decrease our mean time to repair and to obtain a better understanding of end-user performance and how our applications are used by our customers.

Gardner: Thomas, how important is the acquisition of data and the use of that data? Have you changed either your attitude or culture when it comes to being data-driven, as a path to this better performance?

Baumann: Yes. Initially we had very little data, and the data we had was generated by synthetic measurements. Now, we measure real end-user traffic at all times, at all locations, and from all users, for the top applications that we have. We don’t use it for all applications.

Gardner: Do you have any sense of performance metrics and improvements? How do you measure your success?

Performance index

Baumann: Regarding end-user response times, we created a performance index, comparable to the New York Stock Exchange’s Dow Jones Index. Calculating the average response times of the most important functions of an application, and then the mean of all these response times, gives us this performance score value.

We started in 2012, and there was a performance score value of 100, just to have a base level where we can measure the improvement. Now, with an important sales application, we’re at 220, an increase of a factor of 2.2 in performance.
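As a rough illustration of the arithmetic behind such a baseline-relative index, here is a minimal sketch; the function names, baseline, and millisecond figures are hypothetical, chosen only so the score lands near the 220 Baumann mentions, with lower average response times pushing the score above the baseline of 100.

```python
# Illustrative sketch of a response-time performance index: average the
# response times of an application's most important functions and express
# the result relative to a fixed baseline, so the baseline period scores 100.
# All function names and figures below are hypothetical.

from statistics import mean

def performance_score(response_times_ms: dict[str, float],
                      baseline_avg_ms: float) -> float:
    """Return an index where the baseline average scores 100; halving the
    average response time doubles the score."""
    current_avg_ms = mean(response_times_ms.values())
    return 100.0 * baseline_avg_ms / current_avg_ms

baseline_avg_ms = 1800.0          # hypothetical baseline average from the reference year

today = {                          # hypothetical current averages per key function
    "open_policy": 600.0,
    "calculate_quote": 950.0,
    "save_contract": 900.0,
}

print(round(performance_score(today, baseline_avg_ms)))  # prints 220
```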

Gardner: Have you been able to translate that through some sort of a feedback loop or the ability to predict where problems either are or are beginning, so that you can head off problems altogether? Has that been something you’ve been able to achieve?

Track your app’s user experience with Fundex
Download the infographic
Fix poor performing mobile apps

Baumann: Yes. OMi helped us to achieve this, because now we’re able to detect very small incidents before they start to impact our service. In many cases we can avoid a major incident or a large problem that would lead to an availability problem in our company just by analyzing those very small defects or incidents that are detected by our machines.

Gardner: Before your customers and users detect them?

Baumann: Exactly. Sometimes we tell the customers that they have to do this and this, and they’re very surprised because they didn’t know there was a problem before we mentioned it.

Gardner: Let’s now go to Merve at Avea. Tell us a little bit about your company, how large it is, and what you’re doing to improve your application’s performance?

Duran: Avea is the sole GSM 1800 mobile operator in Turkey and was founded in 2004. It’s the youngest operator in Turkey, and we do performance management in Avea’s IT domain.

Gardner: What did you put in place and what were you trying to improve upon as you’ve gone to a higher level of performance? How did you want to measure that? How did you want to improve?

Duran: As an example, we have more than 20 mobile applications in Avea for iOS and Android-based mobile devices. We know that these applications get many hits in a day and we know that the response times of these hits play a significant role in the overall user experience.

Duran

Also, we know that mobile users are much less tolerant of application errors, slow response times, and poor usability. That’s why we needed to manage our mobile application performance. So we are using HP Real and Synthetic User Mobile Monitoring solutions at Avea.

Gardner: Have you been able to measure and determine how that performance has improved? What have you been able to use to determine the success of your activities?

Duran: Before this solution, we had almost no end-user data on hand, so root cause analysis was too hard for us and it took a long time when a problem occurred. Also, we didn’t know how many problems were occurring. With this solution, we can do the root cause analysis and we know how many issues have occurred. Before this solution, we only found out about problems if the customers complained. So the mobile RUM and BPM solutions are quite important to us.

Metrics of success

Gardner: Looking to the future, Thomas, where do you see things going next? What’s the next plane of improvement when it comes to applications? Where do you see yourselves going next at your organization?

Baumann: For now, we use RUM to analyze response times. What we’re starting to do now is analyze the behavior of the users: How are they using our applications? We can improve the workflow of the whole business process by analyzing how the applications are used, who is using them, from which location, and so on.

Gardner: And do you see the data that you’re gathering and using, being used in other aspects of IT? Does this have an adjacency benefit in some way, or is this something that you’re just using specifically for application performance?

Baumann: For now, we use it specifically for application performance, but we see large opportunities to mix these data with other data to get more insight and have a greater overview of how applications are used.

Maybe we can compare it to an airplane. We were flying on visual flight rules only, and now we’ve migrated to instrument flight. We also have those black boxes, so we can analyze how all those measurements developed over the last period, what happened exactly before a crash, or in general how the systems are used and how we can improve them.

Gardner: That’s a very good analogy. It’s one thing to just get to your destination. Now, you can make that much more scientific and understood. Therefore, you can devise your future based on a plan rather than a reaction. That’s important.

I just want to go in one more direction before we end, and that would be the type of applications that you’re using. Do you see more of a feedback loop to your developers? You’re doing most of your activity in operations, but as we know, the better you design an application, the better it will then perform.

Track your app’s user experience with Fundex
Download the infographic
Fix poor performing mobile apps

DevOps is an important trend these days. Do you see yourselves as application performance professionals starting to have more impact on the development process by feeding information back to developers, maybe for the next generation of an application, or maybe for entirely new applications? Any thoughts, Merve?

Duran: For the mobile BPM solution, yes, it is quite helpful for us, because we can use this solution when we develop a new release of an application. So it will be good to test with it before new application releases.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy. Sponsor: HP.


Securing business critical infrastructure via trusted technology, procurement paradigms, and cyber insurance

Welcome to a special BriefingsDirect panel discussion in conjunction with The Open Group’s upcoming conference on July 20, 2015 in Baltimore.

The panel of experts examines how The Open Group Trusted Technology Forum (OTTF) standards and accreditation activities are enhancing the security of global supply chains and improving the integrity of openly available IT products and components.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy.

We’ll also learn how the age-old practice of insurance is coming to bear on the problem of IT supply-chain risk. Supply chain disruption and security ills may be significantly reduced by leveraging business insurance models.

Attend The Open Group Baltimore 2015
July 20-23, 2015
Register Here

To update us on the work of the OTTF, and explain the workings and benefits of supply-chain insurance, we’re joined by our panel of experts:

  • Sally Long, Director of The Open Group Trusted Technology Forum.
  • Andras Szakal, Vice President and Chief Technology Officer for IBM U.S. Federal and Chairman of The Open Group Trusted Technology Forum.
  • Bob Dix, Vice President of Global Government Affairs and Public Policy for Juniper Networks and member of The Open Group Trusted Technology Forum.
  • Dan Reddy, Supply Chain Assurance Specialist, college instructor and Lead of The Open Group Trusted Technology Forum Global Outreach and Standards Harmonization Work Group.

The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Sally, please give us an update on The Open Group Trusted Technology Forum (OTTF) and the supply-chain accreditation process generally. What has been going on?

Long: For some of you who might not have heard of the O-TTPS, which is the standard, it’s called The Open Trusted Technology Provider™ Standard. The effort started with an initiative in 2009, a roundtable discussion with the U.S. government and several ICT vendors on how to identify trustworthy commercial off-the-shelf (COTS) information and communication technology (ICT), basically driven by the fact that governments were moving away from high-assurance customized solutions and more and more toward using COTS ICT.

Long

That ad-hoc group formed under the OTTF and proceeded to deliver a standard and an accreditation program.

The standard really provides a set of best practices to be used throughout the COTS ICT product life cycle. That’s both during in-house development, as well as with outsourced development and manufacturing, including the best practices to use for security in the supply chain, encompassing all phases from design to disposal.

To bring you up to speed on some of the milestones that we’ve had: we released the 1.0 version of the standard in 2013, launched our accreditation program to help assure conformance to the standard in February 2014, and then in July we released the 1.1 version of the standard. We have now submitted that version to ISO for approval as a publicly available specification (PAS), and it’s on the fast track for ISO.

The PAS is a process for adopting standards developed in other standards development organizations (SDOs), and the O-TTPS has passed the draft ISO ballot. Now, it’s coming up for final ballot.

That should bring folks up to speed, Dana, and let them know where we are today.

Gardner: Is there anything in particular at The Open Group Conference in Baltimore, coming up in July, that pertains to these activities? Is this something that’s going to be more than just discussed? Is there something of a milestone nature here, too?

Long: Monday, July 20, is the Cyber Security Day of the Baltimore Conference. We’re going to be meeting in the plenary with many of the U.S. government officials from NIST, GSA, and the Department of Homeland Security. So there is going to be a big plenary discussion on cyber security and supply chain.

We’ll also be meeting separately as a member forum, but the whole open track on Monday will be devoted to cyber security and supply chain security.

The one milestone that might coincide is that we’re publishing our Chinese translation version of the standard 1.1 and we might be announcing that then. I think that’s about it, Dana.

OTTF background

Gardner: Andras, for the benefit of our listeners and readers who might be new to this concept, perhaps you could fill us in on the background on the types of problems that OTTF initiatives and standards are designed to solve. What’s the problem that we need to address here?

Szakal: That’s a great question. We realized, over the last 5 to 10 years, that the traditional supply-chain management practices — supply-chain integrity practices, where we were ensuring the integrity of the delivery of a product to the end customer, ensuring that it wasn’t tampered with, effectively managing our suppliers to ensure they provided us with quality components — really had expanded as a result of the adoption of technology. There has been pervasive growth of technology in all aspects of manufacturing, but especially as IT has expanded into the Internet of Things, critical infrastructure and mobile technologies, and now obviously cloud and big data.

Szakal

And as we manufacture those IT products we have to recognize that now we’re in a global environment, and manufacturing and sourcing of components occurs worldwide. In some cases, some of these components are even open source or freely available. We’re concerned, obviously, about the lineage, but also the practices of how these products are manufactured from a secure engineering perspective, as well as the supply-chain integrity and supply-chain security practices.

What we’ve recognized here is that the traditional life cycle of supply-chain security and integrity has expanded to include everything from the design aspects of the product, through sustainment and managing that product over a period of time, from cradle to grave, including disposal of the product to ensure that those components, if they were hardware-based, don’t actually end up recycled in a way that poses a threat to our customers.

Gardner: So it’s as much a lifecycle as it is a procurement issue.

Szakal: Absolutely. When you talk about procurement, you’re talking about lifecycle and about mitigating risks to those two different aspects from sourcing and from manufacturing.

So from the customer’s perspective, they need to be considering how they actually apply techniques to ensure that they are sourcing from authorized channels, that they are also applying the same techniques that we use for secure engineering when they are doing the integration of their IT infrastructure.

But from a development perspective, it’s ensuring that we’re applying secure engineering techniques, that we have a well-defined baseline for our life cycle, and that we’re controlling our assets effectively. We understand who our partners are and we’re able to score them and ensure that we’re tracking their integrity and that we’re applying new techniques around secure engineering, like threat analysis and risk analysis to the supply chain.

We’re understanding the current risk landscape and applying techniques like vulnerability analysis and runtime protection techniques that would allow us to mitigate many of these risks as we build out our products and manufacture them.

It goes all the way through sustainment. You probably recognize now, as most people would, that your products are no longer shrink-wrapped products that you get, install, and that live for a year or two before you update them. They’re constantly being updated. So ensuring that the integrity and delivery of those updates is consistent with the principles that we are trying to espouse is also really important.

Collaborative effort

Gardner: And to that point, no product stands alone. It’s really the result of a collaborative effort, a very complex set of systems coming together. Not only are standards necessary, but cooperation among all those players in that ecosystem becomes necessary.

Dan Reddy, how have we done in terms of getting mutual assurance across a supply chain, so that all the participants are willing to take part? It seems to me that, if there is a weak link, everyone would benefit by shoring that up. So how do we go beyond the standards? How are we getting cooperation and getting all the parties interested in contributing and being part of this?

Reddy: First of all, it’s an evolutionary process, and we’re still in the early days of fully communicating what the best practices are, what the standards are, and getting people to understand how that relates to their place in the supply chain.

Reddy

Certainly, the supplier community would benefit by following some common practices so they don’t wind up answering customized survey questions from all of their customers.

That’s what’s happening today. It’s pretty much a one-off situation, where each customer says, “I need to protect my supply chain. Let me go find out what all of my suppliers are doing.” The real benefit here is to have the common language of the requirements in our standard and a way to measure it.

So there should be an incentive for the suppliers to take a look at that and say, “I’m tired of answering these individual survey questions. Maybe if I just document my best practices, I can avoid some of the effort that goes along with that individual approach.”

Everyone needs to understand that value proposition across the supply chain. Part of what we’re trying to do with the Baltimore conference is to talk to some thought leaders and continue to get the word out about the value proposition here.

Gardner: Bob Dix, the government in the U.S., and, of course, across the globe, all the governments, are major purchasers of technology and also have a great stake in security and low risk. What’s been driving some of the government activities? They’re also interested in using COTS technology and cutting costs. So what role can governments play in driving some of these activities around the OTTF?

Risk management

Dix: This issue of supply chain assurance and cyber security is all about risk management, and it’s a shared responsibility. For too long I think that the government has had a tendency to want to point a finger at the private sector as not sufficiently attending to this matter.

Dix

The fact is, Dana, that many in the private sector make substantial investments in their product integrity program, as Andras was talking about, from product conception, to delivery, to disposal. What’s really important is that when that investment is made and when companies apply the standard the OTTF has put forward, it’s incumbent upon the government to do their part in purchasing from authorized and trusted sources.

In today’s world, we still have a culture that’s pervasive across the government acquisition community, where decision-making on procurements is often driven by cost and schedule, and product authenticity, assurance, and security are not necessarily a part of that equation. It’s driven in many cases by budgets and other considerations, but nonetheless, we must change that culture to include authenticity and assurance as part of the decision-making process.

The result of focusing on cost and schedule is that those acquisitions are often made from untrusted and unauthorized sources, which raises the risk of acquiring counterfeit, tainted, or even malicious equipment.

Part of the work of the OTTF is to present to all stakeholders, in industry and government alike, that there is a process that can be uniform, as has been stated by Sally and Dan, that can be applied in an environment to raise the bar of authenticity, security, and assurance to improve upon that risk management approach.

Gardner: Sally, we’ve talked about where you’re standing in terms of some progress in your development around these standards and activities. We’ve heard about the challenges and the need for improvement.

Before we talk about this interesting concept of insurance that could come to bear, perhaps encouraging standardization and giving people more ways to reduce their risk and adhere to best practices, what do you expect to see in a few years? If things go well and if this is adopted widely and embraced with truly good practices, what’s the result? What improvement do we expect to see?

Powerful impact

Long: The most important and significant aspect of the accreditation program is its holistic nature, and how it could have a very powerful impact if it’s widely adopted.

The idea of an accreditation program is that a provider gets accredited for conforming to the best practices. A provider that can get accredited could be an integrator, an OEM, the component suppliers of hardware and software that provide the components to the OEM, and the value-add resellers and distributors.

Every important constituent in that supply chain could be accredited. So not only from a business perspective is it important for governments and commercial customers to look on the Accreditation Registry and see who has been accredited for the integrators they want to work with or for the OEMs they want to work with, but it’s also important and beneficial for OEMs to be able to look at that register and say, “These component suppliers are accredited. So I’ll work with them as business partners.” It’s the same for value-add resellers and distributors.

Attend The Open Group Baltimore 2015
July 20-23, 2015
Register Here

It builds in these real business-market incentives to make the concept work, and in the end, of course, the ultimate goal of having a more secure supply chain and more products with integrity will be achieved.

To me, that is one of the most important aspects that we can reach for, especially if we reach out internationally. What we’re starting to see internationally is that localized requirements are cropping up in different countries. What that’s going to mean is that vendors need to meet those different requirements, increasing their cost, and sometimes there will even end up being trade barriers.

Back to what Dan and Bob were saying, we need to look at this global standard and accreditation program that already exists. It’s not in development; we’ve been working on it for five years with consensus from many, many of the major players in the industry and government. So urging global adoption of what already exists and what could work holistically is really an important objective for our next couple of years.

Gardner: It certainly sounds like a win-win-win if everyone can participate, have visibility, and get designated as having followed through on those principles. But as you know and as you mentioned, it’s the marketplace. Economics often drives business behavior. So in addition to a standards process and the definitions being available, what is it about this notion of insurance that might be a parallel market force that would help encourage better practices and ultimately move more companies in this direction?

Let’s start with Dan. Explain to me how cyber insurance, as it pertains to the supply chain, would work.

Early stages

Reddy: It’s an interesting question. The cyber insurance industry is still in its early stages, even though it goes back to the ’70s, when crime insurance started applying to outsiders gaining physical access to computer systems. You didn’t really see the advent of hacker insurance policies until the late ’90s. Then, starting in 2000, some of the first forms of cyber insurance covering first and third parties started to appear.

What we’re seeing today is primarily related to the breaches that we hear about in the paper every day, where some organization has been compromised and sensitive information, like credit card information, is exposed for thousands of customers. The remediation is geared toward the companies that have to pay the claim and sign people up for identity protection. It’s pretty cut and dried. That’s the wave that the insurance industry is riding right now.

What I see is that as attacks get to be more sophisticated and potentially include attacks on the supply chain, it’s going to represent a whole new area for cyber insurance. Having consistent ways to address supplier-related risk, as well as the other infrastructure related risks that go beyond simple data breach, is going to be where the marketplace has to make an adjustment. Standardization is critical there.

Gardner: Andras, how does this work in conjunction with OTTF? Would insurance companies begin their risk assessment by making sure that participants in the supply chain are already adhering to your standards and seeking accreditation? Then, maybe they would have premiums that would reflect the diligence that companies extend into their supply chains. Maybe you could just explain to me, not just the insurance, but how it would work in conjunction with OTTF, maybe to each’s mutual benefit.

Szakal: You made a really great point earlier about the economic element that would drive compliance. For us in IBM, the economic element is the ability to prove that we’re providing the right assurance that is being specified in the requests for proposals (RFPs), not only in the federal sector, but outside the federal sector in critical infrastructure and finance. We continue to win those opportunities, and that’s driven our compliance, as well as the government policy aspect worldwide.

But from an insurance point of view, insurance comes in two forms. I buy policy insurance in a case where there are risks that are out of my control, and I apply protective measures that are under my control. So in the case of the supply chain, the OTTF is a set of practices that help you gain control and lower the risk of threat in the manufacturing process.

The question is, do you buy a policy, and what’s the balance here between a cyber threat that is in your control and those aspects of supply chain security that are out of your control? This is with the understanding that there isn’t an infinite amount of resources or revenue that you can allocate to both of these aspects.

There’s going to have to be a balance, and it really is going to be case by case, with respect to customers and manufacturers, as to whether you cover the potential loss of intellectual property (IP) with insurance versus applying controls. Those resources are better applied where you actually have control, versus policies that protect you against things that are out of your control.

For example, you might buy a policy for a case where you provide code containing high-value IP to a third party so it can manufacture a component. You have to share that information with that third-party supplier to actually manufacture the component as part of the overarching product, but with the realization that if that third party is somehow hacked or intruded on and that IP is stolen, you have lost a significant amount of value. That would be an area where insurance is applicable.

What’s working

Gardner: Bob Dix, if insurance comes to bear in conjunction with standards like what the OTTF is developing in supply chain assurance, it seems to me that the insurance providers themselves would be in a position of gathering information for their actuarial decisions and could be a clearing house for what’s working and what isn’t working.

It would be in their best interest to then share that back into the marketplace in order to reduce the risk. That’s a market-driven, data-driven approach that could benefit everyone. Do you see the advent of insurance as a benefit or accelerant to improvement here?

Dix: It’s a tool. This is a conversation that’s been going on in the community for quite some time. The lack of actuarial data for catastrophic losses produced by cyber events is impacting some of the rate setting and premium setting by insurance companies, and that has continued to be a challenge.

But from an incentive standpoint, it’s just like in your home. If you have an alarm system, if you have a fence, or if you take other kinds of protective measures, your homeowners or liability insurance premium may be reduced for those actions that you have taken.

As an incentive, the opportunity to have an insurance policy to either transfer or buy down risk can be driven by the type of controls that you have in your environment. The standard that the OTTF has put forward provides guidance about how best to accomplish that. So, there is an opportunity to leverage, as an incentive, the reduction in premiums for insurance to transfer or buy down risk.

Gardner: It’s interesting, Sally, that the insurance industry could benefit from OTTF, and by having more insurance available in the marketplace, it could encourage more participation and make the standard even more applicable and valuable. So it’s interesting to see over time how that plays out.

Any thoughts or comments on the relationship between what you are doing at OTTF and The Open Group and what the private insurance industry is moving toward?

Long: I agree with what everyone has said. It’s an up-and-coming field, and there is a lot more focus on it. I hear at every conference I go to, there is a lot more research on cyber security insurance. There is a place for the O-TTPS in terms of buying down risk, as Bob was mentioning.

The other thing that’s interesting is the NIST Cybersecurity Framework. That whole paradigm started out with the idea that there would be incentives for those that followed the NIST Cybersecurity Framework; that incentive piece became very hard to pull together, and it still is. To my knowledge, there are no incentives yet associated with it. But insurance was one of the ideas they talked about for incentivizing adopters of the CSF.

The other thing that I think will come out of one of the presentations that Dan and Larry Clinton will be giving at our Baltimore conference is that insurers are looking for simplicity. They don’t want to go into a client’s environment and have them prove that they are doing all of the things required of them or fill out a long checklist.

That’s why, in terms of simplicity, asking for O-TTPS-accredited providers, or lowering their rates based on that accreditation, would be a very simple approach, but again, it’s not here yet. As Bob said, it’s been talked about a lot for a long time, but I think it is coming to the fore.

Market of interest

Gardner: Dan Reddy, back to you. When there is a large addressable market of interest in a product or service, a commercial means to satisfy it often arises. How can enterprises, the people who are consuming these products, encourage acceptance of these standards, perhaps push for a stronger insurance capability in the marketplace, or get involved with some of these standards and practices that we have been talking about?

If you’re a publicly traded company, you would want to reduce your exposure and be able to claim accreditation and insurance as well. Let’s look at this from the perspective of the enterprise. What should and could they be doing to improve on this?

Reddy: I want to link back to what Sally said about the NIST Cybersecurity Framework. What’s been very useful in publishing the framework is that it gives enterprises a way to talk about their overall operational risk in a consistent fashion.

I was at one of the workshops sponsored by NIST where enterprises that had adopted it talked about what they were doing internally to change their practices, improve their security, and use the language of the framework to address that.

Yet when they talked about one aspect of their risk, supplier risk, they were trying to send the NIST Cybersecurity Framework risk questions to their suppliers, and those questions aren’t really sufficient. They’re interesting; you care about your supplier’s enterprise, but you really care about your supplier’s products.

So one of the things the OTTF did was to look at the requirements in our standard related to suppliers and link them specifically to the same operational areas included in the NIST Cybersecurity Framework.

This gives an enterprise looking at risk in a standard way a means to use the language of the requirements in our standard, and the accreditation program, as a form of measurement for how that aspect of supplier risk is being addressed.

But remember, cyber insurance is about more than supplier risk; it’s risk at the enterprise level. And the attacks are going to change over time, taking us beyond simple breaches. That’s where the added complexity will be needed.
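
[Editor's note: As a purely illustrative aside, the kind of linkage Reddy describes, from supplier-facing requirements to NIST CSF operational areas, can be pictured as a simple lookup. This minimal Python sketch uses invented, placeholder requirement names and an assumed mapping; it is not the actual O-TTPS requirement set or the official O-TTPS-to-CSF linkage published by the OTTF.]

# Hypothetical illustration only: the requirement names and the mapping below are
# placeholders invented for this example, not the real O-TTPS identifiers or the
# official mapping to NIST CSF functions.
OTTPS_TO_CSF = {
    "secure_engineering_practices": "Protect",
    "vulnerability_response_process": "Respond",
    "counterfeit_mitigation_controls": "Protect",
    "supply_chain_partner_vetting": "Identify",
    "tamper_detection_in_delivery": "Detect",
}

def csf_coverage(supplier_evidence):
    """Group the supplier requirements a vendor has evidenced by CSF function."""
    coverage = {}
    for requirement, satisfied in supplier_evidence.items():
        if satisfied and requirement in OTTPS_TO_CSF:
            coverage.setdefault(OTTPS_TO_CSF[requirement], []).append(requirement)
    return coverage

# Example: a supplier self-assessment expressed against the placeholder requirements.
evidence = {
    "secure_engineering_practices": True,
    "vulnerability_response_process": True,
    "counterfeit_mitigation_controls": False,
    "supply_chain_partner_vetting": True,
    "tamper_detection_in_delivery": False,
}

print(csf_coverage(evidence))
# e.g. {'Protect': [...], 'Respond': [...], 'Identify': [...]}

In practice, an acquirer would substitute the published requirement identifiers and mapping, and the evidence would come from a supplier's accreditation assessment rather than a self-reported checklist.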

Gardner: Andras, any suggestions for how enterprises, suppliers, vendors, systems integrators, and now, of course, the cloud services providers, should get involved? Where can they go for more information? What can they do to become part of the solution on this?

International forum

Szakal: Well, they can always become a member of the Trusted Technology Forum, which is an international forum.

Gardner: I thought you might say that.

Szakal: That’s an obvious one, right? But there are a couple of places where you can go to learn more about this challenge.

One is certainly our website. Download the framework, which is a compendium of best practices that we gathered through a lot of hard work, sharing in an open, penalty-free environment the practices that the major vendors employ to mitigate the risks of counterfeit and maliciously tainted products, as well as other supply chain risks. Understanding the standard is a good start.

Then it’s a matter of looking at how you might measure your current practices against the standard, using the accreditation criteria that we have established.

Other places would be NIST. I believe Special Publication 800-161 is the current pending standard for supply chain security. There are also several really good reports that the Defense Science Board and other organizations have produced within the federal government space. There are plenty of materials out there and a lot of discussion about the challenges.

But I think the only place where you really find solutions, or at least one of the few I have seen, is in the TTF, where they are embedded in the standard as a set of practices that are very practical to implement.

Gardner: Sally, the same question to you. Where can people go to get involved? What should they perhaps do to get started?

Long: I’d reiterate what Andras said. I’d also point them toward the accreditation website, which is www.opengroup.org/accreditation/o-ttps. On that site you can see the policy, the standard, and the supporting documents. We publish our assessment procedures so you have a good idea of what the assessment process will entail.

The program is based on evidence of conformance as well as a warranty from the applicant. Making the assessment procedures public allows any organization thinking about getting accredited to know exactly what it needs to do.

As always, we would appreciate any new members, because we’ll be evolving the standard and the accreditation program, and it is done by consensus. So if you want a say in that, whether our standard needs to be stronger, weaker, broader, etc., join the forum and help us evolve it.

Impact on business

Gardner: Dan Reddy, when we think about managing these issues, responsibility often falls on the shoulders of IT and its security apparatus, the Chief Information Security Officer perhaps. But it seems that the impact on the business is growing. So should other people in the enterprise be thinking about this? I am thinking about procurement or the governance, risk, and compliance folks. Who else, beyond IT and its security apparatus, should be involved in mitigating IT supply chain risk?

Reddy: You’re right. The old model, where everything falls on IT, is expanding, and now issues of enterprise risk and supply chain risk make it up to boards of directors, who are asking tough questions. That’s one reason boards look at cyber insurance as a way to mitigate some of the risk that they can’t control.

They’re asking tough questions all the way around, and I think acquisition people do need to understand what the right questions are to ask of technology providers.

To me, this comes back to scalability. This one-off approach of everyone asking questions of each of their vendors just isn’t going to make it. The advantage that we have here is that we have a consistent standard, built by consensus, freely available, and it’s measurable.

There are a lot of other good documents that talk about supply chain risk and secure engineering, but with those you can’t get a third-party assessment in a straightforward way, and I think that’s going to make this approach appealing over time.

Gardner: Bob Dix, last word to you. What do you see happening in the area of government affairs and public policy around these issues? What should we hope for or expect from governments in creating an atmosphere that reduces risk across the supply chain?

Dix: A couple of things have to happen, Dana. First, we have to quit blaming the victims when we have breaches and compromises and start looking at solutions. Governments in the United States and in other countries around the world have a tendency to legislate and to pass regulatory measures that impose requirements on industry without a full understanding of what industry is already doing.

In this particular example, the government has had a tendency to take an approach that excludes vendors from participating in federal procurement activities based on a risk level that the government itself determines.

The really great thing about the work of the OTTF and the standard being produced is that it allows a different way of looking at the problem: focus instead on those that are accredited as having met the standard and can provide a higher level of assurance of authenticity and security in the products and services they deliver. I think that’s a much more productive approach.

Working together

And from a public policy standpoint, this example of great work done by industry and government working together globally to deliver the standard gives governments a basis for thinking about the problem a little differently.

Instead of just focusing on who they want to exclude, let’s look at who is actually delivering value and meeting the requirements to be a trusted provider. That’s a different approach, one that we are very proud of in terms of the work of The Open Group, and we will continue to pursue it going forward.

Attend The Open Group Baltimore 2015
July 20-23, 2015
Register Here

Gardner: This special BriefingsDirect thought leadership panel discussion has been brought to you in conjunction with The Open Group’s upcoming conference on July 20, 2015 in Baltimore. It’s not too late to register on The Open Group’s website or to follow the proceedings online and via Twitter and other social media during the week of the presentation.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy. Sponsor: The Open Group.
