Forrester analyst Kurt Bittner on the inevitability of DevOps

Businesses today want to deliver software improvements at weekly and even daily intervals, especially in SaaS environments, for mobile apps, and for cloud-based workloads. Yet those kinds of delivery speeds are inconceivable with any kind of manual software development process.

As competitive organizations move away from quarterly software releases to faster releases, they are being forced to face the inevitable adoption of DevOps processes and efficiencies.

The next BriefingsDirect thought leadership discussion therefore explores the building interest in DevOps — in making the development, testing, and ongoing improvement of software a coordinated, lean, and proficient process for enterprises.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

BriefingsDirect sat down with a prominent IT industry analyst, Kurt Bittner, Principal Analyst, Application Development and Delivery at Forrester Research, to explore why DevOps is such a hot topic, and to identify steps that successful organizations are taking to make advanced applications development a major force for business success. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Let’s start by looking at the building interest in DevOps. What’s driving that? 

Bittner: It’s essentially the end-user or client organizations, as they face increasing pressure from competition and increasing expectations from customers to deliver functionality faster.

I was at a dinner the other night, and there were half a dozen or so large banks there. They were all saying, to my surprise, that they didn’t feel like they were competing with one another, but that they felt like they were competing with companies like Apple, Google, PayPal, and increasingly startup companies. Square is a good example, too.

They’re getting into the payment mechanism, and that’s siphoning business away from the banks. The banks are beginning to see drops in their own bottom lines because of the competition from … software companies. You see companies like Uber having a big impact on traditional taxi companies and transportation.

Increasing competition

So it’s essentially increasing competition, driven by increasing customer expectations. We’re all part of that as consumers, as we’ve gravitated toward our mobile smartphones. We’re increasingly interacting with companies through mobile devices.


Delivering new functionality through mobile experiences, through cloud experiences, through the web, through various kinds of payment mechanisms — all of these things contribute to the need to deliver services much faster.

Startup companies get this and they’re already adopting these techniques in large numbers. What we’re finding is that traditional companies are increasingly saying, “We have to do this. This is a competitive threat to us.” Like Blockbuster Video, they may cease to exist if they don’t.

Gardner: Companies like Apple or Uber probably define themselves as being technology companies. That’s what they do. Software is a huge part of what makes them a successful company. It defines them. What is it that DevOps brings to the table for them and others?

Bittner: DevOps optimizes the software delivery pipeline, all the steps that you have to go through between when you have an idea and when a customer starts benefiting from that idea. In the traditional delivery processes, you have lots of hand-offs, lots of stops and starts. You have relatively inefficient processes, and it can take months — and sometimes years — to go from idea to having somebody get a benefit.

With DevOps, we’re reducing the size of the things you’re delivering, so you can deliver more frequently. Then, you can eliminate hand-offs and inefficiencies in the delivery process, so that you can deliver it as fast as possible with higher quality.

Gardner: And what was broken? What needs to be fixed? Wasn’t Agile supposed to fix this?

Bittner: Agile is part of the solution, but many Agile teams find that they’d like to be more agile. They’re held back by lack of testing environments. They’re held back by lack of testing automation. They’re held back by lack of deployment automation. They, themselves, have lots of barriers.

So, Agile is part of the solution in the sense of involving the business more on a day-to-day basis in the project decision-making. It also provides the ability to break a problem down into smaller increments, and at least demonstrate in smaller increments, but it doesn’t actually deliver into production in smaller increments.

Other capabilities

You need to have other capabilities to do that. One illustration of how DevOps helps to accelerate Agile came in talking to a large manufacturing organization that was making the transition to Agile.

They had a problem in that they weren’t able to get development or test environments for months. IT operations processes had been set up in a very siloed way, and development and testing environments got low priority when other things were going on.

So, as much as the team wanted to work in an Agile way, they couldn’t get a rapid test environment. In effect, they were completely stopped from any forward progress. There’s only so much you can do on a developer workstation.

These DevOps practices benefit Agile as well, by enabling Agile to fully realize its promise.

Gardner: Is there a change in philosophy, too, Kurt, where software is released before it’s fully cooked, letting the environment, the real world, be the test bed, the simulation if you will? And then they do rapid iterations? Are we going to begin seeing that now, as DevOps gains ground in established traditional enterprises?

Bittner: You’re right. There is a tendency toward getting functionality out there, seeing what the market says about it, and then improving. That works in certain areas. For example, Google has an internal motto that says if you’re not somewhat embarrassed by your first release, you didn’t move fast enough.

But we also have to realize that we have software in our automobiles and in our aircraft, and you don’t want to put something out there into those environments that’s basically not functional.

I separate measures of quality from measures of aesthetics. The software that gets delivered early has to be high quality. It can’t be buggy. It has to work and satisfy a certain set of needs. But there’s wide variability in whether people will like it or use it.

So when organizations are delivering quickly and getting feedback from the market, they’re really getting feedback on things like usability and aesthetics and not necessarily on some critical business-processing capability. Or take the software in the anti-lock braking system (ABS) in your car. You don’t want that to fail, but you might be very interested in how the climate-control system works.

That may be subject to wide variation. You may be willing to sacrifice something in the air conditioner to get better fuel efficiency. So, it’s largely driving feedback on non-safety-critical features. That’s where most organizations are focused.

More feedback

Gardner: You mentioned feedback. That seems to be a core aspect of DevOps: more feedback between operations, the real world, the use of software, and the development and test process. How do we compress that feedback loop — not only for user experience, but also for data coming out of an embedded system, for example — so that we can improve?

Bittner: If you think about what traditional application releases do, they tend to bundle a lot of different features into a single release. If you think about this from a statistical perspective, that means you have a lot of independent variables. You can’t tell when something improves. You can’t tell why it improved, because you have so many variables in there.

In the feedback loop with DevOps, you want to make the increment of release as small as possible, basically one thing at a time, and then measure the result, so you know whether your results improved because of that one single feature.

The other thing is that we start to shift toward a more outcome-oriented software release. You’re not releasing features, but you’re doing things that will change a customer’s outcome. If it doesn’t change a customer’s outcome, the customer doesn’t really care.

So by having the increment of a release be one outcome at a time, and then measuring the result from that, you get the capabilities out there as quickly as possible. Then you can tell whether you actually improved because of what you just did. If you didn’t improve, then you stop doing that and do something else.
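
To make that one-outcome-at-a-time idea concrete, here is a minimal sketch in Python of judging a single-feature release by a single outcome metric. The event shapes, the metric, and the improvement threshold are illustrative assumptions, not anything Bittner or Forrester prescribes.

```python
# Minimal sketch: judge a single-feature release by one outcome metric.
# The event data and the improvement threshold are illustrative assumptions.

def conversion_rate(events):
    """Fraction of sessions that reached the desired outcome."""
    completed = sum(1 for e in events if e["outcome"] == "completed")
    return completed / len(events) if events else 0.0

def evaluate_release(before_events, after_events, min_lift=0.01):
    """Keep the change only if the single outcome it targets improved."""
    before = conversion_rate(before_events)
    after = conversion_rate(after_events)
    lift = after - before
    return {"before": before, "after": after, "keep_feature": lift >= min_lift}

# Example: sessions observed before and after releasing one feature.
before = [{"outcome": "completed"}, {"outcome": "abandoned"}]
after = [{"outcome": "completed"}, {"outcome": "completed"}, {"outcome": "abandoned"}]
print(evaluate_release(before, after))  # -> keep_feature: True
```

Because only one variable changed between the two samples, any movement in the metric can be attributed to that one feature, which is exactly the statistical point made above.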

Gardner: Is that what you mean by continuous delivery, these iterative small parts, rather than the whole big dump every six to 12 months?

Bittner: That’s a big part of it. Continuous delivery is, more precisely, a process by which you make small changes. You optimize the delivery cycle, removing waste and hand-offs, to make it as fast as possible with a high degree of automation, so that you can get out there and get the feedback as quickly as possible.

So, it’s a combination: not just fast delivery, but a number of techniques used to improve that delivery.

Gardner: Folks listening and reading this might very well like the idea of DevOps: “I’d like to do DevOps; where do I buy it?” DevOps, though, isn’t really a product, a box, or a download. It’s a way of thinking and a methodological approach. How do people go about implementing DevOps? Where do you start?

Bittner: You’re right. It’s more of a philosophy than a product. It’s not even really a product category, but a bunch of different products and processes and, to some degree, a philosophy behind them. When we talk to organizations that have implemented this successfully, there are a couple of patterns.

First of all, you don’t implement DevOps across an entire organization all at once. It tends to happen product by product, team by team. It happens first in the applications that are very customer-facing, because that’s where the most pressure is right now and where the biggest benefit is. On a team-by-team basis, first of all, you have to have some executive mandate to make a change. Somebody has to feel that this is important enough to the company.

While developers, engineers, and IT Ops people can be passionate about this, it typically requires executive leadership to get this to happen, because these changes cut across traditional organizational silos. Without some executive sponsorship, these initiatives tend not to go very far.

The first step — and this is a rather mundane area — tends to be changing the way that environments are provisioned. That includes getting environments provisioned on demand, using techniques like infrastructure-as-code to automatically generate environments based on configuration settings, so that you can have an environment anytime you need it. That removes a lot of friction and a lot of delays.
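
Here is a minimal sketch, in Python, of the "environments from configuration settings" idea. The spec fields, environment names, and sizes are illustrative assumptions; a real setup would hand the rendered definition to a provisioning tool rather than print it.

```python
# Minimal sketch of "environments from configuration": one declarative
# spec drives every environment, so a test environment can be stamped
# out on demand instead of waiting in a manual provisioning queue.
# The spec fields and values are illustrative assumptions.

import json

BASE_SPEC = {"os_image": "ubuntu-22.04", "app_version": "1.4.2", "services": ["web", "db"]}

OVERRIDES = {
    "dev":  {"instances": 1, "size": "small"},
    "test": {"instances": 2, "size": "small"},
    "prod": {"instances": 8, "size": "large"},
}

def render_environment(name):
    """Merge the base spec with per-environment settings into one definition."""
    if name not in OVERRIDES:
        raise ValueError(f"unknown environment: {name}")
    return {**BASE_SPEC, **OVERRIDES[name], "name": name}

# The rendered definition would be handed to a provisioning tool;
# printing it stands in for that step here.
print(json.dumps(render_environment("test"), indent=2))
```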

The second thing that tends to be implemented is continuous integration, and after that, API-based test automation. There’s a shift to APIs and an integrated architecture for the applications, and then usually deployment automation comes after that. Once you have environments provisioned and code that you can put into those environments, you need a way to move that code between environments.
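
To illustrate the API-based test automation step, here is a minimal smoke-test sketch. The BASE_URL, the /health route, and the response body are hypothetical; the point is that once the application is fronted by APIs, the same automated suite can run against any freshly provisioned environment.

```python
# Minimal sketch of API-based test automation: with the application fronted
# by APIs, one smoke suite can run against any freshly provisioned
# environment. BASE_URL and the /health route are hypothetical.

import json
import unittest
import urllib.request

BASE_URL = "http://test-env.example.internal/api"  # hypothetical test environment

class ApiSmokeTests(unittest.TestCase):
    def test_service_is_healthy(self):
        # A deployment pipeline would run this right after provisioning.
        with urllib.request.urlopen(f"{BASE_URL}/health") as resp:
            body = json.load(resp)
            self.assertEqual(resp.status, 200)
            self.assertEqual(body.get("status"), "ok")

if __name__ == "__main__":
    unittest.main()
```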

As you make those changes, you start to run into organizational barriers, silos in the organization, that prevent teams from working together effectively. There’s too much wait time when people are assigned to multiple projects or multiple applications.

There’s a shift in team structure to become more product-oriented, with resources dedicated to a product, so that you can do release after release most effectively. That tends to break the organizational silos down and start shifting to a more product-centric organization and away from a functionally oriented one.

All of those changes together typically take years, but it usually starts with some sort of executive mandate, then environment provisioning, and so on.

Management capability

Gardner: It sounds, too, like it’s important to have better management capabilities across these silos — with metrics, dashboards, and validation efforts, being able to measure discretely what’s going on, and then reinforce the good and discard the bad.

Are there any particular existing ways of doing that? I’m thinking about the long-term application lifecycle management (ALM) marketplace. Does that lend itself to DevOps? Should we start from scratch and create a new management layer, if you will, across the whole continuum of software design, test, and delivery?

Bittner: It’s a little bit of both. DevOps is really an outgrowth of ALM, and all of the aspects of ALM are there. You need to be able to manage the work, track the work, and to determine what work got done. In addition to that, you’re adding automation in the areas that I was just describing; environment provisioning, continuous integration, test automation, and deployment automation.

There’s another component that becomes really important, because out of those applications, you want to start gathering customer experience data. So things like operational and application analytics are important to start measuring the customer experience.

Combining all of those into a single view, a single dashboard, is evolving now. The ALM tools are evolving in that direction, and there are ways of visualizing that. But right now it tends to be a multi-vendor ecosystem. You don’t find one DevOps suite from one company that provides everything.

But the good news is that the same thing that’s been happening in the rest of the industry around services and interoperability has happened in applications. We have a high degree of interoperability between tools from different vendors today that allows you to customize this delivery pipeline to give you the DevOps capability.

Gardner: It seems that, in some ways, the prominence of hybrid cloud models, mobile, and mobile-first thinking, when it comes to development, are accelerants to DevOps. If you have that multiple cloud goal, you’re going to want to standardize on your production environment. Hence, also, the interest in containers these days. And, of course, mobile-first forces you to think about user experience, about small, iterative apps rather than monolithic applications. Do you see an acceleration from these other trends reinforcing DevOps?

Bittner: It’s both reinforcing it and, to some degree, causing it, because it’s mobile that’s triggered this explosion and the need for DevOps — the need for faster delivery. To a large degree, the mobile application is the proverbial tip of the iceberg. Very few mobile applications stand alone. They all have very rich services running behind them. They have systems of record providing the data. Virtually every mobile application is really a composite application with some parts in the cloud and some parts in traditional data centers.

The development across all of those different code lines and the coordination of releases across all those different code lines really requires the DevOps approach to be able to do that successfully.

Demand and complexity

So it’s both demand created by higher customer expectations from mobile customers, but also the complexity of delivering these applications in a really rapid way across all those different platforms. You made an interesting point about cloud and containers being both drivers for demand and also enablers, but they’re also changing the nature of the work.

As containers and microservices become more prevalent — we’re seeing growth in those areas — it’s increasing the complexity of application delivery. It simplifies the deployment, but it increases the complexity. Now, instead of having to coordinate dozens of moving parts, you have to coordinate hundreds and, we think, in the future, thousands of moving parts. That’s well beyond what somebody can do with spreadsheets and manual management techniques.

The other thing is that cloud simplifies environment provisioning tremendously and provides a great elastic infrastructure for deploying applications. It also simplifies things by standardizing environments and making them all software-configurable. It’s a tremendous benefit to delivering applications faster, and it gives you much more flexibility than traditional data-center applications. There’s definitely movement toward those kinds of applications, especially for DevOps.

Gardner: When I heard you mention the complexity, it certainly sounds like automating and moving away from manual processes, standardizing processes across your development test-to-deploy continuum, would be really important steps to take.

Bittner: Absolutely; more than important. It’s absolutely essential. Without automation and data-driven visibility into what’s happening in the applications, there’s almost no way to deliver these applications at speed. We find that many organizations are releasing quarterly now, not necessarily the same app every quarter, but on a quarterly release cycle. At quarterly rates of speed, by the seat of the pants and brute force, you can manage to get that release out. It’s pretty painful, but you can survive.

If you turn up the clock rate faster than that and try to get down to monthly, those manual processes completely fall apart. We have organizations today that want to be delivering at weekly and daily intervals, especially in SaaS-based environments or cloud-based environments. Those kinds of delivery speeds are inconceivable with any kind of manual processes. As organizations move away from quarterly releases to faster releases, they have to adopt these techniques.

Gardner: Listening to you Kurt, it sounds like DevOps isn’t another buzzword or another flashy marketing term. It really sounds inevitable, if you’re going to succeed in software.

Bittner: It is inevitable, and over the next five years, what we’ll see is that the word itself will probably fade, because it will simply become the way that organizations work.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Agile on fire: IT enters the new era of ‘continuous’ everything

The next BriefingsDirect DevOps thought leadership discussion explores the concept of continuous processes around the development and deployment of applications and systems. Put the word continuous in front of many things and we help define DevOps: continuous delivery, continuous testing, continuous assessment, and there is more.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To help better understand the continuous nature of DevOps, we’re joined by two guests, James Governor, Founder and Principal Analyst at RedMonk, and Ashish Kuthiala, Senior Director of Marketing and Strategy for Hewlett Packard Enterprise (HPE) DevOps. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: We hear a lot about feedback loops in DevOps between production and development, test and production. Why is the word “continuous” now cropping up so much? What do we need to do differently in IT in order to compress those feedback loops and make them impactful?

Kuthiala: Gone are the days where you would see the next version 2.0 coming out in six months and 2.1 coming out three months after that.


If you use some of the modern applications today, you never see that Facebook 2.0 is coming out tomorrow or that Google 3.1 is being released. They are continuously making improvements on the back end for their users — without the users even realizing that they’re getting improvements, a better user experience, and so on.

In order to achieve that, you have to continuously be building those new innovations into your product. And, of course, as soon as you change something you need to test it and roll it all the way into production.

In fact, we joke a lot about how if everything is continuous, why don’t we drop the word continuous and just call it planning, testing, or development, like we do today, and just say that you continuously do this. But we tend to keep using this word “continuous” before everything.

I think a lot of it is to drive the point home across IT teams and organizations that you can no longer do this in chunks of three, six, or nine months; you always have to keep doing it.

Governor: How do you do the continuous assessment of your continuous marketing?

Continuous assessment

Kuthiala: We joke about the continuous marketing of everything. The continuous assessment term, despite my objections to the word continuous all the time, is a term that we’ve been talking about at HPE.

The idea here is that most software development and production teams, when they start to collaborate well, take the user experience, the bugs, and what’s not working on the production end, at the users’ hands, where the software is being used, and feed all of that back to the development teams.

When companies actually get to that stage, it’s a significant improvement. It’s not the support teams telling you that five users were screaming at us today about this feature or that feature. It’s the idea that you start to have this feedback directly from the users’ hands.

We should stretch this assessment piece a little further. Why assess the application or the software only when it’s in the hands of the end users? The developer, the enterprise architects, and the planners design an application, and they know best how it should function.

Whether it’s monitoring tools or it’s the health and availability of the application, start to shift left, as we call it. I’d like James to comment more about this, because he knows a lot about the development space. The developer knows his code best; let him experience what the user is starting to experience.

Governor: My favorite example of this is that, as an analyst, you’re always looking for nice metaphors and ways to talk about the world. One notion of quality I was very taken with came from reading about the history of shipbuilding and the roles and responsibilities involved in building a ship.


One of the things they found was that if you have a team doing the riveting separate from doing the quality assurance (QA) on the riveting, the results are not as good. Someone will happily just go along — rivet, rivet, rivet, rivet — and not really care if they’re doing a great job, because somebody else is going to have to worry about the quality.

As they moved forward with this, they realized that you needed to have the person doing the riveting also doing the QA. That’s a powerful notion of how things have changed.

Certainly the notion of shifting left and doing more testing earlier in the process, whether that be in terms of integration, load testing, whatever, all the testing needs to happen up front and it needs to be something that the developers are doing.

The new suite of tools we have makes it easier for developers to have better experiences around that, and we should take advantage.

Lean manufacturing

One of the other things about continuous is that we’re making reference to manufacturing modes and models. Lean manufacturing is something that led to fewer defects, apart from one catastrophic example to the contrary. And we’re looking at that and asking how we can learn from that.

So lean manufacturing ties into lean startups, which ties into lean and continuous assessment.

What’s interesting is that now we’re beginning to see some interplay between the two and paying that forward. If you look at GM, they just announced a team explicitly looking at Twitter to find user complaints very, very early in the process, rather than waiting until you had 10,000 people that were affected before you did the recall.

Last year was the worst year ever for recalls in American car manufacturing, which is interesting, because if we have continuous improvement and everything, why did that happen? They’re actually using social tooling to try to identify early, so that they can recall 100 cars or 1,000 cars, rather than 50,000.

It’s that monitoring really early in the process, testing early in the process, and most importantly, garnering user feedback early in the process. If GM can improve and we can improve, yes.
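
Here is a minimal sketch, in Python, of the kind of keyword-based early-warning scan Governor describes GM applying to Twitter. The sample posts, product names, keyword list, and threshold are illustrative assumptions; a real system would mine a live feed and use far better text classification.

```python
# Minimal sketch of the early-warning idea: scan a social feed for
# complaint patterns so a problem is caught at 100 cars, not 50,000.
# The sample posts, keywords, and threshold are illustrative.

COMPLAINT_KEYWORDS = {"stalls", "recall", "brakes", "won't start", "fire"}

def flag_complaints(posts, threshold=3):
    """Group keyword-matching posts by product; flag products over threshold."""
    counts = {}
    for post in posts:
        text = post["text"].lower()
        if any(kw in text for kw in COMPLAINT_KEYWORDS):
            counts[post["product"]] = counts.get(post["product"], 0) + 1
    return {product: n for product, n in counts.items() if n >= threshold}

posts = [
    {"product": "Model X1", "text": "My X1 stalls at every stoplight"},
    {"product": "Model X1", "text": "Third time this week the X1 stalls"},
    {"product": "Model X1", "text": "Dealer says nothing is wrong, but it stalls"},
    {"product": "Model Z9", "text": "Love the Z9, runs great"},
]
print(flag_complaints(posts))  # -> {'Model X1': 3}
```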

Gardner: I remember in the late ’80s, when the Japanese car makers were really kicking the pants out of Detroit, that we started to hear a lot about simultaneous engineering. You wouldn’t just design something, but you designed for its manufacturability at the same time. So it’s a similar concept.

But going back to the software process, Ashish, we see a level of functionality in software that needs to be rigorous with security and performance, but we’re also seeing, more and more, the need for user-experience features and functions that we can’t even guess at, that we need to put into the field and see what happens.

How does an enterprise get to that point, where they can so rapidly do software that they’re willing to take a chance and put something out to the users, perhaps a mobile app, and learn from its actual behavior? We can get the data, but we have to change our processes before we can utilize it.

Kuthiala: Absolutely. Let me be a little provocative here: it’s a well-known fact that the era of the three-year, forward-looking roadmap is gone. It’s good to have a vision of where you’re headed, but committing to which feature and function you’ll release in which month, so that users will find it useful? That’s just gone, replaced by the concept of the minimum viable product (MVP) that startups launch with, building the product and funding themselves as they gain success.

It’s an approach even that bigger enterprises need to take. You don’t know what the end users’ tastes are.

I change my taste in the applications I use, the user experience I get, and the features and functionality. I’m always looking at different products, and I change my mind quite often. But if I like something and it always delivers the right user experience for me, I stick with it.

Capture the experience

The way for an enterprise to figure out what to build next is to capture this experience, whether through social media channels or by instrumenting your code so that you can see what user behavior actually is.

The days of business planners and developers sitting in cubicles, thinking up the coolest thing to invent and roll out, are gone. You definitely need that for innovation, but you need to test it fairly quickly.

Also gone are the days of rolling something back when it doesn’t work. If you can deliver software quickly into the hands of end users, you just roll forward. You don’t roll back anymore.

It could be a feature that’s buggy. Go and fix it, because you can fix it in two days or two hours, versus the three- to six-month cycle. If you release a feature and see that most users — 80 percent of them — don’t even bother with it, turn it off and introduce the next feature you were thinking about.

This assessment that you’re always doing across development, testing, and production starts to benefit you. When you’re standing up for that daily sprint and wondering which three features you’re going to work on as a team, the candidates might be the two things your CEO told you that you absolutely have to do, because “I think it’s the greatest thing since sliced bread,” a feature a developer thinks you should build, or a use case coming from the business analysts or enterprise architects.

Now you have data. You have data across all these teams. You can start to make smarter decisions and you can choose what to build and not build. To me, that’s the value of continuous assessment. You can invest your $100 for that day in the two things you want to do. None of us has unlimited budgets.

Gardner: For organizations that grok this, that say, “I want continuous delivery. I want continuous assessment,” what do we need to put in place to actually execute on it to make it happen?

Governor: We’ve spoken a lot about cultural change, and that’s going to be important. One of the things, frankly, that is an underpinning, if we’re talking about data and being data-driven, is just that we have wonderful new platforms that enable us to store a lot more data than we could before at a reasonable cost.

There were many business problems that were stymied by the fact that you would have had to spend the GDP of a country to do the kind of processing required to truly understand how something was working. If we’re going to model the experiences and collect all this data, thinking about the infrastructure for storing and analyzing that data is going to be super important. There’s no point in talking about being data-driven if you don’t have an architecture for delivering on it.

Gardner: Ashish, how about loosely integrated capabilities across these domains, tests, build, requirements, configuration management, and deployment? It seems that HPE is really at the center of a number of these technologies. Is there a new layer or level of integration that can help accelerate this continuous assessment capability?

Rich portfolio

Kuthiala: You’re right. We have a very rich portfolio across the entire software development cycle. You’ve heard about our Big Data Platform. What can it really do, if you think about it? James just referred to this. It’s cheaper and easier to store data with the new technologies, whether it’s structured, unstructured, video, social, etc., and you can start to make sense out of it when you put it all together.

There is a lot of rich data in the planning and testing processes and across all the different lifecycles. A simple example is a technology that we’ve worked on internally. When you deliver software faster and change one line of code that you want to go out, you really can’t afford to run the 20,000 tests you think you need to run just because you’re not sure what the change will affect.

We’ve actually had data scientists working internally in our labs, studying the patterns, looking at the data, and testing concepts such as intelligent testing. If I change this one line of code, even before I check it in, what parts of the code is it really affecting, what functionality? If you are doing this intelligently, does it affect all the regions of the world, the demographics? What feature function does it affect?

It’s helping you narrow down whether it will break the code and whether it will actually affect certain features and functions of the software application that’s out there. It’s narrowing things down and helping you say, “Okay, I only need to run these 50 tests; I don’t need to go into these 10,000 tests, because I need to run through this test cycle fast and have the confidence that it will not break something else.”
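
To make the intelligent-testing idea concrete, here is a minimal sketch of mapping changed code to the tests that exercise it. The coverage map is hand-built for illustration; HPE’s internal approach, as described above, mines this kind of mapping from coverage data and change history with data-science techniques.

```python
# Minimal sketch of intelligent test selection: map changed code to the
# tests that exercise it, so a one-line change runs 50 targeted tests
# instead of 10,000. The file paths, test names, and coverage map are
# illustrative assumptions.

COVERAGE_MAP = {
    "payments/tax_rules.py": {"test_tax_rules", "test_checkout_totals"},
    "ui/profile_form.py": {"test_profile_form"},
    "core/session.py": {"test_login", "test_session_expiry", "test_checkout_totals"},
}

def select_tests(changed_files):
    """Return only the tests whose covered code actually changed."""
    selected = set()
    for path in changed_files:
        selected |= COVERAGE_MAP.get(path, set())
    return sorted(selected)

print(select_tests(["payments/tax_rules.py"]))
# -> ['test_checkout_totals', 'test_tax_rules']
```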

So it’s a cultural thing, like James said, but the technologies are also helping make it easier.

Gardner: It’s interesting. We’re borrowing concepts from other domains in the past as well — just-in-time testing or fit-for-purpose testing, or lean testing?

Kuthiala: We were talking about Lean Functional Testing (LeanFT) at HP Discover. I won’t talk about it here in terms of product, but the idea is exactly that. The developer, like James said, knows his code best. He can test it well up front rather than throwing it over the wall and letting another team take a shot at it. It’s his responsibility. If he writes a line of code, he should be responsible for its quality.

Gardner: And it also seems that the integration across this continuum can really be the currency of analysis. When data and information are made available, that’s what binds these processes together, and we’re starting to elevate and abstract that analysis, making it a continuum rather than a waterfall or hand-off process.

Before we close out, any other words that we should put in front of continuous as we get closer to DevOps — continuous security perhaps?

Security is important

Kuthiala: Security is a very important topic, and James and I have talked about it a lot with some other thought leaders. Security is just like testing. Anything that you catch early in the process is a lot easier and cheaper to fix than if you catch it in the hands of the end users, where it’s now deployed to tens of thousands of people.

It’s a cultural shift. The technology has always been there. There’s a lot of technology, within and outside of HP, that lets you incorporate security testing and its discipline right into the development and planning process rather than leaving it to the end.

As for another continuous word, I can come up with the continuous Dana Gardner podcast.

Governor: There you go.

Gardner: Continuous discussions about DevOps.

Governor: One of the things that RedMonk is very interested in, and it’s really our view in the world, is that, increasingly, developers are making the choices, and then we’re going to find ways to support the choices they are making.

It was very interesting to me that the term continuous integration began as a developer term, and then the next wave of that began to be called continuous deployment. That’s quite scary for a lot of organizations. They say, “These developers are talking about continuous deployment. How is that going to work?”

The circle was squared when somebody came in and said that what we’re talking to customers about is continuous improvement, which of course is again a term that we saw in manufacturing and elsewhere.

But the developer aesthetic is tremendously influential here, and this change has been driven by them. My favorite “continuous” is a great phrase, continuous partial attention, which is the world we all live in now.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Big data enables top user experiences and extreme personalization for Intuit TurboTax

The next BriefingsDirect big-data innovation case study highlights how Intuit uses deep-data analytics to gain a 360-degree view of its TurboTax application’s users’ behavior and preferences. Such visibility allows for rapid applications improvements and enables the TurboTax user experience to be tailored to a highly detailed degree.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to share how analytics paves the way to better understanding of end-user needs and wants, we’re joined by Joel Minton, Director of Data Science and Engineering for TurboTax at Intuit in San Diego. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Let’s start at a high-level, Joel, and understand what’s driving the need for greater analytics, greater understanding of your end-users. What is the big deal about big-data capabilities for your TurboTax applications?

Minton: There were several things, Dana. We were looking to see a full end-to-end view of our customers. We wanted to see what our customers were doing across our application and across all the various touch points that they have with us to make sure that we could fully understand where they were and how we can make their lives better.


We also wanted to be able to take that data and then give more personalized experiences, so we could understand where they were, how they were leveraging our offerings, but then also give them a much more personalized application that would allow them to get through the application even faster than they already could with TurboTax.

And last but not least, there was the explosion of available technologies to ingest, store, and gain insights that was not even possible two or three years ago. All of those things have made leaps and bounds over the last several years. We’ve been able to put all of these technologies together to garner those business benefits that I spoke about earlier.

Gardner: So many of our listeners might be aware of TurboTax, but it’s a very complex tax return preparation application that has a great deal of variability across regions, states, localities. That must be quite a daunting task to be able to make it granular and address all the variables in such a complex application.

Minton: Our goal is to remove all of that complexity for our users and for us to do all of that hard work behind the scenes. Data is absolutely central to our understanding that full end-to-end process, and leveraging our great knowledge of the tax code and other financial situations to make all of those hard things easier for our customers, and to do all of those things for our customers behind the scenes, so our customers do not have to worry about it.

Gardner: In the process of tax preparation, how do you actually get context within the process?

Always looking

Minton: We’re always looking at all of those customer touch points, as I mentioned earlier. Those things all feed into where our customer is and what their state of mind might be as they are going through the application.

To give you an example, as a customer goes though our application, they may ask us a question about a certain tax situation.

When they ask that question, we know a lot more later on down the line about whether that specific issue is causing them grief. If we can bring all of those data sets together, so that we know they asked the question three screens back and are now spending more time on a later screen, we can try to make that experience better, especially in the context of the specific questions they have.
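
Here is a minimal sketch of that cross-screen correlation: join a user’s earlier help question to unusually long dwell time on a later screen. The event shapes, field names, and 30-second threshold are illustrative assumptions, not Intuit’s actual schema.

```python
# Minimal sketch of cross-screen correlation: flag users who asked a help
# question and then lingered on a later screen. Event shapes and the
# dwell threshold are assumptions for illustration.

def find_struggling_users(events, dwell_threshold=30.0):
    """Flag users who asked a question and later lingered on a screen."""
    questions = {}  # user_id -> topic of the question they asked
    flagged = []
    for e in sorted(events, key=lambda e: e["ts"]):
        if e["type"] == "help_question":
            questions[e["user"]] = e["topic"]
        elif e["type"] == "screen_view" and e["dwell_secs"] > dwell_threshold:
            if e["user"] in questions:
                flagged.append((e["user"], questions[e["user"]], e["screen"]))
    return flagged

events = [
    {"ts": 1, "type": "help_question", "user": "u1", "topic": "1099 income"},
    {"ts": 2, "type": "screen_view", "user": "u1", "screen": "income-3", "dwell_secs": 95.0},
]
print(find_struggling_users(events))  # -> [('u1', '1099 income', 'income-3')]
```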

As I said earlier, it’s all about bringing all the data together and making sure that we leverage that when we’re making the application as easy as we can.

Gardner: And that’s what you mean by a 360-degree view of the user: where they are in time, where they are in a process, where they are in their particular individual tax requirements?

Minton: And all the touch points they have, not only with things on our website, but also with things across the Internet, with our customer-care employees, and with all the other touch points that we use to try to solve our customers’ needs.

Gardner: This might be a difficult question, but how much data are we talking about? Obviously you’re in sort of a peak-use scenario where many people are in a tax-preparation mode in the weeks and months leading up to April 15 in the United States. How much data and how rapidly is that coming into you?

Minton: We have a tremendous amount of data. I’m not going to go into the specifics of the complete size of our database because it is proprietary, but during our peak times of the year during tax season, we have billions and billions of transactions.

We have all of those touch points being logged in real-time, and we basically have all of that data flowing through to our applications that we then use to get insights and to be able to help our customers even more than we could before. So we’re talking about billions of events over a small number of days.

Gardner: So clearly for those of us that define big data by velocity, by volume, and by variety, you certainly meet the criteria and then some.

Unique challenges

Minton: The challenges are unique for TurboTax because we’re such a peaky business. We have two peaks that drive a majority of our experiences: the first peak when people get their W-2s and they’re looking to get their refunds, and then tax day on April 15th. At both of those times, we’re ingesting a tremendous amount of data and trying to get insights as quickly as we can so we can help our customers as quickly as we can.

Gardner: Let’s go back to this concept of user experience improvement process. It’s not just something for tax preparation applications but really in retail, healthcare, and many other aspects where the user expectations are getting higher and higher. People expect more. They expect anticipation of their needs and then delivery of that.

This is probably only going to increase over time, Joel. Tell me a little bit about how you’re solving this issue of getting to know your user and then being responsive to the entire user experience and perception.

Minton: Every customer is unique, Dana. We have millions of customers who have slightly different needs based on their unique situations. What we do is try to give them a unique experience that closely matches their background and preferences, and we try to use all of that information that we have to create a streamlined interaction where they can feel like the experience itself is tailored for them.

It’s very easy to say, “We can’t personalize the product because there are so many touch points and there are so many different variables.” But we can, in fact, make the product much more simplified and easy to use for each one of those customers. Data is a huge part of that.

Specifically, our customers, at times, may be having problems in the product, finding the right place to enter a certain tax situation. They get stuck and don’t know what to enter. When they get in those situations, they will frequently ask us for help and they will ask how they do a certain task. We can then build code and algorithms to handle all those situations proactively and be able to solve that for our customers in the future as well.

So the most important thing is taking all of that data and then providing super-personalized experience based on the experience we see for that user and for other users like them.

Gardner: In a sense, you’re a poster child for many of these elements, but on a scale well above the norm: the peaky nature of tax preparation, the desire to be highly personalized down to the granular level for each user, and the vast amount and velocity of the data.

What were some of your chief requirements at your architecture level to be able to accommodate some of this? Tell us a little bit, Joel, about the journey you’ve been on to improve that architecture over the past couple of years?

Lot of detail

Minton: There’s a lot of detail behind the scenes here, and I’ll start by saying it’s not an easy journey. It’s a journey that you have to be on for a long time and you really have to understand where you want to place your investment to make sure that you can do this well.

One area where we’ve invested heavily is our big-data infrastructure, being able to ingest all of the data in order to track it all. We’ve also invested a lot in being able to get insights out of the data, using Hewlett Packard Enterprise (HPE) Vertica as our big-data platform and being able to query that data as close to real time as possible to actually get those insights. I see those as the meat and potatoes that you have to have in order to be successful in this area.

On top of that, you then need to have an infrastructure that allows you to build personalization on the fly. You need to be able to make decisions in real time for the customers and you need to be able to do that in a very streamlined way where you can continuously improve.

We use a lot of tactics using machine learning and other predictive models to build that personalization on-the-fly as people are going through the application. That is some of our secret sauce and I will not go into in more detail, but that’s what we’re doing at a high level.

Gardner: It might be off the track of our discussion a bit, but being able to glean information through analytics and then create a feedback loop into development can be very challenging for a lot of organizations. Is DevOps a cultural parallel path along with your data-science architecture?

I don’t want to go down the development path too much, but it sounds like you’re already there in terms of understanding the importance of applying big-data analytics to the compression of the cycle between development and production.

Minton: There are two different aspects there, Dana. Number one is making sure that we understand the traffic patterns of our customer and making sure that, from an operations perspective, we have the understanding of how our users are traversing our application to make sure that we are able to serve them and that their performance is just amazing every single time they come to our website. That’s number one.

Number two, and I believe more important, is the need to actually put the data in the hands of all of our employees across the board. We need to be able to tell our employees the areas where users are getting stuck in our application. This is high-level information. This isn’t anybody’s financial information at all, but just a high-level, quick stream of data saying that these people went through this application and got stuck on this specific area of the product.

We want to be able to put that type of information in our developer’s hands so as the developer is actually building a part of the product, she could say that I am seeing that these types of users get stuck at this part of the product. How can I actually improve the experience as I am developing it to take all of that data into account?

We have an analyst team that does great work around doing the analytics, but in addition to that, we want to be able to give that data to the product managers and to the developers as well, so they can improve the application as they are building it. To me, a 360-degree view of the customer is number one. Number two is getting that data out to as broad of an audience as possible to make sure that they can act on it so they can help our customers.

Major areas

Gardner: Joel, I speak with HPE Vertica users quite often, and there are two major areas where I hear them praise the product. The first has to do with the ability to assimilate data, dealing with the variety issue by bringing data into an environment where it can be used for analytics. The second is performance on queries, amid the complexity of many parameters, at speed and scale.

Your applications for TurboTax run across a variety of platforms. There is a shrink-wrapped product from the legacy perspective. Then you’re more along mobile lines, as well as web and SaaS. So is Vertica something that you’re using to help bring the data from a variety of different application environments together, and/or across different networks or environments?

Minton: I don’t see different devices that someone might use as a different solution in the customer journey. To me, every device that somebody uses is a touch point into Intuit and into TurboTax. We need to make sure that all of those touch points have the same level of understanding, the same level of tracking, and the same ability to help our customers.

Whether somebody is using TurboTax on their computer or they’re using TurboTax on their mobile device, we need to be able to track all of those things as first-class citizens in the ecosystem. We have a fully-functional mobile application that’s just amazing on the phone, if you haven’t used it. It’s just a great experience for our customers.

From all those devices, we bring all of that data back to our big-data platform. All of that data can then be queried, because you want to answer many questions, such as: When do users flow across different devices, and what experience are they getting on each device? When are they able to just snap a picture of their W-2, import it really quickly on their phone, and then jump right back to their computer and finish their taxes with great ease?

We need to be able to have that level of tracking across all of those devices. The key there, from a technology perspective, is creating APIs that are generic across all of those devices, and then allowing those APIs to feed all of that data back to our massive infrastructure in the back-end so we can get those insights through reporting and other methods as well.
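
To illustrate the generic, device-agnostic API idea, here is a minimal sketch of a shared event envelope that every client platform would post to the same ingestion endpoint. The field names and example values are assumptions for illustration, not Intuit’s actual API.

```python
# Minimal sketch of one generic event API for every device: web, desktop,
# and mobile all send the same event shape, so the back end sees each
# touch point as a first-class citizen. Field names are assumptions.

import json
import time

REQUIRED_FIELDS = {"user_id", "device", "event", "app_version"}

def make_event(user_id, device, event, app_version, **details):
    """Build the shared event envelope used by every client platform."""
    return {
        "user_id": user_id,
        "device": device,          # "web", "ios", "android", "desktop"
        "event": event,            # e.g. "w2_photo_import"
        "app_version": app_version,
        "ts": time.time(),
        "details": details,
    }

def validate(event):
    """The ingestion endpoint accepts any device that sends a valid envelope."""
    missing = REQUIRED_FIELDS - event.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return event

evt = make_event("u42", "ios", "w2_photo_import", "5.1", duration_secs=4.2)
print(json.dumps(validate(evt)))
```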

Gardner: We’ve talked quite a bit about what’s working for you: a database column store, the ability to get a volume variety and velocity managed in your massive data environment. But what didn’t work? Where were you before and what needed to change in order for you to accommodate your ongoing requirements in your architecture?

Minton: Previously we were using a different data platform, and it was good for getting insights for a small number of users. We had an analyst team of 8 to 10 people, and they were able to do reports and get insights as a small group.

But when you talk about moving to what we just discussed, a huge view of the customer end-to-end, hundreds of users accessing the data, you need to be able to have a system that can handle that concurrency and can handle the performance that’s going to be required by that many more people doing queries against the system.

Concurrency problems

So we moved away from our previous vendor that had some concurrency problems and we moved to HPE Vertica, because it does handle concurrency much better, handles workload management much better, and it allows us to pull all this data.

The other thing that we’ve done is that we have expanded our use of Tableau, which is a great platform for pulling data out of Vertica and then being able to use those extracts in multiple front-end reports that can serve our business needs as well.

So in terms of using technology to be able to get data into the hands of hundreds of users, we use a multi-pronged approach that allows us to disseminate that information to all of these employees as quickly as possible and to do it at scale, which we were not able to do before.

Gardner: Of course, getting all your performance requirements met is super important, but also in any business environment, we need to be concerned about costs.

Is there anything about the way you were able to deploy Vertica, perhaps using commodity hardware or a different approach to storage, that allowed you to meet your performance goals and capability requirements at a price point that may have been even better than your previous approach?

Minton: From a price perspective, we’ve been able to really make the numbers work and get great insights for the level of investment that we’ve made.

How do we handle just the massive cost of the data? That’s a huge challenge that every company is going to have in this space, because there’s always going to be more data that you want to track than you have hardware or software licenses to support.

So we’ve been very aggressive in looking at each and every piece of data that we want to ingest. We want to make sure that we ingest it at the right granularity.

Vertica is a high-performance system, but you don’t need absolutely every detail you’ve ever had from a logging mechanism for every customer in that platform. We do keep a lot of detailed information in Vertica, but we’re also really smart about what we move in there from a storage perspective and what we keep outside in our Hadoop cluster.

Hadoop cluster

We have a Hadoop cluster that stores all of our data and we consider that our data lake that basically takes all of our customer interactions top to bottom at the granular detail level.

We then take data out of there and move things over to Vertica, in both aggregate and detail form, where it makes sense. We’ve been able to spend the right amount of money on each of our solutions to get the insights we need without overwhelming either the licensing cost or the hardware cost of our Vertica cluster.
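
Here is a minimal sketch of that lake-to-warehouse split: keep raw clickstream detail in the data lake and load only the aggregates that analysts query into the warehouse. The event shape and roll-up are illustrative assumptions.

```python
# Minimal sketch of the lake-to-warehouse split: raw detail stays in the
# data lake; only query-ready aggregates are loaded into the warehouse.
# The event shape and roll-up granularity are assumptions.

from collections import Counter

raw_events = [  # stands in for detail rows kept in the Hadoop data lake
    {"screen": "income-1", "action": "view"},
    {"screen": "income-1", "action": "help_click"},
    {"screen": "income-2", "action": "view"},
    {"screen": "income-1", "action": "view"},
]

def aggregate_for_warehouse(events):
    """Roll raw events up to (screen, action) counts for warehouse loading."""
    counts = Counter((e["screen"], e["action"]) for e in events)
    return [
        {"screen": screen, "action": action, "event_count": n}
        for (screen, action), n in sorted(counts.items())
    ]

# These summary rows, not the raw detail, would be bulk-loaded into Vertica.
for row in aggregate_for_warehouse(raw_events):
    print(row)
```

Keeping the full-granularity detail in the lake and only the right-granularity aggregates in the warehouse is what lets the licensing and hardware spend match the business benefit, as described above.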

The combination of those things has really allowed us to be successful to match the business benefit with the investment level for both Hadoop and with Vertica.

Gardner: Measuring success quantitatively at the platform level, as you have been describing, is important, but there’s also a qualitative benefit that needs to be examined, and even measured, when you’re talking about things like process improvements, eliminating bottlenecks in user experience, or eliminating anomalies in certain types of individual personalized activities.

Do you have any insight, either anecdotal or examples, where being able to apply this data analytics architecture and capability has delivered some positive benefits, some value to your business?

Minton: We basically use data to try to measure ourselves as much as possible. So we do have qualitative, but we also have quantitative.

Just to give you a few examples, our total aggregate number of insights that we’ve been able to garner from the new system versus the old system is a 271 percent increase. We’re able to run a lot more queries and get a lot more insights out of the platform now than we ever could on the old system. We have also had a 41 percent decrease in query time. So employees who were previously pulling data and waiting twice as long had a really frustrating experience.

Now, we’re actually performing much better and we’re able to delight our internal customers to make sure that they’re getting the answers they need as quickly as possible.

We’ve also increased the size of our data mart in general by 400 percent. We’ve massively grown the platform while improving performance. So all of those quantitative numbers tell a great story about the success we’ve had.

From a qualitative perspective, I’ve talked to a lot of our analysts and employees, and they’ve all said that the solution we have now is head and shoulders above what we had previously. Mostly that’s because during those peak times, when we’re running a lot of traffic through our systems, all the users tend to hit the platform at the same time; on the old system, nobody got any work done because of the concurrency issues.

Better tracking

Because we have much better tracking of that now with Vertica and our new platform, we’re able to handle that concurrency, get the highest-priority workloads out quickly, and then follow along with the lower-priority workloads, running them all in parallel.

The key is being able to run, especially at those peak loads, and be able to get a lot more insights than we were ever able to get last year.

Gardner: And that peak-load issue is so prominent for you. Another quick aside: are you using cloud or hybrid cloud to support any of these workloads, given their peak nature, rather than keeping all that infrastructure running 24×7, 365 days a year? Is that something you’ve been doing, or something you’re considering?

Minton: Sure. We do use cloud at points in a lot of our data-warehousing solutions. A lot of our large-scale serving activities, as well as our large-scale ingestion, do leverage cloud technologies.

We don’t have it for our core data warehouse. We want to make sure that we have all of that data in-house in our own data centers, but we do ingest a lot of the data as pass-throughs in the cloud, just to give us more of that peak scalability that we wouldn’t have otherwise.

Gardner: We’re coming up toward the end of our discussion time. Let’s look at what comes next, Joel, in terms of where you can take this. You mentioned some really impressive qualitative and quantitative returns and improvements. We can always expect more data, more need for feedback loops, and a higher level of user expectation and experience. Where would you like to go next? How do you focus even more on this issue of personalization?

Minton: There are a few things that we’re doing. We built the infrastructure that we need to really be able to knock it out of the park over the next couple of years. Some of the things that are just the next level of innovation for us are going to be, number one, increasing our use of personalization and making it much easier for our customers to get what they need when they need it.

So doubling down on that and increasing the number of use cases where our data scientists are actually building models that serve our customers throughout the entire experience is going to be one huge area of focus.

Another big area of focus is getting the data even more real time. As I discussed earlier, Dana, we’re a very peaky business, and the faster that we can get data into our systems, the faster we’re going to be able to report on that data and get insights that are going to help our customers.

Our goal is to have even more real-time streams of that data and be able to get that data in so we can get insights from it and act on it as quickly as possible.

The other side is just continuing to invest in our multi-platform approach to allow the customer to do their taxes and manage their finances on whatever platform they’re on, whether that’s mobile, web, TVs, or whatever device they might use. We need to make sure that we can serve those data needs and give users great personalized experiences no matter what platform they’re on. Those are some of the big areas where we’re going to be focused over the coming years.


Gardner: Now you’ve had some 20/20 hindsight in moving from one data environment to another, which I suppose is the equivalent of keeping the airplane flying while changing the wings. Do you have any words of wisdom for those who might be having concurrency issues, or scale, velocity, and variety issues with their big data, when it comes to moving from one architecture platform to another? Any recommendations that might help them in ways you didn’t necessarily get the benefit of?

Minton: To start, focus on the real business needs and competitive advantage that your business is trying to build, and invest in data to enable those things. It’s very easy to say you’re going to replace your entire data platform and build everything soup to nuts in one year, but I have seen those types of projects tried, and fail, over and over again. It’s better to put the platform in place at a high level and look for a few key business use cases where you can actually leverage that platform to gain real business benefit.

When you’re able to do that two, three, or four times on a smaller scale, then it makes it a lot easier to make that bigger investment to revamp the whole platform top to bottom. My number one suggestion is start small and focus on the business capabilities.

Number two, be really smart about where your biggest pain points are. Don’t try to solve world hunger when it comes to data. If you’re having a concurrency issue, look at the platform you’re using. Is there a way, within your current platform, to solve it without going big?

Frequently, what I find in data is that it’s not always the platform’s fault that things are not performing. It could be the way that things are implemented and so it could be a software problem as opposed to a hardware or a platform problem.

So again, I would have folks focus on the real problem and the different methods that you could use to actually solve it. It’s about making sure that you’re solving the right problem with the right technology, and not just assuming that your platform is the problem. That’s on the hardware front.

As I mentioned earlier, looking at the business use cases and making sure that you’re solving those first is the other big area of focus I would have.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

Spirent Leverages Big Data to Keep User Experience Quality a Winning Factor for Telcos

Transcript of a discussion on the use of big data to provide improved user experiences for telecommunications operators’ customers.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Dana Gardner: Hello, and welcome to the next edition of the HP Discover Podcast Series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on IT innovation and how it’s making an impact on people’s lives.


Our next big-data case study discussion explores the ways that Spirent Communications advances the use of big data to provide improved user experiences for telecommunications operators.

We’ll learn how advanced analytics that draws on multiple data sources provides Spirent’s telco customers rapid insights into their networks and operations. That insight, combined with analysis of user actions and behaviors, provides a “total picture” approach to telco services that both improves the actual services proactively and boosts the ability to better support help desks.

Spirent’s insights thereby help operators in highly competitive markets reduce the spend on support, reduce user churn, and better adhere to service-level agreements (SLAs), while providing significant productivity gains.

HP Big Data Analytics Engines
Meet Complex Enterprise-scale OEM Requirements
Get More Information

To hear how Spirent uses big data to make major positive impacts on telco operations, we’re joined by Tom Russo, Director of Product Management and Marketing at Spirent Communications in Matawan, New Jersey. Welcome, Tom.

Tom Russo: Hi, Dana. Thanks for having me.

Gardner: User experience quality enhancement is essential, especially when we’re talking about consumers who can easily change carriers. Controlling that experience is more challenging for an organization like a telco. They have so many variables across networks. So at a high level, tell me how Spirent masters complexity, using big data to help telcos maintain the best user experience.

Russo: Believe it or not, historically, operators haven’t actually managed their customers as much as they’ve managed their networks. Even within the networks, they’ve done this in a fairly siloed fashion.


There would be radio performance teams that would look at whether the different cell towers were operating properly, giving good coverage and signal strength to the subscribers. As you might imagine, they wouldn’t talk to the core network people, who would make sure that people can get IP addresses and properly transmit packets back and forth. They had their own tools and systems, which were separate, yet again, from the services people, who would look at the different applications. You can see where it’s going.

There were also customer-care people, who had their own tools and systems that didn’t leverage any of that network data. It was very inefficient, and not wrapped around the customer or the customer experience.

New demands

They sort of got by with those systems when the networks weren’t running too hot and competition wasn’t too fierce. But these days, with their peers offering better quality of service, over-the-top threats, and increasing complexity on the network in terms of devices, applications, and services, it really doesn’t work any more.

It takes too long to troubleshoot real customer problems. They spend too much time chasing down blind alleys, solving problems that don’t really affect the customer experience. They need to take a more customer-centric approach. As you’d imagine, that’s where we come in. We integrate data across those different silos in the context of subscribers.

We collect data across those different silos — the radio performance, the core network performance, the provisioning, the billing, etc. — and fuse it together in the context of subscribers. Then, we help the operator proactively identify where the customer experience is suffering, what we call hotspots, so that they can act before customers call and complain, which is expensive from a customer-care perspective, and before they churn, which is very expensive in terms of customer replacement. It’s a more customer-centric approach to managing the network.

Automate Data Collection and Analysis
In Support of Business Objectives
With Spirent InTouch Analytics

Gardner: So your customer experience management does what your customers had a difficult time doing internally. But one aspect of this is pulling together disparate data from different sources, so that you can get the proactive inference and insights. What did you do better around data acquisition?

Russo: The first key step is being able to integrate with a variety of these different systems. Each of the groups had their different tools, different data formats, different vendors.

Our solution has a very strong extract, transform, load (ETL), or data mediation, capability that pulls all these different data sources together and maps them into a uniform model of the telecom network and the subscriber experience.

This allows us to see the connections between the subscriber experience, the underlying network performance, and even things like outcomes — whether people churn, whether they provide negative survey responses, whether they’ve called and complained to customer care, etc.

Then, with that holistic model, we can build high-level metrics like quality-of-experience scores, predictive models, etc., to look across those different silos and help the operators see where the hotspots of customer dissatisfaction are, where people are eventually going to churn, or where other costs are going to be incurred.
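
As a rough illustration of that fuse-then-score pattern, here is a hypothetical sketch in Python. The feeds, field names, and the toy quality-of-experience formula are all invented; Spirent’s actual data mediation and scoring are far richer.

    from collections import defaultdict

    # Minimal sketch of fusing per-silo feeds around the subscriber.
    # All field names and the scoring rule are hypothetical.
    radio_feed = [{"subscriber": "s1", "signal_dbm": -101}]
    core_feed = [{"subscriber": "s1", "packet_loss_pct": 2.5}]
    care_feed = [{"subscriber": "s1", "open_tickets": 1}]

    subscribers = defaultdict(dict)

    # Extract/transform: map each silo's records onto one uniform model.
    for rec in radio_feed:
        subscribers[rec["subscriber"]]["signal_dbm"] = rec["signal_dbm"]
    for rec in core_feed:
        subscribers[rec["subscriber"]]["packet_loss_pct"] = rec["packet_loss_pct"]
    for rec in care_feed:
        subscribers[rec["subscriber"]]["open_tickets"] = rec["open_tickets"]

    def qoe_score(profile):
        """Toy quality-of-experience score: start at 100, subtract penalties."""
        score = 100.0
        score -= max(0, -90 - profile.get("signal_dbm", -90)) * 0.5  # weak radio
        score -= profile.get("packet_loss_pct", 0.0) * 4             # lossy core
        score -= profile.get("open_tickets", 0) * 10                 # care history
        return max(score, 0.0)

    for sub, profile in subscribers.items():
        print(sub, round(qoe_score(profile), 1))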

Gardner: Before we go more deeply into this data issue, tell me a bit more about Spirent. Is the customer experience division the only part? Tell me about the larger company, just so we have a sense of the breadth and depths of what you offer.

World leader

Russo: Most people, at least in telecom, know Spirent as a lab vendor. Spirent is one of the world leaders in the markets for simulating, emulating, and testing devices, network elements, applications, and services, as they go from the development phase to the launch phase in their lifecycle. Most of their products focus on that, the lab testing or the launch testing, making sure that devices are, as we call it, “fit for launch.”

Spirent has historically had less of a presence in the live network domain. In the last year or two, they’ve made a number of strategic acquisitions in that space. They’ve made a number of internal investments to leverage the capabilities and knowledge base that they have from the lab side into the live network.

One of those investments, for example, was the acquisition in early 2014 of DAX Technologies, a leading customer experience management vendor. That acquisition, plus some additional internal investments, has led to the growth of our Customer Experience Management (CEM) Business Unit.

Gardner: Tom, tell me some typical use cases where your customers are using Spirent in the field. Who are those that are interacting with the software? What is it that they’re doing with it? What are some of the typical ways in which it’s bringing value there?

Russo: Basically, we have two user bases that leverage our analytics. One is the customer-care groups. What they’re trying to do is very quickly obtain a 360-degree view of the experience of a subscriber who is calling in and complaining about their service, and the root causes of any problems they might be having.

If you think about the historic operation, this was a very time-intensive, costly process, because care agents would have to swivel-chair, as we call it, between a variety of different systems and tools trying to figure out whether a subscriber had a network-related issue, a provisioning issue, a billing issue, or something else. These could all potentially take hours, even hundreds of hours, to resolve.

With our system, the customer-care groups have one single pane of glass, one screen, to see all aspects of the customer experience to very quickly identify the root causes of issues that they are having and resolve them. So it keeps customers happier and reduces the cost of the customer-care operation.

The second group that we serve is on the engineering side. We’re trying to help them identify hotspots of customer dissatisfaction on the network, whether that be in terms of devices, applications, services, or network elements so that they can prioritize their resources around those hotspots, as opposed to noisy, traditional engineering alarms. The idea here is that this allows them to have maximal impact on the customer experience with minimal costs and minimal resources.

Gardner: You recently rolled out some new and interesting services and solutions. Tell us a little bit about that.

Russo: We’ve rolled out the latest iteration of our InTouch solution, our flagship product. It’s called InTouch Customer and Network Analytics (CNA) and it really addresses feedback that we’ve received from customers in terms of what they want in an analytic solution.

We’re hearing that they want to be more proactive and predictive. Don’t just tell me what’s going on right now, what’s gone on historically, and how things have trended, but help me understand what’s going to happen moving forward: where customers are going to complain, where the network is going to experience performance problems in the future. That’s an increasing area of focus for us and something that we’ve embedded to a great degree in the InTouch CNA product.

More flexibility

Another thing that they’ve told us is that they want to have more flexibility and control on the visualization and reporting side. Don’t just give me a stock set of dashboards and reports and have me rely on you to modify those over time. I have my own data scientists, my own engineers, who want to explore the data themselves.

We’ve embedded Tableau business intelligence (BI) technology into our product to give them maximum flexibility in terms of report authorship and publication. We really like the combination of Tableau and Hewlett Packard Enterprise (HPE) Vertica because it allows them to be able to do those ad-hoc reports and then also get good performance through the Vertica database.

And another thing that we’re doing more and more is what we call Closed Loop Analytics. It’s not just identifying an issue or a customer problem on the network; it’s also being able to trigger an action. We have an integration and partnership with another business unit in Spirent, called Mobilethink, that can change device settings, for example.

If we see that a device is mis-provisioned, we can send an alert to Mobilethink, and they can re-provision the device to correct something like a mis-provisioned access point name (APN) and resolve the problem. Then, we can use our system to confirm that the fix was indeed made and that the experience has improved.
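
A hypothetical sketch of that detect-act-verify loop, in Python. The function bodies stand in for real integrations (the Mobilethink hand-off, KPI collection), and all names and values are invented.

    # Minimal sketch of closed-loop analytics: detect a mis-provisioned APN,
    # trigger a fix, then verify the experience improved. Hypothetical API.
    EXPECTED_APN = "carrier.internet"

    def detect_misprovisioned(devices):
        return [d for d in devices if d["apn"] != EXPECTED_APN]

    def send_reprovision_alert(device):
        # In the real system this would call the device-management partner.
        device["apn"] = EXPECTED_APN
        return device

    def verify_fix(device, kpis_before, kpis_after):
        # Close the loop: confirm the subscriber experience actually improved.
        return device["apn"] == EXPECTED_APN and kpis_after > kpis_before

    fleet = [{"id": "dev-1", "apn": "wrong.apn"},
             {"id": "dev-2", "apn": EXPECTED_APN}]
    for dev in detect_misprovisioned(fleet):
        send_reprovision_alert(dev)
        print(dev["id"], "fixed:", verify_fix(dev, kpis_before=62.0, kpis_after=88.0))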

Gardner: It’s clear to me, Tom, how we can get great benefits from doing this properly, and how the value escalates with the more data and information you get, and the better you can serve those customers. Let’s drill down a bit into how you make this happen. As far as data goes, are we talking about 10 different data types, or 50? Given the stream and the amount of data that comes off a network, what size of data are we talking about, and how do you get a handle on that?

Russo: In our largest deployment, we’re talking about a couple of dozen different data sources and a total volume of data on the order of 50 to 100 billion transactions a day. So, it’s large volume, especially on the transactional side, and high variety. In terms of what we’re talking about, it’s a lot of machine data. As I mentioned before, there is the radio performance, core network performance, and service performance type of information.

We also look at things like whether you’re provisioning correctly for the services that you’re trying to interact with. We look at your trouble ticket history to try and correlate things like network performance and customer care activity. We will look at survey data, net promoter score (NPS) type information, billing churn, and related information.

We’re trying to tie it all together, everything from the subscriber transactions and experience to the underlying network performance, again to the outcome type information — what was the impact of the experience on your behavior?

Gardner: What specifically is your history with HPE Vertica? Has this been something that’s been in place for some time? Did you switch to it from something else? How did that work out?

Finishing migration

Russo: Right now, we’re finishing the migration to HP Vertica technology, and it will be embedded in our InTouch CNA solution. There are a couple of things that we like about Vertica. One is the price-performance aspects. The columnar lookups and the projections give us very strong query response performance, but it’s also able to run on commodity hardware, which gives us a price advantage that’s also bolstered by the columnar compression.

So, price-performance-wise and maturity-wise, we like it. It’s a field-proven, tested solution. There are some other features we like, such as strong Hadoop integration. A lot of carriers have their own Hadoop clusters, data oceans, etc., that they want us to integrate with, and Vertica makes that fairly straightforward. We like a lot of the embedded analytics as well, such as the Distributed R capability for predictive analytics and things along those lines.

Gardner: It occurs to me, Tom, that the effort you’ve put into this at Spirent, taking vast amounts of data across a complex network and coming out with these analytic benefits, could be extended to any number of environments. Is there a parallel between what you’re doing with mobile and telco carriers and, say, networks that manage Internet of Things (IoT) devices?

Russo: Absolutely. We’re working with carriers on IoT already. The performance requirements these things have in order to operate properly are different from those of human beings, but the underlying transactions still have to happen: the ability to get a radio connection, set up an IP address, and communicate data back and forth in a robust, reliable way is still critical.

We definitely see our solution helping operators who are trying to be IoT platform providers to ensure the performance of those IoT services and the SLAs that they have for them. We also see a potential use for our technology a step further into the vertical IoT applications themselves, for example doing predictive analytics on the sensor data itself. That could be a future direction for us.

Gardner: Any words of wisdom for folks who are starting to deal with large data volumes across a wide variety of sources and are also looking for more real-time analytics benefits? Any lessons learned that you could share from where Spirent has been, for others that are going to be facing some of these same big-data issues?

Russo: It’s important to focus on the end-user value and the use cases, as opposed to the technology. We never really focus on getting data for the sake of getting data. We focus more on what problem a customer is trying to solve and how we can most simply and elegantly solve it. That has steered us clear of jumping on the latest and greatest technology bandwagons, and instead toward going with proven technologies and leveraging our subject-matter expertise.

Gardner: I’m afraid we’ll have to leave it there. We’ve been exploring the ways that Spirent Communications advances the use of big data to provide improved user experiences for telecommunications operators’ customers. We’ve identified some of the advanced analytics and how drawing on more data sources provides their telco customers more rapid insights into their networks and operations.

So join me in thanking Tom Russo, Director of Product Management and Marketing at Spirent Communications in Matawan, New Jersey. Thanks so much.

Russo: Thanks very much, Dana. Thanks for having me.

Gardner: And a big thank you to our audience as well for joining us for this big data information innovation case study discussion.

I’m Dana Gardner; Principal Analyst at Interarbor Solutions, your host for this ongoing series of HPE-sponsored discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Transcript of a discussion on the use of big data to provide improved user experiences for telecommunications operators’ customers. Copyright Interarbor Solutions, LLC, 2005-2015. All rights reserved.

Powerful reporting from YP’s data warehouse helps SMBs deliver the best ad campaigns

The next BriefingsDirect big-data innovation case study highlights how Yellow Pages (YP) has developed a massive enterprise data warehouse with near real-time reporting capabilities that pulls oceans of data and information from across new and legacy sources.

We explore how YP then continuously delivers precise metrics to over half a million paying advertisers — many of them SMBs and increasingly through mobile interfaces — to best analyze and optimize their marketing and ad campaigns.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn more, BriefingsDirect recently sat down with Bill Theisinger, Vice President of Engineering for Platform Data Services at YP in Glendale, California. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us about YP, the digital arm of what people would have known as Yellow Pages a number of years ago. You’re all about helping small businesses become better acquainted with their customers, and vice versa.

Hewlett Packard Enterprise Vertica Community Edition
Start Your Free Trial Now

Theisinger: YP is a leading local marketing solutions provider in the U.S., dedicated to helping local businesses and communities grow. We help connect local businesses with consumers wherever they are and whatever device they are on, desktop and mobile.


Gardner: As we know, the world has changed dramatically around marketing and advertising and connecting buyers and sellers. So in the digital age, being precise, being aware, being visible is everything, and that means data. Tell us about your data requirements in this new world.

Theisinger: We need to be able to capture how consumers interact with our customers, and that includes where they interact, whether it’s a mobile device or a web device, and also within our network of partners. We reach about 100 million consumers across the U.S., and we do that through both our YP network and our partner network.

Gardner: Tell us too about the evolution. Obviously, you don’t build out data capabilities and infrastructure overnight. Some things are in place, and you move on, you learn, adapt, and you have new requirements. Tell us your data warehouse journey.

Needed to evolve

Theisinger: Yellow Pages saw the shift of their print business moving heavily online and becoming heavily digital. We needed to evolve with that, of course. In doing so, we needed to build infrastructure around the systems that we were using to support the businesses we were helping to grow.

And in doing that, we started to look at what the systems requirements were for us to report and message value to our advertisers. That included understanding where consumers were looking, what we were impressing on them, what businesses we were showing them when they searched, what they were clicking on, and, ultimately, what businesses they called. We track all of those different metrics.

When we started this adventure, we didn’t have the technology and the capabilities to do those things. So we had to reinvent our infrastructure. That’s what we did.

Gardner: And as we know, getting more information to your advertisers to help them make better selection and spending decisions is key. It differentiates companies. So this is a core proposition for you. This is at the heart of your business.

Given the mission criticality, what are the requirements? What did you need to do to get that reporting, that warehouse capability?

Theisinger: We need to be able to scale to the size of our network and the size of our partner network, which means no click left behind, if you will, no impression untold, no search unrecognized. That’s billions of events we process every day. We needed to look at something that would help us scale. If we added a new partner, if we expanded the YP network, if we added hundreds, thousands, tens of thousands of new advertisers, we needed the infrastructure to able to help us do that.

Gardner: I understand that you’ve been using Hadoop. You might be looking at other technologies as they emerge. Tell us about your Hadoop experience and how that relates to your reporting capabilities.

Theisinger: When I joined YP, Hadoop was a heavy buzz product in the industry. It was a proven product for helping businesses process large amounts of unstructured data. However, it still poses a problem: that unstructured data needs to be structured at some point, and it’s that structured data that you report to advertisers and report internally.

That’s how we decided that we needed to marry two different technologies — one that will allow us to scale a large unstructured processing environment like Hadoop and one that will allow us to scale a large structured environment like Hewlett Packard Enterprise (HPE) Vertica.

Business impact

Gardner: How has this impacted your business, now that you’ve been able to do this and it’s been in the works for quite a while? Any metrics of success or anecdotes that can relate back to how the people in your organization are consuming those metrics and then extending that as service and product back into your market? What has been the result?

Theisinger: We have roughly 10,000 jobs that we run every day, both to process data and also for analytics. That data represents about five to six petabytes of data that we’ve been able to capture about consumers, their behaviors, and activities. So we process that data within our Hadoop environment. We then pass that along into HPE Vertica, structure it in a way that we can have analysts, product owners, and other systems retrieve it, pull and look at those metrics, and be able to report on them to the advertisers.
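
The division of labor Theisinger describes can be sketched roughly as follows: structure and aggregate the raw event stream first (the Hadoop side, reduced here to plain Python), then load the structured rows into the analytics store. Table names, fields, and credentials are hypothetical, and the Vertica load step is shown commented out since it needs a live cluster.

    import json

    # Minimal sketch of the two-tier pattern: structure raw event logs,
    # then load the structured rows into a columnar store for reporting.
    raw_events = [
        '{"advertiser": "a1", "type": "click", "ts": "2015-06-01T12:00:00"}',
        '{"advertiser": "a1", "type": "impression", "ts": "2015-06-01T12:00:01"}',
        '{"advertiser": "a2", "type": "call", "ts": "2015-06-01T12:00:02"}',
    ]

    # "Structure the unstructured": parse, then aggregate per advertiser.
    counts = {}
    for line in raw_events:
        event = json.loads(line)
        key = (event["advertiser"], event["type"])
        counts[key] = counts.get(key, 0) + 1

    rows = [(adv, etype, n) for (adv, etype), n in counts.items()]
    print(rows)

    # Load step (placeholder credentials; requires a reachable Vertica cluster):
    # import vertica_python
    # with vertica_python.connect(host="...", user="...", password="...",
    #                             database="...") as conn:
    #     cur = conn.cursor()
    #     cur.executemany(
    #         "INSERT INTO advertiser_metrics (advertiser, event_type, n) "
    #         "VALUES (%s, %s, %s)", rows)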

Gardner: Is there automation to this, as you look to present more and better analytics on top of Vertica? What are you doing to make that customizable to people based on their needs, but at the same time controlled and managed, so that it doesn’t become unwieldy?

Theisinger: There is a lot of interaction between customers, both internal and external, when we decide how and what we’re going to present in terms of data, and there are a lot of ways we do that. We present data externally through an advertiser portal. So we want to make sure we work very closely with human factors and ergonomics (HFE) and the user experience (UX) designers, as well as our advertisers, through focus groups and workshops, to understand what they want to know about the data that we present them.

Then, internally, we decide what would make sense and how we feel comfortable being able to present it to them, because we have a universe of a lot more data than what we probably want to show people.

We also do the same thing internally. We’ve been able to provide various teams internally, whether it’s sales, marketing, or finance, with insights into who’s clicking on various business listings, who’s viewing various businesses, who’s calling businesses, what their segmentation is, and what their demographics look like, and that allows us a lot of analytical insight. We do most of that work through the analytics platform, which is, in this case, HPE Vertica.

Gardner: Now, that user experience is becoming more and more important. It wasn’t that long ago that these reports were going to people who were data scientists or the equivalent, but now we’re taking them out to those 600,000 small businesses. Can you tell us a little bit about lessons learned when it comes to delivering an end analytics product, versus building out the warehouse? They seem to be interdependent, but we’re seeing more and more emphasis on that user experience these days.

Theisinger: You need to bridge the gap between just data storage and processing, and analytics. You can present the current state, which is very descriptive of what’s going on, but we try to be a little bit more predictive in the way we do analysis at YP. We’re looking to go beyond just descriptive analytics.

What has also changed is the platform by which you present the data. It’s going highly mobile. Small businesses need to be able to just pick up their mobile device and look at the effectiveness of their campaigns with YP. They’re able to do that through a mobile platform we’ve built called YP for Merchants.

They can log in and see the metrics that are core to their business and how those campaigns are performing. They can even see details, like whether they missed a phone call, so they can reach back out to a consumer and see if they can help, solve a problem, or provide a service.

Developer perspective

Gardner: And given that your developers had to go through the steps of creating that great user experience and taking it to the mobile tier, was there anything about HPE Vertica, your warehouse, or your approach to analytics that made that development process easier? Is there an approach to delivering this from a developer perspective that you think others might learn from?

Theisinger: There is, and it takes a lot more people than just the analytics team and the engineers in my group. A lot of other teams within YP helped build this. But first and foremost, people want to see the data as close to real time as they can.

When a small business relies on contact from customers, we track those calls. When a potential customer calls a small business and that small business isn’t able to actually get to the call or respond to that customer because maybe they are on a job, it’s important to know that that call happened recently. It’s important for that small business to reach back out to the consumer, because that consumer could go somewhere else and get that service from a competitor.

Being able to do that as quickly as possible is a hard-and-fast requirement. So processing the data and presenting it, on a mobile device in this case, as quickly as you can is definitely paramount to making that a success.
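
The missed-call use case reduces to a simple recency filter over tracked call events. A hypothetical sketch (field names and times invented):

    from datetime import datetime, timedelta, timezone

    # Minimal sketch: flag unanswered calls from the last few minutes so a
    # merchant can call back before the customer goes elsewhere.
    now = datetime(2015, 6, 1, 12, 5, tzinfo=timezone.utc)

    calls = [
        {"merchant": "m1", "answered": False,
         "ts": datetime(2015, 6, 1, 12, 3, tzinfo=timezone.utc)},
        {"merchant": "m1", "answered": True,
         "ts": datetime(2015, 6, 1, 11, 0, tzinfo=timezone.utc)},
    ]

    recent_missed = [c for c in calls
                     if not c["answered"] and now - c["ts"] < timedelta(minutes=15)]
    for c in recent_missed:
        print(f"alert {c['merchant']}: missed call at {c['ts']:%H:%M}")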

Gardner: I’ve spoken to a number of people over the years and one of the takeaways I get is that infrastructure is destiny. It really seems to be the case in your business that having that core infrastructure decision process done correctly has now given you the opportunity to scale up, be innovative, and react to the market. I think it’s also telling that, in this data-driven decade that we’ve been in for a few years now, the whole small business sector of the economy is a huge part of our overall productivity and growth as an economy.

Any thoughts, generally, about making infrastructure decisions for the long run, decisions you won’t regret, decisions that can scale over time and are future-proof?

Theisinger: Yeah. Speaking from what I’ve seen through the job we’ve done here at YP, we reach over half a million paying advertisers. The shift is happening from just telling advertisers what’s happened to helping them actually drive new business.

So it’s around the fact that I know who my customers are now, how do I find more of them, or how do I reach out to them, how do I market to them? That’s where the real shift is. You have to have a really strong scalable and extensible platform to be able to answer that question. Having the right infrastructure puts you in the position to be able to do that. That’s where businesses are going to end up growing, whether it’s ours or small businesses.

And our success hinges on whether or not we can get these small businesses to grow. So we’re definitely 100 percent focused on trying to make that happen.

Gardner: It’s also telling that you’ve been able to adjust so rapidly. Obviously, your business has been around for a long time. People are very familiar with the Yellow Pages, the actual physical product, but you’ve gone on to make software core to your value and your differentiation. I’m impressed, and I commend you on being able to make that transition fairly rapidly.

Core talent

Theisinger: Yeah, well thank you. We’ve invested a lot in the people within the technology team we have there in Glendale. We’ve built our own internal search capabilities, our own internal products. We’ve pulled a lot of good core talent from other companies.

I used to work at Yahoo with other folks, and YP is definitely focused on trying to make this transition a successful one, but we have our eye on our heritage. Over a hundred years of being very successful in the print business is not something you want to turn your back on. You want to be able to embrace that, and we’ve learned a lot from it, too.

So we’re right there with small businesses. We have a very large sales force, which is also very powerful and helpful in making this transition a success. We’ve leaned on all of that and we become one big kind of happy family, if you will. We all worked very closely together to make this transition successful.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

IoT brings on development demands that DevOps manages best, say experts

The next BriefingsDirect DevOps thought leadership discussion explores how continuous processes around the development and deployment of applications are both impacted by — and a benefit to — the Internet of Things (IoT) trend.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Watch for Free: DevOps, Catalyst of the Agile Enterprise.

To help better understand the relationship between DevOps and a plethora of new end-devices and data, please welcome Gary Gruver, consultant, author, and a former IT executive who has led many large-scale IT transformation projects, and John Jeremiah, Technology Evangelist at Hewlett Packard Enterprise (HPE), on Twitter at @j_jeremiah. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Let’s talk about how the DevOps trend extends not to just traditional enterprise IT and software applications, but to a much larger set of applications — those in the embedded space, mobile, and end-devices of all sorts. Gary, why is DevOps even more important when you have so many different moving parts as we expect with the IoT?

Gruver: In software development, everybody needs to be more productive. Software is no longer just on websites and in IT departments; it’s going into every product in every place, and being able to differentiate your product with software is becoming more and more important to everybody.

Gardner: John, from your perspective, is there a sense that DevOps is more impactful, more powerful when we apply it to IoT?


Jeremiah: The reality is that IoT is moving as fast as mobile is — and even faster. If you don’t have the ability to change your software to evolve — to iterate as there is new business innovation — you’re not going to be able to keep up and be competitive. So IoT is going to require a DevOps approach in order to be successful.

Gardner: In the past, we’ve had a separate development organization and approach for embedded devices. Do we still need to do that, or can we combine traditional enterprise software with DevOps and apply the same systems architecture and technologies to all sorts of development?

Learn how DevOps solutions unify development and operations
To accelerate business innovation

Gruver: The principles all apply: keeping your code base more “releasable,” working from a prioritized backlog, adding automated testing, and giving developers frequent feedback so that they get better at it.


Therefore, for embedded systems, you’re going to need to develop simulators and emulators for automated testing. A simulator is a representation of the final product that can be run on a server. As much as possible, you want to create a simulator that represents the software characteristics of the final product. You can then use it, and trust it, to find defects, because the amount of automated testing you’re going to need to run to transform your business is huge. If you don’t have an affordable place, like a server farm, to run that, it just doesn’t work. [Watch for Free: DevOps, Catalyst of the Agile Enterprise.]

If you have custom ASICs in the product, you’re also going to need to create an emulator to test the low-level firmware interacting with the ASIC. This is similar to the simulator, but it also includes the custom ASIC and electronics from the final product. I see way too many embedded organizations trying to transform their processes give up on simulators and emulators because they’re not finding the defects they want to find, yet they haven’t invested in making those environments robust enough to be effective.

One of the first things I tell people who have embedded systems is that you’re not going to be successful in transforming your business until you create simulators and emulators that you can trust as a test environment to find defects.
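
As a toy illustration of why a trusted simulator matters, here is a hypothetical sketch in Python: a server-side stand-in for the device that is faithful enough to catch a firmware logic defect without consuming paper or printers. The API is invented, not HP’s.

    # Minimal sketch of the simulator idea: a software model of the device
    # that lets firmware-level defects surface on a server farm.
    class PrinterSimulator:
        def __init__(self, paper=100):
            self.paper = paper
            self.jobs_done = 0

        def print_job(self, pages):
            if pages > self.paper:
                raise RuntimeError("out of paper")
            self.paper -= pages
            self.jobs_done += 1

    def test_out_of_paper_is_reported():
        sim = PrinterSimulator(paper=5)
        sim.print_job(5)
        try:
            sim.print_job(1)
        except RuntimeError:
            return True  # defect surfaced on a server, not a real printer
        return False

    # Thousands of tests like this can run in parallel on a simulator farm.
    assert test_out_of_paper_is_reported()
    print("simulated regression suite passed")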

Gardner: How about working as developers and testers with more of an operations mentality?

Gruver: At HPE and HP, we were running 15,000 hours of testing on the code base every day. When it was manual, we couldn’t do that and we really couldn’t transform our business until we fundamentally put that level of automated testing in place.

For laser printer testing, there’s no way we would have been able to have enough paper to run that many hours of testing, and we would have worn out printers. There weren’t enough trees in Idaho to make enough paper to do that testing on the final product. Therefore, we needed to create a test farm of simulators and emulators to drive testing upstream as much as possible to get rapid feedback to our developers.

Gardner: Tell us how DevOps helped in the firmware project for HP printers, and how that illustrates where DevOps and embedded development come together.

No new features

Gruver: I had an opportunity to take over leading the LaserJet firmware for our organization several years ago. It had been the bottleneck for the organization for two decades. We couldn’t add a new product or plan without checking with the firmware team, and we had given up asking for new features.

Then 2008 hit, and we were forced to cut our spending, as a lot of people in the industry were at that time. We could no longer invest to spend our way out of problems. So we had to engineer our solution.

Discover how to use big data platforms
To unlock value of Internet of Things

We were fundamentally looking for anything that we could do to improve productivity. We went on a journey of what I would call applying Agile and DevOps principles at scale, as opposed to trying to scale small teams in the organization, continually improving with a group of 400-800 engineers. At the end of three years, firmware was no longer the bottleneck.

We had gone from five percent of our capacity going to innovation to 40 percent and we were supporting 1.5 times more products. So we took something that was a bottleneck for the business, completely unleashed that capability, and fundamentally transformed the business.

The details are captured in my first book, A Practical Approach to Large-Scale Agile Development. It’s available at all your finest bookstores. [Also see Gary’s newest book, Leading the Transformation: Applying Agile and DevOps Principles at Scale.]

Gardner: And how does this provide a harbinger of things to come? What you’ve done with firmware at HP and Laser Printers several years ago, how does that paint a picture of how DevOps can be powerful and beneficial in the larger IoT environment?

Gruver: Well, IoT is going to move so fast that nobody knows exactly what they need and what the capabilities are. What matters is the ability to move fast. At HP and HPE, we went two to three times faster than we ever thought possible. What you’re seeing in DevOps is that the unicorns of the world are showing that software development can go much faster than anybody ever thought possible before.

That’s going to be much more important as you’re trying to understand how this market evolves, what capabilities customers want, and where they want them in IoT. The companies that can move fast and respond to the feedback from the customers are going to be the ones that win. [Watch for Free: DevOps, Catalyst of the Agile Enterprise.]

Gardner: John, we’ve seen something of a dip in the complexity around mobile devices in particular, when people consolidated around iOS and Android after having had to hit many software-platform targets in the past. That may have given people a sense of solace or complacency that they can develop mobile applications rapidly.

But we are now getting, to Gary’s point, to a place where we don’t really know what sort of endpoints we’re going to be dealing with. We’re looking at automated cars, houses, drones, appliances, and even sensors within our own bodies.

What are some of the core principles we need to keep in mind to allow for the rapid and continuous development processes for IoT to improve, but without stumbling again as we hit complexity when it comes to new targets?

New technologies

Jeremiah: One of the first things that you’re going to have to do is embrace service virtualization strategies, in order to quickly virtualize new technologies and be able to quickly simulate them when they come to life. We don’t know exactly what they’re going to be, but we have to be able to embrace that and bring it into our process and methodology.

And as Gary was talking about earlier, the strategies for going fast that apply in firmware apply in the enterprise as well: building automated testing, failing as fast as you can, and learning as you go. As we see complexity increase, the real key is going to be harnessing that and using virtualization as a strategy to move forward.

Gardner: Any other metrics of success? How do we know we’re succeeding with DevOps? We talked about speed. We talked about testing early and often. How do you know you’re doing this well? For organizations that want to have a good way to demonstrate success, how do they present that?

Gruver: I wouldn’t just start off by trying to do DevOps. If you’re going to transform your software development processes, the only reason you would go through that much turmoil is because your current development processes aren’t meeting the needs of your business. Start off with how your current development processes aren’t meeting your business needs.

The executives are in the best position to clarify exactly what this gap is and to get the organization going down a continuous-improvement path for the development and delivery processes.

Most organizations will quickly find that DevOps has some key tools in the toolbox that they want to start using immediately to start taking some inefficiencies out of the development process.

But don’t go off to do DevOps and measure how well you did it. We’re all business executives. We run businesses, we manage businesses, and we need to focus on what the business is trying to achieve and just use the tools that will best help that.

Gardner: Where do we go next? DevOps has become a fairly popular concept now. It’s getting a lot of attention. People understand that it can have a very positive impact, but getting it in place isn’t always easy. There are a lot of different spinning variables: culture, organization, management. An enterprise that’s looking to expand into the Internet of Things perhaps isn’t doing that level of development and deployment.

They probably have been a bit more focused on enterprise applications, rather than devices and embedded systems. How do you start up that capability and do it well within a software development organization? Let’s look at moving from traditional development to IoT development. What should we keep in mind?

Gruver: There are two approaches. One is, if you have loosely coupled architectures, like most unicorns do, you can empower the teams, add some operational members, and let them figure it out. Most large enterprise organizations have more tightly coupled architectures that require large numbers of people working together to develop and deliver things. I don’t think those transformations are going to be effective until you find inspired executives who are willing to lead the transformation and work through the process.

Successful transformations

I’ve led a couple of successful transformations. If you look at examples from the DevOps Enterprise Summit that Gene Kim led, the common thread in most of them is that the organizations making progress had an executive who was leading the charge, rallying the troops, and making it happen. It requires coordinating work across a large number of teams, and you need somebody who can look across the value chain and muster the resources to make the technical and cultural changes. [Read a recent interview with Kim on DevOps and security.]

Where a lot of my passion lies now, and the reason I wrote my second book, is that I don’t think there are a lot of resources for executives to learn how to transform large organizations. So I tried to capture everything I know about how best to do that.

My second book, Leading the Transformation: Applying Agile and DevOps Principles at Scale, is a resource that enables people to go faster in the organization. I think that’s the next key launch point — getting the executives engaged to lead that change. That’s going to be the key to getting the adoption going much better. [Watch for Free: DevOps, Catalyst of the Agile Enterprise.]

Gardner: John, what about skills? It’s one thing to get the top-down buy-in, to recognize the need for transformation, and to put in some of the organizational building blocks. But ultimately you need to have the right people with the right skills.

Any thoughts about how IoT will create demand for a certain set of skills and how well we’re in a position to train and find those people?

Jeremiah: IoT requires people to embrace skills and an understanding much broader than their narrow silo. They’ll need to develop expertise in what they do, but they also have to have the relationships, the ability to work across the organization and to learn. One of the skills is constantly learning as they go. As Gary mentioned earlier, there’s no “done” for DevOps. It’s a journey of learning. It’s a journey of growing and getting better.

Understanding process and how things are working, so you can continuously improve them, is a skill that a lot of people don’t bring to the table. They know their piece, but they don’t often think about the bigger picture. It’s beyond a single technology. It’s understanding that they’re really not just in IT; they’re part of the business. I love the way Gary said that earlier, and I agree with him. Seeing themselves as part of the business is a different mindset that they have to bring as they go to work.

Then, as they apply their skills, they’re focusing on how they deliver business value. That’s really the change. [Watch for Free: DevOps, Catalyst of the Agile Enterprise.]

Gardner: How do you do DevOps effectively when you’re outsourcing a good part of your development? You may need to do that to find the skills.

For embedded systems, for example, you might look to an outside shop that has special experience in that particular area, but you may still want to get DevOps. How does that work?

Gruver: I think DevOps is key to making outsourcing work, especially if you have different vendors that you’re outsourcing to, because it forces coordination of the work on a frequent basis. Continuous integration, automated testing, and continuous deployment are the forcing functions that align the organization around working code across the system.

When you enable people to go off and work on separate branches and separate issues, with an integration cycle late in the process, that’s where you get the dysfunction: a bunch of different organizations coming together with stuff that doesn’t work. If you force integration to happen on a daily basis, or multiple times a day, you get the system aligned and working well before people spend time and energy on things that either don’t work together or won’t work well in production.
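
That forcing function can be as simple as a scheduled script that pulls every vendor’s component and runs one shared system-level test suite several times a day. A hypothetical sketch (repository names and commands invented):

    import subprocess

    # Minimal sketch of CI as a forcing function: every vendor's component
    # is pulled and system-tested together, so integration breakage surfaces
    # in hours instead of at a late integration phase.
    components = ["vendor-a/firmware", "vendor-b/ui", "inhouse/platform"]

    def run(cmd):
        print("$", cmd)
        return subprocess.call(cmd, shell=True)

    def integrate_and_test():
        for repo in components:
            if run(f"git -C {repo} pull --ff-only") != 0:
                return False
        # One shared system-level test suite gates everyone's work.
        return run("python -m pytest system_tests/") == 0

    if not integrate_and_test():
        print("integration red: stop the line and fix before new feature work")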

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Watch for Free: DevOps, Catalyst of the Agile Enterprise. Sponsor: Hewlett Packard Enterprise.

Big data generates new insights into what’s happening in the world’s tropical ecosystems

The next BriefingsDirect big-data innovation case study interview explores how large-scale monitoring of rainforest biodiversity and climate has been enabled and accelerated by cutting-edge big-data capture, retrieval, and analysis.

We’ll learn how quantitative analysis and modeling are generating new insights into what’s happening in tropical ecosystems worldwide, and we’ll hear how such insights are leading to better ways to attain and verify sustainable development and preservation methods and techniques.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn more about data science, and how hosting that data science in the cloud helps the study of biodiversity, we’re pleased to welcome Eric Fegraus, Senior Director of Technology of the TEAM Network at Conservation International, and Jorge Ahumada, Executive Director of the TEAM Network, also at Conservation International, in Arlington, Virginia. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Knowing what’s going on in environments in the tropics helps us understand what to do, and what not to do, to preserve them. How has that changed since we spoke about a year ago, Eric? Are there any trends or driving influences that have made this data gathering more important than ever?

Fegraus: Over this last year, we’ve been able to roll out our analytic systems across the TEAM Network. We’re seeing more and more uptake from our protected-area managers using the system, and we have some good examples where the results are being used.


For example, in Uganda, we noticed that a particular cat species was trending downward. The folks there were really curious why this was happening. At first, they were excited that there was this cat species, which was previously not known to be there.

This particular forest is a gorilla reserve, and one of the main economic drivers around the reserve is ecotourism: people paying to see the gorillas. Once they saw that these cats were trending down, they started asking what could be impacting them. Our system told them that the way they were bringing in the eco-tourists to see the gorillas had shifted, and that was potentially having an impact on where the cats were. It allowed them to readjust and rethink their practices for bringing tourists to the gorillas.

Information at work

Gardner: Information at work.

Fegraus: Information at work at the protected-area level.

Gardner: Just to be clear for our audience, TEAM stands for Tropical Ecology Assessment and Monitoring. Jorge, tell us a little bit about how the TEAM Network came about and what it encompasses worldwide.

No-Compromise Big Data Analytics
With HP Vertica OnDemand
Request Your 30-Day Free Trial

Ahumada: The TEAM Network is a program that started about 12 years ago to fill a void in the information we have about tropical forests. Tropical forests cover a little less than 10 percent of the terrestrial area in the world, but they hold more than 50 percent of its biodiversity.


So, from that point of view, they’re critical places to conserve, and yet we had very little information about what’s happening in them. That’s how the TEAM Network was born, and the model was to use data-collection methods that were standardized and replicated across a number of sites, and to have systems that would store and analyze that data and make it useful. That was the main motivation.

Gardner: Of course, it’s super-important to be able to collect, retrieve, and put that data into a place where it can be analyzed. It’s also, of course, important to be able to share that analysis. Eric, tell us what’s been happening lately that has enabled all of those parts of the data lifecycle to come to fruition.

Fegraus: Earlier this year, we completed our end-to-end system. We’re able to take the data from the field, from the camera traps, from the climate stations, and bring it into our central repository. We then push the data into Vertica, which is used for the analytics. Then, we developed a really nice front-end dashboard that shows the results of species populations in all the protected areas where we work.

The analytical process also starts to identify what could be impacting the trends that we’re seeing at a per-species level. The dashboard lets the user look at the data in a lot of different ways: they can aggregate it and slice and dice it to look at different trends.
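
The slicing and dicing Fegraus describes is, at heart, grouping and pivoting species-trend records. A hypothetical sketch in Python with pandas (column names and values invented):

    import pandas as pd

    # Minimal sketch of aggregating per-species detection trends by
    # protected area and year.
    df = pd.DataFrame({
        "site": ["Uganda", "Uganda", "Manaus", "Manaus"],
        "species": ["golden cat", "golden cat", "agouti", "agouti"],
        "year": [2013, 2014, 2013, 2014],
        "occupancy": [0.42, 0.35, 0.61, 0.63],
    })

    # Aggregate however the user wants: by site, by species, by year.
    print(df.groupby(["site", "species"])["occupancy"].mean())
    print(df.pivot_table(index="year", columns="site", values="occupancy"))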

Gardner: Jorge, what sort of technologies are they using for that slicing and dicing? Are you seeing certain tools like Distributed R, or visualization software and business-intelligence (BI) packages? What’s the common thread, or does it vary greatly?

Ahumada: It depends on the analysis, but we’re really at the forefront of analytics in terms of big data. As Michael Stonebraker and other big data thinkers have said, the big-data analytics infrastructure has concentrated on the storage of big data, but not so much on the analytics. We break that mold because we’re doing very, very sophisticated Bayesian analytics with this data.

One of the problems of working with camera-trap data is that you have to separate the detection process from the actual trend you’re estimating, because detection itself is imperfect: a species can be present at a site and still go unrecorded on a given day.

Hierarchical models

We do that with hierarchical models, and it’s a fairly complicated model. On a normal computer, that kind of model takes days or even months to run. With the power of Vertica and its processing, we’ve been able to shrink that to a few hours. We can run 500 or 600 species from 13 sites all over the world in five hours. So it’s a really good way to use the power of processing.
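
As a rough illustration of separating detection from occupancy, here is a minimal hierarchical occupancy-model sketch in R using the unmarked package. This is a simplified, likelihood-based stand-in for the TEAM Network’s Bayesian models, and the data are simulated placeholders.

```r
# Minimal occupancy sketch: separate imperfect detection from the
# underlying occupancy state. Simulated data; TEAM's models are Bayesian
# and considerably richer than this.
library(unmarked)

# y[i, j] = 1 if the species was detected at site i on sampling occasion j
set.seed(1)
y <- matrix(rbinom(13 * 30, 1, 0.2), nrow = 13, ncol = 30)
site_covs <- data.frame(elevation = rnorm(13))

umf <- unmarkedFrameOccu(y = y, siteCovs = site_covs)

# Double formula: detection submodel first, then occupancy submodel
fit <- occu(~ 1 ~ elevation, data = umf)
summary(fit)  # reports detection probability and occupancy separately
```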

We’ve also more recently been working with Distributed R, a new package written by the HP folks at Vertica, to analyze satellite images, because we’re also interested in what’s happening at these sites in terms of forest loss. Satellite images are really complicated, because you have millions of pixels and you don’t really know what each pixel is. Is it forest, agricultural land, or a house? Running that kind of classification on normal R is a problem.


Distributed R is a package that takes some of those functions, like random forests and regression trees, and exploits the full power of Vertica’s parallel processing. We’ve seen a 10-fold increase in performance with it, and it allows us to get much more information out of those images.
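
For illustration, here is a single-machine sketch in R of that pixel-classification idea, using the standard randomForest package; Distributed R’s contribution is to run this kind of model in parallel across a cluster. The band names and training labels are hypothetical.

```r
# Minimal land-cover classification sketch with a random forest.
# Single-machine version shown; Distributed R parallelizes the same idea.
library(randomForest)

# Hypothetical training pixels: spectral bands plus a labeled cover class
set.seed(1)
train <- data.frame(
  band1 = runif(1000), band2 = runif(1000), band3 = runif(1000),
  cover = factor(sample(c("forest", "agriculture", "built"),
                        1000, replace = TRUE)))

rf <- randomForest(cover ~ band1 + band2 + band3, data = train, ntree = 200)

# Classify new, unlabeled pixels from a satellite scene
new_pixels <- data.frame(band1 = runif(5), band2 = runif(5), band3 = runif(5))
predict(rf, new_pixels)
```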

Gardner: Not only are you on the cutting-edge for the analytics, you’ve also moved to the bleeding edge on infrastructure and distribution mechanisms. Eric, tell us a little bit about your use of cloud and hybrid cloud?

Fegraus: To back up a little bit, we ended up building a system that uses Vertica. It’s an on-premises solution, and that’s what we’re using in the TEAM Network. We’ve since realized that the solution we built for the TEAM Network can readily scale to other organizations, government agencies, and anyone else who wants to manage camera-trap data and do the analytics.

So now we’re at the point where we’re essentially doing software development and producing software that’s scalable. If an organization wants to replicate what we’re doing, we have a solution we can spin up in the cloud, with all of the data management, the analytics, the data transformation and processing, the collection, and the data-quality controls built into a software instance.

Gardner: And when you say “in the cloud,” are you talking about a specific public cloud, in a specific country or all the above, some of the above?

Fegraus: All of the above. We’re using Vertica OnDemand, and we’re actually going to transition our existing on-premises solution into Vertica OnDemand. The solution we’re developing uses mostly open-source software, and it can be replicated in the Amazon cloud or other clouds that provide the right environments for us to get things up and running.

Gardner: Jorge, how important is it to have that global choice for cloud deployment, both to attract users and to keep your costs down?

Ahumada: It’s really key, because in many of these countries it’s very difficult for governments to stand up their own solutions on the ground. Cloud solutions offer a very effective way to manage data. As Eric was saying, the big limitation is which cloud solutions are available in each country. Right now we have Vertica OnDemand here, but in some countries we might not have the same infrastructure, so we’ll have to contract with different vendors.

But it’s a way to keep costs down, deliver the information really quickly, and store the data in a way that is safe and secure.

What’s next?

Gardner: Eric, now that we have this ability to gather, retrieve, analyze, and distribute the data, what comes next in terms of having these organizations work together? Do we have any indicators of what the results might be in the field? How can we measure the effectiveness at the endpoint, in these environments, based on what you have been able to accomplish technically?

Fegraus: One of the nice things about the software we built, which can run in the various cloud environments, is that the instances can also be connected. For example, if we start putting these solutions into a particular continent, in neighboring countries, they won’t be silos: they’ll be able to share aggregated data with each other, so that we can get a holistic picture of what’s happening.

So that was very important when we started going down this path, because one of the big inhibitors of progress in the environmental sciences is that there are these traditional silos of data that people and organizations keep, sit on, and essentially don’t share. That was a very important driver for us as we were designing the software.

Gardner: Jorge, what comes next in terms of technology? Are there scale hurdles you need to clear? Are there analytics issues? What’s the next requirements phase you would like to work through technically to make this even more impactful?

Ahumada: As we scale up in size and start having more granularity in the countries where we work, the challenge is going to be keeping these systems responsive and the information coming. Right now, one of the big limitations is the analytics. We do have analytics running at top speed, but once we start talking about whole countries, we’re going to have on the order of many more species and many more protected areas to monitor.

This is something the industry is starting to move forward on, incorporating more of the power of the hardware into the analytics, rather than just the storage and management of data. We look forward to continuing to work with our technology partners, and in particular HP, to help guide this process. As a case study, we’re very well positioned for that, because we already face that challenge.

Gardner: Also it appears to me that you are a harbinger, a bellwether, for the Internet of Things (IoT). Much of your data is coming from monitoring, sensors, devices, and cameras. It’s in the form of images and raw data. Any thoughts about what others who are thinking about the impact of the IoT should consider, now that you have been there?

Fegraus: When we talk about big data, we’re usually talking about data collected from phones, cars, and other human devices. Humans are delivering the data. But here we have a different problem: we’re talking about nature delivering the data, and we don’t have that infrastructure in places like Uganda, Zimbabwe, or Brazil.


So we have to start by building that infrastructure, and the camera traps are an example of that. We need to be able to deploy much larger-scale infrastructure to collect data and to diversify the sensors we currently have, so that we can gather sound, image, temperature, and other environmental data at a much larger scale.

Satellites can only take us part of the way, because we’re always going to have problems with resolution. So it’s really deployment on the ground that is going to be the big limitation, and it’s a big field that is developing now.

Gardner: Drones?

Fegraus: Drones, for example, have that capacity, especially small drones that are proving intelligent enough to collect a lot of information autonomously. This is at the cutting edge of technological development right now, and we’re excited about it.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.
