The next BriefingsDirect DevOps innovation case study explores how telecommunications giant Sprint places an emphasis on orchestration and automation to bring IT culture and infrastructure into readiness for cloud, software-defined data center (SDDC) and DevOps.
Learn how Sprint has made IT infrastructure orchestration and automation a pillar of its strategic IT architecture future from Chris Saunderson, Program Manager and Lead Architect for Data Center Automation at Sprint in Kansas City, Missouri. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.
Here are some excerpts:
Gardner: I’m intrigued by your emphasis on working toward IT infrastructure, of getting to more automation at a strategic level. Tell us why you think automation and orchestration are of strategic benefit to IT.
Saunderson: We’ve been doing automation since 2011, but it came out of an appreciation that the velocity of change inside the data center is just going to increase over time.
In 2009, my boss and I sat down and said, “Look, this is going nowhere. We’re not going to make a significant enough impact on the way that the IT division works … if we just keep doing the same thing.”
That’s when we sat down and saw the orchestrated data center coming. I encapsulated it as the “data center of things.” When you look at the journey that most enterprises go through, right around 2009 is when the data center of things emerged. You then began to lose track of where things are, what they are, who uses them, how long they’ve been there, and what their state is.
When we looked at automation and orchestration in 2009, it was very solidly focused on IT operational efficiency, but we had the longer-term view that it was going to be foundational for the way to do things going forward — if for nothing else than to handle the data center of things. We could also see changes coming in the way that our IT organization was going to have to respond to the change in our business, let alone just the technology change.
Gardner: So that orchestration has allowed you to not only solve the problems of the day, but put a foundation in place for the new problems, rapid change, cloud, mobile, all of these things that are happening. Before we go back to the foundational aspects, tell us a little bit about Sprint itself, and why your business is changing.
Provider of choice
Saunderson: The Sprint of today … We’re number three, with aspirations to be bigger, better, and faster, and the provider of choice in wireless, voice, and data. We’re a tier 1 network service provider of a global IP network, along with private networks, an MPLS backbone, all that kind of stuff. We’re a leader in TRS — Telecommunication Relay Services for the deaf.
The Sprint of old is turning into the Sprint of new, where we look at mobile and we say mobile is it, mobile is everything — that Internet of Things (IoT). That’s where we want to foster growth. I see an exciting company that’s coming in terms of connecting people not only to each other, but to their partners, the people who supply services to them, to their entertainment, to their business. That’s what we do.
Gardner: When you started that journey for automation — getting out of those manual processes and managing complexity, but repeatedly getting the manual labor out of the way — what have you learned that you might relate to other people? What are some of the first things people should keep in mind as they embark on this journey?
Saunderson: It’s really a two-part answer. Orchestration comes after automation, because orchestration is there to consume the new automation services. So let’s take that one first. The big thing to remember is that change is hard for people. Not technology change. People are very good about doing technology change, but unwiring people’s brains is a problem, and you have to acknowledge that up-front. You’re going to have a significant amount of resistance from people to changing the way that they’re used to doing things.
Now, addressing that is also a human problem, but in a certain sense the technology helps, because you’re able to say, “Let’s just look at the result, and let’s compare what it takes to get to the result with humans doing it versus what it takes with machines doing it.” Let’s call it what it is: it’s machines doing things. If the result is the same, then it doesn’t require the humans. That’s challenge number one, unwiring people’s minds.
The second is making sure that you are articulating the relevance of what you’re doing. We had an inbuilt advantage, at least in the automation space, of having some external forces that were driving us to do this.
It’s really regulatory compliance, right? Sarbanes-Oxley (SOX) is what it is. PCI is what it is — SAS70, FISMA, those things. We had to recognize the excessive amount of labor that we were expending to try and keep up with regulatory change.
PCI changes every year or 18 months, so someone goes through every rule set and says, “Yes, this doesn’t apply to me; I’m more restricted.” That used to take six people. We were able to turn that around and address the requirement to continue to do compliance more effectively and more efficiently. Don’t lose that upward communication, the relevancy piece: not only are we doing this more efficiently, but we’re better at it.
When you get to orchestration, now you’re really talking about some interesting stuff because this is where you begin to talk about being able to do continuous compliance, for example. That says, “Okay, we used to treat this activity as once a quarter or maybe once a month. Let’s just do it all the time, but don’t even have a human involved in it.” Anybody who has talked to me about this will hear this over and over again. I want smart people working on smart things. I do not want smart people working on dumb things. Truth be told, 99 percent of the things that IT people do are dumb things.
The problem with them is that they’re dumb because they force a human to look at the thing and make a decision. Orchestration allows you to take that one level out, look at the thing, and figure out how to make that decision without a human having to make it. Then, tie that to your policy, report on policy compliance, and you’re done.
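The continuous-compliance idea described above can be sketched in a few lines: encode the policy as code, evaluate every asset on a schedule, and involve humans only when a check actually fails. This is a minimal illustration, not Sprint’s tooling; the inventory shape and the policy rules are hypothetical.

```python
# Sketch of a continuous-compliance loop. The policy is a set of named
# predicates over a server's recorded state; no human makes the decision.
POLICY = {
    "ssh_root_login_disabled": lambda s: s["sshd"]["permit_root_login"] == "no",
    "patch_age_days":          lambda s: s["patching"]["age_days"] <= 30,
}

def evaluate(server: dict) -> dict:
    """Run every rule against one server, machine-only."""
    return {rule: check(server) for rule, check in POLICY.items()}

def compliance_report(inventory: list) -> list:
    """Return only the failures -- the items worth a person's time."""
    failures = []
    for server in inventory:
        for rule, passed in evaluate(server).items():
            if not passed:
                failures.append((server["name"], rule))
    return failures

inventory = [
    {"name": "web01", "sshd": {"permit_root_login": "no"},
     "patching": {"age_days": 12}},
    {"name": "db01", "sshd": {"permit_root_login": "yes"},
     "patching": {"age_days": 45}},
]

# Only db01's failures surface for human attention.
print(compliance_report(inventory))
```

In production this loop would run continuously against live configuration data rather than once a quarter, with the report feeding the policy-compliance dashboard rather than a print statement.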
The moment you do that, you’re freeing people up to go have the harder discussions. This is where we start to talk about DevOps and this is where we start to talk about some of the bigger blocks that grind against each other in the IT world.
Gardner: “Continuous” is very interesting. You use the PCI compliance issue, but it’s also very important when it comes to applications, software development, test, and deploy. Is there anything that you can explain for us about the orchestration and automation that lends itself to that continuous delivery of applications? People might not put the two together, but I’m pretty sure there’s a connection here.
Saunderson: There is. DevOps is a philosophy. There was a fantastic discussion from Adobe where it was very clear that DevOps is a philosophy, an organizational discussion. It’s not necessarily a technology discussion. The thing that I would say, though, is that you can apply continuous everywhere.
The success that we’re having at that orchestration layer is that it’s a much easier discussion to go in and say, “You know how we do this over here? Well, what if it were release-candidate code?” The real trick there, when you go back to the things that I want people to think about, is that DevOps is a philosophy, because it requires development and operations to work together, not one handing off to the other, and not one superior to the other; it’s together.
If they’re not willing to walk down the same path together, then you have an organizational problem, but you may also have a toolset problem as well. We’re an Application Lifecycle Management (ALM) shop. We have it there. Does it cover all of our applications? No. Are we getting all of the value out of it that we could? No.
But that’s because we’re spending time in getting ready to do things like connect ALM into the running environment. The bigger problem, Dana, is that the organization has to be ready for it, because your philosophical changes are way more difficult than technical changes. Continuous means everything else has to be continuous along with it.
If you’re in the ITIL model, you’re still going to need configuration items (CIs). How do CIs translate to Docker containers? Do they need to be described in the same way? If the operations team isn’t necessarily as involved in the management of continuously deployed applications, who do I route a ticket to and how do they fix it?
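One possible answer to the CI-to-container question above is to derive a short-lived configuration item from each container’s metadata and attach an owning team so tickets have somewhere to route. The field names and the ownership map below are hypothetical illustrations, not a real CMDB schema.

```python
# Sketch: translate Docker-style container metadata into an ITIL-style
# configuration item (CI) record, with ticket-routing ownership attached.
from dataclasses import dataclass

@dataclass
class ConfigurationItem:
    ci_id: str       # stable identity, even though the container is ephemeral
    ci_type: str
    image: str
    owner_team: str  # who a ticket routes to when the container misbehaves

# Hypothetical ownership map: image repository -> responsible team.
IMAGE_OWNERS = {"shop/frontend": "web-team", "shop/payments": "payments-team"}

def container_to_ci(container: dict) -> ConfigurationItem:
    """Build a CMDB record from a container's id and image reference."""
    repo = container["image"].split(":")[0]
    return ConfigurationItem(
        ci_id=f"ci-container-{container['id'][:12]}",
        ci_type="container",
        image=container["image"],
        owner_team=IMAGE_OWNERS.get(repo, "platform-team"),
    )

ci = container_to_ci({"id": "9f86d081884c7d65", "image": "shop/payments:2.4"})
print(ci.owner_team)  # payments-team
```

The design point is that the CI outlives any individual container instance: identity and ownership are derived from metadata, so the routing question has an answer even when operations never touched the deployment.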
This is where I look at it and say that this is the opportunity for orchestration to sit underneath that and say it not only has the capability to enable people to deploy continuously — whether it’s into test or production, disaster recovery, or any other environment.
To equip them to operate continuously (that’s what comes after continuous integration, development, and deployment), that capability has to be welded on, because you’re going to create dis-synergy if you don’t address operations at the same time as you do integration and deployment.
Gardner: Let’s look at some other values that you can derive from better orchestration and automation. I’m thinking about managing complexity, managing scale, but also more of the software-defined variety. We are seeing a lot of software-defined storage (SDS), software-defined networking (SDN), ultimately software-defined data center (SDDC), all of which is abstracted and managed. How do you find the path to SDDC, vis-à-vis better orchestration and automation?
At the core
Saunderson: Orchestration is going to have to be at the core of that. If you look at the product offerings just across the space, you’re starting to see orchestration pop up in every last one of them — simply because there’s no other way to do it.
RESTful APIs are nice, but they’re not enough because, at that point, you’re asking customers to start bolting things together themselves, as opposed to saying, “I’m going to give you a nice orchestrated interface, where I have a predefined set of actions that are going to be executed when you call that orchestration to make it work, and then apply that across the gamut.”
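The raw-API-versus-orchestrated-interface distinction can be sketched simply: instead of handing customers individual REST endpoints to bolt together themselves, expose one named action that runs a fixed, ordered sequence of calls. The endpoints and the stand-in transport below are illustrative only.

```python
# Sketch: one orchestrated action wrapping a predefined set of REST calls.
def provision_vm(api_call, name: str, size: str) -> list:
    """The customer invokes one action; the orchestration owns the sequence.
    `api_call(method, path, body)` is whatever HTTP client you plug in."""
    steps = [
        ("POST", "/v1/vms",               {"name": name, "size": size}),
        ("POST", f"/v1/vms/{name}/nics",  {"network": "default"}),
        ("POST", f"/v1/vms/{name}/power", {"state": "on"}),
    ]
    return [api_call(method, path, body) for method, path, body in steps]

# Stand-in transport so the sketch is self-contained; a real caller would
# pass a function backed by an HTTP library instead.
def fake_api(method, path, body):
    return f"{method} {path} -> ok"

results = provision_vm(fake_api, "web01", "small")
print(results[-1])  # POST /v1/vms/web01/power -> ok
```

The customer never sees the three endpoints or their ordering; the orchestration layer can change the sequence, add steps, or log every action without the interface changing.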
SDS is coming after SDN. Don’t misunderstand me. We’re not even at the point of deploying software-defined networks, but we look at it and say, “I have to have that, if for no other reason than I need to remove human hands from the delivery chain for things that touch the network.”
I go back to the data center of things. The moment you go to 10 Gbit, where you’re using virtual contexts and everything else in the current lexicon of new networking, as opposed to VLANs, switch ports, and all that older stuff, you’re absolutely losing visibility.
Without orchestration, and, behind that, without the analytics to look at what’s happening in the orchestration that’s touching the elements in your data center, you’re going to be blind. Now, we’re starting to talk about going back to the dark ages. I think we’re smarter than that.
By looking at orchestration as the enabler for all of that, you start to get better capability to deliver that visibility that you’re after, as well as the efficiency. We should never lose sight of the fact that the whole reason to do this is to say, “Deploy the thing.”
That’s fine, but how do I run it, how do I assure it, how do I find it? This keeps coming up over and over. Eventually, you’re going to have to do something to that thing, whether it’s deployed again, whether you have some kind of security event that is attached to it, or the business just decides not to do it any more. Then, I have to find it and do something to it.
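The “find it and do something to it” problem above reduces to an inventory keyed by identity, queried by attribute, with actions dispatched against the matches. The inventory entries and action names below are hypothetical.

```python
# Sketch: attribute-based lookup over a data-center inventory, plus an
# action dispatcher -- the "find it and do something to it" step.
INVENTORY = {
    "web01": {"app": "storefront", "env": "prod", "image": "shop/frontend:1.9"},
    "web02": {"app": "storefront", "env": "test", "image": "shop/frontend:2.0"},
    "db01":  {"app": "billing",    "env": "prod", "image": "shop/db:5.1"},
}

def find(**criteria):
    """Locate assets by attribute -- the visibility problem."""
    return [name for name, attrs in INVENTORY.items()
            if all(attrs.get(k) == v for k, v in criteria.items())]

def act(names, action):
    """Do something to each match: redeploy, quarantine, decommission..."""
    return [f"{action}:{name}" for name in names]

# e.g. a security event against the storefront app in production:
targets = find(app="storefront", env="prod")
print(act(targets, "quarantine"))  # ['quarantine:web01']
```

Without a queryable record like this, “do something to the thing” starts with a manual hunt; with it, the same query feeds redeployment, security response, and decommissioning alike.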
Gardner: Given your deep knowledge and understanding of orchestration and automation, what would you like to see done better for the tools that are provided to you to do this?
Is there a top-three list of things you’d like to see that would help you extend the value of your orchestration and automation, do things like software-defined, do things like DevOps as a philosophy, and ultimately have more of a data-driven IT as a strategic operation?
Saunderson: I’m not sure I have a top three. I can certainly talk about general principles, which is: I want open. That’s what I really want. Just to take a sideline for a second, it’s fascinating. It’s just absolutely fascinating. IT operations is starting to become a software development shop now.
I’m not resistant to that in the least because, just in this conversation, we’ve been talking about RESTful APIs and we’ve been talking about orchestration. None of this is IT operations stuff. This isn’t electrons flowing through copper anymore. It’s business process translated into a set of actions, open and interoperable.
Then, just give me rich data about those things, very rich data. We’re getting to the point, just by the sheer evolution of big data, that it doesn’t matter anymore. Just give it all to me, and I will filter it down to what I’m looking for.
Gardner: The thing that is interesting with Hewlett Packard Enterprise (HPE) is that they do have a big-data capability, as well as a leading operations capability and they’re starting to put it all together.
Saunderson: In the same way, orchestration is starting to pop up everywhere. If you look at the HPE product portfolio and you look at network automation, it’s going to have an operations orchestration interface into it. Server automation is welded into operations orchestration, and it’s going to appear everywhere else. Big data is coming with it.
I’m not hesitant about it. It’s just that it introduces complexity for me. The fact that the reporting engine is starting to turn to big data is good. I’m happy about that. It just has to go further. It’s not enough to just give me job results that are easy to find and easy to search. Now, I want to get some really rich metadata out of things.
Software-defined networking is a good example. The whole OpenFlow activity by itself looks like network management, until it goes into a big-data store; then suddenly I have a data source that I can start correlating events against, events that turn into actions inside the controller, which turn into change on the network.
Let’s extend that concept. Let’s put that into orchestration, into service management, or into automation. Give me that, and it doesn’t have to be a single platform. Give me a way to anticipate HPE’s product roadmap. The challenge for HPE is delivery.
Gardner: Before we sign off, one of the important things about IT investment is getting buy-in and support from your superiors and the other parts of your enterprise. Are there some tangible metrics of success, returns on investment (ROIs), or improvements in productivity that you can point to from your orchestration, not just helping smart people do smart things, but benefiting the business at large?
Saunderson: So organizations often only do the things that the boss checks. The obvious priorities for us are straight around our business case.
If you want to look at real tangibles, our virtual server provisioning, even though it’s the heavyweight process that it is today, is turning from days into hours. That’s serious change, that’s serious organizational cultural change, but it’s not enough. It has to be minutes not hours, right?
Then there’s compliance. I keep coming back to it as this is a foundational thing. We’re able to continue to pass SOX and PCI every time, but we do it efficiently. That’s a cultural change as well, but that’s something that CIOs and above do care about, because it’s kind of important.
One gets your CFO in trouble, and the other one stops you from taking payments. That gets people’s attention really quickly. The moment you can delve into those, you demonstrate that not only are you meeting those regulatory requirements, but you’re able to report on all of them and have auditors look at it and say, “Yes, we agree; you are doing all the things that you should be doing.”
Then, you can flip that into the next area which is that we do have to go look at our applications for compliance. We have rich metadata over here that was able to articulate things. So let’s apply it there.