Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: Quest Software.
The benefits of server virtualization are clear for many more companies now as they reach higher percentages of workloads supported by virtual machines (VMs). But the complexity impact on other IT functions can also ramp up quickly, in some cases jeopardizing the overall benefits.
When it comes to the relationship between increasingly higher levels of virtualization and the need for new data backup and recovery strategies, for example, the impact can be a multiplier of improvement when both are done properly and in the context of one another.
The next BriefingsDirect enterprise IT discussion then focuses on how virtualization provides an excellent on-ramp to improved data lifecycle benefits and efficiencies. What’s more, the elevation of data to the lifecycle efficiency level also forces a rethinking of the culture of data, of who owns data, and when, and who is responsible for managing it in a total lifecycle.
This is different from the previous and current system of data management as a fragmented approach, with different oversight for data across far-flung instances and uses.
Here to share insights on where the data availability market is going — and how new techniques are being adopted to make the value of data ever greater — we’re joined by John Maxwell, Vice President of Product Management for Data Protection, at Quest Software. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: Quest Software is a sponsor of BriefingsDirect podcasts.]
Here are some excerpts:
Gardner: Why has server virtualization become a catalyst to data modernization?
Maxwell: I think it’s a natural evolution, and I don’t think it was even intended on the part of the two major hypervisor vendors, VMware and Microsoft with their Hyper-V. As we know, five or 10 years ago, virtualization was touted as a means to control IT costs and make better use of servers.
Utilization was in single digits, and with virtualization you could get it much higher. But the rampant success of virtualization impacted storage and the I/O where you store the data.
Upped the ante
If you look at the announcements that VMware did around vSphere 5, around storage, and the recent launch of Windows Server 2012, Hyper-V, where Microsoft even upped the ante and added support for Fibre Channel with their hypervisor, storage is at the center of the virtualization topic right now.
It brings a lot of opportunities to IT. Now, you can separate some of the choices you make, whether it has to do with the vendors that you choose or the types of storage, network-attached storage (NAS), shared storage and so forth. You can also make the storage a lot more economical with thin disk provisioning, for example.
There are a lot of opportunities out there that are going to allow companies to make better utilization of their storage, just as they’ve done with their servers. It’s going to allow them to implement new technologies without necessarily having to go out and buy expensive proprietary hardware.
From our perspective, the richness of what the hypervisor vendors are providing in the form of APIs, new utilities, and things that we can call on and utilize, means there are a lot of really neat things we can do to protect data. Those didn’t exist in a physical environment.
It’s really good news overall. Again, the hypervisor vendors are focusing on storage, and so are companies like Quest when it comes to protecting that data.
Gardner: What is it about data that people need to think differently about?
Maxwell: First of all, people shouldn’t get too complacent. We’ve seen people load up virtual disks, and one of the areas of focus at Quest, separate from data protection, is in the area of performance monitoring. That’s why we have tools that allow you to drill down and optimize your virtual environment from the virtual disks and how they’re laid out on the physical disks.
And even hypervisor vendors — I’m going to point back to Microsoft with Windows Server 2012 — are doing things to alleviate some of the performance problems people are going to have. At face value, your virtual disk environment looks very simple, but sometimes you don’t set it up or it’s not allocated for optimal performance or even recoverability.
There’s a lot of education going on. The hypervisor vendors, and certainly vendors like Quest, are stepping up to help IT understand how these logical virtual disks are laid out and how to best utilize them.
See it both ways
At face value, virtualization makes it really easy to go out and allocate as many disks as you want. Vendors like Quest have put solutions in place so that, within a couple of mouse clicks, you can expose your environment, see all the VMs that are out there, and protect them pretty much instantaneously.
From that aspect, I don’t think there needs to be a lot of thought, as there was back in the physical days, of how you had to allocate storage for availability. A lot of it can be taken care of automatically, if you have the right software in place.
That said, a lot of people may have set themselves up, if they haven’t thought of disaster recovery (DR), for example. When I say DR, I also mean failover of VMs and the like, as far as how they could set up an environment where they could ensure availability of mission-critical applications.
For example, you wouldn’t want to put everything, all of your logical volumes, all your virtual volumes, on the same physical disk array. You might want to spread them out, or you might want to have the capabilities of replicating between different hypervisor, physical servers, or arrays.
Gardner: I understand that you’ve conducted a survey to try to find out more about where the market is going and what the perceptions are in the market. Perhaps you could tell us a bit about the survey and some of the major findings.
Maxwell: One of the findings that I find most striking, since I have been following this for the past decade, is that our survey showed that 70 percent of organizations now consider at least 50 percent of their data mission critical.
That may sound ambiguous at first, because what is mission critical? But in the context of recoverability, it generally means data that has to be restored in less than an hour (a recovery-time objective) and/or for which no more than an hour of changes can be lost (a recovery-point objective).
This means that if I have a database, I can’t go back 24 hours. The least amount of time that I can go back is within an hour of losing data, and in some cases, you can’t go back even a second. But it really gets into that window.
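The recovery-time and recovery-point objectives described above can be checked with a little arithmetic. The following is an illustrative sketch, not anything from Quest's products; the function name and the schedules tested are hypothetical.

```python
from datetime import timedelta

def meets_sla(backup_interval, restore_time, rpo, rto):
    """Check a backup schedule against recovery objectives.

    Worst-case data loss is the gap between backups (compare to the RPO);
    worst-case downtime is the time the restore itself takes (compare to
    the RTO). All arguments are timedelta values.
    """
    return backup_interval <= rpo and restore_time <= rto

one_hour = timedelta(hours=1)

# Nightly backups cannot satisfy a one-hour RPO, even with a fast restore.
print(meets_sla(timedelta(hours=24), timedelta(minutes=30), one_hour, one_hour))  # False

# 15-minute snapshots with a 30-minute restore satisfy both objectives.
print(meets_sla(timedelta(minutes=15), timedelta(minutes=30), one_hour, one_hour))  # True
```

This is why mission-critical data pushes organizations toward snapshots and replication: no nightly backup window, however fast, can meet a sub-hour recovery point.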
I remember in the days of the mainframe, you’d say, “Well, it will take all day to restore this data, because you have tens or hundreds of tapes to do it.” Today, people expect everything to be back in minutes or seconds.
The other thing that was interesting from the survey is that one-third of IT departments were approached by their management in the past 12 months to increase the speed of the recovery time. That really dovetails with the 50 percent of data being mission critical. So there’s pressure on the IT staff now to deliver better service-level agreements (SLAs) within their company with respect to recovering data.
Terms are synonymous
The other thing that’s interesting is that data protection and the term backup are synonymous. It’s funny. We always talk about backup, but we don’t necessarily talk about recovery. Something that really stands out now from the survey is that recovery or recoverability has become a concern.
Case in point: 73 percent of respondents, or roughly three quarters, now consider recovering lost or corrupted data and restoring those mission critical applications their top data-protection concern. Only 4 percent consider the backup window the top concern. Ten years ago, all we talked about was backup windows and speed of backup. Now, only 4 percent considered backup itself, or the backup window, their top concern.
So 73 percent are concerned about the recovery window, only 4 percent about the backup window, and only 23 percent consider the ability to recover data independent of the application their top concerns.
Those trends really show that there is a need. The beauty is that, in my opinion, we can tighten those service levels more easily in virtualized environments than we can in physical ones.
Gardner: What’s the relationship between moving toward higher levels of virtualization and cutting costs?
Maxwell: You have to look at a concept that we call tiered recovery. That’s driven by the importance now of replication in addition to traditional backup, and new technology such as continuous data protection and snapshots.
That gets to what I was mentioning earlier. Data protection and backup are synonymous, but it’s a generic term. A company has to look at which policies or which solutions to put in place to address the criticality of data, but then there is a cost associated with it.
For example, it’s really easy to say, “I’m going to mirror 100 percent of my data,” or “I’m going to do synchronous replication of my data,” but that would be very expensive from a cost perspective. In fact, it would probably be just about unattainable for most IT organizations.
Categorize your data
What you have to do is understand and categorize your data, and that’s one of the focuses of Quest. We’re introducing something this year called NetVault Extended Architecture (NetVault XA), which will allow you to protect your data based on policies, based on the importance of that data, and apply the correct solution, whether it’s replication, continuous data protection, traditional backup, snapshots, or a combination.
You can’t just do this blindly. You have got to understand what your data is. IT has to understand the business, and what’s critical, and choose the right solution for it. What we see now is that the traditional people who were responsible for physical storage are taking over responsibility for virtual storage.
… Because of the mission criticality of data, they’re going from being people who looked at data as just a bunch of volumes or arrays, logical unit numbers (LUNs), to “these are the applications and this is the service level associated with the applications.”
When they go to set up policies, they are not just thinking of, “I’m backing up a server” or “I’m backing up disk arrays,” but rather, “I’m backing up Oracle Financials,” “I’m backing up SAP,” or “I’m backing up some in-house human resources application.”
Adjust the policy
And the beauty of where Quest is going is, what if those rules change? Instead of having to remember all the different disk arrays and servers that are associated with that, say the Oracle Financials, I can go in and adjust the policy that’s associated with all of that data that makes up Oracle Financials. I can fine-tune how I am going to protect that and the recoverability of the data.
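The application-centric policy idea above can be sketched in a few lines. NetVault XA's actual interface isn't described in the interview, so every name, tier, and method below is hypothetical; the point is only that the policy names an application, while the servers and arrays behind it are resolved separately.

```python
# Hypothetical policies keyed by application, not by server or array.
POLICIES = {
    "Oracle Financials": {
        "tier": "mission-critical",
        "methods": ["continuous-data-protection", "replication"],
        "rpo_minutes": 5,
    },
    "HR app": {
        "tier": "standard",
        "methods": ["nightly-backup"],
        "rpo_minutes": 24 * 60,
    },
}

# The assets that make up each application, tracked separately, so
# retuning protection means editing one policy, not many servers.
INVENTORY = {
    "Oracle Financials": ["vm-ora-db1", "vm-ora-app1", "array-7"],
    "HR app": ["vm-hr1"],
}

def protection_plan(app):
    """Expand an application's policy into (asset, method) work items."""
    policy = POLICIES[app]
    return [(asset, method)
            for asset in INVENTORY[app]
            for method in policy["methods"]]

print(protection_plan("HR app"))  # [('vm-hr1', 'nightly-backup')]
```

Changing how Oracle Financials is protected then means editing one entry in `POLICIES`; every underlying VM and array picks up the new treatment automatically.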
Gardner: How do we look at this shift and think about extending that policy-driven and dynamic environment at the practical level of use?
Maxwell: With the increased amount of virtual data out there, which just adds to the whole pot of heterogeneous environments, whether you have Windows and Linux, MySQL, Oracle, or Exchange, it’s impossible for these people who are responsible for the protection and the recoverability of data to have the skills needed to know each one of those apps.
We want to make it as easy to back up and recover a database as it is a flat file. The fine line that we walk is that we don’t want to dumb the product down. We want to provide intuitive GUIs, a user experience that is a couple of clicks away to say, “Here is a database associated with the application. What point do I want to recover to?” and recover it.
If there needs to be some more hands-on or more complicated things that need to be done, we can expose features to maybe the database administrator (DBA), who can then use the product to do more complex recovery or something to that effect.
Again, they’re responsible for everything. They’re setting the policies, and they shouldn’t have to be qualified. They shouldn’t have to be an Exchange administrator, an Oracle DBA, or a Linux systems administrator to be able to recover this data.
We’re going to do that in a nice pretty package. Today, there are many people here at Quest who walk around with a tablet PC as much as they do with their laptop. So our next-generation user interface (UI) around NetVault XA is being designed with a tablet computing scenario, where you can swipe data, and your toolbar is on the left and right, as if you are holding it using your thumb — that type of thing.
Gardner: Are there any other technology approaches that Quest is involved with that further explain how some of these challenges can be met?
Maxwell: There are two things I want to mention. Today, Quest protects VMware and Microsoft Hyper-V environments, and we’ll be expanding the hypervisors that we’re supporting over the next 12 months. Certainly, there are going to be a lot of changes around Windows Server 2012 or Hyper-V, where Microsoft has certainly made it a lot more robust.
There are a lot more things for us exploit, because we’re envisioning customer environments where they’re going to have multiple hypervisors, just as today people have multiple operating system databases.
We want to take care of that, mask some of the complexity, and allow people to possibly have cross-hypervisor recoverability. In other words, we want to enable safe failover of a VMware ESXi system to Microsoft Hyper-V, or vice versa.
There’s another thing that’s interesting, and it’s something that has challenged the engineers here at Quest. It gets into the concepts of how you back up or protect data differently in virtual environments. Our vRanger product is the market leader, with more than 40,000 customers, and it’s completely agentless.
As we have evolved the product over the past seven years, we’ve had three generations of the product and have exploited various APIs. But with vRanger, we’ve now gone to what is called a virtual appliance architecture. We have a vRanger service that performs backup and replication for one or hundreds of VMs that exist either on that one physical server or in a virtual cluster. So this VM can even protect VMs that exist on other hardware.
The beauty of this is, first, the scalability. I have one software app running that’s highly controllable, and you can control what resources it uses as it replicates, protects, and recovers all of my VMs. That’s easy to manage, versus having to have an agent installed in every one of those VMs.
Two, there’s no overhead. The VMs don’t even know, in most cases, that a backup is occurring. In the case of VMware, we use the services of ESXi that allow us to go out there, snapshot the virtual volumes, called VMDKs, and back up or replicate the data.
There’s a service in Windows called Volume Shadow Copy Service, or VSS for short, and one of the unique things that Quest does with our backup software is synchronize the hypervisor snapshot of the virtual disks with the applications through VSS, so we have a consistent point-in-time backup.
To communicate, we dynamically inject binaries into the VM that do the processing and then remove themselves. So, for a very short time, there’s something running in that VM, but then it’s gone, and that allows us to have a consistent backup. One of the beauties of virtualization is that I can move data without the application being conscious of it happening.
That way, from that one image backup that we’ve done, I can restore an entire VM, individual files, or in the case of Microsoft Exchange or Microsoft SharePoint, I can recover a mailbox, an item, or a document out of SharePoint.
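The agentless, VSS-synchronized flow described above can be sketched as a sequence. This is only a trace of the steps under stated assumptions: the `Hypervisor`, `Helper`, and `Target` classes below are stand-ins that record call order, not the real VMware VDDK/vSphere or Windows VSS APIs, none of which are invoked here.

```python
# Stubs that log each step so the sequence can be inspected; all names
# are illustrative, not real VMware or VSS interfaces.
class Helper:
    """Stands in for the temporarily injected in-guest binaries."""
    def __init__(self, log): self.log = log
    def vss_freeze(self): self.log.append("vss_freeze")
    def vss_thaw(self): self.log.append("vss_thaw")

class Hypervisor:
    def __init__(self): self.log = []
    def inject_helper(self, vm): self.log.append("inject"); return Helper(self.log)
    def snapshot_disks(self, vm): self.log.append("snapshot"); return "snap-1"
    def remove_helper(self, vm, helper): self.log.append("remove")
    def delete_snapshot(self, vm, snap): self.log.append("delete_snapshot")

class Target:
    """Stands in for the backup repository or replica destination."""
    def __init__(self, log): self.log = log
    def copy(self, snap): self.log.append(f"copy:{snap}")

def backup_vm(vm, hv, target):
    helper = hv.inject_helper(vm)        # temporary binaries in the guest
    try:
        helper.vss_freeze()              # quiesce applications via VSS
        snap = hv.snapshot_disks(vm)     # point-in-time VMDK snapshot
        helper.vss_thaw()                # guest resumes right away
    finally:
        hv.remove_helper(vm, helper)     # nothing stays running in the VM
    target.copy(snap)                    # image backup runs off-host
    hv.delete_snapshot(vm, snap)
    return hv.log

hv = Hypervisor()
print(backup_vm("vm-01", hv, Target(hv.log)))
```

The key design point is that the freeze/thaw window is short: applications are quiesced only long enough to take the snapshot, and the heavy copy work happens afterward, outside the guest.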
We replicate data amongst various Quest facilities. Then, we can bring up an application that was running in location A in point B, on unlike hardware. It can be completely different storage, completely different servers, but since they’re VMs, it doesn’t matter.
That kind of flexibility that virtualization brings is going to give every IT organization in the world the type of failover capabilities that used to exist only for the Global 1000, which had to set up a hot site or a second data center. They would use very expensive, proprietary, hardware-based replication and things like that. So you had to have like arrays, like servers, and all that, just to have availability.
Now, with virtualization, it doesn’t matter, and of course, we have plenty of bandwidth, especially here in the United States. So it’s very economical, and this gets back to our survey that showed that for IT organizations, 73 percent were concerned about recovering data, and that’s not just recovering a file or a database.
Two years ago, when people talked about cloud and data protection, it was just considering the cloud as a target: I would back up to the cloud or replicate to the cloud. Now, we’re talking about actually putting data protection products in the cloud, so you can back up the data locally within the cloud and then maybe even replicate it or back it up to on-prem, which is kind of a novel concept if you think about it.
If you host something in the cloud, you can back it up locally there and then actually keep a copy on-prem. Also, the cloud is where we’re certainly looking at having generic support for being able to do failover into the cloud, working with various service providers where you can pre-provision, for example, VMs out there.
You’re replicating data. You sense that you have had a failure, and all you have to do is, via software, bring up those VMs, pointing them at the disk replicas you put up there.
Different cloud providers
Then, there’s the concept of what you do if a certain percentage of all your IT apps are hosted in cloud by different cloud providers. Do you want to be able to replicate the data between cloud vendors? Maybe you have data that’s hosted at Amazon Web Services. You might want to replicate it to Microsoft Azure or vice versa or you might want to replicate it on-premise (on-prem).
So there’s going to be a lot of neat hybrid options. The hybrid cloud is going to be a topic that we’re going to talk about a lot now, where you have that mixture of on-prem, off-prem, hosted applications, etc., and we are preparing for that.
Gardner: Are there some best practices you’ve seen in the market about how to go about this, or at least to get going?
Maxwell: The number one thing is to find a partner. At Quest, we have hundreds of technology partners that can help companies architect a strategy utilizing the Quest data protection solutions.
Again, choose a solution that hits all the key points. In the case of VMware, you can go to VMware’s site and look for VMware Ready-certified solutions. It’s the same with Microsoft, whether Windows Server 2008- or 2012-certified. Make sure that you’re getting a solution that’s truly certified. A lot of products say they support virtual environments, but they don’t have that real certification, and as a result, they can’t do a lot of the innovative things that I’ve been talking about.
So find a partner who can help, or, we at Quest can certainly help you find someone who can help you architect your environment and even implement the software for you, if you so choose. Then, choose a solution that is blessed by the appropriate vendor and has passed their certification process.