Data Explosion and Big Data Require New Strategies for Data Management and Recovery
Transcript of a sponsored BriefingsDirect podcast on how data-recovery products can provide quicker access to data and analysis.

Listen to the podcast. Find it on iTunes/iPod. Sponsor: Quest Software

Dana Gardner: Hi. This is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on why businesses need a better approach to their data-recovery capabilities. We'll examine how major trends like virtualization, big data, and calls for comprehensive and automated data management are driving the need for change.

The current landscape for data management, backup, and disaster recovery (DR) too often ignores the transition from physical to virtualized environments and sidesteps the heightened real-time role that data now plays in the enterprise. [Disclosure: Quest Software is a sponsor of BriefingsDirect podcasts.]

What's needed are next-generation, integrated, and simplified approaches to fast backup and recovery that span all essential corporate data. The solution therefore means bridging legacy and new data, scaling to handle big data, implementing automation and governance, and integrating the functions of backup, protection, and DR.

The payoffs come in the form of quicker access to needed data and analytics, highly protected data across its lifecycle, ease in DR, and overall improved control and management of key assets, especially by non-specialized IT administrators.

To share insights into why data recovery needs a new approach and how that can be accomplished, we're joined by two experts. We're here with John Maxwell, Vice President of Product Management for Data Protection at Quest Software. Welcome to the show, John.

John Maxwell: Thank you. Glad to be here.

Gardner: We're also here with Jerome Wendt. He is the President and Lead Analyst of DCIG, an independent storage analyst and consulting firm.
Welcome, Jerome.

Jerome Wendt: Thank you, Dana. It's a pleasure to join the call.

Gardner: Let me posit my first question to you, Jerome. I'm sensing a major shift in how companies view and value their data assets. Is data really a different thing than, say, five years ago, in terms of how companies view it and value it?
Wendt: Absolutely. There's no doubt that companies are viewing it much more holistically. It used to be that data in structured databases, or even in semi-structured formats such as email, was where all the focus was. Clearly, in the last few years, we've seen a huge change, where unstructured data is now the fastest-growing part of most enterprises and where even a lot of their intellectual property is stored. So I think there is a huge push to protect and mine that data.

But we're also seeing more of a push to get to edge devices. We talk a lot about PCs and laptops, and there is more of a push to protect data in that area, but all you have to do is look around and see the growth.

When you go to any tech conference, you see iPads everywhere, and people are storing more data in the cloud. That's going to have an impact on how people and organizations manage their data and what they do with it going forward.

Gardner: John Maxwell, it seems that not that long ago, data was viewed as a byproduct of business. Now, for more and more companies, data is the business, or at least the analytics that they derive from it. Has this been a sea change, from your perspective?

Mission critical

Maxwell: It's funny that you mention that, because I've been in the storage business for over 15 years. I remember just 10 years ago, when studies would ask people what percentage of their data was mission-critical, it was maybe around 10 percent. That aligns with what you're talking about, the shift in the importance of data.

Recent surveys from multiple analyst groups have now shown that people categorize their mission-critical data at 50 percent. That's pretty profound, in that a company is saying that half the data it has, it can't live without, and if it did lose it, it needs it back in less than an hour, or maybe in minutes or seconds.

Gardner: So we have a situation where more data is considered important, they need it faster, and they can't do without it.
It's as if our dependency on data has become heightened and is ever-increasing. Is that a fair characterization, Jerome?

Wendt: Absolutely.

Gardner: So given the requirement of having access to data, and data being more important all the time, we're also seeing a lot of shifting on the infrastructure side of things. There's much more movement toward virtualization and whole new approaches to storage, when it comes to trying to reduce the overall cost, reducing duplication, and that sort of business. How is the change in infrastructure impacting this simultaneous need for access and criticality? Let's start with you, John.

Maxwell: Well, the biggest change from an infrastructure standpoint has been the impact of virtualization. This year, well over 50 percent of all the server images in the world are virtualized images, which is just phenomenal.

Quest has really been at the forefront of this shift in infrastructure. We have been, for example, backing up virtual machines (VMs) for seven years with our Quest vRanger product. We've seen that evolve from when VMs, or virtual infrastructure, were used more for test and development. Today, I've seen studies that show that the shops that are virtualized are running SQL Server, Microsoft Exchange, and very mission-critical apps.

We have some customers at Quest that are 100 percent virtualized. These are large organizations, not just mom-and-pop companies. That shift to virtualization has really made companies assess how they manage it, what tools they use, and their approaches. Virtualization has a large impact on storage and on how you back up, protect, and restore data.

Gardner: John, it sounds like you're saying that it's an issue of complexity, but from a lot of the folks I speak to, when they get to the end of their journey through virtualization, they find that there are a lot of virtuous benefits to be extended across the data lifecycle. Is it the case that this is not all bad news, when it comes to virtualization?

Maxwell: No. Once you implement and have the proper tools in place, your virtual life is going to be a lot easier than your physical one from an IT infrastructure perspective. A lot of people initially moved to virtualization for cost savings, because they had under-utilization of hardware. But one of the benefits of virtualization is the freedom, the dynamics. You can create a new VM in seconds.
But then, of course, that creates things like VM sprawl, the amount of data continues to grow, and the like.

At Quest, we've adapted and exploited a lot of the features that exist in virtual environments but don't exist in physical environments. It's actually easier to protect and recover virtual environments than physical ones, if you have tools that exploit the APIs and the infrastructure that exist in that virtual environment.

Significant benefits

Gardner: Jerome, do you concur that, when you're through the journey, when you're doing this correctly, a virtualized environment gives you significant benefits when it comes to managing data from a lifecycle perspective?

Wendt: Yes, I do. One of the things I've clearly seen is that it really makes it more of a business enabler. We talk a lot these days about having different silos of data. One application creates data that stays over here. Then, it's backed up separately. Then, another application or another group creates data back over here.

Virtualization not only means consolidation and cost savings, but it also facilitates a more holistic view into the environment and how data is managed. Organizations are finally able to get their arms around the data that they have.

Before, it was so distributed that they didn't really have a good sense of where it resided or how to even make sense of it. With virtualization, there are initial cost benefits that help bring it all together, but once it's all together, they're able to go to the next stage, and it becomes the business enabler at that point.

Gardner: I suppose the key now is to be able to manage, automate, and bring comprehensive control and governance to this equation, not just for the virtualized workloads, but also, of course, for the data that they're creating and bringing back into business processes.

So what about that? What's this other trend afoot? How do we move from sprawl to control and make this flip from being a complexity issue to a virtuous adoption-and-benefits issue? Let's start with you, John.

Maxwell: Over the years, people had very manual processes. For example, when you brought a new application online or added hardware, a server, and that type of thing, you asked, "Oops, did we back it up? Are we backing that up?"

One thing that's interesting in a virtual environment is that the backup software we have at Quest will automatically see when a new VM is created and start backing it up. So it doesn't matter if you have 20 or 200 or 2,000 VMs. We're going to make sure they're protected.

Where it really gets interesting is that you can protect the data a lot smarter than you can in a physical environment. I'll give you an example.

In a VMware environment, there are services that we can use to do a snapshot backup of a VM. In essence, it's an immediate backup of all the data associated with that machine or those machines.
It could be on any generic kind of hardware. You don't need proprietary hardware or the more expensive software features of high-end disk arrays. That is a feature we can exploit that's built into the hypervisor itself.

Image backup

Even the way that we move data is much more efficient, because we have a process that we pioneered at Quest called "backup once, restore many," where we create what's called an image backup. From that image backup, I can restore an entire system, an individual file, or an application. But I've done that from that one pass, that one very effective snapshot-based backup.
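The "backup once, restore many" idea Maxwell describes can be illustrated with a small sketch: a single point-in-time image serves full-system, single-file, and per-application restores. This is a hypothetical toy model, not Quest's actual implementation; every function and path name here is invented for illustration.

```python
# Toy model of "backup once, restore many": one image backup,
# three kinds of restore. Names are illustrative, not Quest's APIs.

def take_image_backup(vm_files):
    """Capture one point-in-time image of a VM's files (a simple copy here)."""
    return dict(vm_files)

def restore_full_system(image):
    """Restore every file from the single image."""
    return dict(image)

def restore_file(image, path):
    """Pull one file out of the same image; no separate file-level backup needed."""
    return image[path]

def restore_application(image, app_prefix):
    """Restore only the files belonging to one application."""
    return {p: d for p, d in image.items() if p.startswith(app_prefix)}

vm = {
    "/etc/hosts": "127.0.0.1 localhost",
    "/var/db/orders.db": "<database pages>",
    "/var/mail/inbox": "<mailbox>",
}
image = take_image_backup(vm)

print(restore_full_system(image) == vm)        # True
print(restore_file(image, "/var/mail/inbox"))  # <mailbox>
print(restore_application(image, "/var/db"))   # {'/var/db/orders.db': '<database pages>'}
```

The point of the design is that every restore path reads from the same snapshot-based image, so the cost of capturing the backup is paid only once.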
If you look at physical environments, there is the concept of doing physical machine backups, file-level backups, and specific application backups, and for some systems you even have to employ hardware-based snapshots, or you actually have to bring the applications down.

So from that perspective, we've gotten much more sophisticated in virtual environments. Again, we're moving data without impacting the applications themselves and without impacting the VMs. The way we move data is very fast and very effective.

Gardner: Jerome, when we start to do these sorts of activities, whether we're backing up at a very granular level or even thinking about mirroring entire data centers, how do governance, management, and automation come into play here? Is this something that couldn't have been done in the physical domain?

Wendt: I don't think it could have been done in the physical domain, at least not very easily. We do these buyer's guides on a regular basis, so we have a chance to take in-depth looks at all these different backup software products on the market and how they're evolving.

One of the things we're really seeing, also to your point, is just a lot more intelligence going into this backup software. They're moving well beyond just "doing backups" anymore. There's much more awareness of what data is included in these data repositories and how they're searched.

Also, with more integration with platforms like VMware vCenter, administrators can centrally manage backups, monitor backup jobs, and do recoveries. One person can do so much more than they could even a few years ago.

And really, the expectation of organizations is evolving: they don't necessarily want separate backup admins and system admins anymore. They want one team that manages their virtual infrastructure. That all rolls up to your point that it makes it easier to govern, manage, and execute on corporate objectives.

Gardner: I think it's important to try to figure how this works out in terms of total cost.
If you're adding, as you say, more intelligence to the process, if you don't have separate administrators for each function, and if you're able to provide a workflow approach to your data lifecycle, you have fewer duplications, you're using less total storage, and you're able to support the requirements of the applications, and so on. Is this really a case, John Maxwell, where we're getting more and paying less?

Maxwell: Absolutely. Just as the cost per gigabyte has gone down over the past decade, the effectiveness of the software and what it can do is way beyond what we had 10 years ago.
Simplified process

Today, in a virtual environment, we can provide a solution that simplifies the process, where one person can ensure that hundreds of VMs are protected. They can literally right-click and restore a VM, a file, a directory, or an application.

One of the focuses we've had at Quest, as I alluded to earlier, is that there are a lot of mission-critical apps running on these machines. Jerome talked about email. A lot of people consider email one of their most mission-critical applications. And the person responsible for protecting the environment that Microsoft Exchange is running on may not be an Exchange administrator, but maybe they're tasked with being able to recover Exchange.

That's why we've developed technologies that allow you to go out there and, from that one image backup, restore an email conversation or an email attachment from someone's mailbox. That person doesn't have to be a guru with Exchange. Our job is to figure out, behind the scenes, how to do this and make it available via a couple of mouse clicks.

Gardner: So we're moving the administration abstraction up, rather than going app by app, server by server. We're really looking at it as the function of what you want to do with that data. That strikes me as a big deal. Is that a whole new thing that we're doing with data, Jerome?

Wendt: Yes, it is. As John was speaking, I was going to comment. I spoke to a Quest customer just a few weeks ago. He clearly had some very specific technical skills, but he's responsible for a lot of things, a lot of different functions: server admin, storage admin, backup admin.

I think a lot of individuals can relate to this guy. I know I certainly did, because that was my role for many years, when I was an administrator at a police department. You have to try to juggle everything while you're trying to do your job, with backup being just one of those tasks.

In his particular case, he was called upon to do a recovery and, to John's point, it was an Exchange recovery.
He never had any special training in Exchange recovery, but it just happened that he had Quest Software in place. He was able to use its FastRecover product to recover his Exchange Server and had it back up and going in a few hours.

What was really amazing, in this particular case, is that he was traveling at the time it happened. So he had to talk his manager through the process, and was able to get it up and going. Once he had the system up, he was able to log on and get it going fairly quickly.

That just illustrates how much the world has changed and how much backup software and these products have evolved, to the point where you need to understand your environment probably more than you need to understand the product, and just find the right product for your environment. In this case, this individual clearly accomplished that.

Gardner: It sounds like you're moving more to being an architect than a carpenter, right?
Wendt: Exactly.

Gardner: So we understand that management is great, and oversight at that higher abstraction is going to get us a lot of benefits. But we mentioned earlier that some folks are at 20 percent virtualization, while others are at 90 percent. Some data is mission-critical, while other data doesn't require the same diligence, and that's going to vary from company to company.

Hybrid model

So my question to you, John Maxwell, is how do organizations approach being in a hybrid sort of model, between physical and virtual, while recognizing that different apps have different criticality for their data, and that that might change? How do we manage the change? How do we get from the old way of doing this to these newer benefits?

Maxwell: Well, there are two points. One, we can't have a bunch of niche tools, one for virtual, one for physical, and the like. That's why, with our vRanger product, which has been the market leader in virtual data protection for the past seven years, we're coming out with physical support in that product in the fall of 2012. Those customers are saying, "I want one product that handles that non-virtualized data."

The second part gets down to what percentage of your data is mission-critical and how complex it is, meaning is it email, or a database, or just a flat file, and then asking if these different types of data have specific service-level agreements (SLAs), and if you have products that can deliver on those SLAs.

That's why at Quest we're really promoting a holistic approach to data protection that spans replication, continuous data protection, and more traditional backup, but backup mainly based on snapshots.

Then, that can map to the service level, to your business requirements. I just saw some data from an industry analyst that showed the replication software market is basically the same size now as the backup software market.
That shows the desire for people to have that kind of real-time failover for some applications, and you get that with replication.

When it comes to the example that Jerome gave with that customer, the Quest product they were using is NetVault FastRecover, which is a continuous data protection product. It backs up everything in real time, so you can go back to any point in time.

It's almost like a time machine when it comes to putting back that mailbox, the SQL Server database, or the Oracle database. Yet it's masking a lot of the complexity. So the person restoring it may not be a DBA. They're going to be that jack-of-all-trades who's responsible for the storage and maybe backup overall.
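The continuous data protection (CDP) behavior Maxwell describes can be thought of as a write journal that is replayed up to any chosen moment. The sketch below is a deliberately simplified illustration of that "time machine" idea, not NetVault FastRecover's actual design; all names are invented.

```python
# Toy continuous-data-protection model: every write is journaled with a
# timestamp, and a restore replays the journal up to any point in time.

journal = []  # ordered (timestamp, key, value) write records

def record_write(ts, key, value):
    """Capture a write in real time as it happens."""
    journal.append((ts, key, value))

def restore_as_of(ts):
    """Rebuild the data set as it existed at time ts by replaying the journal."""
    state = {}
    for when, key, value in journal:
        if when <= ts:
            state[key] = value
    return state

record_write(100, "mailbox/alice", "v1")
record_write(200, "mailbox/alice", "v2-corrupted")
record_write(300, "db/orders", "latest")

print(restore_as_of(150))  # {'mailbox/alice': 'v1'}, the state before the corruption
```

Because the journal preserves every intermediate state, the operator picks a timestamp rather than a particular backup set, which is part of what masks the complexity from a non-DBA.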
Gardner: Jerome, what are you seeing in the field? Are there folks who are saying, "Okay, the value here is so compelling, and we have such a mess, that we're going to bite the bullet and just go totally virtual in three to six months. And, at least for our mission-critical apps, we're going to move them over into this data lifecycle approach for our recovery, backup, and DR"?

Or are you seeing companies that are saying, "Well, this is a five-year plan, and we're going to do this first and kind of string it out"? Which of these seems to be in vogue at the moment? What works: bite the bullet, all or nothing, or the slow crawl-walk-run approach?

Wendt: It really depends on the size of the organization you're talking about. When I talk to small and medium-sized businesses (SMBs), those with 500-1,000 employees or fewer, they may have 100 terabytes of storage and maybe 200 servers. I see them just biting the bullet. They're doing the three- to six-month approach. Let's make the conversion, do the complete switchover, and go virtual as much as possible.

Few legacy systems

Almost all of them have a few legacy systems. They may be running some application on Windows 2000 Server or some old version of AIX. Who knows what a lot of companies have running in the background? They can't just virtualize everything, but where they can, they get to a 98 percent virtualized environment.

When you start getting to enterprises, I see it a little bit differently. It's more of a staged approach, because it just takes more coordination across the enterprise to make it all happen. There are a lot more logistics and planning going on.

I haven't talked to too many that have taken five years to do it. It's mostly two to maybe four years at the outside range. But the move is to virtualize as much as possible, except for those legacy apps, which for some reason they can't tackle.

Gardner: John Maxwell, for those two classes of user, what does Quest suggest?
Is there a path that you have for those who want to do it as rapidly as possible? And then is that metered approach also there in terms of how you support the journey?

Maxwell: It's funny that you mention the difference between the SMB and the enterprise. I'm a firm believer that one size doesn't fit all, which is why we have solutions for specific markets. We have solutions for the SMB along with enterprise solutions, but we do have a lot of commonality between the products. We're even developing for our SMB product a seamless upgrade path to our enterprise-class product.

Again, they're different markets, just as Jerome said. We found exactly what he just mentioned, which is that the smaller accounts tend to be more homogeneous and tend to virtualize a lot more, whereas enterprises are more heterogeneous, and they may have a bigger mix of physical and virtual.
And they may have really complex systems. That's where you run into big data and more complex challenges, when it comes to how you can back data up and how you can recover it. And there are also different price points.

So our approach is to have solutions specific to the SMB and specific to the enterprise. There is a lot of cross-functionality that exists in the products, but we're very crisp in our positioning, our go-to-market strategy, the price points, and the features, because one of the things you don't want to do with SMB customers is overwhelm them.

I meet hundreds of customers a year, and one of our top customers has an exabyte of data. Jerome, I don't know if you talk to many customers that have an exabyte, but I don't really run into a lot of customers that have an exabyte of data. Their requirements are completely different from those of our average vRanger customer, who has around five terabytes of data.

We have products that are specific to the market segments, to the sophistication, or lack of sophistication, of that user, and at the right price point. Yet it's one vendor, one throat to choke, and there are paths for upgrade if you need them.

Gardner: John, in talking with Quest folks, I've heard them refer to a next-generation platform or approach, or a whole greater than the sum of the parts. How do you define next generation when it comes to data recovery in your view of the world?

New benefits

Maxwell: Well, without hyperbole, for us, our next generation is a new platform that we call NetVault Extended Architecture, and this is a way to provide several benefits to our customers.

One is that with NetVault Extended Architecture we are now delivering a single user experience across products. So this gets into SMB-versus-enterprise for a customer that's using maybe one of our point solutions for application or database recovery, providing that consistent look and feel, that consistent approach. We have some customers that use multiple products.
So with this, they now have a single pane of glass.

Also, I think it's just important to offer a consistent means of administering and managing the backup and recovery process, because, as we've been discussing, why should a person have to have multiple skill sets? If you have one view, one console into data protection, that's going to make your life a lot easier than having to learn a bunch of other types of solutions.

That's the immediate benefit that I think people see. What NetVault Extended Architecture encompasses under the covers, though, is a really different approach in the industry, which is modularization of a lot of the components of backup and recovery and making them plug and play.

Let me give you an example. With the increase in virtualization, a lot of people just equate virtualization with VMware. Well, we've got Hyper-V. We have initiatives from Red Hat. We have Xen, Oracle, and others. Jerome, I'm kind of curious about your views, but just as we saw in the '90s and the '00s, with people having multiple platforms, whether it's Windows and Linux, or Windows, Linux and, as you said, AIX, I believe we're going to start seeing multiple hypervisors.

So one of the capabilities that NetVault Extended Architecture is going to bring us is the ability to offer a consistent approach to multiple hypervisors, meaning it could be a combination of VMware and Microsoft Hyper-V and maybe even KVM from Red Hat.

But, again, the administrator, the person who is managing the backup and recovery, doesn't have to know any one of those platforms. That's all hidden from them. In fact, if they want to restore data from one of those hypervisors, say restore a VMware VMDK, which is their volume format in VMware-speak, into what's called a VHD in Hyper-V, they could do that.

That, to me, is really exciting, because this is exploiting these new platforms and environments and providing tools that simplify the process. But that's going to be one of the many benefits of our new NetVault Extended Architecture next generation, where we can provide that singular experience for our customer base, have a faster go-to-market, a faster time to market with new solutions, and be able to deliver in a modular approach.

Customers can choose what they need, whether they're an SMB customer or one of the largest customers that we have, with hundreds of petabytes or exabytes of data.

Wendt: I'd like to elaborate on what John just said. I'm really glad to hear that's where Quest is going, John. I haven't had a chance to discuss this with you guys, but DCIG has a lot of conversations with managed service providers, and you'd be surprised, but there are actually very few that are VMware shops.
I find the vast majority are actually either Microsoft Hyper-V shops or using Red Hat Linux as their platform, because they're looking for a cost-effective way to deliver virtualization in their environments.

We've seen this huge growth in replication, and people want to implement disaster recovery plans or business continuity planning. I think this ability to recover across different hypervisors is going to become absolutely critical, maybe not today or tomorrow, but I would say in the next few years. People are going to say, "Okay, now that we've got our environment virtualized, we can recover locally, but how about recovering into the cloud or with a cloud service provider? What options do we have there?"

More choice

If they're using VMware and their provider isn't, they're almost forced to use VMware or something like this, whereas your platform gives them much more choice among managed service providers that are using platforms other than VMware. It sounds like Quest will really give them the ability to back up VMware hypervisors and then potentially recover into Red Hat or Microsoft Hyper-V at MSPs. So that could be a really exciting development for Quest in that area.
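One plausible mechanism behind the cross-hypervisor restore discussed here is converting the source disk format to a neutral raw image and then writing it back out in the target format; real-world tools such as qemu-img convert between VMDK and VHD this way. The sketch below simulates that pipeline with pretend byte headers; the converter functions are hypothetical stand-ins, not Quest's, VMware's, or Microsoft's actual APIs or on-disk formats.

```python
# Hypothetical cross-hypervisor restore: VMDK -> raw -> VHD.
# The "formats" here are fake byte prefixes standing in for real disk headers.

def vmdk_to_raw(blob):
    """Strip the pretend VMware header to recover the raw disk bytes."""
    assert blob.startswith(b"VMDK:")
    return blob[len(b"VMDK:"):]

def raw_to_vhd(raw):
    """Wrap the raw bytes in a pretend Hyper-V header."""
    return b"VHD:" + raw

def restore_vmdk_backup_to_hyperv(vmdk_blob):
    """Restore a VMware backup into a Hyper-V (VHD) disk via the raw form."""
    return raw_to_vhd(vmdk_to_raw(vmdk_blob))

backup = b"VMDK:" + b"<guest disk contents>"
print(restore_vmdk_backup_to_hyperv(backup))  # b'VHD:<guest disk contents>'
```

Going through a neutral intermediate means supporting a new hypervisor requires only two converters (to and from raw) rather than one for every pair of formats.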
Gardner: So being able to support the complexity and the heterogeneity, whether it's at the application level, the platform level, or the VM and hypervisor level, all of that is part and parcel of abstracting data recovery to the managed and architected level.

Do we have any examples, John, of companies that are already doing that? Are you familiar with organizations, maybe you can name them, that are doing just that, managing a heterogeneity issue and coming up with some metrics of success for their data recovery, data management, and lifecycle approach as a result?

Maxwell: I'd like to give you an example of one customer, one of our European customers, called CMC Markets. They use our entire NetVault family of products, both the core NetVault Backup product and the NetVault FastRecover product that Jerome mentioned.

They're a company where data is their lifeblood. They're an options trading company. They process tens of thousands of transactions a day. They have a distributed environment. They have their main data center in London, and that's where their network operations center is. Yet they have eight offices around the world.

One of the challenges of having remote data and/or big data is whether you can really use traditional backup to handle it. And the answer is no. With big data, there's no way that you will have enough time in a day to make that happen. With remote data, you don't want to put something that's manual out in that remote office, where you're not going to have IT people.

CMC Markets has come to this approach of moving data smarter, versus harder. They've implemented our NetVault FastRecover product, where data is backed up to disk at their remote sites. Then, the product automatically replicates its backups to the home office in London.

Then, for some of their more mission-critical data in the London data center, databases such as SQL Server and Oracle, they do real-time backup. So they're able to recover the data at any point in time, literally within seconds.
We have 17 patents on this product, most of them around a feature we call Flash Restore, which allows you to get an application up and running in less than 30 seconds.

But the real-life example is pretty interesting, in that one of their remote offices is in Tokyo. If you go back to March 11, 2011, when the magnitude-9.0 earthquake and the tsunami happened, they lost power. They had damage to some of their server racks.

Since they were replicating to London, and those backups were done locally in Tokyo, they actually got their employees up and running using Terminal Server, which enabled the Tokyo employees to connect to the applications that had been recovered in London, because they had copies of those backups. So there was no disruption to their business.
Second problem

And, as luck would have it, two weeks later they had a problem at one of the other remote offices, where a server crashed, and they were able to bring up the data remotely. Then they had another instance where they just had to recover data. Because it was so quick, end users didn't even know that a disk drive had crashed.

So I think that's a really neat example of a customer who is exploiting today's technology. This gets back to the discussion we had earlier about service levels and managing service levels in the business and making sure there's no disruption of the business. If you're doing real-time trades in a stock exchange type of environment, you can't suffer any outages, because there are not only the monetary problems, but you don't want to be on the cover of BBC.com.

Gardner: Of course, there are regulation and compliance issues to consider.

Maxwell: Absolutely.

Gardner: We're getting toward the end of our time. Jerome, quickly, do you have any use cases or examples that you're familiar with that illustrate this concept of a next-generation and lifecycle approach to data recovery that we've been discussing?

Wendt: Well, it's not an example, just a general trend I'm seeing in products, because most of DCIG's focus is on analyzing the products themselves, comparing and contrasting them, and identifying general broader trends within those products.

There are two things we're seeing. One, we're struggling with calling backup software "backup software" anymore, because it does so much more than that. You mentioned earlier how much more intelligence is in these products. We call it backup software, because that's the context in which everyone understands it, but I think going forward, the industry is probably going to have to find a better way to refer to these products.
Quest is a whole lot more than just running a backup.

And then second, people, as they view backup and how they manage their infrastructure, really have to get away from the reactive "Okay, today I'm going to have to troubleshoot 15 backup jobs that failed overnight." Those days are over. And if they're not over, you need to be looking for new products that will get you over that hump, because you should no longer be troubleshooting failed backup jobs.

You should really be looking more toward how you can make sure your whole environment is protected and recoverable, and really moving to the next phase of doing disaster recovery and business continuity planning. The products are there. They're mature, and people should be moving down that path.
Gardner: Jerome, we mentioned at the outset mobile and the desire to deliver more data and applications to edge devices, and of course the cloud was mentioned. People are going to be looking to take advantage of cloud efficiencies internally, but then also look to mixed-sourcing opportunities, hybrid-computing opportunities, different apps from different places, and the data lifecycle and backup that need to be part and parcel of that.

We also mentioned the fact that big data is more important and that the timeframe for getting mission-critical data to the right people is shortening all the time. This all pulls together, for me, this notion that in the future you're not going to be able to do this any other way. This is not a luxury, but a necessity. Is that fair, Jerome?

Wendt: Yes, it is. That's a fair assessment.

Crystal ball

Gardner: John, the same question to you, basically. When we look into the crystal ball, even not that far out, it just seems that, in order to manage what you need to do as a business, getting good control over your data and being able to ensure that it's going to be available anytime, anywhere, regardless of the circumstances, is, again, not a luxury, not a nice-to-have. It's really just going to support the viability of the business.

Maxwell: Absolutely. And what's going to make it even more complex is the cloud, because what's your control, as a business, over data that is hosted someplace else?

I know that at Quest we use seven SaaS-based applications from various vendors, but what's our guarantee that our data is protected there?
I can tell you that a lot of these SaaS-based companies or hosting companies may offer an environment that says, "We're always up," or "We have a higher level of availability," but most recovery is based on logical corruption of data.

As I said, with some of these smaller vendors, you wonder what happens if they go out of business, because I have heard stories of small service providers closing their doors, and you say, "But my data is there."

So the cloud is really exciting, in that we're looking at how we're going to protect assets that may be off-premise to your environment and how we can ensure that you can recover that data, in case that provider is not available.

Then there's something that Jerome touched upon, which is that the cloud is going to offer so many opportunities. The one that I am most excited about is using the cloud for failover. That's really getting beyond recovery into business continuity.

And something that has only been afforded by the largest enterprises, Global 1000-type customers, is the ability to have a stand-up center, a SunGard or someone like that, which is very costly and not within reach of most customers. But with virtualization and with the cloud, there's a concept that I think we're going to see become very mainstream over the next five years, which
is failover recovery to the cloud. That's something that's going to be within reach of even SMB customers, and that's really more of a business continuity message.

So now we're stepping up even more. We're now saying, "Not only can we recover your data within seconds, but we can get your business back up and running, from an IT perspective, faster than you probably ever presumed that you could."

Gardner: That sounds like a good topic for another day. I'm afraid we're going to have to leave it there.

You've been listening to a sponsored BriefingsDirect podcast discussion on the value of next-generation, integrated, and simplified approaches to fast backup and recovery. We have seen how a comprehensive approach to data recovery bridges legacy and new data, scales to handle big data, and provides automation and governance across the essential functions of backup, protection, and disaster recovery.

I'd like to thank our guests. We've been joined by John Maxwell, Vice President of Product Management for Data Protection at Quest Software. Thanks so much, John.

Maxwell: Thank you.

Gardner: We've also been joined by Jerome Wendt. He is the President and Lead Analyst at DCIG, an independent storage analyst and consulting firm. Thanks so much, Jerome.

Wendt: Thank you, Dana.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks again to you, our audience, for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod. Sponsor: Quest Software

Copyright Interarbor Solutions, LLC, 2005-2012.
All rights reserved.

You may also be interested in:

• Big Data and a Brave New World
• Big Data: Crunching the Numbers
• Case Study: Strategic Approach to Disaster Recovery and Data Lifecycle Management Pays off for Australia's SAI Global
• Enterprise Architecture and Enterprise Transformation: Related But Distinct Concepts That Can Change the World
• Capgemini's CTO on Why Cloud Computing Exposes the Duality Between IT and Business