Efficient Data Center Transformation Requires Consolidation and Standardization Across Critical IT Tasks
Transcript of a sponsored podcast discussion in conjunction with an HP video series on the best practices for developing a common roadmap for DCT.

Listen to the podcast. Find it on iTunes/iPod. Sponsor: HP

For more information on The HUB -- HP's video series on data center transformation, go to www.hp.com/go/thehub.

Dana Gardner: Hi. This is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on quick and proven ways to attain significantly improved IT operations and efficiency. We'll hear from a panel of HP experts on some of their most effective methods for fostering consolidation and standardization across critical IT tasks and management. This is the second in a series of podcasts on data center transformation (DCT) best practices and is presented in conjunction with a complementary video series. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here today we will specifically explore building quick data center project wins, leveraging project tracking and scorecards, as well as developing a common roadmap for both facilities and IT infrastructure.

With us now to explain how these solutions can drive successful data center transformation is our panel. Please join me in welcoming Duncan Campbell, Vice President of Marketing for HP Converged Infrastructure and small to medium-sized businesses (SMBs). Second, we're here with Randy Lawton, Practice Principal for Americas West Data Center Transformation & Cloud Infrastructure Consulting at HP. And last, Larry Hinman, Critical Facilities Consulting Director and Worldwide Practice Leader for HP Critical Facility Services and HP Technology Services. Welcome to you all.

You don't need to go very far in IT to find people who are diligently working to do more with less, even as they're working to transform and modernize their environments.
One way to keep the interest high and those operating and investment budgets in place is to show fast results and then use that to prime the pump for even more improvement and even more funding, with perhaps even growing budgets.

Let's go first to Duncan Campbell on communicating an ongoing stream of positive results, why that's important and necessary to set the stage for an ongoing virtuous adoption cycle for data center transformation and converged infrastructure projects.

Duncan Campbell: You bet, Dana. We've seen that when a customer is successful in breaking down a large project into a set of quick wins, there are some very positive outcomes from that.

Breeds confidence

Number one, it breeds confidence, and this is a confidence that is actually felt within the organization, within the IT team, and into the business as well. So it builds confidence both inside and outside the organization. The other key benefit is that when you can manifest these quick wins in terms of some specific return on investment (ROI) business outcome, that also translates very nicely and gets a lot of key attention, which I think has some downstream benefits that actually help out the team in multiple ways.

Gardner: I suppose it's not only getting these quick wins, but effectively communicating them well. People really need to know about them.

Campbell: Right. So this is one of the things that some of the real leaders in IT realize. It's not just about attracting the best talent and executing well, but it's about marketing the team's results as well.

One of the benefits in that is that you can actually break down these projects in terms of some specific types of wins. That might be around standardization, and you can see a lot of wins there. You can quickly consolidate to blades.
You can look at virtualization types of quick wins, as well as some automation quick wins.

We would advocate that customers think about this in terms of almost a step-by-step approach, knocking that down, getting those quick wins, and then marketing this in some very tangible ways that resonate very strongly.

Gardner: When you start to develop a cycle of recognition, incentives, and buy-in, I suppose we could also start to see some sort of a virtuous adoption cycle, whereby that sets you up for more interest, an easier time evangelizing, and so on.
Campbell: That's exactly right. A virtuous cycle is well put. That allows the team to get the additional green light to go to the next step in terms of the blueprint they are trying to execute on. It gets a green light also in terms of additional dollars and, in some cases, additional headcount to add to their team as well.

What this does, and I like this term the virtuous cycle, is not only allow you to attract key talent, but it really allows you to retain folks. That means you're getting the best team possible to duplicate that, to get those additional wins, and it really does indeed become a virtuous cycle.

Gardner: I suppose one last positive benefit here might be that, as enterprises adopt more of what we call social networking and social media, the rank and file, those users involved with these products and services, can start to be your best word-of-mouth marketing internally.

TCO savings

Campbell: That's right. A good example is where we have been able to see a significant total cost of ownership (TCO) savings with one of our customers, McKesson, which in fact was taking one of these consolidated approaches with all their development tools. They saw a considerable savings, both in terms of dollars, over $12.9 million, as well as a percentage of TCO savings that was upwards of 50 percent.

When you see tangible, exciting numbers like that, it does grab people's attention and, you bet, it becomes part of the whole social-media fabric, and people want to go to a winner. Success breeds success here.

Gardner: Thank you. Next, we're going to go to Randy Lawton and hear some more about why tracking scorecards and managing expectations through proven data and metrics also contributes to a successful ongoing DCT activity.

Randy, why is it so important to know your baseline tracks and then measure them each and every step along the way?

Randy Lawton: Thank you, Dana.
Many of the transformation programs we engage in with our customers are substantially complex and span many facets of the IT organization. They often involve other vendors and service providers in the customer organization. So there's a tremendous amount of detail to pull together and organize in these complex engagements and initiatives. We find that there's really no way to do that, unless you have a good way of capturing the data that's necessary for a baseline.

It's important to note that we manage these programs through a series of phases in our methodology. The first phase is strategy and analysis. During that phase, we typically run a
discovery on all IT assets that would include the data center, servers, storage, the network environment, and the applications that run on those environments.

From that, we bridge into the second phase, which is architect and validate, where we begin to solution out and develop the strategies for a future-state design that includes the standardization and consolidation approaches, and on that begin to assemble the business case. In a detailed design, we build out those specifications and begin to create the data that determines what the future-state transformation is.

Then, through the implementation phase, we have detailed scorecards that are required to be tracked to show progress of the application teams and infrastructure teams that contribute to the program, in order to guarantee success and provide visibility to all the stakeholders as part of the program, before we turn everything over to operations.

During the course of the last few years, our services unit has made investments in a number of tools that help with the capture and management of the data, the scorecarding, and the analytics through each of the phases of these programs. We believe that helps offer a competitive advantage for us and helps enable more rapid achievement of the programs from our customer perspective.

Gardner: As we heard from Duncan about why it's important to demonstrate wins, I sense that organizations are really data driven now more than ever. It seems important to have actual metrics in place and be able to prove your work each step of the way.

Complex engagements

Lawton: That's very true. In these complex engagements, it's normally some time before there are quick-win types of achievements that are really notable.

For example, in the HP IT transformation program we undertook over several years back through 2008, we were building six new data centers so that we could consolidate 185 worldwide.
So it was some period of time from the beginning of the program until the point where we moved the first application into production.

All along the way, we were scorecarding the progress on the build-out of the data centers. Then, it was the build-out of the compute infrastructure within the data centers. And then it was a matter of being able to show the scorecarding against the applications, as we could get them into the next-generation data centers.

If we didn't have the ability to show and demonstrate the progress along the way, I think our stakeholders would have lost patience or would not have felt that the momentum of the program was going on the kind of track that was required. With some of these tools and approaches and the scorecarding, we were able to demonstrate the progress and keep very visible to management the movements and momentum of the program.
Gardner: Randy, I know that many organizations are diligent about scorecarding across all sorts of different business activities and metrics. Have you noticed in some of these engagements that these readouts and feedback in the IT and data center transformation activities are somehow joined with other business metrics? Is there an executive scorecard level that these feed into to give more of a holistic overview? Is this something that works in tandem with other scorecarding activities in a typical corporation?

Lawton: It absolutely is, Dana. Often in these kinds of programs, there are business activities and projects that are going on within the business units. There are application projects that work into the program, and then there are the infrastructure components that all have to be fit together at some level.

What we typically see is that the business will be reporting its set of metrics, each of the application areas will be reporting their metrics, and it's typically from the infrastructure perspective where we pull together all of the application and infrastructure activities, and sometimes the business metrics as well.

We've seen multiple examples with our customers where they are either all consolidated into executive scorecards that come out of the reporting from the infrastructure portion of the program that rolls it all together, or the business may be running separate metrics while the application teams and infrastructure are running the IT-level metrics that all get rolled together into some consolidated reporting on some level.

Gardner: And that, of course, ensures that IT isn't the odd man out when it comes to being on time and in alignment with these other priorities.
That sounds like a very nice addition to the way things may have been done 5 or 10 years ago.

Lawton: Absolutely.

Gardner: Any examples, Randy, either with organizations you could name or use cases you could describe, where the use of this ongoing baselining, tracking, measuring, and delivering metrics facilitates some benefits? Any stories that you can share?

Cloning applications

Lawton: A very notable example is one of our telecom customers we worked with during the last year; we finished the program earlier this year. The company was purchasing the assets of another organization and needed to be able to clone the applications and infrastructure that supported business processes from the acquired company.

Within the mix of delivery stakeholders in the program, there were nine different companies represented. There were some outsourced vendors on the application support side in the acquiree's company, outsourcers on the application side for the acquiring company, and
outsourcers in the data centers that operated data center infrastructure and operations for the target data centers we were moving into.

What was really critical in pulling all this together was to be able to map out, at a very detailed level, the tasks that needed to be executed, and in what time frame, across all of these teams.

The final cutover migration required over 2,500 tasks across these nine different companies that all needed to be executed in less than 96 hours in order to meet the downtime window requirements of the acquiring company's executive management.

It was the detailed scorecarding and operating war rooms to keep those scorecards up to date in real time that allowed us to accomplish that. There's just no possible way we would have been able to do that ahead of time.

I think that HP was very helpful in working with the customer and bringing that perspective into the program very early on, because there had been a failed attempt to operate this program prior to that. With our assistance and with developing these tools and capabilities, we were able to successfully achieve the objectives of that program.

Gardner: One thing that jumped out at me there was your use of the words "real time." How important is it to capture this data and adjust it and update it in real time, where there's not a lot of latency? How has that become so important?

For more information on The HUB -- HP's video series on data center transformation, go to www.hp.com/go/thehub.

Lawton: In this particular program, because there were so many activities taking place in parallel by representatives from all over the world across these nine different companies, the real-time capture and update of all of the data and information that went into the scorecarding was absolutely essential.

In some of the other programs we've operated, there was not such a compressed time frame that required real-time metrics, but we, at minimum, often required daily updates to the metrics.
So each program, the strategies that drive that program, and some of the time constraints will drive what the need is for the real-time update.

We often can provide the capabilities for the real-time updates to come from all stakeholders in the program, so that the tools can capture the data, as long as the stakeholders are providing the updates on a real-time basis.

Gardner: So as is often the case, good information in, good results back.

Lawton: Absolutely.
Organizing infrastructure

Gardner: Let's move now to our third panelist today. We're going to hear about why organizing facilities and infrastructure planning in relationship to one another is so important.

Now to Larry Hinman. Larry, let's go historical for a second. Has there usually been a completely separate direction for facilities planning and IT infrastructure? Why was that the case, and why is it so important to end that practice?

Larry Hinman: Hi, Dana. If you look over time and over the last several years, everybody has data centers and everybody has IT. The things that we've seen over the last 10 or 15 years are things like the Internet and the criticality of IT and high density and all this stuff that people are talking about these days. If you look at the way companies organized themselves several years ago, IT was a separate organization and facilities was a separate organization, and that actually still exists today.

One of the things that we're still seeing today is that, even though there is this push to try to get IT groups and facilities organizations to talk and work with each other, there is still this gap in truly how to glue all of this together.

If you look at the way people do this traditionally -- and when I say people, I'm talking about IT organizations and facilities organizations -- they typically will model IT and data centers, even if they are attempting to glue them together, by trying to look at power requirements.

One of the things that we spotted a few years ago was that when companies do this, the risk of over-provisioning or under-provisioning is very high. We tried to figure out a way to back this up a few notches.

How can we remedy this problem, and how can we bring some structure and, what I would call, sanity to the whole equation, to be able to have something predictable over time?
What we figured out was that you have to stop and back up a few notches to really start to get all this glued together.

So we took this whole complex framework and data center program and broke it into four key areas. It looks simplistic in the way we've done this, and we have done this over many, many years of analysis and trying to figure out exactly what direction we should take. We've actually spun this off in many directions a few times, trying to continually make it better, but we always keep coming back to these four key profiles.

Business and risk is the first profile. IT architecture, which is really the application suite, is the second profile. IT infrastructure is the third. Data center facilities is the fourth.
One of the things that you will start to hear from us, if you haven't heard it already via the data center transformation story that you were just recently talking about, is this nomenclature of IT plus facilities equals the data center.

Getting synchronized

Look at these four profiles, and look at what we call a top-down approach, where I start to get everybody synchronized on what the risk profiles and tolerances for risk are from an IT perspective and how to run the business, gluing that together with an IT infrastructure strategy, and then gluing all that into a data center facility strategy.

What we found over time is that we were able to take this complex program of trying to have something predictable, scalable, all of the groovy stuff that people talk about these days, and have something that I could really manage. If you're called into the boss's office, as I and others have been over the many years in my career, to be asked what the data center is going to look like over the next five years, at least I would have some hope of trying to answer that question.

That is kind of the secret sauce here, and the way we have developed our framework was by breaking this complex program into these four key areas. I'm certainly not trying to say this is an easy thing to do. In a lot of companies, it's culture change. It's a threat to the very way the organization is organized from an IT and a facilities perspective. The risk and recovery teams and the management teams all have to start working together, collaboratively and collectively, to be able to start to glue this together.

Gardner: You mentioned earlier the issues around energy and the ongoing importance around the cost structure for that. I suppose it's not just fitting these together, but making them fit for purpose. That is to say, IT and facilities on an ongoing basis.

It's not really something that you do once and sit still, as would have been the case several years ago, or in the past generation of computing.
This is something that's dynamic. So how do you allow a fit-for-purpose goal with data center facilities to be something that you can maintain over time, even as your requirements change?

Hinman: You just hit a very important point. One of the big lessons learned for us over the years has been this ability to not only provide this kind of modeling and predictability over time for clients and customers, but to get out of this mode of doing it once and putting it on a shelf: deploying a future-state data center framework and keeping the client pointed in the right direction. As you said, the data gets archived, and they pick it up every few years and do it again and again, finding out that a lot of times there's an "aha" moment during those periods, the gaps between doing it again and again.

One thing that we have learned is to not only have this deliberate framework and break it into these four simplistic areas, where we can manage all of this, but to redevelop and re-hone our
tools and our focus a little bit, so that we could use this as a dynamic, ongoing process to get the client pointed in the right direction. Build a data center framework that truly is right-sized, integrated, aligned, and all that stuff. But then, have something very dynamic that they can manage over time.

That's what we've done. We've taken all of our modeling tools and integrated them into common databases, where now we can start to glue together even the operational piece, data center infrastructure management (DCIM), or architecture and infrastructure management, facilities management, etc., so now the client can have this real-time, long-term, what we call a 10-year view of the overall operation.

So now, you do this. You get it pointed in the right direction, collect the data, complete the modeling, put it in the toolset, and now you have something very dynamic that you can manage over time. That's what we've done, and that's where we have been heading with all of our tools and processes over the last two to three years.

EcoPOD concept

Gardner: I also remember with great interest the news from HP Discover in Las Vegas last summer about your EcoPOD and the whole POD concept for facilities and infrastructure.

Does that also play a part in this and perhaps make it easier when your modularity is ratcheted up to almost a mini data center level, rather than at the server or rack level?

Hinman: With the various, what we call, facility sourcing options, which PODs are certainly one of those these days, we've also been very careful to make sure that our framework is completely unbiased when it comes to a specific sourcing option.

What that means is that, over the last 10-plus years, most people were really targeted at building new green-field data centers.
It was all about space, then it became all about power, then about cooling, but we were still in this brick-and-mortar age, even as modularity and scalability have been driving everything.

With PODs coming on the scene, along with some of the other design technologies, like the multi-tiered or flexible data center, what we've been able to do is make sure that our framework is an almost generic framework where we can complete all the growth modeling and analysis, regardless of what the client is going to do from a facilities perspective.

It lays the groundwork for the customer to get their arms around all of this and tie together IT and facilities with risk and business, and then start to map out an appropriate facility sourcing option.

We find these days that the POD is actually a very nice fit for a lot of our clients, because it provides high-density server farms, it provides things that they can implement very quickly, and it gets the power usage effectiveness (PUE) and power and operational costs down. We're starting to see that take hold with a lot of customers.
Gardner: As we begin to wrap up, I should think that these trends are going to be even more important, and these methods even more productive, when we start to factor in the movement toward private cloud, the need to support more of a mobile tier of devices, and the fact that we're looking for even more savings on those long-term energy and operating costs.

Back to you, Randy Lawton. Any thoughts about how scorecards and tracking will be even more important in the future, as we move, as we expect we will, to a more cloud-, mobile-, and eco-friendly world?

Lawton: Yes, Dana. In a lot of ways, there is added complexity these days, with more customers operating in a hybrid delivery model, where there may be multiple suppliers in addition to their internal IT organizations.

Greater complexity

Just like the example case I gave earlier, where you spread some of these activities not only across multiple teams and stakeholders, but also into separate companies and suppliers who are working under various contract mechanisms, the complexity is even greater. If that complexity is not pulled into a simplified model that is data driven, that is supported by plans and contracts, then there are big gaps in the programs.

The scorecarding and data-gathering methods and approaches that we take on our programs are going to be even more critical as we go forward in these more complex environments.

Operating in cloud environments simplifies things from a customer perspective, but it does add some additional complexities in the infrastructure and operations of the organization as well. All of those complexities add up, meaning that even more attention needs to be brought to the details of the program and where those responsibilities lie among stakeholders.

Gardner: Larry Hinman, we're seeing this drive toward cloud. We're also seeing consolidation and standardization around data center infrastructure.
So perhaps there will be more large data centers supporting more types of applications for even more endpoints, users, and geographic locations or business units. Getting that facilities and IT equation just right becomes even more important as we have fewer, yet more massive and critical, data centers involved.

Hinman: Dana, that's exactly correct. If you look at this, you have to look at the data center facilities piece not only from a framework or model or topology perspective, but all the way down to the specific environment.

It could be that, based on a specific client's business requirements and IT strategy, it will require possibly a couple of large-scale core data centers and multiple remote sites, and/or it could just be a bunch of smaller types of facilities.
It really depends on how the business is being run and supported by IT and the application suite, what the tolerances for risk are, whether it's high availability, synchronous, all the groovy stuff, and then coming up with a framework that matches and integrates all those requirements.

We tell clients constantly that you have to have your act together with respect to your profile, and start to align all of this, before you can even think about cloud and all the wonderful technologies that are coming down the pike. You have to have something that you can at least manage, to control cost and control this whole framework, and manage to a future-state business requirement, before you can even start to really deploy some of these other things.

So it all glues together. It's extremely important that customers understand that this really is a process they have to do.

Gardner: Very good. You've been listening to a sponsored BriefingsDirect podcast discussion on how quick and proven ways to attain productivity can significantly improve IT operations and efficiency.

This is the second in an ongoing series of podcasts on data center transformation best practices and is presented in conjunction with a complementary video series.

I'd like to thank our guests. We've been joined by Duncan Campbell, Vice President of Marketing for HP Converged Infrastructure and SMB. Also, Randy Lawton, Practice Principal in the Americas West Data Center Transformation & Cloud Infrastructure Consulting at HP.

And last, Larry Hinman, Critical Facilities Consulting Director and Worldwide Practice Leader for HP Critical Facility Services and HP Technology Services. So thanks to you all.

This is Dana Gardner, Principal Analyst at Interarbor Solutions. Also, thanks to our audience for listening, and come back next time.

For more information on The HUB -- HP's video series on data center transformation, go to www.hp.com/go/thehub.

Listen to the podcast. Find it on iTunes/iPod.
Sponsor: HP

Transcript of a sponsored podcast discussion in conjunction with an HP video series on the best practices for developing a common roadmap for DCT. Copyright Interarbor Solutions, LLC, 2005-2011. All rights reserved.

You may also be interested in:
• Continuous Improvement and Flexibility Are Keys to Successful Data Center Transformation, Say HP Experts
• HP's Liz Roche on Why Enterprise Technology Strategy Must Move Beyond the Professional and Consumer Split
• Well-Planned Data Center Transformation Effort Delivers IT Efficiency Paybacks, Green IT Boost for Valero Energy
• Hastening Trends Around Cloud, Mobile Push Application Transformation as Priority, Says Research
• Data Center Transformation Includes More Than New Systems; There's Also Secure Data Removal, Recycling, Server Disposal