Hello and thank you for joining me today to discuss getting the best results with the UCS TCO/ROI tools. I’m Bill Shields, a marketing manager with Cisco responsible for the program management of the tools. This session is being recorded and will be distributed for playback.
The agenda is based on my experience helping others with their analyses. The goal is to share information that will save you time, help you produce more robust and accurate analyses, and ultimately win more UCS business. This fiscal year, the tools have already helped close deals totaling $10 million, varying in size from $100K to $3 million. Just last week, after an analysis presentation, the customer doubled their initial order over what had been planned. This is not general tool training. At the end of the presentation, I will provide a link to other training materials. I will be showing examples from both the web interface and the Excel model, depending upon which more easily illustrates the item in this forum. There will be time for questions at the end of the presentation.
First let’s talk about those things that will save you time.
Do not use Internet Explorer 6, 7, or 8. While these browsers are supported by the tool, their performance is very slow when compared to → Internet Explorer 9, Chrome, Firefox, or Safari. When I say slower, I’m referring to the transition between inputs and pages. If you spend any amount of time in the tools, this really will make a difference.
A number of templates have been created for various scenarios. These are validated designs that are complete and accurate. Any template is extensible such that you can easily add or change server and networking components. → This graphic shows the current templates. → When you mouse over the template name, hover help pops up with a description of the template contents. → If you think it will meet your needs, click the replicate button.
You can also replicate your existing analysis. → In this example, an analysis was created comparing the customer’s existing environment to UCS. Now we want to compare a net new competitive scenario to the same UCS servers. If I replicate the existing analysis, none of the UCS information will have to be reentered. → Click the replicate button → and the create analysis page is displayed. Enter a new name and click begin. → You then deselect the existing environment, → and select the new competitive components. → You then have your second analysis started. → Similarly, a third iteration was done comparing existing to new competitive to new UCS, all without having to reenter data.
A data collection template has been created for the Advanced R2 tool. The file contains all of the inputs found in the tool. This is a customer-facing document. Best practices dictate that you do not simply send the file to the customer for completion. Instead, you should first fill in all the information you already know, then arrange a meeting with the customer and walk through the template to gather additional details; this can be done virtually or in person. Today, there is no way to import data from the template to the tool.
There is a way to make an offline template but it should be used with caution. → If the model is updated and you have an old version of the Excel file, you may have issues with broken data structures when you attempt to sync your results. → Only the servers and networking components you choose will be available to be configured. If you missed something, you will have to go back to the web and add those items before they can be used. → Similarly, if any server processor or memory configuration changes have been made to the web model, those options won’t be available for your analysis. → The prices may be out of date and the cost structures inaccurate. I’ll talk more about pricing later.
To create a template, create an analysis as you normally would, adding all of the possible items you might need. For this example I used HP vs. Cisco blades. I’ll pick all three server types and all of the 10Gb Ethernet, Converged, and Fibre Channel options. → Instead of entering any information, click on Present, Download Deliverables. → After the download is complete, open the Excel file and enable macros. Then work your way through all of the inputs: General, → the HP server inputs, → the Cisco server inputs, etc. → When done, you return to the tool home page and choose sync analysis. → Browse for the file and then sync the analysis. → You will either have a success or failure message appear. If there is an issue, go ahead and check your analysis as it may still be substantially correct.
A common question I get asked is which tool should be used to perform an analysis.
You can choose between two tools, the simpler Basic tool and the more complex Advanced R2 tool.
Basic is used to model an existing environment to new UCS only. You will be comparing existing rack and/or blade servers to a single UCS server type, either blade or rack. The existing network is predefined and cannot be changed. UCS networking stops at the fabric interconnects. The benefit of this simplicity is that an analysis can be created in as few as 15 minutes. This can be a great starting point and can encourage the customer to participate in a more in-depth advanced analysis.
Advanced R2 should be used when your analysis contains a new competitive component; it will also model existing. Advanced allows you to model up to three different server types or workloads. Each server can be configured with different processors, memory, and networking connectivity options. There is more robust virtualization support in Advanced. You can model varying virtual-to-virtual and physical-to-virtual ratios across the three server types. There is also the ability to specify the processing requirements per VM in Advanced.
Networking is completely customizable including existing and competitive blade chassis, access and aggregation layers for Ethernet and fibre channel edge and core. You can also choose to reuse the existing networking infrastructure in the future environments. Lastly, there are two optional user defined costs or benefits for CapEx and OpEx per environment. I will be discussing their use later in the presentation.
A successful existing to UCS analysis hinges on just a few key items.
The first is growth in the existing environment.
This is an example analysis with two servers in the existing environment, with some virtualization, → but no growth. → These are the current results. Since there is no growth in the existing environment, there is only OpEx. While the UCS OpEx is better, the overall results favor doing nothing. → Now let’s change the VM growth rate to 15%. → Now the existing environment is forced to add servers and networking to accommodate the growth. The result is a more favorable analysis. → If we change to 20%, the results continue to improve. → On the left we have the original results without growth and on the right with a 20% growth rate included.
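To make the growth effect concrete, here is a minimal Python sketch of the underlying arithmetic; the VM counts and density are hypothetical values for illustration, not figures from the tool:

```python
import math

def servers_needed(initial_vms, vms_per_server, growth_rate, years):
    """Server count required each year as the VM population compounds
    at the given annual growth rate."""
    counts, vms = [], initial_vms
    for _ in range(years):
        counts.append(math.ceil(vms / vms_per_server))
        vms *= 1 + growth_rate
    return counts

# Hypothetical: 40 VMs today, 24 VMs per server.
print(servers_needed(40, 24, 0.00, 3))  # no growth -> [2, 2, 2]
print(servers_needed(40, 24, 0.20, 3))  # 20% growth forces a purchase -> [2, 2, 3]
```

With zero growth the existing environment never buys hardware, which is why only OpEx appears in the results; any positive growth rate eventually forces CapEx into the comparison.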
The second key is virtualization.
In this example, we are using a B200 M2 server with an Intel E5649 processor and 96GB of RAM. → For each UCS server and new competitive servers in R2 only, you can define the amount of memory per VM. → So how does changing the amount of memory per VM affect the results? → We start with 4GB per VM. The tool automatically calculates that our server can hold 24 VMs. → Now we add that each VM needs one gigahertz of CPU. The tool multiplies the cores, times the frequency, times the sockets to get the total amount of CPU available. Since we have 30 gigahertz available, we are still memory constrained. → We can change to a cheaper processor and see how that affects the results. → Now we are CPU constrained. These two entries allow you to easily do “What if” modeling to define the best combination of memory and CPU for a given scenario.
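The memory-versus-CPU constraint can be sketched as follows; this is my own illustration of the calculation just described, not the tool’s actual code:

```python
def max_vms(ram_gb, gb_per_vm, sockets, cores, ghz_per_core, ghz_per_vm=None):
    """VMs per server: the smaller of the memory-limited count and,
    when a CPU requirement is given, the CPU-limited count."""
    by_memory = ram_gb // gb_per_vm
    if ghz_per_vm is None:
        return int(by_memory)
    total_ghz = sockets * cores * ghz_per_core  # sockets x cores x frequency
    return int(min(by_memory, total_ghz // ghz_per_vm))

# B200 M2: 2 sockets of E5649 (6 cores at 2.53 GHz), 96 GB RAM, 4 GB per VM.
print(max_vms(96, 4, 2, 6, 2.53))       # 24, memory constrained
print(max_vms(96, 4, 2, 6, 2.53, 1.0))  # still 24; 30.36 GHz covers 24 x 1 GHz
# A hypothetical cheaper CPU (2 sockets, 4 cores at 2.0 GHz):
print(max_vms(96, 4, 2, 4, 2.0, 1.0))   # 16, now CPU constrained
```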
So how does this impact CapEx? → We will set the new servers to have the same number of VMs as the existing environment using the policy defined maximum. → Obviously not a good result as we aren’t taking advantage of the superior processing and memory capacity of the UCS servers. → When we change from a policy limiter to a memory limiter, in this case using six GB of memory per VM in a server with 256 GB of RAM, the server can support 42 VMs. → Now look at the results. → On the left is keeping the virtualization rates the same as the existing environment; the right is the impact of fully utilizing the resources of newer servers.
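Here is the CapEx side of that idea in a small sketch: the fewer servers you must buy for a fixed VM population, the better the result. All figures are hypothetical:

```python
import math

def new_servers_required(total_vms, vms_per_server):
    """Servers to purchase in the new environment at a given VM density."""
    return math.ceil(total_vms / vms_per_server)

total_vms = 120           # hypothetical VM population being migrated
policy_cap = 15           # carry over the existing per-server VM policy
memory_cap = 256 // 6     # 256 GB server at 6 GB per VM -> 42 VMs

print(new_servers_required(total_vms, policy_cap))  # 8 servers
print(new_servers_required(total_vms, memory_cap))  # 3 servers
```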
Now let’s virtualize some of the current bare metal servers. → With our V2V results as the starting point, → we will now virtualize 20% of the bare metal servers. → The results continue to improve. → The left shows before P2V migration and the right shows the improved results after including P2V migration. As you raise the percentage of servers to be migrated, the results will continue to improve.
I talked about doing what-if scenarios when changing the processor and memory per VM earlier. The same sort of thing can be done with the servers. → The account team originally specified the B230 M2 as the server to use. → This didn’t produce favorable results. → Since the configuration wasn’t using a processor requirement, the scenario is memory constrained. → The amount of memory configured is also supported → in the B200 M2. → It makes a dramatic difference. → Again, the original results are on the left, and the new results are on the right. You can do this with the new competitive servers as well.
One of the common things I see when reviewing analyses is missing server and networking components because the layers aren’t tied together properly.
This analysis should be comparing three different server types. → But notice that there is no server type three in the existing environment. → Therefore there will never be any costs associated with server type 3 in the new environment for either CapEx → or OpEx. → This is corrected by properly defining the server in the Baseline Environment.
We will start the discussion of networking by looking at the existing or competitive blade chassis I/O. There are three possible fabrics. Fabric one is defined as the LOM, LAN on Motherboard, or the first mezz card and is always present. Fabrics two and three are user defined. There are four different input sections in the tool and they must all be correct for accurate results.
You must first choose which types of fabrics the chassis contains in the Analysis Configuration section of the tool. The choices are 1 and 10 gigabit Ethernet, Fibre Channel, and Converged 10 gigabit.
Next you define what types of mezzanine cards are in the server and their cost. → Remember that fabric one is the LOM or first mezzanine card and it is always included as part of the base server cost. → Fabrics two and three are add-ons. Their fabric type and cost must be defined. → Select the dropdown and then choose the appropriate fabric. → Then enter the cost of the mezzanine card. Repeat this for fabric three as needed.
Then for each server type in the analysis, you define the quantity of add in mezzanine cards for fabrics two and three. While the blade chassis always has the exact same configuration, you can vary the count of the mezzanine cards for fabrics two and three allowing for some flexibility in configurations.
The last configuration point is the blade chassis integrated switching. In the analysis configuration section, I choose all four possible fabrics, but we are going to focus on → Converged since it illustrates how the layers tie together. → We have two modules to define. → Choose the connection type first. We have 10 gigabit Ethernet, Fibre Channel or Converged. → Then you choose the northbound connection. → For 10 gigabit Ethernet and Converged we have access and aggregation. → For Fibre Channel, we have edge and core. Before I show you an example of this in an analysis,
let’s look at a 1 gigabit access switch. The other switch types work the same. → Enter the cost for a single switch. This also applies to the blade chassis switches. The tool automatically deploys switches in pairs for redundancy and only the A side information is entered. → Next enter the number of southbound ports that connect to the chassis or rack servers that are included in the base switch cost. This can be set to zero → if you include a cost for the incremental southbound ports, question 5. An example of this is a switch with line cards. You would enter the price of the switch chassis, power supplies, fans, supervisor modules, etc. as the base cost, then divide the price of a line card by the number of ports and enter the per port cost here. While this sometimes undervalues the cost of a single line card, it doesn’t inflate the price by including unused line cards either. → You must always enter the maximum number of ports a switch supports so the tool will know when to add the next pair of switches. If this entry is left blank, the number of switches will never be accurate. → Now enter the number of northbound ports that would connect this switch to the aggregation layer or, if this was an edge fibre channel switch, the core layer. → If there are any costs associated with these ports, enter them here. An example would be optics for 10Gb uplinks. → Next is the connection type for northbound. → In this case, it can be either 1 or 10 gigabit.
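The base-cost-plus-per-port approach for line-card switches works out like this; the prices are hypothetical, chosen only to show the arithmetic:

```python
# Hypothetical line-card switch pricing.
chassis_base = 12000.0    # chassis, power supplies, fans, supervisors
line_card_price = 9600.0  # one line card
ports_per_card = 48

# Question 5: divide the line card price by its port count.
per_port_cost = line_card_price / ports_per_card
print(per_port_cost)  # 200.0 per southbound port

# A-side cost with 96 connected southbound ports and 0 ports in the base:
print(chassis_base + 96 * per_port_cost)  # 31200.0
```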
So how do you determine that all of networking has been configured properly? You see here an empty HP c7000 chassis. We will configure the networking and show how it will appear in the results.
We start by saying this chassis will be using their FlexFabric so fabric one is converged. → We configure the switch as Virtual Connect FlexFabric, including the cost for a single switch and the appropriate number of connections for each module, 10Gb Ethernet for module one and Fibre Channel for module two. → Next we enter the information for a HP 10Gb Ethernet switch. → Then a top of rack Fibre Channel switch. → When we look at the infrastructure tab in the Excel file, everything looks correct for the blade server I/O and c7000 connections. → However there are no 10Gb or Fibre Channel switches. Why is that? → When I set the northbound connection for module 1, → I said it was aggregation, → but I defined an access layer switch. → Similarly for Fibre Channel, I used core, → but defined an Edge switch. → I correct those inputs, → and now the switches are included. Not connecting the switch layers properly is a common mistake I find when reviewing analyses. It isn’t always obvious until you drill down into the infrastructure and costs tables of the analysis.
HP is currently the number one server vendor, for now. Here are a few points to help when competing against them.
Anytime an analysis has five or more blade chassis and uses any of the Virtual Connect switches, include Virtual Connect Enterprise Manager. It is a must have to manage multiple Virtual Connect domains. → To get full functionality for the Integrated Lights-Out server management ASIC, you must add a software license whereas Cisco includes this functionality by default. Insight Control is yet another piece of server management software and is commonly used in virtualized environments. → Download HP’s Product Bulletin. HP states that it is “a convenient central resource providing technical overviews and specifications for HP hardware and software.” You can also use it for competitive pricing for server options such as hard drives, mezzanine and PCIe cards and HP branded switches.
There is a misconception that Cisco cannot be competitive at smaller blade counts. Obviously this is dependent upon the solution, but if you take into account all of the components (servers, chassis, interconnects, top of rack switches vs. Fabric Interconnects, cabling, deployment, management software, power, and warranty), Cisco can compete.
I mentioned pricing when discussing the HP Product Bulletin, so I want to clarify where the pricing in the tool comes from.
For the existing environment, Intel has provided data on average system configurations for 1, 2, and 4 socket servers. The key word is average. Not every processor is included, only a subset of possible Intel and AMD processors over the past five years. → Approximately once a month, we scrape Dell & HP’s public server pricing pages. We then update the tool with configuration information and pricing for base server models, memory and processors. → The date of the last pricing update is posted on the web and → is also noted in the deliverables. → All Cisco pricing comes from the Global Price List.
The tool allows you to discount these prices to reflect the actual deal terms. → The Cisco UCS discounting is preset to 47% to drop the GPL price to MSRP. There are also pre-populated discounts for switches & warranty included for Cisco. → For Dell & HP, there is no preset discount as their internet pricing is the equivalent of Cisco’s MSRP. You will want to change both of these depending upon the situation. → I don’t recommend changing the existing environment discount as these are average prices and I have found them to be generally low. You can raise these prices if needed. → Take, for example, this two-socket server priced at $5,800, which the customer tells you is less than they are paying. → By changing the discount to a negative 20%, the price increases to $6,950.
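The discount mechanics can be sketched as below. The function is my own illustration; the tool may round its stored average prices slightly differently:

```python
def discounted_price(list_price, discount_pct):
    """Apply a percentage discount; a negative discount raises the price."""
    return list_price * (1 - discount_pct / 100)

# The 47% preset discount drops a hypothetical $10,000 GPL price to MSRP:
print(round(discounted_price(10000, 47), 2))  # 5300.0
# A -20% "discount" marks an average price up by 20%:
print(round(discounted_price(5800, -20), 2))  # 6960.0
```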
User defined costs and benefits can be used to model any financial items the tool doesn’t already comprehend, and can be used for the existing, new competitive, and Cisco environments.
There are five entries per item. → First is the description of the item. → Second, if this should be included in the analysis. This toggle allows it to be easily included or removed from the analysis cost structures. → Third is the amount of the item. It may be a bit counter-intuitive how this works. Benefits are entered as negative values and costs are entered as positive values. This is the most common mistake I see when these are used. → Fourth, the frequency, annual or one time. → Fifth is the year the item starts. This is entered as 2012, 2013, etc. → This is how they are displayed in the Excel model. I’ve used these to model the storage portion of a Vblock deal and the customer’s expected 20% savings due to the efficiencies of Service Profiles. Other possibilities are training expenses or the value of projects that can now be undertaken due to staff being freed from day-to-day server administration tasks.
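The sign convention trips people up, so here is a small sketch of how the five entries combine; the item names and amounts are made up for illustration:

```python
# Each entry: (description, include, amount, frequency, start_year).
# Benefits are NEGATIVE, costs are POSITIVE.
items = [
    ("Vblock storage",          True,  250000, "one-time", 2012),
    ("Service Profile savings", True,  -40000, "annual",   2012),
    ("Admin training",          False,  15000, "one-time", 2013),  # toggled off
]

def year_total(items, year):
    """Net cost (+) or benefit (-) contributed in a given year."""
    total = 0
    for desc, include, amount, freq, start in items:
        if not include:
            continue
        if (freq == "one-time" and year == start) or \
           (freq == "annual" and year >= start):
            total += amount
    return total

print(year_total(items, 2012))  # 210000: storage cost minus annual savings
print(year_total(items, 2013))  # -40000: only the annual benefit remains
```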
I’ve been talking a lot about using Dell and HP in the new competitive server environment. Now I want to show you how to include any server vendor in an analysis. The build your own functionality is used in less than 10% of the analyses with a new competitive component. We think some opportunities are being missed where the TCO tools could be helpful with just a little more effort by the user.
In the Analysis Configuration setup, you change → Not Applicable to → New Competitive “Build Your Own” in Step Two where you define the New Competitive Server Environment. → It expands and looks just like Dell or HP. You can select up to three server types.
For the blade chassis, you must enter the cost for the fully configured chassis meaning everything needed except for the switch interconnects. Those will be entered just as was shown earlier in the HP example. The maintenance is optional but very helpful for OpEx. Power and the number of RU or rack units are also mandatory. → Here is an example of a user defined IBM BladeCenter H chassis. → Next we must provide details on the server blades that will be installed in this chassis. The key inputs are: → the amount of RAM, → CPU information including speed, the number of sockets and cores. These generic inputs can be used to support any CPU type not just Intel and AMD x86 processors but PowerPC, SPARC, PA-RISC etc. → You must provide the capital cost of the server. This is the cost of base system, CPU, memory and the fabric one LOM or mezzanine card. → Hard drives are entered separately just like every other server type. → Lastly, how many of this type of server fit into the chassis that was defined. You can therefore define server types 1, 2, and 3 to be different widths such as IBM’s HS22 and HX5 four socket or MAX5 configurations. → Here is an example of an IBM HS22 server.
Rack servers work just like blades. While this does take more effort than Dell or HP, you can see it isn’t difficult, would add only a few minutes to the overall time needed to complete an analysis, and opens up many additional opportunities.
The next to last topic is best practices in sharing the analysis with the customer.
Start by going through all of the inputs used to create the analysis. This level-sets everyone on what has been included and can correct any discrepancies before the results are presented. → This is obviously audience dependent as you generally don’t present this level of detail to a CIO or CEO. For them, → the executive level business case → or the PowerPoint slides are a better option. → If you do change any of the inputs during the review, make sure you understand the impact it will have on the results. Think back to our V2V and P2V discussion. If those rates are lowered, it will have a negative impact. If on the other hand you started conservatively and raised the virtualization rates, it will have a positive impact. → A common question I get asked is if it is OK to share the Excel model with the customer. Yes, it can be shared. However, we ask that you don’t provide them the analysis until you have gone through and validated the data with them. We pride ourselves that all of the formulas are exposed for inspection by the customer.
If you are going to provide just the PowerPoint or Word deliverables to the customer without the Excel model, I recommend breaking the links so that when the document is opened, it won’t try to update. To do this, → click the Office button, → prepare, → edit links to files. → Select the first link, → the last link, → then click Break Link. → Now save the file with a new name. That way if you do make changes to the model, you can prepare an updated PowerPoint deck or business case.
Lastly, some important links.
I have created a page on the Cisco Communities site as a single source for all information about the TCO/ROI tools. There you will find general tool training and the Advanced R2 data collection template mentioned earlier. I will be posting the link to this recording there as well. If you forget the URL, just search for UCS TCO ROI in the Communities. → There is also a Power Calculator page where the offline version of the UCS power calculator may be downloaded.
Speaking of power, it is a critical component in calculating OpEx and should always be included. Advanced does have placeholder values for Cisco servers, but these should be updated with your specific configuration information. Here are links to various online power calculators including: Cisco’s UCS and selected Nexus / Catalyst switches, Dell, HP, and IBM servers. If there isn’t a tool available, use the product’s datasheet to find the power information.
Where can you get help and support with the tools? → For general questions, → use the support and feedback links on the web – every page will have them. → Use the Communities. → It is a great place to have discussions and ask your peers for help and their best practices. → You can always contact me, Bill Shields. I will do my best to respond promptly but there may be a delay depending upon my workload. If the question is about a particular analysis, please let me know which tool is being used and the analysis name so I will be able to review it.
Thank you for spending your valuable time with me today and I hope you have found this session valuable. Give me a moment to unmute the lines and I will take questions.