Dynamic Business Environment for Clients and Applications
Points to Need for Creating Proper Supporting Infrastructure
Transcript of a sponsored podcast discussion on present benefits and future trends from client virtualization.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor:
Dana Gardner: Hi. This is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're
listening to BrieﬁngsDirect.
Today, we present a sponsored podcast discussion on client virtualization strategies and best
practices. We've heard a lot about client virtualization or virtual desktop
infrastructure (VDI) over the past few years, and there are some really great
technologies for delivering a PC client experience as a service. But, today’s
business and economic drivers need to go beyond good technology. There also
needs to be a clear rationale for change, both business and economic.
Second, there need to be the methods and experience for properly moving to client virtualization, with all its moving parts, and doing it at low risk and in ways that lead to both high productivity and lower total cost over time.
Cloud computing, mobile device proliferation, and highly efficient data centers are all aligning to make it clear that deeper and more flexible client support from servers will become more the norm and less the exception over time.

Client devices and application types will also be shifting dynamically, both in numbers and in kinds, crossing the bridge, or chasm, between the consumer and business spaces.
These new requirements point to the need for planning and proper support of infrastructures that can accommodate these client eventualities, and to do so at that attractive long-term price.
So, we're here to discuss how client virtualization infrastructure works, where it ﬁts in to support
multiple future client directions, and why it makes a tremendous amount of sense economically.
Here with us to explain how to take better advantage of client virtualization is Dan Nordhues. He
is the Marketing and Business Manager for Client Virtualization Solutions in HP's Industry
Standard Servers Organization. Welcome to BrieﬁngsDirect, Dan.
Dan Nordhues: Thank you.
Gardner: Tell us why it's such a dynamic market now. What's going on with clients, and why
do we seem to be having a mismatch between applications and user devices and the support
services on the back-end?
Nordhues: I can start by describing why now is the time to take a look at this and some of the
dynamics going on. There are quite a few, and different customers will resonate
more with some than others.
The major reasons are pretty consistent. We have an increasingly global and
mobile workforce out there. Roughly 60 percent of employees in organizations
don't work where their company's headquarters are, and they work remotely.

It's a digital generation of millions of new folks entering the workforce, and they've grown up
expecting to be mobile and increasingly global. So, we need to have computing environments
that don't require us to report to a post number in an office building in order to get work done.
Then, there is just the burden of managing those desktops as they become more and more
distributed underneath desks. And, of course, there's the impact of security, which is always the
highest on customer lists. The average cost of a security breach in the U.S. is almost $7 million.
Over 10,000 laptops go missing from airports every week. These are just some of the things that
are pressing for a different approach.
And, there is really a catalyst as well in Windows 7's availability and launch since late last year. Many organizations are looking at their transition plans there. It's a natural time to look
at a way to do the desktop differently than it has been done in the past.
Gardner: One thing that’s interesting about the client virtualization decision process is that you
really need to think across multiple domains. It’s not just about the client or even the network.
You have to think about storage, server, resources, and the culture, the fact that different people
need to be cooperating.
Nordhues: Yes, that’s deﬁnitely true. If you look at organizations today that have a desktop
environment, distributed PCs, or workstations at the desk, those organizations doing the IT there
are typically different than the data-center folks managing the server environment. When we go
to client virtualization, you still have a user endpoint out where the user is at the desk or mobile,
but you've got the applications and the desktops, actually running on servers in the data center.
So now, you get the storage folks in IT, the networking folks, and the server support folks all
involved in the support of the desk-side environment. It deﬁnitely brings a new dynamic. You
also have to look at the data center and its capacity to house the increased number of servers, storage, and networking that have to go there to support the user infrastructure.
So, it’s a different kind of a discussion, and we've found that the most successful approach is to
include all of those folks in the conversation from the very beginning, as you look at doing
desktop virtualization. Otherwise, as you go toward deployment or integration toward
deployment, you're going to run into problems if you haven’t taken that step early on.
Gardner: Is there a signiﬁcant amount of waste in the way things are done now in terms of
redundancy or underutilization? Is there an opportunity, when you take this total client picture
into consideration, along with perhaps the data center transformation type of activity? Is there a
real economic issue here in terms of efﬁciency, when you look at it across all of these different,
hitherto distinct, domains?
Unused CPU cycles
Nordhues: Certainly. Customers tell us all the time that they believe that they have a lot of
untapped CPU cycles, for example, at the desk. As we look across the
processors going into these systems, they are at least dual-core. We've got
capability up past quad-core and 6-core today. Typical PC users, a very
broad section of the users, don’t really require all of that performance. We
even have customers trying to tap into those unused cycles across the
network at night, for example.
When you put that into the data center, it’s much more manageable. Customers who are
deploying see 40-60 percent improvements in IT productivity. That’s really what we're after --
IT productivity measured in terms of how many users each IT person can support.
This is not a prescription for getting rid of those IT people. In fact, there is a lot of beneﬁt to the
businesses by moving those folks to do more innovation, and to free up cycles to do that, instead
of spending all those cycles managing a desktop environment that may be fairly difficult to manage.

The reason the manageability is easier is that the tools on the data center side, on the server side,
are much more robust than what customers typically have deployed on the desktop side.
Gardner: You mentioned security a little earlier. It’s interesting to me that people often think of
things delivered as a service as less secure, but in the case of clients and data, having that all on
the server, managed centrally, with permissions, policies, rules, and governance, tends to be more
secure than the hard drive full of the entire system out and about in the workspace.
Nordhues: Exactly. That’s deﬁnitely a primary consideration here. When we do surveys of
customers, it almost always comes back to security, as being the top consideration.
We've got customers out there, large enterprise accounts, who are spending north of $100 million
a year just to protect themselves from internal fraud. We're all very familiar with the privacy
policies that we get in the mail, and companies have to comply with increasingly strict rules to
protect the consumer.
With client virtualization, the security is built in. You have everything in the data center. You
can’t have users on the user endpoint side, which may be a thin client access device, taking ﬁles
away on USB keys.
It’s all something that can be protected by IT, and they can give access to users as they see ﬁt. In
most cases, they want to strictly control that. Also, you don’t have users putting applications that
you don't want, like MP3 players or games, on top of your IT infrastructure.
In the desktop virtualization area, what really comes out to the user device now is just pixel
information. Think about extending your monitor cable to anywhere in the world you want to go,
along with your keyboard and mouse. You really don’t have access to the ﬁles. These protocols
just give you the screen information, collect your user inputs from the keyboard and mouse, and
take those back to the application or the desktop in the data center.
Gardner: Dan, tell me about the notion of future-prooﬁng or protecting yourselves. We are
hearing a lot about cloud computing nowadays. Service-oriented architecture is becoming more
popular. Virtualization is the driver in many instances. But, we're also seeing a proliferation of
different types of endpoints -- smartphones, tablets, netbooks, and so forth. There seems to be a
drive towards a more singular server logic application base that then presents in a variety of
different ways, depending on what the call is.
Tell me what you think the enterprise is going to need to do about that, and does client
virtualization set you up to perhaps be in a better position to anticipate some of these new requirements?
Nordhues: Let’s start at the user endpoint and then we can go back to the cloud side or typically
what’s going to sit in the data center. At the user endpoint, as I mentioned, with an increasingly
global and mobile workforce, there is a variety of devices that users are going to want to work
on, but we have to be pragmatic about this as well.
You just can’t get the same experience on a smartphone with a three-and-a-half-inch screen that
you can get if you're sitting in the ofﬁce with a nice 22-inch LCD monitor, for example. The
mouse input, the human interface, is typically not as good an experience as what you have at a
desk side access point, like a thin client.
When you go mobile, you give up some things. However, the major selling point is that you can
get access. You can check in on a running process, if you need to see how things are progressing.
You can do some simple things like go in and monitor processes, call logs, or things like that.
Having that access is increasingly important.
On the data center side, as we start talking about cloud, the solution is really progressing. HP is
moving very strongly toward what we call converged infrastructure, which is wire it once and
then have it provisioned and be ready to provide the services that you need. We're on a path
where the hardware pieces are there to deliver on that.
Delivering packaged services out to the end user is something that’s still being worked out by
software providers, and you're going to see some more elements of that come out as we go
through the next year. These are things like how to charge the departments for their usage, how
to measure how many virtual machine hours exist in your environment, for example, and how to
do billing to pay for your IT, or whatever infrastructure you need, within your customer environment.

A lot of that is coming. The hardware and the ability to provision it are there today. I'm talking
more about truly getting to the cloud, where you have services packaged up and delivered on
demand to the user community.
Gardner: It seems that if you can take this approach towards wire it once, you're in a position to
do a variety of things in terms of where your workforce and requirements are going, but you can
also support your legacy and older applications, so you have covered your backwards compatibility.

What's preventing people from doing this? What do they need to do to start thinking about
getting started, if you understand that managing that infrastructure portion is going to help solve
some of these issues as you move out towards the edge, considering what applications to keep
and what perhaps cloud services and hybrid services to bring in?
Nordhues: When you look at desktop virtualization, whether it’s a server-based computing
environment, where you are delivering applications, or if you are delivering the whole desktop,
as in VDI, to get started you really have to take a look at your whole environment and make sure
that you're doing a proper analysis and are actually ready. The expectations have to be aligned.
We hear from customers who think that client virtualization is magic for some reason, and that
their boot times are going to go to under 20 seconds. That’s just not true. While it can certainly
help, you have to have the right expectations going in. If you have users with peripherals at the
desk that are important to running their job, are they still going to have access to those
peripherals, when they go to a VDI environment, for example?
Probably the most important thing, where we have seen some customer missteps in the past, is
really understanding user segmentation. Which of your users are really light task-oriented
workers? They maybe touch a couple of applications primarily, and that’s the majority of what
they do. At the other end of your spectrum, you have your power users using four or ﬁve times as
many I/O operations per second.
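The segmentation Nordhues describes can be sketched as a simple sizing pass over measured per-user I/O rates. The thresholds, tier names, and headroom factor below are illustrative assumptions for the sketch, not HP's methodology:

```python
# Illustrative sketch: bucket users by average steady-state IOPS so each
# tier can be sized against an appropriate storage budget.
# The thresholds here are hypothetical examples, not HP guidance.

def segment_user(avg_iops):
    """Classify a user by average I/O operations per second."""
    if avg_iops < 10:
        return "task worker"       # touches a couple of applications
    elif avg_iops < 40:
        return "knowledge worker"  # typical office workload
    else:
        return "power user"        # four or five times a task worker's I/O

def size_storage(users_iops, headroom=1.3):
    """Total IOPS the backing storage must sustain, with headroom
    for boot storms and peak concurrency."""
    return sum(users_iops) * headroom

if __name__ == "__main__":
    measured = [5, 25, 80]
    print([segment_user(i) for i in measured])  # one tier per user
    print(size_storage(measured))               # aggregate IOPS target
```

Getting this segmentation wrong (sizing everyone as a task worker, say) is exactly the kind of misstep that surfaces later as poor performance for the power users.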
Gardner: Dan, given the fact that you've got so many of these variables now to manage, not just
a matter of ﬂipping the switch and going to VDI, are there any reference architectures or places
where people can begin to look at this larger, more comprehensive approach?
Nordhues: Certainly. This is where HP is squarely headed. We've launched several reference
architectures and we are going to continue to head down this path. A reference architecture is a
prescribed solution for a given set of problems.
For example, in June, we just launched a reference architecture for VDI that uses some iSCSI
SAN storage technology, and storage has traditionally been one of the cost factors in deploying
client virtualization. It has been very costly to deploy Fibre Channel SAN, for example. So,
moving to this iSCSI SAN technology is helping to reduce the cost and provide fantastic performance.

In this reference architecture, we've done the system integration for the customer. A lot of the
deployment issue, and what makes this difﬁcult, is that there are so many choices. You have to
choose which server to use and from which vendor: HP, Dell, IBM, or Cisco? Which storage to
choose: HP, EMC, or NetApp? Then, you have got the software piece of it. Which hypervisor to
use: Microsoft, VMware, or Citrix? Once you chase all these down and do your testing and your
proof of concept, it can take quite a substantial length of time.
What HP has done with these reference architectures is say, "Look, Mr. Customer, we've done all
this for you. Here is the server and storage and all the way out to the thin client solution. We've
tested it. We've engineered it with our partners and with the software stack, and we can tell you
that this VDI solution will support exactly this many knowledge workers or that many
productivity users in your PC environment." So, you take that system integration task away from
the customer, because HP has done it for them.
Where we're headed with this, even more broadly than VDI, is back to the converged
infrastructure, where we talked about wire it once and have it be a solution. Say you're an ofﬁce
worker and you're just getting applications virtualized out to you. You're going to use ofﬁce type
applications. You don't need a whole desktop. Maybe you just need some applications streamed to you.

Maybe you're more of a power user, and you need that whole desktop environment provided by
VDI. We'll provide reference architectures with just wire it once type of infrastructure with
storage. Depending on what type of user you are, it can deliver both the services and the
experience without having to go back and re-provision or start over, which can take weeks and
months, instead of minutes.
Crawl, walk, run
Gardner: Correct me, if I am wrong, but this is something you can bite off in somewhat of a
manageable chunk. As they say, there is a crawl-walk-run approach to this, where you will start
based on a certain class of user within your organization or an overriding security concern.
Perhaps you can explain to me how this develops organizationally?
Nordhues: Certainly. We targeted the enterprise ﬁrst. Some of our reference architectures that
are out there today exist for 1,000-plus users in a VDI environment. If you go to some of the
lower-end offerings we have, they are still in the 400-500 user range.
We're looking at bringing that down even further with some new storage technologies, which
will get us down to a couple of hundred users, the small and medium business (SMB) market,
certainly the mid-market, and making it just very easy for those folks to deploy. They'll have it
come completely packaged.
Today, we have reference architectures based on VDI or based on server-based computing and
delivering just the applications. As I mentioned before, we're looking at marrying those, so you truly have a wire-once infrastructure that can deliver whatever the needs are for your broad user community.
Gardner: This really sounds like an ecosystem type of an affair, where there are so many
different players: hardware, software, storage, services, and changing methodologies. How do
you classify this? Is this something that you would call systems integration? How do we put a
label on what it needs or what it takes to go from your current state to a client-virtualized state?
Nordhues: You've touched on the experience of moving this into an organization for the ﬁrst
time. We mentioned earlier on the call, you get a lot of different IT folks involved in the data
center. There are separate folks for storage, networking, and for servers. And, you've got the
desktop, traditional folks from out in the user environment.
You're bringing in all these choices of servers, storage, software, and software-stack technology. Getting agreement and moving forward is a very time-consuming and risky part of
this, because it hasn't necessarily ever been tested together in exactly the conﬁguration that the
customer might end up settling on deploying. That does introduce risk into the equation.
In addition to the reference architectures, where we have largely taken that system integration responsibility, or task, away from the customer and made it easier, HP also has a total set of services around client virtualization, and we can do a number of things. We can come in and have a strategy discussion with you. Are you really ready in your environment to do this? Here are some of the things you should consider. Here's what you need to get nailed down before you really get started.
Then, we can even do assessments, where we determine how ready you might be, which user
community you maybe should be started with ﬁrst, and then all the way to complete deployment
services, if you want that.
Also, HP, instead of selling you the servers and the solution for you to deploy yourself, can even
take it to a managed services’ standpoint, where if you just want to tell HP, "We'd like to turn on
500 users next month," we can make it happen.
Gardner: That sounds to me a bit like a private cloud or even hybrid cloud types of services. Do
you save any cycles, Dan, in moving towards client virtualization by already having a private
cloud or virtualization or hybrid cloud strategy in place?
Nordhues: Certainly, you need to take a look at how these might ﬁt together with the cloud
strategy that you already have in place. Just over the last six to nine months, the whole cloud area
has erupted, if you read what’s out there on the web or in trade magazines. There’s a lot of new
technology coming. I referred to a part of it before, where there are more software solutions
coming around deploying services, instead of just provisioning the hardware, for example.
For customers who already have a solution in place, it’s very much worth taking a look at what’s
coming -- speciﬁcally, what HP can offer and how it might marry with what you have deployed.
But, that’s a very broad topic, and "your mileage may vary," I guess, is the way to talk about it,
because it really is dependent on exactly what you have deployed and how you have it deployed.
You have to ask whether there's a better way to do this.
Gardner: When we think about examples, it’s always nice to tell, but to show is even more
impactful. Do you have some examples that we could point to about folks who have taken the
plunge, perhaps for security reasons, and moved into client virtualization? What sort of
experience have they had? Do you have any metrics for success or for economic impact?
Nordhues: We have a number of customer references. I won’t call them out speciﬁcally on the
podcast here, but we do have some of these posted out on HP.com/go/clientvirtualization, and we
continue to post more of our customer case studies out there. They are across the whole desktop
virtualization space. Some are on server-based computing or sharing applications, some are
based on VDI environments, and we continue to add to those.
HP also has an ROI or TCO calculator that we put together speciﬁcally for this space. You show
a customer a case study and they say, "Well, that doesn’t really match my pain points. That
doesn't really match my problem. We don't have that IT issue," or "We don't have that energy issue."

We created this calculator so that customers can put in their own data. It's a fairly robust tool,
but we can put in information about what’s your desktop environment costing you today, what
would it cost to put in a client virtualization environment, and what you can expect as far as your
return on investment. So, it’s a compelling part of the discussion.
Obviously, with any new computing technology, the underlying consideration is always cost, or, in this case, a lot of customers look at it from a cost-per-seat perspective, and this is no different, which is why we have provided the tool and the consulting around that.
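The cost-per-seat comparison at the heart of such a calculator boils down to simple arithmetic. The sketch below shows the shape of the computation; every figure in it is a made-up placeholder, not data from HP's tool:

```python
# Illustrative sketch of a cost-per-seat TCO comparison of the kind an
# ROI/TCO calculator performs. All dollar figures are hypothetical
# placeholders; substitute your own acquisition, management, and
# energy costs for both environments.

def cost_per_seat(capex, annual_opex, seats, years=3):
    """Total cost of ownership per seat over the analysis period."""
    return (capex + annual_opex * years) / seats

# Hypothetical 1,000-seat shop: distributed PCs have lower up-front
# cost but higher ongoing management cost; VDI inverts that.
desktop = cost_per_seat(capex=600_000, annual_opex=450_000, seats=1000)
vdi     = cost_per_seat(capex=900_000, annual_opex=250_000, seats=1000)

savings_per_seat = desktop - vdi  # positive when VDI wins over the period

if __name__ == "__main__":
    print(desktop, vdi, savings_per_seat)
```

The interesting part of a real calculator is filling in those inputs honestly, including the items customers often overlook, such as energy, security incidents, and IT staff time per desktop.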
Gardner: Dan, we've focused today on the infrastructure side quite a bit, and we've talked about
how the proliferation of device types is well along. I suppose it also makes sense to mention the
fact that HP has several lines of thin clients that ﬁt into this scenario as well.
Nordhues: Absolutely. As I mentioned before, these reference architectures that we have are not
just on the data-center side, but they include speciﬁc models on the thin-client side as well.
Whether you're talking about the thin client or the user access side, all the way to the data center,
HP has the whole area covered today with our solution. HP is number one in thin clients in the
world. We've got a whole lineup architected toward different use cases, whether your full-VDI users or your very high-end engineers and traders need a high-end thin client that delivers the user experience, all the way down to mobile thin clients that look very much like laptops, but don't have the hard drive and the sensitive data in them.
That’s why HP believes we can position ourselves as a leader here in client virtualization, not
just because of the data center to thin client access and the products and solutions we bring to
bear, but our partnerships with the major software vendors: Microsoft, Citrix, VMware, and all
the services, all the way out to managed services, if you want HP to come in and do this
implementation for you.
Gardner: You've mentioned some examples that people can look to. Are there any other places
to get started, any other resources or destinations that would offer a pass into either more
education or actual implementation?
Nordhues: On that same website that I mentioned, HP.com/go/clientvirtualization, we have our
technical white papers that we've published, along with each of these reference architectures.
For example, if you pick the VDI reference architecture that will support 1,000-plus users in
general, there is a 100-page white paper that talks about exactly how we tested it, how we
engineered it, and how it scales with VMware View or with Microsoft Hyper-V plus Citrix.

It takes you through how we did the testing and the methodology, and what the gotchas are: don't do this, do it this way. And that can be tremendous reading for the IT manager or the CIO who is
struggling to wrap their brain around this and trying to figure out how to make this work,
because it’s very descriptive in how you can put this together, and some of the choice points you
should and shouldn’t make.
I would recommend that as a next step to anybody who is seriously considering or getting close
to kicking the tires or thinking about a deployment.
Gardner: Of course we can't pre-announce things, but it’s my understanding that there’s quite a
bit of activity within HP, a whole area beyond what we have talked about. Perhaps you could
pique our interest a bit about what to expect later this year.
Nordhues: I've alluded to it a little bit already on the call. We're moving consistently toward our
converged infrastructure story, the wire once and then have the services delivered. As we go
through the next month and into the next year, you're going to see more of that: filling out reference architectures, reaching down toward small and medium businesses, as I mentioned. Also, a hybrid solution could, in the future, deliver VDI plus server-based computing together and cover your whole gamut of users, from the very lowest task-oriented user all the way up to the highest-end power users that you have.
And, we're going to see services wrapped around all of this, just to make it that much simpler for
the customers to take this, deploy it, and know that it’s going to be successful.
Gardner: It sounds very exciting. We'll look forward to hearing more about that.
We've been here today, talking about client virtualization infrastructure, how it works, where it
ﬁts in, and how it can support multiple future directions, as well as support the legacy and
existing requirements. To help us through this, we have been joined by Dan Nordhues. Dan is the
Marketing and Business Manager for Client Virtualization Solutions in HP's Industry Standard Servers Organization.

Thanks so much, Dan. That was really very interesting.
Nordhues: Thank you very much for the opportunity.
Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. You've been listening
to a sponsored BrieﬁngsDirect podcast. Thanks for joining us, and come back next time.
Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.
You may also be interested in:
• HP's Robin Purohit Unpacks Business Service Management 9 as a Way to Address
Complexity in Hybrid Data Centers
• HP Data Protector, a Case Study on Scale and Completeness for Total Enterprise Data
Backup and Recovery
• HP Virtual Conferences Tackle Data Center Transformation, Trends, Methods, Solutions