Transcript of a BriefingsDirect podcast on how increased and more sophisticated attacks are forcing enterprises to innovate and expand security practices to not only detect, but predict, system intrusions.
Thought Leader Interview: HP's Global CISO Brett Wahlin on the Future of Security and Risk
Listen to the podcast. Find it on iTunes. Sponsor: HP
Dana Gardner: Hello, and welcome to the next edition of the HP Discover Performance
Podcast Series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your
moderator for this ongoing discussion of IT innovation and how it’s making an
impact on people’s lives.
Once again, we're focusing on how IT leaders are improving security and
reducing risk as they adapt to the new harsh realities of doing business online.
I'm now joined by our co-host for this sponsored podcast series, Paul Muller, the Chief Software
Evangelist at HP Software. Welcome back, Paul. How are you today? [Disclosure: HP is a
sponsor of BrieﬁngsDirect podcasts.]
Paul Muller: Dana, very well. It's great to be back, and I'm looking forward to today's discussion.
Gardner: Yes, we have a big discussion today. We're joined by HP's Global Chief Information
Security Officer (CISO) to learn about how some of the very largest global enterprises like HP
are exploring all of their options for doing business safely and continuously. So with that, let's
welcome our guest. We're here with Brett Wahlin. He is Global Vice President and Chief
Information Security Officer at HP. Welcome, Brett.
Brett Wahlin: Thank you, Dana.
Gardner: Brett, there's been a lot of discussion, of course, about security and a lot of discussion
about big data. I'm very curious as to how these are related.
It seems to me that I've read and heard quite a bit about how big data can be used to improve
security and provide insights into what's going on within systems and even some greater analysis
capabilities. Is that what you're ﬁnding and hearing from other CISOs -- that there is a great tool
in big data that’s related to security?
Wahlin: Yes, big data is quite an interesting development for us in the ﬁeld of security. If we
look back on how we used to do security, trying to determine where our enemies were coming
from, what their capacities were, what their targets were, and how we're gathering intelligence to
be able to determine how best to protect the company, our resources were quite limited.
We've found that through the use of big data, we're now able to start gathering reams of
information that were never available to us in the past. We tend to look at this
almost in a modern-warfare type of perspective.
If you're a battlefield commander, and you're looking at how to deploy defenses,
how would you deploy those defenses, and what would be the targets that your
enemies are looking for? You typically then look at gathering intelligence. This
intelligence comes through multiple sources, whether it's electronic or human
signals, and you begin to process the intelligence that's gathered, looking for
insights into your enemy.
This could be the enemy’s capabilities, motivation, resourcing, or targets. Then, by that
analysis of that intelligence, you can go through a process of moving your defenses,
understanding where the targets may be, and adjusting your troops on the ground.
Big data has now given us the ability to collect more intelligence from more
sources at a much more rapid pace. As we go through this, we're asking the
types of questions that we would ask if we were facing direct adversaries.
We're looking at what these capabilities are, where people are attacking from,
why they're attacking us, and what targets they're looking for within our company. We can gather
that data much more rapidly through the use of big data and apply these types of analytics.
We begin to ask different questions of the data and, based on the type of questions we're asking,
we can come up with some rather interesting information that we never could get in the past.
This then takes us to a position where that advanced analytics allows us to almost predict where
an enemy might hit.
That’s in the future, I believe. Security is going from the use of prevention, where I'm tackling a
known bad thing, to the point where I can use big data to analyze what's happening in real time
and then predict where I may be attacked, by whom, and at what targets. That gives me the
ability to move the defenses around in such a way that I can protect the high-value items, based
on the intelligence that I see coming in through the analytics that we get out of big data.
Muller: I want to jump in, if it's okay, with a question directed to Brett. You talk a lot about the
idea of getting in front of the problem. Can you talk a little bit about your point of
view on how security, from your perspective as a practitioner, has evolved over the
last 10-15 years?
Wahlin: Certainly. That’s a great question. Years ago, we used to be about trying to
prevent the known bad from happening. The questions we would ask would always be around,
can it happen to us, and if it does, can we respond to it? What we have to look at now is the fact
that the question should change. It should be not, "Can it happen to us," but "When is it going to
happen to us?" And not, "Can we respond to it," but "How can we survive it?"
If we look at that type of a mind-shift change, that takes us back to the old ways of doing
security, where you try to prevent, detect, and respond. Basically, you prevented the known bad
things from happening.
This went back to the days of -- pick your favorite attack from years ago. One that I remember is
very telling. It was Code Red, and we weren’t prepared for it. It hit us. We knew what the
signature looked like and we were able to stop it, once we identiﬁed what it was. That whole
preventive mechanism, back in the day, was pretty much what people did for security.
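That signature-based approach can be sketched in a few lines. This is a hypothetical toy, not the actual Code Red defense; the signature string is a simplified stand-in for illustration:

```python
# Hypothetical, simplified sketch of signature-based prevention: match
# payloads against a list of known-bad byte patterns. The "signature"
# below is only illustrative, not the real Code Red exploit string.
KNOWN_SIGNATURES = {
    "code-red-like": b"/default.ida?NNNN",
}

def matches_known_bad(payload: bytes) -> list:
    """Return the names of any known signatures found in the payload."""
    return [name for name, sig in KNOWN_SIGNATURES.items() if sig in payload]

print(matches_known_bad(b"GET /default.ida?NNNNNNNN HTTP/1.0"))
# ['code-red-like']
```

The limitation Brett describes follows directly: anything not already in the signature list passes through unflagged.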
Fast forward several years, and you get into that new era of security threats highlighted by
attacks like Aurora, when it came out. Suddenly, we had the acronyms that ﬂew all over, such as
APT -- advanced persistent threats -- and advanced malware. Now, we have attacks that you can't
prevent, because you don’t know them. You can't see them. They're zero-days. They're
undiscovered malware that’s in your system already.
Detect and respond
That changed the way we approached security. We went from prevent to a big focus on not just
preventing, because that becomes a hygiene function. Now, we move into a detect-and-respond
view, where we're looking for anomalies. We're looking for the unknown. We're beefing up the
ability to quickly respond to those when we find them.
The evolution, as we move forward, is to add a fourth dimension to this. We prevent, detect,
respond, and predict. We use elements like big data not only to get situational awareness,
where we connect the dots within our environment, but to take it one step further and
predict where the next attack might land. As we evolve in this particular area, getting
to the point where we can understand and predict will become a key capability that security
departments must have in the future.
Gardner: Brett, thank you for giving us some historical perspective. Perhaps you could illustrate
for our audience your personal background. How long you have been at HP and where you’ve
been before that?
Wahlin: I've been at HP for approximately eight months. Prior to joining HP, I was the CSO at
Sony Network Entertainment. My role there was to put the security in place after the infamous
PlayStation breach. Prior to that, I was also the CSO at McAfee, and I did a stint as CSO at Los
Alamos National Laboratory.
Years ago, I got my start doing counterintelligence for the US Army during the Cold War. So we
had a lot of opportunity to drive and practice the intelligence gathering and analytics components
to which I'm referring around the big-data conversation.
Gardner: I hear you talking about getting more data, being proactive, and knowing yourself, as
an organization, in order to be better prepared for attacks. It sounds quite similar to what we have
been hearing for many years from the management side of things, the operations side: know
yourself to be better able to maintain performance standards, and therefore be able to quickly
remediate when something goes wrong.
Are we seeing a conﬂuence between good management practices and good security practices and
should we still differentiate between the two?
Wahlin: As we move into the good management of IT, the good management of knowing
yourself, there's a hygiene element that appears within the correlation end of the security
industry. One of the elements that we look at, of course, is how to add all this additional
complexity and additional capability into security and yet still continue to drive value to the
business and drive costs out. So we look for areas of efficiency, and again we will draw many
parallels to IT management.
As you understand the managing of your environments and knowing yourself, we'll begin to
apply known standards that we'll really use from a governance perspective. This is where you
handle your hygiene, instead of building very elaborate risk equations. You'll have your typical
"risk equals threat times vulnerability times impact," along with probabilities.
It gets very confusing. So we're trying to cut cost out of those, saying that there are known
standards out there. Let's just use them. You can use the ISO 27001, NIST 800-53, or even
something like a PCI DSS. Pick your standard, and that then becomes the baseline of control that
you want to do. This is knowing yourself.
With these controls, you apply them based on risk to the company. Not all controls are applied
equally, nor should they be. As you apply the control based on risk, there is evaluation
assessment. Now, I have a known baseline that I can measure myself against.
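That "apply controls based on risk" idea can be sketched with the risk equation mentioned above. The asset names and scores here are invented examples, and real programs use much richer models than a simple product:

```python
# Illustrative sketch of risk-based control prioritization, using the
# "risk equals threat times vulnerability times impact" equation.
# Assets and scores are made-up examples; each factor is scored 0.0-1.0.

def risk_score(threat: float, vulnerability: float, impact: float) -> float:
    """Higher product means higher risk."""
    return threat * vulnerability * impact

assets = {
    "customer-db":   {"threat": 0.9, "vulnerability": 0.4, "impact": 1.0},
    "build-server":  {"threat": 0.6, "vulnerability": 0.7, "impact": 0.5},
    "intranet-wiki": {"threat": 0.3, "vulnerability": 0.8, "impact": 0.2},
}

# Controls go to the riskiest assets first -- not applied equally everywhere.
ranked = sorted(assets, key=lambda name: risk_score(**assets[name]), reverse=True)
print(ranked)  # ['customer-db', 'build-server', 'intranet-wiki']
```

The point of the standards baseline is that the control catalog is given; only the prioritization varies with the scores.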
As you begin to build that known baseline, you begin to understand how well you're doing from a
hygiene perspective. These are all the things that you should be doing that give you a chance to
understand what your problem areas are.
As you begin to understand those metrics, you can identify early-warning indicators that tell you
that you might need to pay attention to certain types of threats, risks, or areas within the
company.
There are a lot of similarities as you look at IT infrastructures, server maintenance,
and understanding those metrics for early warnings or early indicators of problems. We're
trying to do the same with security, where we make it very repeatable. We can make it
standards-based and we can then extend that across the company, of course always basing it on
risk.
Muller: There is one more element to that, Dana, such as the evolution of IT management
through, say, a framework like ITIL, where you very deliberately break down the barriers
between silos across IT.
Similarly, I increasingly find with security that collaboration across organizations -- the whole
notion of general threat intelligence -- forms one of the greatest sources of potential intelligence
about an imminent threat. That can come from operational data, or a lot of operational logs,
and then sharing that situational awareness across operations teams is powerful.
At least this works in the experience that I have seen with many of our clients as they improve
security outcomes through a heightened sense of what's actually going on, across the
infrastructure with customers or users.
Gardner: Paul, as you’re traveling around and talking with a lot of organizations, do you sense
that they're sharing Brett’s perception that risk is sort of the über concept, and that security and
performance management fall under that? Or are they still sort of catching up to that concept or
even resisting it?
Muller: There's sort of a veiled security joke. There are two types of organizations -- those that
have been hacked and those that don't yet know they're being hacked.
One of the greatest challenges we have in moving through Brett’s evolution that he described is
that many executives still have the point of view that I have a little green light on my desktop,
and that tells me I don’t have any viruses today. I can assume that my organization is safe. That
is about as sophisticated a view of security as some executives have.
Then, of course, you have an increasing level of awareness that that is a false sense of security,
particularly in the financial services industry, and increasingly in many governments, certainly
at the national level. Just because you haven't heard about a breach today, that doesn't mean that
one isn't actually either being attempted or is, in fact, succeeding.
One of the great challenges we have is just raising that executive awareness that a constant level
of vigilance is critical. The other place where we're slowly making progress is that it's not
necessarily a bad thing to share negative experiences.
The culture 10 or 15 years ago was that you don’t talk about a breach; you bury it. Increasingly,
we see companies like Heartland Payment Systems quite famously getting out there and being a
big believer in sharing the patterns of breach that occurred to help others be more aware of how
and when these things occur, but also increasingly sharing threat intelligence.
For example, if you're one bank and someone is attempting to break into your systems using a
known pattern of attack, it's highly likely they're trying to do it with your peers. Given that your
defenses between your peers and yourself might be slightly less than that between you and the
outside world, it's a good idea to share that ahead of time. Getting back to Brett’s point, the
heightened sense of threat intelligence is going to help you predict and respond more reliably.
Wahlin: Absolutely. We look at the inevitabilities of the fact that networks are penetrated, and
they're penetrated on a daily basis. There's a difference between having unwanted individuals
within your network and having the data actually exﬁltrated and having a reportable breach.
As we understand what that looks like and how the adversaries are actually getting into our
environment, that type of intelligence sharing typically will happen amongst peers. But the need
for the ability to actually share and do so without repercussions is an interesting concept. Most
companies won't do it, because they still have that preconceived notion that having somebody in
your environment is binary -- either my green light is on, and it's not happening, or I've got the
red light on, and I've got a problem.
In fact, there are multiple shades of gray in there, and the ability to share activities that, while
they may not be detrimental, are indicators that you have an issue going on that you need to be
paying attention to, is key when we actually start sharing intelligence.
I've seen these logs. I've seen this type of activity. Is that really an issue I need to pay attention to
or is that just an automated probe that’s testing our defenses? If we look at our environment, the
size of HP and how many systems we have across the globe, you can imagine that we see that
type of activity on a second-by-second basis.
We have to understand which of these we need to pay attention to and have the ability not only
to correlate amongst ourselves at the company, but to correlate across an industry.
HP may be attacked. Other high-tech companies may also be attacked. We'll get supply-chain
attacks. We look at various types of politically motivated attacks. Why are they hitting us? So
again, it's back to the situational awareness. Knowing the adversary and knowing their
motivations, that data can be shared. Right now, it's usually in an ad-hoc way, peer-to-peer, but
deﬁnitely there's room for some formalized information sharing.
Muller: Especially when you consider the level of information sharing that goes on in the
cybercrime world. They run the equivalent of a Facebook almost. There is a huge amount of
information sharing that goes on in that community. It's quite well structured. It's quite well
organized. It hasn’t necessarily always been that well organized on the defense side of the
equation. I think what you're saying is that there's opportunity for improvement.
Wahlin: Yes, and as we look at that opportunity, the counterintelligence person in me always has
to stand up and say, "Let's make sure that we're sharing it and we understand our operational
security, so that we're sharing that in a way that we're not giving away our secrets to our
adversaries." So while there is an opportunity, we also have to be careful with how we share it.
Muller: You could, of course, wind up in a situation where you're amplifying bad information
as well. If you were paranoid enough, you could assume that the adversary is
deliberately planting some sort of distraction in one corner of the organization in order to get
everybody focused on that, while they quietly sneak in through the backdoor.
Gardner: Brett, returning to this notion of actionable intelligence and the role of big data as an
important tool, where do you go for the data? Is it strictly the systems, the log information? Is
there an operational side to that that you tap more than the equipment, more than the behaviors?
What are the sources of data that you want to analyze in order to be better at security?
Wahlin: The sources that we use are evolving. We have our traditional sources, and within HP,
there is an internal project that is now going into alpha. It's called Project HAVEn, and that's
really a combination of ArcSight, Vertica, and Autonomy, integrating with Hadoop. As we build
that out and figure out what our capabilities are to put all this data into a large collection, and
to ask the questions and get actionable results out of it, we begin to analyze our sources.

The obvious sources come from the historical operations and security perspective. We have all
the log files that are in the perimeter. We have application logs and network infrastructure logs,
such as DNS, Active Directory, and other types of LDAP logs.
Then you begin to say, what else can we throw in here? That's pretty much covered in a
traditional ArcSight type of implementation. But what happens if I start throwing in things such
as badge access or in-and-out card swipes? How about phone logs? Most companies are running
IP phones. They will have logs. So what if I throw that into the equation?
What if I go outside to social media and begin to throw things such as Twitter or Facebook feeds
into this equation? What if I start pulling in public searches from government-type databases and
law enforcement databases, and start adding these? What results might I get based on all that
data?
We're not quite sure at this point. We've added many of these sources as we start to look, ask
questions, and see from which areas we're able to pull interesting correlations amongst
different types of data to give us that situational awareness.
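As one hypothetical example of the kind of cross-source question being described, badge-swipe data can be joined against system-login data to flag logins with no recent physical badge-in. All names, timestamps, and the one-hour window below are invented for illustration:

```python
# Toy correlation across disparate sources of the sort described above:
# flag logins by users who have no badge-in event within the preceding
# hour. All event data here is made up for illustration.
from datetime import datetime, timedelta

badge_events = [
    ("alice", datetime(2013, 7, 1, 8, 55)),
    ("bob",   datetime(2013, 7, 1, 9, 10)),
]
login_events = [
    ("alice", datetime(2013, 7, 1, 9, 0)),    # badged in 5 min earlier: OK
    ("carol", datetime(2013, 7, 1, 9, 5)),    # no badge event at all
    ("bob",   datetime(2013, 7, 1, 14, 30)),  # badge was over 5 hours earlier
]

def suspicious_logins(badges, logins, window=timedelta(hours=1)):
    """Return logins with no badge-in by the same user inside the window."""
    anomalies = []
    for user, t in logins:
        recent = [bt for bu, bt in badges if bu == user and t - window <= bt <= t]
        if not recent:
            anomalies.append((user, t))
    return anomalies

print(suspicious_logins(badge_events, login_events))
# carol (no badge at all) and bob's afternoon login are flagged
```

A real deployment would of course run this as a streaming correlation over millions of events, but the shape of the question is the same.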
There's still much to be done here, much to be discovered, as we understand the types of
questions that we should be asking. As we look at this data and the sources, we also look at how
to create that actionable intelligence.
The type of analysts that we typically use in a security operations center are very used to
ArcSight: I ingest the log and I see correlations. They're timeline-driven. Now, we begin to ask
questions of multiple types of data sources that are very disparate in their information, and that
takes a different type of analyst.
Not only do we have different types of sources, but we have to have different types of skill sets
to ask the right questions of those sources. This will continue to evolve. We may or may not ﬁnd
value as we add sources. We don’t want to add a source just for the heck of it, but we also want
to understand that we can get very creative with the data as it comes together.
Muller: Brett makes a great point. There are actually two things that I think are important to
follow up on here. The ﬁrst is that, as it's true of every type of analytics conversation I am having
today, everyone talks about the term "data scientist." I prefer the term "data artist," because
there's a certain artistry to working out what information feeds I want to bring in.
Maybe "judgment" might be a better word in the context of security, a certain judgment or
stylistic question in terms of what data feed I want to bring in. It's that creativity in terms of
looking at something that doesn’t seem obvious from the outside, but could be a great leading
indicator of potential threat.
The other element is that, once we've got that information, one of the challenges is that we don't
want to add to the overhead or the burden of processing that information. So it's being able to
increasingly apply intelligence to, as Brett talked about, mechanistic patterns that you can
detect with traditional security information and event management (SIEM) solutions, which are
rather mechanistic. In other words, you apply a set of logical rules to them.
Increasingly, when you're looking at behavioral activities, rules may not be quite as robust as
looking at techniques such as information clustering, where you look for hotspots of what seem
like unrelated activities at ﬁrst, but turn out later to be related.
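A minimal sketch of that clustering idea, with invented event data: rather than matching per-event rules, bucket individually unremarkable events by a shared hidden dimension (here, source /24 subnet and hour) and flag unusually dense buckets:

```python
# Sketch of the "information clustering" idea: look for hotspots -- bursts
# of seemingly unrelated events that share a hidden dimension (here, the
# source subnet plus the hour). All event data is hypothetical.
from collections import Counter

events = [
    # (source_ip, hour, event_type) -- each event is unremarkable on its own
    ("10.1.2.3", 2, "dns_lookup"),
    ("10.1.2.7", 2, "failed_login"),
    ("10.1.2.9", 2, "port_scan"),
    ("10.1.2.4", 2, "dns_lookup"),
    ("192.168.5.1", 14, "failed_login"),
]

def hotspots(evts, threshold=3):
    """Group events by (subnet, hour); flag groups at or above threshold."""
    buckets = Counter()
    for ip, hour, _ in evts:
        subnet = ".".join(ip.split(".")[:3])  # /24 prefix as a crude key
        buckets[(subnet, hour)] += 1
    return [key for key, n in buckets.items() if n >= threshold]

print(hotspots(events))  # [('10.1.2', 2)] -- four events cluster together
```

Production techniques use far richer features and distance measures than a fixed grid, but the shift from per-event rules to density over groups is the same.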
There's a whole body of science in the area of crime investigation that we've applied to
cybercrime, using some of those techniques -- Autonomy, for example -- to uncover fraud in the
financial services market. The automation behind those techniques is increasingly being applied
to the big-data problem that security is starting to deal with.
Gardner: I was thinking that too, Brett, when you were describing this opportunity to bring so
much different information together. Yes, you would get some great beneﬁts for security and risk
purposes, but to Paul’s point, you also might have unintended consequences in terms of being
able to better understand processes, operational efﬁciencies, and seeing market opportunities that
you couldn’t see before.
Have you plumbed that at all? I know it's been a short time since you've been at HP, but are there
ancillary paybacks that would be of a business interest in addition to being a security beneﬁt?
Wahlin: Yes. As we further evaluate these data sources, the insight from using big data not only
for security, but from more of a business intelligence (BI) perspective, has been well-documented.
Our focus has really been on trying to determine the patterns and characteristics of usage.
While we look at it from a purely security mindset, where we try to develop patterns, it takes
on a counterintelligence way of understanding how people move, where people go, and what they
do. Even as people try to be unique, they tend to fall into patterns that are individual and specific
to themselves. Those patterns may play out over weeks or months, but they're there.
Right now, a lot of times, we'll be asked as a security organization to provide badge swipes as
people go in and out of buildings. Can we take that even further and begin to understand where
efficiency could be gained based on the behaviors and characteristics of workforces? Can we
divide that by business unit or geography to try to determine the best use of limited
resources across companies? This data could be used in those areas.
The unintended consequence that you brought up, as we look at this and begin to come up with
patterns of individuals, is that it begins to reveal a lot about how people interact with systems --
what systems they go to, how often they do things -- and that can be used in a negative way. So
there are privacy implications that come right to the forefront as we begin to identify folks.
That will be an interesting discussion going forward, as the data comes out and patterns start to
unfold -- patterns that become uniquely identifiable to cities, buildings, and individuals. What do
we do with those unintended consequences?
It's almost going to be a two-step, where we make a couple of steps forward in progress and
technology, and then we have to deal with these issues, which might take us a step back. This
area is definitely evolving, and these unintended consequences could be very detrimental if not
addressed early.
We don't want to completely shut down these types of activities based on privacy concerns or
some other type of legalities, when we could actually, potentially, solve those problems in a
systematic way as we move forward with the investigation of the usage of those data sources.
Muller: The concern that Brett raises is the flip side of a conversation I've been having
surprisingly frequently, partly as a result of heightened awareness of some of the reported
intelligence-gathering activities associated with national governments around the world and the
concerns as they relate to privacy.
The flip side of this that we need to keep in mind is that, going back to the unintended
consequences conversation, every technology that we introduce -- whether it's the car, the cell
phone, or the pocket camera -- can obviously have great positive effects. We can put them to
great use.
There are always situations where any new technology or any new capability could ultimately be
used in a negative fashion by bad people, or sometimes even unintentionally.
The question we always need to bear in mind here is, as Brett talks about it, what are the
potential unintended consequences? How can we get in front of those potential misuses early?
How can we be vigilant of those misuses and put in place good governance ahead of time?
There are three approaches. One is to bury your head in the sand and pretend it will never
happen. The second is to avoid adopting a technology at all for fear of those unintended
consequences. The third is to be aware of them, constantly look for breaches of policy and
breaches of good governance, and be able to then correct for those if and when they do occur.
Gardner: Just briefly, if the governance can be put in place and privacy protections
maintained, the opportunity is vast for a tight, closed-loop cycle -- almost a real-time focus
group -- of what employees are doing with their systems, what applications they use, and how.
This can be applied to product development and, for a company like HP in the technology
product development field, it could be very, very powerful and valuable data, in addition, of
course, to being quite powerful for security and risk-reduction purposes.
So it’ll be a very interesting next few years, certainly with HAVEn, Vertica and HP’s security
businesses. They're probably a harbinger of what other organizations will be doing. Going back
to HP, Brett, tell us a bit about what you think HP is doing that will set the stage and perhaps help
others to learn how to get started in terms of better security and better leveraging of big data as a
tool for better security.
Wahlin: As HP progresses onto the predictive security front, we're one of, I believe, two
companies that are actually trying to understand how to best use HAVEn as we begin the
analytics to determine the appropriate usage of the data that is at our fingertips. That requires a
predictive capability that HP will be building.
We've created something called the Cyber Intelligence Center. The whole intent of that is to
develop the methodologies around how the big data is used, the plumbing, and then the sources
from which we actually create the big data and how we move logs into it. That's very
different from what we're doing today with traditional ArcSight loggers and ESMs. There are a
lot of mechanics that we have to build for that.
Then, as we move out of that, we begin to look at the actual creation of actionable intelligence
using the analytics. What questions should we ask? Then, when we get the answer, is it
something we need to act on? The lagging piece of this would be the actual creation of agile
security. In some places, we even call it mobile security, and it's different from mobility. It's
security that can actually move.
If you look at the war analogies, back in the day, you had these columns of men with
rifles, and they weren't that mobile. Then, as mechanized infantry and other
technologies came online -- airplanes and such -- warfare became much more mobile. What's the
equivalent of that in the cyber security world, and how do we create it?
Right now, it's quite difficult to move a firewall around. You don't just unplug or re-VLAN a
network. It's very difficult. You bring down applications. So what is the impact of understanding
what's coming at you, maybe tomorrow, maybe next week? Can we actually make an
infrastructure that can be reconfigured not only to defend against that attack, but perhaps even
to introduce some adversarial confusion?
I've done my reconnaissance. It looks like this. I come at it tomorrow, and it looks completely
different. That will set back the adversary's kill chain quite a bit, because most of the time
during a kill chain, the attacker is actually trying to figure out where am I, what do I have, and
where are the assets located, doing reconnaissance through the network.
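One toy way to picture that "looks completely different tomorrow" idea is a moving-target rotation, where the service-to-port mapping is re-derived each day so stale reconnaissance no longer matches. The service names and ports below are hypothetical:

```python
# Hypothetical moving-target sketch: rotate the service-to-port assignment
# by day, so an attacker's map from yesterday is wrong today. Real agile
# security would reconfigure far more than ports, but the idea is similar.

def layout_for_day(day_index: int, services: list, ports: list) -> dict:
    """Deterministically rotate which service listens on which port."""
    n = len(ports)
    return {svc: ports[(i + day_index) % n] for i, svc in enumerate(services)}

services = ["app", "db", "ldap"]
ports = [8443, 9443, 10443]

print(layout_for_day(0, services, ports))
# {'app': 8443, 'db': 9443, 'ldap': 10443}
print(layout_for_day(1, services, ports))
# {'app': 9443, 'db': 10443, 'ldap': 8443}
```

The defenders regenerate the layout on schedule; an adversary replaying yesterday's reconnaissance probes the wrong ports, which is exactly the adversarial confusion being described.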
So there are a lot of interesting things that we can do as we come to this next step in the
evolution of security. At HP, we're trying to develop that at scale. Being the large company that
we are, we get the opportunity to see an enormous amount of data that we wouldn't see if we
were smaller.

For example, HP has millions of IP addresses and subnets out there. We have to try to
account for and figure out what's happening on any one of these networks. This gives us insight
into the types of traffic, types of application configurations, types of interconnects between
different subnets, and types of devices -- anything from printers all the way through unreleased
products.
How do you deal with things such as manufacturing supply chains that are all connected to these
networks? Those types of inputs begin to create the methodologies that feed into the upcoming
Cyber Intelligence Center.
Gardner: Paul, it almost sounds as if security is an accelerant to becoming a better organization,
a more data-driven organization, which will pay dividends in many ways. Do you agree that
security is not only still necessary and pertinent, but that it's perhaps forcing the hand of
organizations to modernize in ways that they may not have done if we weren't facing such a
difficult security environment?
Muller: I completely agree with you. Information security is quite literally an arms race, and it's
a forcing function for many organizations. It would be hard to say this without a sense of
chagrin, but the great part is that there are actually technologies being developed as a result of
this arms race -- take ArcSight Logger as an example.
Those technologies can now be applied to business problems -- gathering real-time operational
technology data, such as seismic events, Twitter feeds, and so forth, and incorporating those
back in for business and public-good purposes. Just as the space race threw off a whole bunch of
technologies like Teflon or silicone adhesives that we use today, the security arms race is
generating some great byproducts that are being used by enterprises to create value, and that's a
positive thing.
Gardner: Last word to you, Brett, before we sign off. Do you concur on this notion of security
as an imperative, but that has a greater longer term beneﬁt?
Wahlin: Absolutely. The analogy of the space race is perfect as you look at trying to do the
security maturation within an environment. You begin to see that a lot of the things that we're
doing -- whether it's understanding the environment, being able to create the operational metrics
around an environment, or pushing to get in front of the adversaries to create an environment
that is extremely agile -- are going to throw off a lot of technology innovation.

It's going to throw off some challenges to the IT industry and how things are put together.
That's going to force typically sloppy operations -- such as "I'm just going to throw this
together," "I'm not going to fully integrate an acquisition," "I don't document," "I don't
understand my environment" -- to clean up as we go through those processes.
Confusion and complexity within an environment are directly opposed to creating a sense
of security. As we create more secure environments -- environments that are capable of
detecting anomalies within them -- we have to put the hygienic pieces in place. We have to create
the technologies that will allow us to leapfrog the adversaries. That's definitely going to be a
driver for business efficiencies, as well as for technology and innovation, as it comes down.
Gardner: Well, very good. I'm afraid we'll have to leave it there. We've been exploring how
IT leaders are improving security and reducing risk as they adapt to the new and often harsh
realities of doing business online, and we've been learning through the example of HP and how
it's adapting as well.
So with that please join me in thanking our cohost, Paul Muller, the Chief Software Evangelist at
HP Software. Thanks so much, Paul.
Muller: It's a pleasure, Dana.
Gardner: And I would like to thank our supporter for this series, HP Software, and remind our
audience to carry on the dialog with Paul through his blog, tweets, and the Discover
Performance Group on LinkedIn.
Then lastly, a huge thank you to our special guest, Brett Wahlin, the Vice President and Global
Chief Information Security Ofﬁcer at HP. Thanks so much, Brett.
Wahlin: Thank you, Dana, and thanks, Paul.
Gardner: And you can gain more insight and information on the best in IT performance
management at HP.com/go/discoverperformance, and you can always access this and other
episodes in the ongoing HP Discover Performance podcast series on iTunes under
BriefingsDirect.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your co-host and moderator for this
ongoing discussion of IT innovation. Thanks again for listening, and come back next time.
Copyright Interarbor Solutions, LLC, 2005-2013. All rights reserved.