Each successful exploit has three parts – the attacker, the threat type, and the target – and we continue to see change in each.

Attacker – In 2005, we saw a shift begin from attackers wanting notoriety to attackers wanting profit. Today, cybercrime is fully organized and we see crime syndicates out to profit from attacks. These attackers are well funded, use sophisticated and purpose-built tools, and target organizations purely for profit. While this is nothing new, what we are seeing today is a move not only to attack ".gov/.com" but to attack ".me/.you". Attackers are becoming increasingly sophisticated and are profiling not only companies but also individuals. They understand that we all have online identities but also "physical profiles" or "connection points" – the variety of places we connect to the internet from: work, an internet café, an airport lounge, home. They have realized that our security defenses are often down or weak at some of these connection points, and that penetrating individuals' devices works quite well outside of the workplace. If you can infect a business user at an internet café and then have them walk that device into the enterprise, you can infiltrate the enterprise infrastructure and bypass many of the defenses in place today. Attackers understand this and have adapted their behavior.

Threat – The threat landscape is also changing, both in the types of attacks and in the sophistication and maturation of existing attacks. As expected, we continue to see new types of attacks designed to bypass the latest technologies that enterprises deploy. Historically, the first large virus outbreak was on the Apple II in 1981. Since then there have been many well-documented outbreaks, including the "ILOVEYOU" worm in 2000, SQL Slammer and the Blaster worm in 2003, and countless worms, trojans, and other forms of malware. Today, DoS has given way to DDoS, and newer threats such as rootkits and botnets have taken hold.
The most recent threat is the APT, which is not only a new type of threat but also a new way to profile and attack networks, systems, and organizations. Alongside new attack types we also see the morphing of existing ones. As an example, a few years ago the majority of malware was in cleartext, which could often be detected by AV or IDP solutions. Today over 80% of malware uses encryption, compression, or file packing to bypass traditional AV or IDP technologies.

Target – Finally, we also see significant changes in attack targets. Over the past few years there has been an explosion in the devices attackers target, ranging from smartphones to tablets to cloud services. What is particularly interesting about these new targets is the variation in platform architecture, ranging from more secure platforms such as the iPhone to more open platforms such as the Android OS. The other primary change is in the types of applications being attacked. Historically, most attacks focused on traditional corporate application servers and productivity applications such as Office. Today we have seen a significant shift to Web 2.0 applications and social networking apps, where attackers take advantage of the trusted relationships built amongst online users. They understand that online users have a real tendency to trust links that other users send within these applications, and they have used this vector to deliver malware.

Transition: The challenge for enterprises today is how to address these new and emerging threats in a way that is scalable and does not significantly drive up cost.
Juniper's Always Protected Framework provides the critical components for securing your most valued assets through a combination of restored visibility with security context and coordination, flexible deployment options that meet the unique deployment models of your enterprise to reduce costs, and greater security with broad coverage that protects from the device to the data center. This framework goes hand in hand with our Simply Connected Enterprise Solutions to extend the overall value Juniper can bring to your enterprise.
What Are the Trends?
And of course you want to attack the weak spots, not the strong spots, just for efficiency and simplicity.
Compliance vs. Security
Along those same lines, we start to get into a conversation of compliance vs. security. When we had just port-based firewalling, that was a security feature. There's some compliance in there, but it's first and foremost a security appliance. Now as we start to get into more advanced URL filtering and application-based filtering and things like that, we have a separate discussion. For example, I'm not typically going to write a security policy that says: if there are viruses coming into my network, block them if they're coming to Bill, but allow them through if they're coming to Joe. Security policy tends to be: block the bad stuff and then filter the rest. Compliance is going to be: allow John to surf the Internet, but don't let Bill go to Facebook, because he's just going to waste his day playing social media games. So what used to be just a security play is now a mix of compliance and security: compliance, productivity, more employee-based controls. It's important that we have this discussion about how much security you need — where and why — and how much compliance you need — where and why — and then we can build a balanced solution that covers both. We have seen some things in the market where people are effectively selling a compliance solution and calling it security, or selling a security solution and calling it compliance. We really need to make sure we're balancing those two aspects, so that once the install is done and you've walked away, your client is happy, everything is secure and compliant, and they feel good about their purchase and keep coming back to us for additional upgrades and future business.
Leaky Application Firewalls
One of the central points in that whole compliance vs. security discussion comes up when we talk about pure application-based firewalling as a technology (not port-based, but pure application-based firewalling): it leaks data. It's a compliance solution, not a pure security solution. What do I mean by this? Say we stand up an HTTP server running on port 80, but we're not port aware anymore; we're smarter than that. Port awareness is the past, now we're all application aware, and it's pure application-based firewalling. We set up an application firewall that says permit HTTP. I send a packet to the server, a SYN packet on, let's say, port 23, but again we're not port based, so it doesn't matter. That application-based firewall looks at the SYN packet on port 23 and asks: is this HTTP? Well, there's no application associated with a SYN packet; it's just a TCP setup message. Does it block it or pass it? If it blocks it, there will never be any application-based traffic, whether HTTP or something else, so it has to pass it. The SYN hits my HTTP server; I'm not running anything on port 23, so the server sends a reset (RST). Again the application firewall looks at it; there is no application associated with the reset, so it passes the traffic. You just let me port scan your server from the Internet. Now I know for sure there's a server there, and that it's not running anything on port 23, so I can keep probing; I'm now interested in you, and that's a bad thing from a security perspective. Taking it one step further, if you have an application running on port 22, let's say SSH, I send you a SYN on port 22, the application-based firewall looks at it, there's no application associated with the SYN, so it passes the traffic, and I get a SYN-ACK in reply. Now I know there's a server there and you're running something on port 22. The attacker sends an ACK back and starts sending application traffic.
The application-based firewall has to see a couple of packets (1, 2, 3, or maybe even 4) before it can conclusively identify that the traffic it's seeing is not HTTP. Once it conclusively identifies that, it can drop the session. The attacker on the Internet will see conclusive identification minus one packet: if identification takes two packets, the attacker still sees one packet, which might give him a best guess. The application firewall must be certain it's not HTTP before it can interrupt the conversation; the attacker doesn't have to be absolutely certain before he begins to fingerprint your system and understand what you're running. So again, we're leaking a fair bit of data there because it's a pure application firewall. This is why we still want our port-based security in place.
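A minimal Python sketch of the leak described above (purely illustrative, not Juniper code; the classifier and packet format are invented for the example): a firewall that classifies by application alone has no choice but to pass payload-less control packets, so a probe and the server's RST both get through.

```python
def identify_app(payload):
    # Toy stand-in for a real application identification engine.
    if payload.startswith(b"GET ") or payload.startswith(b"HTTP/"):
        return "HTTP"
    return "UNKNOWN"

def app_only_firewall(packet, allowed_app="HTTP"):
    """Pure application-based decision: no port awareness at all."""
    if not packet["payload"]:
        # SYN, SYN-ACK, RST, bare ACK: no application data yet, so the
        # firewall cannot classify the session -- it must pass the packet,
        # or no session of any kind could ever be established.
        return "pass"
    return "pass" if identify_app(packet["payload"]) == allowed_app else "drop"

# The attacker's probe to port 23 passes, and so does the server's RST:
# a successful port scan straight through the "firewall".
probe = {"port": 23, "flags": "SYN", "payload": b""}
reset = {"port": 23, "flags": "RST", "payload": b""}
print(app_only_firewall(probe), app_only_firewall(reset))  # pass pass
```

Only once real payload arrives can the firewall make an application decision, which is exactly the window the attacker exploits.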
Layered Security
Once we put port-based security on top of the application-based firewall, or in front of it in the worst case (typically we want the port-based and application-based firewall in the same box), we can build a policy that says, for instance: permit port 80 HTTP traffic. Then we'll block anything that isn't port 80 (all of the junk out there, all of the probes and inappropriate traffic), and anything that comes in on port 80 will also run through the application awareness to make sure it's HTTP. So we're filtering out the junk at the start, rather than letting it through while we determine what the application is. This is all "defense in depth". For example, if you get a new alarm system, you're not going to stop locking the doors on your house; you want to add layers of security, not take them away. Port-based firewalling has been around for a long time; it's not exciting, it's not sexy anymore, but that doesn't mean it doesn't have a very serious place in network security.
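A minimal illustrative Python sketch of that layered policy (hypothetical names, not product code): a port check runs first, so junk probes are dropped before the "can't classify yet" gap ever comes into play, and application awareness only runs on the permitted port.

```python
def identify_app(payload):
    # Toy stand-in for a real application identification engine.
    return "HTTP" if payload.startswith((b"GET ", b"HTTP/")) else "UNKNOWN"

def layered_firewall(packet, allowed_port=80, allowed_app="HTTP"):
    # Layer 1 -- port-based: anything not on the permitted port is junk
    # and is dropped before it can probe the application layer.
    if packet["port"] != allowed_port:
        return "drop"
    # Layer 2 -- application-based: handshake packets on the permitted
    # port pass; once payload appears, it must actually be HTTP.
    if not packet["payload"]:
        return "pass"
    return "pass" if identify_app(packet["payload"]) == allowed_app else "drop"

# The port-23 probe that leaks through a pure application firewall is
# now dropped at the first layer.
print(layered_firewall({"port": 23, "payload": b""}))                # drop
print(layered_firewall({"port": 80, "payload": b""}))                # pass
print(layered_firewall({"port": 80, "payload": b"GET / HTTP/1.1"}))  # pass
```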
AppSecure Service Modules
AppSecure, Juniper's implementation of application-based security, is built around our application identification engine. This was released with IDP 4.0 about three or four years ago, and we could start writing IPS policies that were application aware way back then. The challenge with the SRX was that it was part of IPS, so we'd have to run traffic through the firewall engine, through the IPS engine, through the AppID engine, and then spin it back around and run it through the firewall engine again: a weird packet flow, high latency, a lot of overhead. So we recently pulled the AppID engine out, and it now runs as a service on the SRX. The core of AppSecure is the AppID engine: we identify the application and then we do something with it. AppTrack: we track what the applications are, bytes in, bytes out, duration of session. AppFW: permit/deny. AppQoS: we set DSCP bits. AppDoS: intelligent, application-aware, context-aware denial of service protection. And of course IPS still has some application-aware features as well.
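As a rough Python model of how these services relate (all names here are hypothetical illustrations, not Junos configuration or APIs): the AppID engine classifies the session once, and each AppSecure service then acts on that single result.

```python
def apply_app_services(session, policy, apptrack_log):
    # AppID: in this sketch the classification result is already attached
    # to the session; on the SRX it comes from signature-based inspection.
    app = session["app"]
    # AppTrack: record the application plus bytes in/out for visibility.
    apptrack_log.append((app, session["bytes_in"], session["bytes_out"]))
    # AppFW: permit or deny by application identity, not by port.
    if app in policy["deny_apps"]:
        return "deny"
    # AppQoS: set DSCP bits per application.
    session["dscp"] = policy["dscp"].get(app, 0)
    return "permit"

log = []
policy = {"deny_apps": {"BITTORRENT"}, "dscp": {"HTTP": 26}}
session = {"app": "HTTP", "bytes_in": 1200, "bytes_out": 5400}
print(apply_app_services(session, policy, log))  # permit
print(session["dscp"])                           # 26
```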
SSL Proxy
As a side note, on the high-end SRX today we can do both reverse and forward proxying for SSL. With reverse proxying, the typical scenario is: I have a Web server and I want to perform IPS on the HTTPS traffic coming in. We can load the private key onto the SRX; encrypted traffic comes in, we make a copy of the traffic, decrypt it on the SRX, run IPS services, and identify anything bad going in. Because it's a copy of the traffic (we are mirroring it), this is not inline IPS; we can follow it, so it's more IDS-style detection than active inline prevention. With SSL forward proxy, we can set up a trust relationship with the client browsers: when clients browse out via HTTPS, the SRX terminates the session and builds a new SSL session out to the destination server, so the SRX is performing AppSecure on cleartext traffic.
Redirecting Traffic
It is important to note that for authentication, whether single sign-on or the captive portal, we need to use the unauthenticated role (or the any role, but preferably the unauthenticated role) to allow users to reach the Infranet Controller so that they can get authenticated. They need to be able to access their Active Directory server and their Infranet Controller before they're authenticated, in order to get authenticated so they can then match role-based rules.
AD Authentication Workflow
How does this work? From an Active Directory authentication perspective, single sign-on is an available option. A user tries to browse through the SRX to a protected resource. The SRX pushes back a SPNEGO redirect to the client's Web browser. Modern browsers all support SPNEGO; the last few versions of Internet Explorer, Chrome, and Firefox (all the most popular versions) are fully supported. The SPNEGO redirect tells the client to contact its Active Directory server and obtain a Kerberos ticket. The Active Directory server does its authentication exchange with the client and presents it with a Kerberos ticket, which then gets sent to the Infranet Controller. The Infranet Controller looks up the user, gets the role information from the AD server, and pushes all of that information down to the SRX so that we can match policies based on that user. If we have the option enabled, we'll keep that Web browser window open to run AJAX keepalive scripts with the IC, and open a second browser window going to the user's original destination, so it is effectively seamless, but with the extra AJAX mechanism in there doing heartbeats as a keepalive.
Why a Two-Box Solution?
Why do we do it this way? Why do we need a two-box solution when some of our competitors just put a nice little agent on the Active Directory server? Wouldn't it be great to do that? Well, it would, but here's a scenario: I log in to Active Directory, and Active Directory tracks my username and my IP address. Then I close my laptop, or I disconnect from the network, or my desktop crashes. Active Directory doesn't care that there was a change on the network; it has its own authentication mechanisms, designed to protect Windows-based resources, doing that with Kerberos and other authentication machinery in the background. It doesn't really care that I disconnected. Later on I bring my computer back up, or I roam to a different wireless AP and get a different IP, and I access an Active Directory resource. It takes note that my IP address has changed, but again it doesn't really care. Network-based information, the IP address specifically, isn't something it does more than keep track of; it isn't designed to actively check your network state or watch for changes. So suppose that in between those steps, after I've logged into Active Directory and it's tracking my user ID and IP, I disappear from the network (I close my laptop, my desktop crashes, whatever), and someone else comes in behind me and attaches to the network but doesn't log into Active Directory — say they use a Mac and never log in to Active Directory. If they happen to grab the same IP — because your DHCP pool is tight on addresses and is reassigning, or because the new person had that address statically coded from before and didn't give it up properly, or because they're malicious — Active Directory isn't aware that the user attached to that IP is any different than it was.
All it knows about are the Active Directory calls that it sees; there's no log message, no network sniffing, nothing that will tell Active Directory that the user is different. If we write an agent that sits on an Active Directory server, it's very difficult to check that network state. We're working on doing that, because we want to have a clean one-box solution: maybe we'll port some of this code onto the SRX, maybe we'll build it into an Active Directory agent; it may be a lot of different things. We are trying to address that as a sales concern. But from a technology perspective, the cleanest solution is the one we already have. We already have the Infranet Controller, which is designed to do the SPNEGO redirect or a captive portal login so we can confirm who you are now. We can also keep that window open and run the AJAX script that does keepalives with the Infranet Controller to check the network state, so we can check who you are and then keep checking that you're still you over time. That way, if you disconnect or your box crashes, the keepalives fail, the Infranet Controller becomes aware that you have dropped off the network from its perspective, and it flushes the security policies so we stay secure moving forward.
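A sketch of that keepalive-driven flush (illustrative Python, heavily simplified from the actual Infranet Controller behavior): the auth table maps IP to user, heartbeats refresh the entry, and a missed heartbeat window flushes it, so a new occupant of the same IP inherits nothing.

```python
class AuthTable:
    """Toy IP-to-user table with keepalive-based expiry."""

    def __init__(self, timeout=30.0):
        self.timeout = timeout
        self.entries = {}            # ip -> (user, last_keepalive_time)

    def login(self, ip, user, now):
        self.entries[ip] = (user, now)

    def keepalive(self, ip, now):
        # AJAX heartbeat from the open browser window refreshes the entry.
        if ip in self.entries:
            user, _ = self.entries[ip]
            self.entries[ip] = (user, now)

    def user_for(self, ip, now):
        entry = self.entries.get(ip)
        if entry is None:
            return None
        user, last = entry
        if now - last > self.timeout:
            # Keepalives stopped: laptop closed, box crashed, user roamed.
            # Flush the entry so policies for this user no longer apply.
            del self.entries[ip]
            return None
        return user
```

If a second host grabs the same IP after the flush, `user_for` returns `None` until that host authenticates itself, which is exactly the stale-mapping problem the two-box design solves.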
Slide 3: The World is on the Move
Most business networks were designed to support specific IT-owned applications over wired ports using dedicated VLANs. Many haven't had a significant update in five years or more. Applications are bolted to the network, and wireless was designed as a secondary overlay network. Mobility obsoletes this model by changing the way content is consumed. Today, most network connections are wireless. Users employ a mix of personal and corporate devices and a mix of cloud-based and user-chosen applications. Mobility has forced enterprises to shift their security strategy away from a perimeter "protect your borders" approach, making them realize that borders are now global and that their vulnerabilities are actually internal. This changes the way they think about, and deploy, security. Additionally, applications are no longer slow-moving and stable but fast and evolving, and users are choosing their own applications. As a result, today's enterprise is struggling to balance the risks posed by mobility, BYOD, and fast-evolving cloud services against the safety and security of network resources. Segregated networks with dedicated VLANs can't support the collaboration that users demand today.
Mega Trend – Server Virtualization
It's pretty clear that server virtualization is here to stay, right? It's extremely uncommon at this point to go into any enterprise and not find virtualization there, in quite a big way in most cases. It's no longer just test/dev off in some remote corner of the business. This is fundamental to businesses and fundamental to service providers and what they're trying to do. This is an IDC slide that's a couple of years old now, but it's pretty simple: it shows that physical server rollouts are starting to flatten out, while virtualized servers are being deployed rapidly, to the point where they're at 2x the physical server deployment. There are lots of good reasons for that; it's all the great things that come with virtualization that are driving this. It saves power. It dynamically allocates resources across your server infrastructure to eke every last compute cycle out of your physical servers. It improves operational management: things like being able to live migrate VMs across hosts have changed the way server admins work, so there aren't these crazy demands for off-hours work just because you want to add some memory to a server. You can migrate the virtual machines away and then take the host down, and in many cases people do that in the middle of the day because the technology is so robust and proven. Clearly here to stay. The one thing to remember is that we have to incorporate security into this rapid server virtualization, and customers have to understand, as they virtualize more sensitive things, that they need to take security in lockstep with that.
Other Virtualization Platforms
The fact is that we have Hyper-V, KVM, and Xen, and these platforms are starting to gain momentum for various reasons. On the KVM front there's a lot of backing and a lot of work being done; Red Hat's systems are obviously going to be based on it. With RHEV-M and RHEV-H, the standard Linux KVM has been taken, modified, and improved upon, becoming standalone virtualization products from Red Hat. There are also the Xen and Citrix pieces out there; customers are using each of those for various reasons — service providers wanting to save money on VMware licensing fees, and so on. So we're seeing some of this start to play out and make it tougher on VMware. From a Hyper-V and Microsoft perspective, there's a lot happening on Hyper-V in 2012. I was in Orlando for the TechEd conference, and a lot of catch-up has happened on the features; Hyper-V is becoming very feature compatible, and in some cases, for certain versions, more feature rich. Couple that technology catch-up with the fact that Microsoft is being very aggressive with pricing and licensing strategies that make it very compelling from a cost perspective to switch platforms, and there's really a lot of contention about which platforms are going to be around. From a Juniper perspective, we really don't care. We don't sell a virtualization platform; we sell a security layer for this environment. Yes, we need to be on the most important platforms, but our long-term goal is to be across all of them, and to let a customer who in many cases has multiple hypervisors in a single environment feel confident that whatever security solution they select will work across those hypervisors. That's really important for our strategy going forward for both products.
In a typical tree network the location of an application can have a significant impact on performance. [click] Ideally, an application should be no more than one hop away from its data for optimal performance, i.e. they are co-located on the same switch. We call this area of optimal performance "The Bubble". But switches have their physical limitations, and often we must locate the application outside the bubble. [click] This is when networks can have a significant negative impact on application performance. [click] And the farther away we locate it, the worse it gets. Although this is a great concept, it is rarely achieved in practice because the bubble size is limited. By definition, the size of the bubble is limited to a single switch. If we assume 48 ports on a top-of-rack switch with eight ports facing up to the aggregation layer, then we have 40 ports that are server facing. Given an average of 10 NICs per server, this leaves us with a bubble size of four servers. Not big enough to be of any real use. We need to fix this problem.
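The arithmetic above, as a quick sketch (the port counts are the assumptions stated in the text):

```python
def bubble_size(total_ports=48, uplinks=8, nics_per_server=10):
    # Servers that fit behind one top-of-rack switch, one hop from
    # each other: server-facing ports divided by NICs per server.
    server_facing = total_ports - uplinks
    return server_facing // nics_per_server

print(bubble_size())                   # 4 servers at 10 NICs each
print(bubble_size(nics_per_server=4))  # 10 servers if servers use 4 NICs
```

Either way, the bubble tops out at a handful of servers per switch, which is the limitation the rest of the argument builds on.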
Another problem with tree architecture is that, if we introduce a security appliance in the tree hierarchy, it casts a shadow over that part of the network. [click] If we move a VM within the shadow, the VM can still take advantage of the services that appliance delivers. [click] But if the VM moves out of the shadow, at best it's insecure, and at worst you have lost it. So another way of viewing the job of managing the data center is as managing the intersection of bubbles and shadows.
Traditional data centers generally employ a one-OS/application-per-server model. As we can see here, this can be highly inefficient. I've known situations where an application that runs one hour per week sits on its own server. This is a true waste of resources. Today the vast majority of data centers are implementing programs for server virtualization and consolidation. [click] Using virtual machine technologies called hypervisors, they can run multiple OS/application pairs on a single server, achieving better cost efficiency not only from reduced equipment costs but also from savings in power, cooling, and space. There are several vendors of virtual machine technologies, with VMware being the leader in this space. [click] And new applications can easily be provisioned in just minutes, sharing existing resources and increasing cost efficiency. [click] But as application demand grows, we can reach the limits of a single server. When this happens, we could manually move an application to a new server, but this takes time and can violate the always-responsive requirement. This is where networking and clouds enter the picture. [click]
Market Summary & Challenges
From a market summary perspective, just a couple of quick examples.
Security Implications of Virtualization
Let's get into the heart of the discussion: why do we care about security in a virtualized environment? What's going on here that would necessitate these special solutions? We know virtualization is happening; we know there are different platforms and choices our customers are going for. What does it really mean from a security perspective; what are the implications? When we first started developing the solution, I would sit down with execs and IT leaders and ask them about their virtualized environment: What is the top protocol in use on their current switch? How do they know that virtual machines that came from different departments in the physical world aren't intermingling in ways they don't want? How do they deal with antivirus in this space? All of these questions were really hard for them to answer in many, many cases. They didn't know what was happening on their virtual network; they didn't know what mechanisms had been put into place from a security perspective to lock things down. The reason is that it's not just the servers you're virtualizing; it's the network as well. You have virtual switches, virtual interconnects, virtual NICs, and you're consolidating all of that, but you aren't always taking the security you have in place in the physical world and virtualizing it too. That disconnect creates a blind spot in visibility into what's happening and what those VMs are doing, and potentially a blind spot for security devices.
So it used to be segmented by different buildings and different network ports and so forth, and a lot of that starts to disappear in this very dynamic environment where VMs can move around from server to server and you have virtual machine admins making decisions around what VM gets stuck into a particular port group. It’s quite different than many of the things that happen in the physical world. That’s the fundamental thing that we want to address and we want to do it in an efficient way; we want VMs to come up and understand what those VMs are doing and give them the policy to let them do what they’re supposed to do and nothing else.
Customers aren't just trying to virtualize a few servers at small scale like the previous slides. They are trying to adopt virtualization in high quantities on their internal networks (building private clouds), and they are even exploring hosting VMs off premise and bursting between these locations (i.e., building hybrid clouds). Service providers are dealing with requests to isolate hosted VMs and provide security guarantees in this very dynamic environment. The demands of this computing model dictate a solution that is integrated, flexible, scalable, and efficient. Let's take a look at some of the specifics of vGW.
We looked at different kinds of traffic flows earlier, and this is the kind of logical network diagram where that virtualization is shown: on the access tier you may have a set of VLANs going to the core Virtual Chassis, and on the core Virtual Chassis we create virtual routers, VR1 and VR2, for different segments. Any traffic within VR1, on the set of VLANs permitted on VR1, does not go to the firewall; but traffic between VR1 and VR2, across virtual routers, does go through the firewall. This is very important in many places; in many RFPs for a virtualized data center we see the requirement to do segmentation and to control traffic through a point of entry where some kind of security policy can be applied, and this is one way to meet those requirements. We'll look at these traffic flows in the next section, where we explore how they are supported within the Data Center and also across Data Centers. When you can support these different traffic profiles across the Data Center, it means you have agility of resources across Data Centers, and that is one of the essential requirements of cloud readiness, of an agile environment.
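The forwarding decision described above can be sketched like this (illustrative Python; the VLAN-to-VR mapping is a made-up example): traffic staying inside one virtual router is switched directly, while traffic crossing virtual routers is steered through the firewall and its zone policy.

```python
# Hypothetical mapping of VLANs onto virtual routers on the core.
VLAN_TO_VR = {10: "VR1", 11: "VR1", 20: "VR2", 21: "VR2"}

def path_for(src_vlan, dst_vlan):
    """Decide whether a flow hairpins through the firewall."""
    src_vr, dst_vr = VLAN_TO_VR[src_vlan], VLAN_TO_VR[dst_vlan]
    if src_vr == dst_vr:
        return "direct"        # intra-VR: core switches it, no firewall hop
    return "via-firewall"      # inter-VR: zone policy on the SRX decides

print(path_for(10, 11))  # direct
print(path_for(10, 20))  # via-firewall
```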
Now we'll look at the Intra-Segment, Intra-DC traffic flow. Here, as you can see from the animation, there are resources on two different access tier switches, and the traffic goes up to the core and comes back down to the other access tier; however, that traffic does not go through the firewall. This environment doesn't require stateful security or IDP inspection; higher performance and lower latency are much more important, even though the resources are on two different access tiers. You may have resources on the same access tier talking to each other directly, but if there are more resources and they sit on other access switches for any number of reasons, you can still meet certain performance criteria because that traffic doesn't have to go through firewall services. This is one very basic, simple flow. Next we'll look at the Intra-Segment but Inter-DC traffic flow.
In the Intra-Segment, Inter-DC flow you can see that on both sides there is a VR1, which is the green set of VLANs. When this access tier sends traffic to the other Data Center, that traffic goes over a VLAN extension towards the MX and into the VPLS network. The same VLAN traffic, Layer 2 broadcast or unicast, can then reach the other Data Center's access tier switch. This supports Layer 2 extension: both sides are in the same Layer 2 broadcast network, which means it can support vMotion, VM mobility, or any application that may require Layer 2 extension across Data Centers. This traffic will not go through the firewall, even though certain other types of traffic may. This is one of the important use cases that differentiates this solution: with the MX and the building blocks we looked at earlier, when we put it all together we get an end-to-end Layer 2 flow that doesn't go through the firewall and meets the performance requirements. We have a technical article you can refer to on how to enable the Layer 2 services and how to get more benefit out of the MPLS network.
The third type of traffic flow we are looking at goes from the green VLAN to the blue VLAN. In this example, even though the resources are on the same access tier, the traffic goes up to the core switch, into the firewall, where it is controlled by the zone security policy across these two zones, and comes back out of the virtual router. So even though the resources are on the same access tier, you can still control the traffic flow between them based on your security requirements: you can allow it, you can separate it out, and you can even further virtualize the SRX cluster with virtual routers or logical systems and create a completely segmented Data Center where these traffic flows never see each other. This is one way to achieve a virtualized Data Center environment. This traffic flow was within the Data Center, across two segments. But what if one of the segments is extended into the other Data Center, and for whatever reason the resources on these two segments need to talk to each other even though they are in different Data Centers? How the firewalling is maintained in that case is what we'll look at next.
This traffic goes from the green VLAN to the blue VLAN, but the blue VLAN resources are in the other Data Center. Traffic goes through VR1 and its zone on the local SRX, passes through the virtual router on the MX, which is connected to the other side using an L3VPN configuration, and then goes through the SRX cluster in the other Data Center. One of the reasons the traffic goes through both SRX clusters is that we can control the flow from one side's SRX to the other side's SRX, though that requires some routing policies. At the same time, you cannot configure it so that only the originating side's firewall inspects the flow: if you did, the return traffic would take an asymmetric route and the session may be dropped. So the way it is currently configured, the flow goes through both SRXs. We can always explore further optimization if it is required on the customer side, depending on the amount of traffic and the resources it consumes; you can decide whether you want to create more control and optimize this traffic flow.
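Why the symmetric path matters can be shown with a toy stateful firewall (illustrative Python, not SRX behavior in detail): replies only pass a firewall that saw the session being created, so routing return traffic through a firewall that never saw the forward SYN gets it dropped.

```python
class StatefulFirewall:
    """Toy stateful firewall: passes replies only for known sessions."""

    def __init__(self):
        self.sessions = set()

    def forward(self, src, dst):
        # Session state is created when the first packet is seen.
        self.sessions.add((src, dst))
        return "pass"

    def reverse(self, src, dst):
        # A reply matches only if this firewall saw the forward packet.
        return "pass" if (dst, src) in self.sessions else "drop"

# Symmetric design: both SRX clusters see the forward direction, so
# either one will pass the return traffic.
srx_a, srx_b = StatefulFirewall(), StatefulFirewall()
srx_a.forward("green-host", "blue-host")
srx_b.forward("green-host", "blue-host")
print(srx_b.reverse("blue-host", "green-host"))  # pass

# Asymmetric case: a firewall that never saw the SYN drops the reply.
srx_c = StatefulFirewall()
print(srx_c.reverse("blue-host", "green-host"))  # drop
```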
Competitive Positioning
Let's just look a little bit more at the competitive positioning.
This is the way we manage networks today. We send out the Mongolian hordes of network administrators and tell them, "Go build networks and keep them running! And don't come back until you're finished." Which, of course, they never are. So we keep adding manpower ad infinitum. Not a good way to manage anything.
The Smartest Way to Protect Websites and Web Apps from Attacks
Thank you for learning about Mykonos. We started Mykonos to solve a problem of Web app security that no one had yet solved: how do you get visibility into an attacker on your website right now? Mykonos uses intrusion deception to detect an attacker before the actual attack. Think about the five stages of an attack. The first stage is reconnaissance: the attacker goes around the site looking for holes. The second phase is the scripting phase, where they try to write the attack. The third phase is the execution of an individual attack. The fourth phase is the automation phase, as they try to bring that attack up to large volume. And finally there is a maintenance phase: as you try to close the hole, the hacker tries to keep it open. Every security solution before Mykonos focused on phases three and four: how do I stop an attack, or an automated attack, in process? Mykonos moves that to phase one: how do I look for the bad behavior, the reconnaissance an attacker does, so that I actually have a chance to stop the attack before it happens?
Hacker Threats
A lot of people think about hackers as being binary: they are either bad or good. The reality is a lot more nuanced, and in that nuance lies much of the secret to stopping attacks and changing their economics. The first type of threat we worry about is IP scans. These are where an attacker uses a scanner that is the equivalent of a robot checking every door and window in the neighborhood: it looks for a single vulnerability across hundreds of millions of IP addresses. We've been talking about this for about two years, and sure enough, about six months ago somebody wrote a script that hacked 1.1 million websites in a matter of 24 hours. That shows you how powerful an IP scan can be if left uninterrupted. Perhaps equally important, if not more so, are targeted scanners: tools like Grendel-Scan, Metasploit, and O2 that let every APT actor or script kiddie become very sophisticated. We see targeted scanners like Grendel-Scan probe 20, 30, 40 thousand vulnerabilities in the space of an hour; all of a sudden they make hacking not only faster but much, much easier. What Mykonos does is intercept the targeted scan, slowing it down and injecting fake vulnerabilities, rendering the results useless. The third type of threat we worry about is botnets. Botnets are being used in two really interesting ways right now: by APT threats to distribute an attack and avoid detection, and to scale a small attack up into a really big one through automation. Here Mykonos intercepts the botnet and uses an inline CAPTCHA processor to dynamically break it and stop it on the fly.
Now, if you can break the various scripts and tools — the IP scans, targeted scans, and botnets — what you do is force slow, visible, human hacking that’s a lot more expensive for the attacker and a lot easier to defend against.
Web App Security Technology
Traditional Web application firewalls use signatures and force their customers to write a signature for each individual detection; Mykonos uses behaviors, going beyond signatures so the customer doesn't have to finish the product for the vendor. More importantly, signatures detect attacks already in process and offer no coverage against zero-day attacks, while Mykonos uses its behavioral technology, intrusion deception, to detect the early reconnaissance behavior that happens before the attack ever starts. Mykonos also goes a step beyond the IP address. There may be five or ten thousand people behind a single IP using a proxy; Mykonos identifies and targets the individual device, and it can not only block that device but apply a huge range of responses. Both solutions meet the PCI section 6.6 requirements for compliance, but only Mykonos can detect an attacker before the attack ever happens and go beyond the IP address to stop an attacker without stopping…
The Mykonos Advantage: Deception-Based Security
Mykonos works in four steps. The first step is to detect attackers by injecting hundreds of tiny bits of code into the Web application at serve time, so we catch an attacker while they are doing the malicious behavior, before the attack. Because the attacker is touching code that doesn't exist, there are no false positives like those of traditional signature-based solutions, and we can detect zero-day attacks by seeing the bad behavior rather than relying on an attack signature. The moment we detect an attacker, we track them. We use a super cookie to track individual browser-based attackers and a fingerprinting technology to track script-based or APT attackers. Then we build a profile, which works like a DVR that records everything a hacker does, to get smart about who that hacker is and what threat level they represent. Finally, we respond. Unlike Web application firewalls, where only 10% run in block mode, one hundred percent of Mykonos devices run in block mode: stopping attackers, blocking them, warning them, and deceiving them to make it much more expensive to hack a site where Mykonos is involved.
Detection by Deception
Architecturally, Mykonos sits as an inline proxy, directly in front of the application server. As it hands the code down to the client, it injects tar traps, or deception points, into the code. The first example is really simple: a query string parameter, the URL string you'd see on any website. It's very easy to manipulate a URL string, and a lot of people do; about 20% of top sites have some sort of session-hijacking vulnerability because of the query string. You'll notice there is a piece of code that says "debug=false". If the hacker changes this to "debug=true" to try to get back debug information, or to "debug=0", or a long string, or anything else, Mykonos detects the manipulation, and now we know we have an attacker on our website. Let me give you a more sophisticated example: the "hidden" input field, something you would find in a form. Most SQL injection attempts come in via forms, because that's where the direct connection to the backend database is. Here you'll see a bunch of HTML and a line of code: <input type="hidden" value="0" name="authorized">. There are a lot of things an attacker might do: change the value, change the name. What they're trying to do is get the form to respond with an error message or a SQL dump, something that tells them how to get into the system and then into the data they want. Here, this entire line of code is fake. It was injected by Mykonos directly into the code stream, so it's indistinguishable from actual code, and it lets us detect those advanced SQL injection attacks before they ever touch a real input.
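The query-string tar trap described above can be sketched like this. The parameter name, its expected value, and the detection rule are illustrative assumptions for the sketch, not the product's actual implementation:

```python
from urllib.parse import urlsplit, parse_qs

# Sketch of a query-string tar trap. The decoy parameter name and
# value are hypothetical; a real deployment injects its own decoys.
TRAP_PARAM = "debug"
TRAP_VALUE = "false"

def is_attacker(url: str) -> bool:
    """Flag any request where the injected decoy parameter was altered.

    Legitimate clients never touch the decoy, so any change, whether
    "true", "0", or a long string, marks the sender as hostile.
    """
    params = parse_qs(urlsplit(url).query)
    value = params.get(TRAP_PARAM, [TRAP_VALUE])[0]
    return value != TRAP_VALUE

print(is_attacker("https://example.com/page?id=7&debug=false"))  # False
print(is_attacker("https://example.com/page?id=7&debug=true"))   # True
```

Because the decoy parameter does nothing in the real application, any tampering with it is by definition malicious, which is why this style of detection avoids the false positives of signature matching.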
And then finally, not only do we think about the width of deception, meaning all the different behaviors an attacker might exhibit, we also think about the depth of deception, meaning how we detect an attacker and start to change those economics. The third example, server configuration, is a great illustration of that. This is an .htaccess file, an Apache system file you'd find on any site. It shouldn't be exposed, but it often is, and if a hacker accesses it, Mykonos blocks the real one and returns this fake one, or a similar fake one. If the hacker reads through it, they'll notice it points to an .htpasswd file, and if they traverse hidden directories and get to that file, we respond again, this time with a list of user names and encrypted passwords. So why do that? Why provide a list of user names and passwords instead of blocking the attacker? We know they're bad; why not just stop them? The reason is that we want to make it expensive for the attacker. By returning a list of user names and encrypted passwords, we can cost the hacker fifteen or twenty hours running a desktop password-cracking tool, like John the Ripper, to break that encryption. And if they do that, we'll then let them try to log in to the "recoverPassword.aspx" file. So, in the hacker's mind, they're making progress. What they're actually doing is wasting time and teaching Mykonos what skill level and threat level they represent.
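A minimal sketch of the decoy-credential idea. The user names, passwords, and SHA-1 hash format below are invented for illustration; they are not the product's actual fake file or hash scheme:

```python
import hashlib

# Sketch of a decoy .htpasswd. Names, passwords, and the {SHA}
# format are hypothetical; a real deployment generates its own.
DECOY_USERS = {"svc_backup": "winter2012", "admin_old": "letmein"}

def decoy_htpasswd() -> str:
    """Render a fake .htpasswd whose hashes the attacker can
    crack offline, burning hours of their time."""
    lines = []
    for user, pw in DECOY_USERS.items():
        digest = hashlib.sha1(pw.encode()).hexdigest()
        lines.append(f"{user}:{{SHA}}{digest}")
    return "\n".join(lines)

def is_decoy_login(user: str) -> bool:
    """Any login attempt with a decoy name is a confirmed attacker:
    those names exist nowhere except in the fake file we served."""
    return user in DECOY_USERS

print(decoy_htpasswd())
```

The economics are the point: the hashes are crackable on purpose, and every decoy login attempt both confirms the attacker and reveals how much effort they were willing to spend.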
Track Attackers Beyond the IP
Once we detect the attacker, we immediately start to track them. For browser-based attackers, we inject a super cookie onto the attacker's PC. That super cookie lets us track them even if they clear their cache and cookies or use private browsing mode. On top of that, we have a fingerprinting capability that serves as a backup mechanism for more sophisticated attackers who might spool up a new VM or figure out how to shake the cookie; it also lets us track script-based attackers. And the reason we track them is so we can begin to profile them.
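The fingerprinting fallback can be sketched as a hash over client attributes. The attribute set here is a small illustrative assumption; a real fingerprinting system draws on many more signals than these four:

```python
import hashlib

# Sketch of device fingerprinting: derive a stable ID from client
# attributes that survive cleared cookies or private browsing.
# The four attributes are hypothetical examples, not the real set.
def fingerprint(user_agent: str, accept_lang: str,
                screen: str, timezone: str) -> str:
    """Hash client traits into a short, stable device identifier."""
    raw = "|".join([user_agent, accept_lang, screen, timezone])
    return hashlib.sha256(raw.encode()).hexdigest()[:16]

a = fingerprint("Mozilla/5.0", "en-US", "1920x1080", "UTC-8")
b = fingerprint("Mozilla/5.0", "en-US", "1920x1080", "UTC-8")
print(a == b)  # True: the same traits always yield the same ID
```

The design choice this illustrates is the one on the slide: unlike an IP address, which thousands of users may share behind a proxy, a trait-derived ID follows the individual device.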
Smart Profile of Attacker
The profiling technology lets us act almost like a DVR and record everything a hacker does. Every Mykonos hacker gets a name; you'll see this one is "Jack 26". We do that so you're not running around shouting IP addresses in a security operations center. You'll notice in the bottom left that this attacker was extreme: we can see the last time they were active, the first time they were active, and the threat level they posed. On the right, you'll notice we build an incident history: the query parameter manipulation of the URL string I mentioned earlier; the hidden parameter manipulation in the form; an Apache configuration file request; the password file; and finally, they cracked the password. In the background, Mykonos escalated the threat level and recorded every bad action the hacker took, and all the information underlying it, so we can really understand what threat level they represent and, more importantly, what we should do about it.
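The escalation behind "Jack 26" can be sketched as a score over the incident history. The incident names mirror the history on the slide, but the weights and thresholds are invented for illustration; the product's actual scoring is not public here:

```python
# Sketch of threat-level escalation from an incident history.
# Weights and thresholds are hypothetical, chosen only to show
# how a profile can climb from "low" to "extreme".
INCIDENT_WEIGHT = {
    "query_param_manipulation": 1,
    "hidden_field_manipulation": 2,
    "apache_config_request": 3,
    "password_file_access": 4,
    "password_cracked": 5,
}

def threat_level(incidents):
    """Map a hacker's recorded incidents to a threat level."""
    score = sum(INCIDENT_WEIGHT.get(i, 1) for i in incidents)
    if score >= 10:
        return "extreme"
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

history = ["query_param_manipulation", "hidden_field_manipulation",
           "apache_config_request", "password_file_access",
           "password_cracked"]
print(threat_level(history))  # extreme
```

The DVR analogy holds: the raw incidents stay on record, and the level is just a rolled-up view the operator can act on.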
Respond and Deceive
As I mentioned, one hundred percent of Mykonos devices run in block mode, stopping real-life attackers. While compliance is important, we think preventing a company from being the next Sony is much more important. Mykonos responds in a range of ways. We might warn the attacker. We built a response for fun a few years ago where, as an attacker attempts to hack a site, the site disappears and up pops a map of the hacker's location, with a note that says, "It looks like you might need a criminal attorney," and a list of lawyers in the hacker's area. It was our way of saying we know where you are, and you should really stop doing anything bad. We can block a user without affecting anyone else at that IP address, so we're not stopping customers. We can force a CAPTCHA processor inline to break any automation. We can slow a connection down, forcing hackers to operate in slow motion. We can simulate that the application is broken, or, in the case of a financial application, we can force a logout and immediately lock the account so the attacker can't get in and do any damage.
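The graduated responses above can be sketched as a simple dispatch from profiled threat level to countermeasure. The mapping is an illustrative assumption; the product supports more responses than shown and chooses them its own way:

```python
# Sketch: pick a countermeasure proportional to the profiled
# threat level. The mapping is hypothetical, not the product's API.
RESPONSES = {
    "low": "warn",          # e.g. show a warning page
    "medium": "captcha",    # break automation with an inline CAPTCHA
    "high": "throttle",     # slow the connection to a crawl
    "extreme": "block",     # block the device, not the whole IP
}

def respond(threat_level: str) -> str:
    """Return the countermeasure for a given threat level,
    defaulting to a warning for anything unrecognized."""
    return RESPONSES.get(threat_level, "warn")

print(respond("extreme"))  # block
```

Tying the response to the device-level profile, rather than the IP, is what lets the system block one attacker without cutting off the thousands of legitimate users who may share that address.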
Security Administration
All of this comes together in a real-time console. This is an actual screen shot of the Mykonos console in action. In the top left you can see the number of attacks we've detected, broken down by low, medium, and high, plus the total number of attacks. You can see the total hackers on the site, also by low, medium, and high, so you get a sense of the sophistication level of the people hitting your site. In the top right you can see the countermeasures deployed to stop attacks. Below that are the most frequent attacks and the top hackers, so you can see whether APT threats are continually hitting your site, and the top countries they come from. Underneath that you can see the malicious incidents and get a sense of volume by day. You can also see the number of sessions and hacker sessions, so you can gauge what percentage of your traffic is coming from hackers. All of this data plugs into a SIEM tool via a command-line interface we expose, so you can feed it into any other tool you'd like. We also have the ability to plug into Nagios, Unicenter, or any of your data center management tools, so you don't have another screen to stare at. Finally, all of this data is real-time and delivered on demand, and we can generate reports as well for further use.
Unified Protection Across Platforms
From a deployment perspective, Mykonos is a software product. It's a software appliance that can be installed on standard hardware for traditional data center deployments. We also have a virtual machine version that supports VMware ESX for customers that have already virtualized their application infrastructure. And we have a cloud-based version, just released for Amazon Web Services, so customers that have decided to let their applications live in the cloud can bring Mykonos security with them. The really exciting part is that as of Ambler, Mykonos' latest release, we can now see a single attacker across multiple of these environments inside a customer. Going back to that Sony example: when attackers hit Sony Japan, Sony Germany, Sony U.S., and Sony's Amazon cloud, Mykonos would have detected them immediately on the first site and protected the second, third, and fourth before anything bad could happen. We think that has enormous value for customers, and we think it's the first in what is going to be a wave of connected application firewalls, and ultimately network firewalls.
Juniper's separate data and control plane architecture offers significant advantages. Consider the difference:

Competitors' single-plane design:
- During attacks, no management access to address the situation
- During attacks, processing of routing updates stops and the network is down

Juniper's separate control and data plane design:
- Maintains management access even during a DoS/DDoS attack
- Route update processing continues
- Separate data (packet forwarding) and control (management) planes
- Scales performance
- Enhances resiliency
- Enables redundancy

Transition: Beyond the separate data and control plane architecture, consider Juniper's consolidated security platform.
The Juniper network management portfolio (Space/Security Design, STRM, and AIM) enables operational and cost efficiencies through:
- Full network life-cycle management (provisioning/visibility/diagnostics): closed-loop, less resource-intensive, one-stop shop
- A single configuration/provisioning platform across Juniper's security/routing/switching devices
- A single event monitoring/threat management solution across all Juniper systems
- Case automation for efficient and cost-effective incident management
- Network-wide visibility with application-level granularity
- Appliance form factor for one-stop HW/OS/application support
- Rapid deployment: no server provisioning lead times
- Schema-based device/Space interface for day-0 deployment (application transparency)

Transition: Clearly, Juniper Networks unified management meets customer needs. To summarize…
For the Data Center SRX, NSS Labs has given its stamp of approval, recommending the SRX to businesses and organizations around the world. ABI Research, in its assessment of UTM vendors, has established Juniper Networks as the overall #1 UTM vendor, ranking #1 in all decision criteria: innovation and implementation. Transition: Other analysts, as well as customers, have showered the Juniper SRX with praise too.
See examples above. As you can see, analysts, research houses, and, most importantly, customers believe in the strength and direction of Juniper. Transition: Clearly, the Juniper Networks SRX solution meets customer needs. To summarize…