    CERIAS.ppt: Presentation Transcript

    • Enabling Worm and Malware Investigation Using Virtualization (Demo and poster this afternoon)  Dongyan Xu, Xuxian Jiang  CERIAS and Department of Computer Science, Purdue University
    • The Team  Lab FRIENDS  Xuxian Jiang (Ph.D. student)  Paul Ruth (Ph.D. student)  Dongyan Xu (faculty)  CERIAS  Eugene H. Spafford  External Collaboration  Microsoft Research
    • Our Goal In-depth understanding of increasingly sophisticated worm/malware behavior
    • Outline  Motivation  An integrated approach  Front-end : Collapsar (Part I)  Back-end : vGround (Part II)  Bringing them together  On-going work
    • The Big Picture  [Diagram: worm traffic from Domain A and Domain B is diverted via proxy ARP and GRE tunnels to a central worm-capture facility, which feeds separate worm-analysis environments.]
    • Part I Front-End: Collapsar Enabling Worm/Malware Capture * X. Jiang, D. Xu, “Collapsar: a VM-Based Architecture for Network Attack Detention Center”, 13th USENIX Security Symposium (Security’04), 2004.
    • General Approach  Promise of honeypots  Providing insights into intruders' motivations, tactics, and tools  Highly concentrated datasets w/ low noise  Low false-positive and false-negative rates  Discovering unknown vulnerabilities/exploitations  Example: CERT advisory CA-2002-01 (Solaris CDE subprocess control daemon, dtspcd)
    • Current Honeypot Operation  Individual honeypots  Limited local view of attacks  Federation of distributed honeypots  Deploying honeypots in different networks  Exchanging logs and alerts  Problems  Difficulties in distributed management  Lack of honeypot expertise  Inconsistency in security and management policies  Example: log format, sharing policy, exchange frequency
    • Our Approach: Collapsar  Based on the HoneyFarm idea of Lance Spitzner  Achieving two (seemingly) conflicting goals  Distributed honeypot presence  Centralized honeypot operation  Key ideas  Leveraging unused IP addresses in each network  Diverting corresponding traffic to a “detention” center (transparently)  Creating VM-based honeypots in the center
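    The diversion step can be pictured with a small sketch: a redirector claims a network's unused addresses (e.g., via proxy ARP) and tunnels any traffic sent to them over GRE to the Collapsar center. The sketch below assumes scapy is available; the interface name, the dark addresses, and the center address are placeholders, and Collapsar's actual redirector is a different, lower-level component.

        from scapy.all import IP, GRE, send, sniff

        UNUSED_IPS = {"192.0.2.10", "192.0.2.11"}   # dark addresses claimed in this production network (placeholders)
        CENTER_IP = "203.0.113.5"                    # Collapsar center front-end (placeholder)

        def divert(pkt):
            # Encapsulate traffic destined to an unused local IP in GRE and tunnel it to the center.
            if IP in pkt and pkt[IP].dst in UNUSED_IPS:
                send(IP(dst=CENTER_IP) / GRE() / pkt[IP], verbose=False)

        # Watch the production network and divert matching packets transparently.
        sniff(iface="eth0", prn=divert, store=False)

    At the center, the front-end decapsulates the tunneled traffic and hands it to a VM-based honeypot that responds on behalf of the claimed address.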
    • Collapsar Architecture  [Diagram: an attacker reaches redirectors placed in several production networks; the redirectors tunnel the traffic to the Collapsar front-end, which dispatches it to VM-based honeypots in the Collapsar center, supported by a management station and a correlation engine.]
    • Comparison with Current Approaches  Overlay-based approach (e.g., NetBait, Domino overlay)  Honeypots deployed in different sites  Logs aggregated from distributed honeypots  Data mining performed on aggregated log information  Key difference: where the attacks take place (on-site vs. off-site)
    • Comparison with Current Approaches  Sinkhole networking approach (e.g., iSink)  “Dark” address space to monitor Internet abnormalities and commotion (e.g., the MSBlaster worm)  Limited interaction for better scalability  Key difference: contiguous large address blocks (vs. scattered addresses)
    • Comparison with Current Approaches  Low-interaction approach (e.g., honeyd, iSink)  Highly scalable deployment  Low security risks  Key difference: emulated services (vs. real things)  Less effective at revealing unknown vulnerabilities  Less effective at capturing 0-day worms
    • Collapsar Design  Functional components  Redirector  Collapsar Front-End  Virtual honeypots  Assurance modules  Logging module  Tarpitting module  Correlation module
    • Collapsar Deployment  Deployed in a local environment for a two-month period in 2003  Traffic redirected from five networks  Three wired LANs  One wireless LAN  One DSL network  ~ 50 honeypots analyzed so far  Internet worms (MSBlaster, Enbiei, Nachi )  Interactive intrusions (Apache, Samba)  OS: Windows, Linux, Solaris, FreeBSD
    • Incident: Apache Honeypot/VMware  Vulnerabilities  Vul 1: Apache (CERT® CA-2002-17)  Vul 2: Ptrace (CERT® VU#628849)  Time-line  Deployed: 23:44:03, 11/24/03  Compromised: 09:33:55, 11/25/03  Attack monitoring  Detailed log  http://www.cs.purdue.edu/homes/jiangx/collapsar
    • Incident: Windows XP Honeypot/VMware  Vulnerability  RPC DCOM Vul. (Microsoft Security Bulletin MS03-026)  Time-line  Deployed: 22:10:00, 11/26/03  MSBlaster: 00:36:47, 11/27/03  Enbiei: 01:48:57, 11/27/03  Nachi: 07:03:55, 11/27/03
    • Summary (Front-End)  A novel front-end for worm/malware capture  Distributed presence and centralized operation of honeypots  Good potential in attack correlation and log mining  Unique features  Aggregation of scattered unused (dark) IP addresses  Off-site (relative to participating networks) attack occurrences and monitoring  Real services for unknown vulnerability revelation
    • Part II Back-End: vGround Enabling Worm/Malware Analysis * X. Jiang, D. Xu, H. J. Wang, E. H. Spafford, “Virtual Playgrounds for Worm Behavior Investigation”, 8th International Symposium on Recent Advances in Intrusion Detection (RAID’05), 2005.
    • Basic Approach  Dedicated testbeds  Internet-in-a-box (IBM), Blended Threat Lab (Symantec), DETER  Goal: understanding worm behavior  Static analysis / execution tracing  Reverse engineering (IDA Pro, GDB, …)  Worm experiments within a limited scale  Result: only relatively static analysis at a small scale
    • The Reality – Worm Threats  Speed, Virulence, & Sophistication of Worms  Flash/Warhol Worms  Polymorphic/Metamorphic Appearances  Zombie Networks (DDoS Attacks, Spam)  What we also need  A high-fidelity, large-scale, live but safe worm playground
    • A Worm Playground Picture by Peter Szor, Symantec Corp.
    • Requirements  Cost & scalability  How about a topology with 2000+ nodes?  Confinement  In-house private use?  Management & user convenience  Diverse environment requirements  Recovery from damage caused by a worm experiment  re-installation, re-configuration, and reboot …
    • Our Approach  vGround  A virtualization-based approach  Virtual Entities:  Leveraging current virtual machine techniques  Designing new virtual networking techniques  User Configurability  Customizing every node (end-hosts/routers)  Enabling flexible experimental topologies
    • An Example Run: Internet Worms  [Diagram: a virtual worm playground mapped onto a shared physical infrastructure (e.g., PlanetLab).]
    • Key Virtualization Techniques  Full-System Virtualization  Network Virtualization
    • Full-System Virtualization  Emerging and new VM techniques  VMware, Xen, Denali, UML  Support for real-world services  DNS, Sendmail, Apache w/ “native” vulnerabilities  Adopted technique: UML  Deployability  Convenience/resource efficiency
    • User-Mode Linux (http://user-mode-linux.sf.net)  System-call virtualization  User-level implementation  [Diagram: UML user processes run on a guest OS kernel that is itself an ordinary user-level process; their system calls are intercepted via ptrace, while the host OS kernel retains control of the MMU, device drivers, and the hardware.]
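    As a rough illustration of the ptrace-based system-call interception that UML builds on, the sketch below (ctypes, x86-64 Linux assumed) forks a traced child and prints the number of each system call it makes. It is an illustration only, not UML code; UML redirects the call into the guest kernel rather than printing it.

        # usage: python3 trace_sketch.py /bin/ls
        import ctypes, os, sys

        libc = ctypes.CDLL("libc.so.6", use_errno=True)
        libc.ptrace.restype = ctypes.c_long
        PTRACE_TRACEME, PTRACE_PEEKUSER, PTRACE_SYSCALL = 0, 3, 24
        ORIG_RAX = 15 * 8   # offset of orig_rax in struct user_regs_struct (x86-64)

        pid = os.fork()
        if pid == 0:
            libc.ptrace(PTRACE_TRACEME, 0, None, None)    # child: ask to be traced
            os.execvp(sys.argv[1], sys.argv[1:])           # then run the "guest" program
        else:
            os.waitpid(pid, 0)                             # first stop: at exec
            while True:
                libc.ptrace(PTRACE_SYSCALL, pid, None, None)   # run to next syscall entry/exit
                _, status = os.waitpid(pid, 0)
                if os.WIFEXITED(status):
                    break
                # Each system call stops twice (entry and exit); read its number from orig_rax.
                nr = libc.ptrace(PTRACE_PEEKUSER, pid, ORIG_RAX, None)
                print("syscall", nr)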
    • New Network Virtualization  Link-layer virtualization  User-level implementation  [Diagram: two virtual nodes attach to a user-level virtual switch on the host OS, with IP-IP tunneling carrying their traffic.]
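    The virtual-switch idea can be sketched as a user-level process that relays raw Ethernet frames arriving over UDP and learns which source MAC sits behind which endpoint. The port number is a placeholder, and vGround's actual switch (unix_sock/udp_sock hubs plus IP-IP tunneling) is more involved; this is only a minimal sketch of the concept.

        import socket

        LISTEN = ("0.0.0.0", 15000)   # placeholder port for this virtual switch
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(LISTEN)

        mac_table = {}   # source MAC -> (ip, port) of the virtual NIC that sent it
        ports = set()    # every endpoint seen so far

        while True:
            frame, addr = sock.recvfrom(2048)
            ports.add(addr)
            dst, src = frame[0:6], frame[6:12]    # Ethernet header: destination MAC, source MAC
            mac_table[src] = addr                  # learn where the sender lives
            if dst in mac_table:                   # known unicast: forward to one endpoint
                sock.sendto(frame, mac_table[dst])
            else:                                  # unknown destination or broadcast: flood
                for p in ports:
                    if p != addr:
                        sock.sendto(frame, p)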
    • User Configurability  Node Customization  System Template  End Node (BIND, Apache, Sendmail, …)  Router (RIP, OSPF, BGP, …)  Firewall (iptables)  Sniffer/IDS (bro, snort)  Topology Customization  Language  Network, Node  Toolkits
      [Topology diagram: end hosts AS1_H1, AS1_H2, AS2_H1, AS2_H2, AS3_H1, AS3_H2 in three virtual networks joined by routers R1, R2, and R3; the slide groups the configuration below into Network, Networked Node, and System Template.]
      Project Planetlab-Worm
      switch AS1_lan1 { unix_sock sock/as1_lan1  host planetlab1.cs.purdue.edu }
      switch AS2_lan1 { unix_sock sock/as2_lan1  host planetlab8.lcs.mit.edu }
      switch AS3_lan1 { unix_sock sock/as3_lan1  host planetlab6.millennium.berkeley.edu }
      switch AS1_AS2 { udp_sock 1500  host planetlab6.millennium.berkeley.edu }
      switch AS2_AS3 { udp_sock 1500  host planetlab1.cs.purdue.edu }
      template slapper { image slapper.ext2  cow enabled  startup { /etc/rc.d/init.d/httpd start } }
      template router { image router.ext2  routing ospf  startup { /etc/rc.d/init.d/ospfd start } }
      node AS1_H1 { superclass slapper  network eth0 { switch AS1_lan1  address 128.10.1.1/24  gateway 128.10.1.250 } }
      node AS1_H2 { superclass slapper  network eth0 { switch AS1_lan1  address 128.10.1.2/24  gateway 128.10.1.250 } }
      node AS2_H1 { superclass slapper  network eth0 { switch AS2_lan1  address 128.11.1.5/24  gateway 128.11.1.250 } }
      node AS2_H2 { superclass slapper  network eth0 { switch AS2_lan1  address 128.11.1.6/24  gateway 128.11.1.250 } }
      node AS3_H1 { superclass slapper  network eth0 { switch AS3_lan1  address 128.12.1.5/24  gateway 128.12.1.250 } }
      node AS3_H2 { superclass slapper  network eth0 { switch AS3_lan1  address 128.12.1.6/24  gateway 128.12.1.250 } }
      router R1 { superclass router  network eth0 { switch AS1_lan1  address 128.10.1.250/24 }  network eth1 { switch AS1_AS2  address 128.8.1.1/24 } }
      router R2 { superclass router  network eth0 { switch AS2_lan1  address 128.11.1.250/24 }  network eth1 { switch AS1_AS2  address 128.8.1.2/24 }  network eth2 { switch AS2_AS3  address 128.9.1.2/24 } }
      router R3 { superclass router  network eth0 { switch AS3_lan1  address 128.12.1.250/24 }  network eth1 { switch AS2_AS3  address 128.9.1.1/24 } }
    • Features  Scalability  3000 virtual hosts in 10 physical nodes  Iterative Experiment Convenience  Virtual node generation time: 60 seconds  Boot-strap time: 90 seconds  Tear-down time: 10 seconds  Strict Confinement  High Fidelity
    • Evaluation  Current Focus  Worm behavior reproduction  Experiments  Probing, exploitation, payloads, and propagation  Further Potentials – on-going work  Routing worms / Stealthy worms  Infrastructure security (BGP)
    • Experiment Setup  Two real-world worms  Lion, Slapper, and their variants  A vGround topology  10 virtual networks  1,500 virtual nodes  10 physical machines in an ITaP cluster
    • Evaluation  Target Host Distribution  Detailed Exploitation Steps  Malicious Payloads  Propagation Pattern
    • Probing: Target Network Selection  [Charts: distribution of 10^5 probes by the first octet of the target IP address. Lion worms concentrate their probes around first octets 13 and 243; Slapper worms around 80 and 81. Reference: http://www.iana.org/assignments/ipv4-address-space]
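    Distributions like these can be produced by tabulating the first octet of every probe target observed inside the playground. The probe list below is placeholder data, not the actual measurements.

        from collections import Counter

        probed_ips = ["13.7.21.9", "243.1.2.3", "13.200.4.4"]   # placeholder data from a capture
        histogram = Counter(int(ip.split(".")[0]) for ip in probed_ips)

        # Print the number of probes per first octet, smallest octet first.
        for octet in sorted(histogram):
            print(f"{octet:3d}: {histogram[octet]}")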
    • Exploitation (Lion) 1: Probing 2: Exploitation! 3: Propagation!
    • Exploitation (Slapper) 1: Probing 2: Exploitation! 3: Propagation!
    • Malicious Payload (Lion)
    • Propagation Pattern and Strategy  Address-Sweeping  Randomly choose a Class B address (a.b.0.0)  Sequentially scan hosts a.b.0.0 – a.b.255.255  Island-Hopping  Local subnet preference
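    A minimal sketch of the two target-selection strategies just described; the local prefix and the local-preference probability are assumed values for illustration, not the worms' actual parameters.

        import itertools, random, ipaddress

        def address_sweeping():
            # Pick a random class-B prefix a.b.0.0/16, then scan it sequentially.
            a, b = random.randint(1, 223), random.randint(0, 255)
            for host in ipaddress.ip_network(f"{a}.{b}.0.0/16"):
                yield str(host)

        def island_hopping(local_prefix="192.168.1.0/24", p_local=0.75):
            # Prefer targets in the local subnet with probability p_local (assumed value).
            local = list(ipaddress.ip_network(local_prefix))
            while True:
                if random.random() < p_local:
                    yield str(random.choice(local))
                else:
                    yield f"{random.randint(1, 223)}.{random.randint(0, 255)}.{random.randint(0, 255)}.{random.randint(1, 254)}"

        # Example: take the first five targets chosen by each strategy.
        print(list(itertools.islice(address_sweeping(), 5)))
        print(list(itertools.islice(island_hopping(), 5)))

    Address-sweeping covers one /16 at a time, while island-hopping keeps most scans inside the infected host's own subnet, which is what the propagation snapshots on the following slides show.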
    • Propagation Pattern and Strategy  Address-Sweeping (Slapper worm)  [Snapshots of the 192.168.a.b playground address space with 2%, 5%, and 10% of hosts infected.]
    • Propagation Pattern and Strategy  Island-Hopping  [Snapshots with 2%, 5%, and 10% of hosts infected.]
    • Summary (Back-End)  vGround – the back-end  A Virtualization-Based Worm Playground  Properties:  High Fidelity  Strict Confinement  Good Scalability  3000 Virtual Hosts in 10 Physical Nodes  High Resource Efficiency  Flexible and Efficient Worm Experiment Control
    • Combining Collapsar and vGround  [Diagram: worm traffic from Domain A and Domain B is tunneled via GRE to the Collapsar worm-capture front-end, whose captures feed vGround worm-analysis playgrounds.]
    • Conclusions  An integrated virtualization-based platform for worm and malware investigation  Front-end : Collapsar  Back-end : vGround  Great potential for automatic  Characterization of unknown service vulnerabilities  Generation of 0-day worm signatures  Tracking of worm contaminations
    • On-going Work  More real-world evaluation  Stealthy worms  Polymorphic worms  Additional capabilities  Collapsar center federation  On-demand honeypot customization  Worm/malware contamination tracking  Automated signature generation
    • Thank you. Stop by our poster and demo this afternoon! For more information: Email: dxu@cs.purdue.edu URL: http://www.cs.purdue.edu/~dxu Google: “Purdue Collapsar Friends”