Enabling Worm and Malware Investigation Using Virtualization
By Dongyan Xu and Xuxian Jiang
  • Transcript

    • 1. Enabling Worm and Malware Investigation Using Virtualization (Demo and poster this afternoon) Dongyan Xu, Xuxian Jiang, CERIAS and Department of Computer Science, Purdue University
    • 2. The Team
      • Lab FRIENDS
        • Xuxian Jiang (Ph.D. student)
        • Paul Ruth (Ph.D. student)
        • Dongyan Xu (faculty)
      • CERIAS
        • Eugene H. Spafford
      • External Collaboration
        • Microsoft Research
    • 3.  
    • 4. Our Goal
      • In-depth understanding of increasingly sophisticated worm/malware behavior
    • 5. Outline
      • Motivation
      • An integrated approach
        • Front-end: Collapsar (Part I)
        • Back-end: vGround (Part II)
        • Bringing them together
      • On-going work
      Virtualization
    • 6. The Big Picture [Diagram: traffic from Domain A and Domain B is diverted via Proxy ARP and GRE tunnels for worm capture, then fed to worm analysis]
    • 7. Front-End: Collapsar, Enabling Worm/Malware Capture (Part I) * X. Jiang, D. Xu, “Collapsar: A VM-Based Architecture for Network Attack Detention Center”, 13th USENIX Security Symposium (Security’04), 2004.
    • 8. General Approach
      • Promise of honeypots
        • Providing insights into intruders’ motivations, tactics, and tools
          • Highly concentrated datasets w/ low noise
          • Low false-positive and false-negative rates
        • Discovering unknown vulnerabilities/exploitations
          • Example: CERT advisory CA-2002-01 (Solaris CDE subprocess control daemon – dtspcd)
    • 9. Current Honeypot Operation
      • Individual honeypots
        • Limited local view of attacks
      • Federation of distributed honeypots
        • Deploying honeypots in different networks
        • Exchanging logs and alerts
      • Problems
        • Difficulties in distributed management
        • Lack of honeypot expertise
        • Inconsistency in security and management policies
          • Example: log format, sharing policy, exchange frequency
    • 10. Our Approach: Collapsar
      • Based on the HoneyFarm idea of Lance Spitzner
      • Achieving two (seemingly) conflicting goals
        • Distributed honeypot presence
        • Centralized honeypot operation
      • Key ideas
        • Leveraging unused IP addresses in each network
        • Diverting corresponding traffic to a “detention” center (transparently)
        • Creating VM-based honeypots in the center
      Virtualization
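As a rough sketch of the traffic-diversion idea (not Collapsar's actual code), a redirector wraps each packet arriving at an unused IP address in a GRE header before tunneling it to the detention center. The helper name below is hypothetical; only the GRE framing follows the standard (RFC 2784):

```python
import struct

GRE_PROTO_IPV4 = 0x0800  # EtherType for IPv4 carried inside GRE

def gre_encapsulate(inner_ip_packet: bytes) -> bytes:
    """Prepend a minimal GRE header: 2 bytes of flags/version (all zero)
    and 2 bytes of protocol type. The outer delivery IP header toward the
    Collapsar center would be added by the host's tunnel endpoint."""
    header = struct.pack("!HH", 0, GRE_PROTO_IPV4)
    return header + inner_ip_packet
```

The redirector stays transparent to the attacker: the honeypot in the center answers with the diverted network's unused address, so the attack appears to take place on-site.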
    • 11. Collapsar Architecture [Diagram: redirectors in each production network tunnel attacker traffic through the front-end to VM-based honeypots in the Collapsar center, alongside a correlation engine and a management station]
    • 12. Comparison with Current Approaches
      • Overlay-based approach (e.g., NetBait, Domino overlay)
        • Honeypots deployed in different sites
        • Logs aggregated from distributed honeypots
        • Data mining performed on aggregated log information
        • Key difference: where the attacks take place (on-site vs. off-site)
    • 13. Comparison with Current Approaches
      • Sinkhole networking approach (e.g., iSink)
        • “Dark” space to monitor Internet abnormality and commotion (e.g., MSBlaster worms)
        • Limited interaction for better scalability
        • Key difference: contiguous large address blocks (vs. scattered addresses)
    • 14. Comparison with Current Approaches
      • Low-interaction approach (e.g., honeyd, iSink)
        • Highly scalable deployment
        • Low security risks
        • Key difference: emulated services (vs. real things)
          • Less effective at revealing unknown vulnerabilities
          • Less effective at capturing 0-day worms
    • 15. Collapsar Design
      • Functional components
        • Redirector
        • Collapsar Front-End
        • Virtual honeypots
      • Assurance modules
        • Logging module
        • Tarpitting module
        • Correlation module
    • 16. Collapsar Deployment
      • Deployed in a local environment for a two-month period in 2003
      • Traffic redirected from five networks
        • Three wired LANs
        • One wireless LAN
        • One DSL network
      • ~50 honeypots analyzed so far
        • Internet worms (MSBlaster, Enbiei, Nachi)
        • Interactive intrusions (Apache, Samba)
        • OS: Windows, Linux, Solaris, FreeBSD
    • 17. Incident: Apache Honeypot/VMware
      • Vulnerabilities
        • Vul 1: Apache (CERT® CA-2002-17)
        • Vul 2: Ptrace (CERT® VU-6288429)
      • Time-line
        • Deployed: 23:44:03, 11/24/03
        • Compromised: 09:33:55, 11/25/03
      • Attack monitoring
        • Detailed log
          • http://www.cs.purdue.edu/homes/jiangx/collapsar
    • 18. Incident: Windows XP Honeypot/VMware
      • Vulnerability
        • RPC DCOM Vul. (Microsoft Security Bulletin MS03-026)
      • Time-line
        • Deployed: 22:10:00, 11/26/03
        • MSBlaster: 00:36:47, 11/27/03
        • Enbiei: 01:48:57, 11/27/03
        • Nachi: 07:03:55, 11/27/03
    • 19. Summary (Front-End)
      • A novel front-end for worm/malware capture
        • Distributed presence and centralized operation of honeypots
        • Good potential in attack correlation and log mining
      • Unique features
        • Aggregation of scattered unused (dark) IP addresses
        • Off-site (relative to participating networks) attack occurrences and monitoring
        • Real services for unknown vulnerability revelation
    • 20. Back-End: vGround, Enabling Worm/Malware Analysis (Part II) * X. Jiang, D. Xu, H. J. Wang, E. H. Spafford, “Virtual Playgrounds for Worm Behavior Investigation”, 8th International Symposium on Recent Advances in Intrusion Detection (RAID’05), 2005.
    • 21. Basic Approach
      • A dedicated testbed
        • Internet-inna-box (IBM), Blended Threat Lab (Symantec)
        • DETER
      • Goal: understanding worm behavior
        • Static analysis / execution trace
          • Reverse engineering (IDA Pro, GDB, …)
        • Worm experiments at limited scale
        • Result:
          • Only relatively static, small-scale analysis is enabled
    • 22. The Reality – Worm Threats
      • Speed, Virulence, & Sophistication of Worms
        • Flash/Warhol Worms
        • Polymorphic/Metamorphic Appearances
        • Zombie networks (DDoS attacks, spam)
      • What we also need
        • A high-fidelity, large-scale, live but safe worm playground
    • 23. A Worm Playground Picture by Peter Szor, Symantec Corp.
    • 24. Requirements
      • Cost & Scalability
        • How about a topology with 2000+ nodes?
      • Confinement
        • In-house private use?
      • Management & user convenience
        • Diverse environment requirements
        • Recovery from damage after a worm experiment
          • Re-installation, re-configuration, and reboot …
    • 25. Our Approach
      • vGround
        • A virtualization-based approach
      • Virtual Entities:
        • Leveraging current virtual machine techniques
        • Designing new virtual networking techniques
      • User Configurability
        • Customizing every node (end-hosts/routers)
        • Enabling flexible experimental topologies
      Virtualization
    • 26. An Example Run: Internet Worms [Diagram: a virtual worm playground running on a shared physical infrastructure (e.g., PlanetLab)]
    • 27. Key Virtualization Techniques
      • Full-System Virtualization
      • Network Virtualization
    • 28. Full-System Virtualization
      • Emerging and New VM Techniques
        • VMware, Xen, Denali, UML
        • Support for real-world services
          • DNS, Sendmail, Apache w/ “native” vulnerabilities
      • Adopted technique: UML
        • Deployability
        • Convenience/Resource Efficiency
    • 29. User-Mode Linux (http://user-mode-linux.sf.net)
      • System-Call Virtualization
      • User-Level Implementation
      [Diagram: the guest OS kernel and UM user processes run as processes on top of the host OS kernel, intercepted via ptrace; the host kernel provides device drivers and MMU access to the hardware]
    • 30. New Network Virtualization
      • Link Layer Virtualization
      • User-Level Implementation
      [Diagram: Virtual Node 1 and Virtual Node 2 attached to user-level Virtual Switch 1 on the host OS, with switches interconnected via IP-IP tunneling]
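The user-level virtual switch can be pictured as an ordinary learning switch. This sketch is illustrative only: the class and method names are assumptions, and in vGround the frames would arrive over the UNIX/UDP socket transports shown in the configuration slide rather than as in-memory byte strings.

```python
class VirtualSwitch:
    """Minimal learning switch sketching user-level link-layer
    virtualization (names and structure are illustrative)."""

    def __init__(self, ports):
        self.ports = list(ports)   # port identifiers (e.g., attached nodes)
        self.mac_table = {}        # learned mapping: source MAC -> port

    def forward(self, in_port, frame):
        """Return the list of output ports for an Ethernet frame
        (first 6 bytes: destination MAC, next 6: source MAC)."""
        dst, src = frame[0:6], frame[6:12]
        self.mac_table[src] = in_port          # learn the sender's port
        if dst in self.mac_table and self.mac_table[dst] != in_port:
            return [self.mac_table[dst]]       # known unicast destination
        return [p for p in self.ports if p != in_port]  # flood otherwise
```

Keeping the switch at link layer (rather than emulating services at IP level) is what lets unmodified worm code interact with real protocol stacks inside the playground.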
    • 31. User Configurability
      • Node Customization
        • System Template
          • End node (BIND, Apache, Sendmail, …)
          • Router (RIP, OSPF, BGP, …)
          • Firewall (iptables)
          • Sniffer/IDS (bro, snort)
      • Topology Customization
        • Language
          • Network, Node
        • Toolkits
    • 32. Project Planetlab-Worm

      template slapper {
          image slapper.ext2
          cow enabled
          startup { /etc/rc.d/init.d/httpd start }
      }
      template router {
          image router.ext2
          routing ospf
          startup { /etc/rc.d/init.d/ospfd start }
      }
      router R1 {
          superclass router
          network eth0 { switch AS1_lan1 address 128.10.1.250/24 }
          network eth1 { switch AS1_AS2 address 128.8.1.1/24 }
      }
      switch AS1_lan1 { unix_sock sock/as1_lan1 host planetlab6.millennium.berkeley.edu }
      switch AS1_AS2 { udp_sock 1500 host planetlab6.millennium.berkeley.edu }
      node AS1_H1 { superclass slapper network eth0 { switch AS1_lan1 address 128.10.1.1/24 gateway 128.10.1.250 } }
      node AS1_H2 { superclass slapper network eth0 { switch AS1_lan1 address 128.10.1.2/24 gateway 128.10.1.250 } }
      switch AS2_lan1 { unix_sock sock/as2_lan1 host planetlab1.cs.purdue.edu }
      switch AS2_AS3 { udp_sock 1500 host planetlab1.cs.purdue.edu }
      node AS2_H1 { superclass slapper network eth0 { switch AS2_lan1 address 128.11.1.5/24 gateway 128.11.1.250 } }
      node AS2_H2 { superclass slapper network eth0 { switch AS2_lan1 address 128.11.1.6/24 gateway 128.11.1.250 } }
      switch AS3_lan1 { unix_sock sock/as3_lan1 host planetlab8.lcs.mit.edu }
      router R2 {
          superclass router
          network eth0 { switch AS2_lan1 address 128.11.1.250/24 }
          network eth1 { switch AS1_AS2 address 128.8.1.2/24 }
          network eth2 { switch AS2_AS3 address 128.9.1.2/24 }
      }
      node AS3_H1 { superclass slapper network eth0 { switch AS3_lan1 address 128.12.1.5/24 gateway 128.12.1.250 } }
      node AS3_H2 { superclass slapper network eth0 { switch AS3_lan1 address 128.12.1.6/24 gateway 128.12.1.250 } }
      router R3 {
          superclass router
          network eth0 { switch AS3_lan1 address 128.12.1.250/24 }
          network eth1 { switch AS2_AS3 address 128.9.1.1/24 }
      }

      [Diagram: resulting topology: R1 serves AS1_H1/AS1_H2, R2 serves AS2_H1/AS2_H2, R3 serves AS3_H1/AS3_H2, with backbone links R1-R2 and R2-R3. Legend: Networked Node, Network, System Template]
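One way to read such a configuration is as switch-membership lists: each `switch` stanza names a segment, and each node or router interface attaches to one. The sketch below (hypothetical helper names, covering only a subset of the slide's topology) derives which entities share a link-layer segment:

```python
# In-memory form of a subset of the slide's topology (illustrative).
switches = {"AS1_lan1": [], "AS1_AS2": [], "AS2_lan1": []}
attachments = {
    "R1":     ["AS1_lan1", "AS1_AS2"],  # router spans a LAN and the backbone
    "R2":     ["AS2_lan1", "AS1_AS2"],
    "AS1_H1": ["AS1_lan1"],
    "AS1_H2": ["AS1_lan1"],
    "AS2_H1": ["AS2_lan1"],
}
for name, links in attachments.items():
    for sw in links:
        switches[sw].append(name)

def same_segment(a, b):
    """Two entities share a link-layer segment iff some switch hosts both."""
    return any(a in members and b in members for members in switches.values())
```

Traffic between hosts in different ASes must traverse the virtual routers, which is what lets routing-dependent worm behavior be reproduced inside the playground.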
    • 33. Features
      • Scalability
        • 3,000 virtual hosts in 10 physical nodes
      • Iterative Experiment Convenience
        • Virtual node generation time: 60 seconds
        • Boot-strap time: 90 seconds
        • Tear-down time: 10 seconds
      • Strict Confinement
      • High Fidelity
    • 34. Evaluation
      • Current Focus
        • Worm behavior reproduction
      • Experiments
        • Probing, exploitation, payloads, and propagation
      • Further Potentials – on-going work
        • Routing worms / Stealthy worms
        • Infrastructure security (BGP)
    • 35. Experiment Setup
      • Two Real-World Worms
        • Lion , Slapper , and their variants
      • A vGround Topology
        • 10 virtual networks
        • 1,500 virtual nodes
        • 10 physical machines in an ITaP cluster
    • 36. Evaluation
      • Target Host Distribution
      • Detailed Exploitation Steps
      • Malicious Payloads
      • Propagation Pattern
    • 37. Probing: Target Network Selection [Figure: target-network distributions of Lion and Slapper worms, following the IANA IPv4 address-space allocation: http://www.iana.org/assignments/ipv4-address-space]
    • 38. Exploitation (Lion) [Screenshots: 1: probing, 2: exploitation, 3: propagation]
    • 39. Exploitation (Slapper) [Screenshots: 1: probing, 2: exploitation, 3: propagation]
    • 40. Malicious Payload (Lion)
    • 41. Propagation Pattern and Strategy
      • Address-Sweeping
        • Randomly choose a Class B address (a.b.0.0)
        • Sequentially scan hosts a.b.0.0 – a.b.255.255
      • Island-Hopping
        • Local subnet preference
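The two strategies can be sketched as target generators. The function names and the island-hopping bias value below are illustrative assumptions, not taken from the talk:

```python
import random

def address_sweep(seed=None):
    """Address-sweeping: pick one random class-B network (a.b.0.0),
    then yield its hosts sequentially, a.b.0.0 through a.b.255.255."""
    rng = random.Random(seed)
    a, b = rng.randrange(1, 224), rng.randrange(0, 256)  # skip multicast space
    for c in range(256):
        for d in range(256):
            yield f"{a}.{b}.{c}.{d}"

def island_hop(local_a, local_b, bias=0.75, rng=None):
    """Island-hopping: prefer the local /16 with probability `bias`
    (the 0.75 figure is a placeholder, not from the talk)."""
    rng = rng or random.Random()
    if rng.random() < bias:
        a, b = local_a, local_b            # stay on the local island
    else:
        a, b = rng.randrange(1, 224), rng.randrange(0, 256)
    return f"{a}.{b}.{rng.randrange(256)}.{rng.randrange(256)}"
```

Sweeping produces the uniform fill of one /16 seen in the Slapper plots, while hopping produces clusters around already-infected subnets.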
    • 42. Propagation Pattern and Strategy
      • Address-Sweeping (Slapper Worm)
      [Plots: infection snapshots of the 192.168.a.b space at 2%, 5%, and 10% of hosts infected]
    • 43. Propagation Pattern and Strategy
      • Island-Hopping
      [Plots: infection snapshots at 2%, 5%, and 10% of hosts infected]
    • 44. Summary (Back-End)
      • vGround – the back-end
        • A Virtualization-Based Worm Playground
        • Properties:
          • High Fidelity
          • Strict Confinement
          • Good Scalability
            • 3,000 Virtual Hosts in 10 Physical Nodes
          • High Resource Efficiency
          • Flexible and Efficient Worm Experiment Control
    • 45. Combining Collapsar and vGround [Diagram: worm capture from Domain A and Domain B via GRE redirection feeds into vGround worm analysis]
    • 46. Conclusions
      • An integrated virtualization-based platform for worm and malware investigation
        • Front-end: Collapsar
        • Back-end: vGround
      • Great potential for automatic
        • Characterization of unknown service vulnerabilities
        • Generation of 0-day worm signatures
        • Tracking of worm contaminations
    • 47. On-going Work
      • More real-world evaluation
        • Stealthy worms
        • Polymorphic worms
      • Additional capabilities
        • Collapsar center federation
        • On-demand honeypot customization
        • Worm/malware contamination tracking
        • Automated signature generation
    • 48. Thank you. Stop by our poster and demo this afternoon! For more information: Email: d [email_address] URL: http://www.cs.purdue.edu/~dxu Google: “Purdue Collapsar Friends”