A Distributed Application Execution System for an Infrastructure with Dynamically Configured Networks
NetCloud 2012 workshop

Presentation Transcript

  • 1. A"Distributed"Application"Execution" System"for"an"Infrastructure"with" Dynamically"Configured"Networks Ryousei)Takano,)Hidemoto)Nakada,)) Atsuko)Takefusa,)and)Tomohiro)Kudoh) ) Information*Technology*Research*Institute,** National*Institute*of*Advanced*Industrial*Science*and*Technology*(AIST),*Japan NetCloud)2012)Workshop,)Dec.)4)2012,)Taipei
  • 2. Background
    •  Intercloud (a.k.a. cloud of clouds)
      –  Virtual infrastructure over multiple region and domain clouds
      –  Large-scale data-intensive scientific computing platform
        •  e.g., high energy physics, bio science, and geo science
        •  large-scale data and special instruments are geographically distributed
    [Figure: containers forming a virtual infrastructure on top of the physical infrastructure of sites A and B]
  • 3. Challenges
    •  To seamlessly deploy existing applications that run on a conventional cluster computer system into a VI, it is important to quickly set up a tailored virtualized cluster environment and execute an application with minimal virtualization overhead.
    •  The fusion of computer and network virtualization technologies may help us realize such an application execution environment.
      –  Virtual machine and OS container technologies
      –  Software-defined networking: OpenFlow, OGF NSI, etc.
  • 4. Contributions
    •  Automatic construction of a distributed application execution environment
      –  One-stop service to execute and monitor user applications over isolated Intercloud resources
      –  Slice-aware contextualization
    •  Implementation as part of the GridARS middleware suite
      –  OS-level virtualization (containers) and dynamic network path provisioning
    •  Demonstration of feasibility
      –  Quick slice construction and low virtualization overheads
  • 5. Agenda
    •  GridARS and the Application Execution System
    •  Slice-aware contextualization
    •  Evaluation
    •  Conclusion and Future Work
  • 6. GridARS"and""the"application"execution"system
  • 7. GridARS:"Grid"Advanced" Resource"management"System•  GridARS)is)a)reference)implementation)of)GNSLWSI) –  Defined)by)the)GLlambda)project) User •  Collaboration)between)) GRC: Global KDDI)R&D)Labs.,)NTT,)NICT,)) Domain 0 Resource and)AIST,)started)in)2004) DMS/A$ Coordinator GRC$ Aggregator RM: Resource –  Web)services)I/F)to)reserve,)) Manger modify)and)release)) various)resources) GRC$ DMS/A GRC$ DMS/A –  PollingLbased)) CRM Domain 2 CRM 2Lphase)commit) NRM DMC/C NRM DMC/C protocol) Allocated Collector•  GridARS)supports) DMC/C CRM DMC/C GRS$ DMC/A OGF)NSI)version)2. SRM SRM CRM DMC/C Domain 1 Domain 3 7
  • 8. Application"Execution"System •  The)goal)is)to)provide)users)with)a)slice)that)looks)like)) a)single)isolated)cluster)computer)system.) –  A)slice)consists)of)containers)and)dynamically)configured)network)paths.) –  All)containers)belong)to)the)same)IP)network)segment.) –  An)application)is)automatically)executed)at)the)reservation)time.) –  The)user)can)monitor)the)resource)utilization)of)their)slice.) container containerVirtualInfrastructure container container site A site BPhysicalInfrastructure 8
  • 9. Requirements
    •  A slice is constructed at the start of the reservation time, an application is automatically executed on it, and it is released at the end of the reservation time.
    •  A conventional parallel application (e.g., an MPI program) requires remote login and process execution via SSH (see the sketch below).
      –  SSH public keys should be generated and exchanged among containers in advance.
      –  A host list file, which includes the IP addresses of all participating containers, should be prepared in advance.
        •  IP addresses are dynamically assigned.
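A minimal sketch of this SSH preparation follows. The file layout and the `peers` format are illustrative assumptions, not the paper's actual mechanism; it only shows the two requirements above, generating a key pair and installing the keys and addresses gathered from all participating containers.

```python
#!/usr/bin/env python3
"""Sketch of per-container SSH setup for a slice (illustrative)."""
import subprocess
from pathlib import Path

def generate_keypair(ssh_dir: Path) -> str:
    """Generate a passwordless RSA key pair; return the public key."""
    ssh_dir.mkdir(parents=True, exist_ok=True)
    key = ssh_dir / "id_rsa"
    subprocess.run(["ssh-keygen", "-q", "-t", "rsa", "-N", "",
                    "-f", str(key)], check=True)
    return (ssh_dir / "id_rsa.pub").read_text().strip()

def install_slice_context(ssh_dir: Path, peers: dict) -> None:
    """Install the exchanged context: `peers` maps each container's
    dynamically assigned IP address to its public key."""
    # Allow password-less logins from every container in the slice.
    (ssh_dir / "authorized_keys").write_text(
        "".join(k + "\n" for k in peers.values()))
    # Pre-trust peers (a real setup would record their SSH *host* keys).
    (ssh_dir / "known_hosts").write_text(
        "".join(f"{ip} {key}\n" for ip, key in peers.items()))
    # Host list consumed by the MPI launcher (e.g., mpirun --hostfile).
    Path("hosts").write_text("".join(ip + "\n" for ip in peers))
```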
  • 10. SliceFaware"contextualization•  The)key)for)automated)slice)construction)is)) contextualization,)which)dynamically)adjusts)each)container) setting,)including)the)IP)address,)the)hostname,)and)SSH) keys,)at)deployment)time.) –  VM)image)contextualization:)Nimbus)Context)Broker,)OpenNebula)•  However,)the)existing)techniques)assumed)to)be)used)within) a)single)site.)•  We)propose)sliceLaware)contextualization,)which) contextualizes)a)slice)based)on)information,)exchanged) among)several)sites)in)a)hierarchical)manner.) 10
  • 11. Design"and"Implementation
  • 12. Design"Overview•  Use)“Pilot)Job”)to)contextualize)and)monitor)a)slice)•  Use)OSLlevel)virtualization)(Linux)container))for) isolation) Applica:on$Execu:on$System SliceBaware$ File$system$weaving Monitoring contextualiza:on Resource$Management$ Resource$Alloca:on$ Distributed$ Service Planning$Service Monitoring$Service GridARS
  • 13. Node"Manager•  Local)job)scheduler)invokes)“pilot)jobs”)called)Node)Manager) (NM))instead)of)user)jobs.)•  The)NMs)set)up)a)virtual)cluster)and)execute)the)user)jobs. 13
  • 14. SliceFaware"Contextualization GRC: Global Resource Coordinator CRM: Compute Resource Manager GRC NM: Node Manager Reserve 3node, 192.168.1.0/24 Reserve 3node, 192.168.3.0/24 Available address range: Available address range: 192.168.3.0/24, 192.168.0.0/24, SSH keys 192.168.1.0/24 192.168.1.0/24 Addresses Hosts authorized_keys CRM CRM known_hosts SSH keysAddress Hosts authorized_keys known_hosts NM NM NM NM NM NM Container Container Container Container Container Container
  • 15. File"System"Weaving•  Setting)up)a)container)file) container file system system)may)be)time) / consuming.) etc usr opt home alice E .ssh•  The)most)of)files)could)be) shared)with)the)host)OS. D•  File)system)weaving)helps)) .ssh to)quickly)set)up)a)container) usr opt alice C file)system)and)isolate)from) the)host)OS)file)system.) writable layer B –  aufs2)stackable)file)system) / etc usr home A –  bind)mount)option) opt host OS file system (read only) 15
  • 16. Slice"Monitoring"Service"(1/2)•  AEMD)gathers)monitoring)information)in)each)site)via)NMs.)•  GridARS)DMS)aggregates)information)per)slice. 16
  • 17. Slice"Monitoring"Service"(2/2) Administrator’s view User’s view Reservation StatusAnother user status Ganglia Network status Disclose resource information only to the users who made reservation on the resource Computer status 17
  • 18. Evaluation
  • 19. Experimental"Setting•  Slice)start)up)time:) –  container)start)up)time –  contextualization)information)exchange)time) –  barrier)synchronization) PC spec. Site A Site C CPU Intel Core 2 Q9550/2.83GHz Memory 4 GB Ethernet Intel PRO/1000 G G G OS Rocks 5.2 (kernel 2.6.30) G: GtrcNET-1 - latency injection: 0 – 300 ms GRC - per-VLAN traffic monitoring Site B Site D 19
  • 20. Slice"Start"Up"Time 5 NM start - info gathering info gathering - info distribution Site A Site C info distribution - execution 2 4 1 2 1 4 Site A Site A B Site A C Site ABCD G G GElapsed Time [s] 3 Barrier Synch. 2 1 1 GRC Site B Site D 2 Contextualization Information Exchange 1 Container Setup 0 0m 10 20 ms 30 ms 0m 10 20 ms 30 ms 0m 10 20 ms 30 ms 0m 10 20 ms 30 ms 0 0 0m 0 0 0m 0 0 0m 0 0 0m s s s s s s s s •  The)contextualization)process)depends)on)the)latencies)injected.) •  The)number)of)sites)does)not)affect)the)elapsed)times)very)much.) 20
  • 21. Container"Setup"Time Elapsed"Time"(seconds) File)system)construction 0.02) Key)pair)generation 0.44 Guest)OS)start)up 0.59 Total 1.05•  The)container)setup)is)quite)fast.)•  40)%)of)the)setup)time)is)consumed)by)generation)of)SSH)keys.) This)could)be)eliminated)by)inLadvance)generation)of)them.) 21
  • 22. Conclusion"and"Future"Work
  • 23. Conclusion
    •  We have proposed a distributed application execution system and developed an implementation of it as part of the GridARS middleware suite.
    •  The key to automated slice construction is slice-aware contextualization.
    •  We confirmed that a slice could be established in one second, leveraging OS-level virtualization and file system weaving.
    •  We also confirmed that the overhead of propagating contextualization information is small enough.
  • 24. Future"Work •  Hardware)as)a)Service)(HaaS))over)Intercloud) –  An)IaaS)provider)can)extend)their)hardware)resources)on)demand.) –  HaaS)divides)resources)into)a)slice)and)provides)L2)network) connectivity)between)the)slice)and)the)IaaS’s)data)center.) Data center A GridARS CloudStackData center C slice for DC A Data center B OpenStack slice for DC B 24
  • 25. Thanks for your attention!
    http://www.g-lambda.net/gridars/
    This work was partly supported by the National Institute of Information and Communications Technology (NICT), Japan.