BIN REPACKING SCHEDULING 
IN 
VIRTUALIZED DATACENTERS 
Fabien HERMENIER, FLUX, U. Utah & OASIS, INRIA/CNRS, U. Nice 
Sophie DEMASSEY, TASC, INRIA/CNRS, Mines Nantes 
Xavier LORCA, TASC, INRIA/CNRS, Mines Nantes 
1
CONTEXT 
datacenters and virtualization 
2
DATACENTERS 
interconnected servers 
hosting 
distributed applications 
3
RESOURCES 
server 
capacities 
application 
requirements 
4
VIRTUALIZATION 
applications embedded in 
Virtual Machines 
colocated on any server 
manipulable 
1 datacenter 
4 servers, 3 apps
5
ACTION 
• stop/suspend 
• launch/resume 
• migrate live 
has a known duration 
consumes resources 
impacts VM performance 
6
DYNAMIC SYSTEM 
Applications:
• submission
• removal (complete, crash)
• requirement change (load spikes)
Servers:
• addition
• removal (power off, crash)
• availability change
7
DYNAMIC 
RECONFIGURATION 
8
RECONFIGURATION PLAN 
migrate S1 → S2, migrate S4 → S1, launch on S4
time 
t=0 
9
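The plan above can be sketched as a small DAG of actions: an action may only start once the actions that free its resources have completed. This is a minimal illustration, not Entropy's code; the durations and the dependency of the launch on the migration leaving S4 are assumptions for the example.

```python
# Sketch of a reconfiguration plan as a dependency DAG of actions.
# Durations are made up; 'launch on S4' waits until S4 is freed.
from collections import namedtuple

Action = namedtuple("Action", "name duration deps")  # deps: prerequisite action names

def schedule(actions):
    """Earliest-start schedule: each action starts when all its deps end."""
    by_name = {a.name: a for a in actions}
    end = {}

    def finish(name):
        if name not in end:
            a = by_name[name]
            start = max((finish(d) for d in a.deps), default=0)
            end[name] = start + a.duration
        return end[name]

    for a in actions:
        finish(a.name)
    return {n: (e - by_name[n].duration, e) for n, e in end.items()}

plan = [
    Action("migrate S1->S2", 4, []),
    Action("migrate S4->S1", 3, []),
    Action("launch on S4",  2, ["migrate S4->S1"]),
]
print(schedule(plan))  # the launch starts at t=3, right after S4 is freed
```

Both migrations start at t=0; only the launch is delayed by a dependency, which matches the "as early as possible" objective of the next slide.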
RECONFIGURATION 
PROBLEM 
given an initial configuration and new requirements: 
• find a new viable configuration 
• associate actions to VMs 
• schedule the actions to resolve violations and dependencies 
s.t. every action completes as early as possible
10
+ USER REQUIREMENTS 
Clients:
• fault-tolerance w. replication
• performance w. isolation
• resource matchmaking
Administrators:
• maintenance
• security w. isolation
• shared resource control
to satisfy at any time during the reconfiguration 
11
ENTROPY 
an autonomous VM manager 
12
PLAN MODULE 
• reconfiguration problem: vector packing + cumulative scheduling + side constraints (NP-hard)
Constraint Programming:
• trade-off: fast + good
• composable online
• easily adaptable offline
13
ABSTRACTION 
• 1 VM = 0 or 1 action
• min Σ end(A) over actions A
• full resource requirements at transition
14
COMPOUND CP MODEL
Packing + Scheduling
decomposition degrades the objective and may result
in a feasible packing without any reconfiguration plan
shared constraints: resource + side
separated branching heuristic: 1. packing, 2. scheduling
15
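The slide's claim that a feasible packing may admit no reconfiguration plan can be shown on a toy instance (the data is made up, not from the talk): two full servers whose target packing swaps their VMs. Every migration order overloads a server at some instant, so a packing-then-scheduling decomposition gets stuck where a joint model would have chosen another, reachable packing.

```python
# Toy instance: a final packing that is feasible as a *state* but
# unreachable by any sequence of one-at-a-time migrations.
from itertools import permutations

capacity = {"S1": 1, "S2": 1}
initial  = {"vm1": "S1", "vm2": "S2"}
target   = {"vm1": "S2", "vm2": "S1"}
size     = {"vm1": 1, "vm2": 1}

def feasible_order(order):
    """Can the VMs migrate one at a time, in this order, without overload?"""
    placement = dict(initial)
    for vm in order:
        dest = target[vm]
        used = sum(size[v] for v, s in placement.items() if s == dest and v != vm)
        if used + size[vm] > capacity[dest]:
            return False
        placement[vm] = dest
    return True

# the swap is feasible as a packing, yet no migration order realizes it
print(any(feasible_order(p) for p in permutations(initial)))  # False
```

With one spare unit of capacity on either server the swap becomes schedulable, which is exactly the coupling between packing and scheduling that the compound model captures.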
PRODUCER/CONSUMER
1 action = 2 no-wait tasks
home-made cumulative constraints in every resource dimension
16
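A sketch of the producer/consumer reading of one migration (my simplified interpretation of the slide, not Entropy's implementation): a "consumer" task claims the VM's demand on the destination from the action's start, and a "producer" task releases it on the source only at the action's end. Feasibility then reduces to a cumulative profile check per server and per resource dimension.

```python
# Cumulative profile check for one server, one resource dimension.
def profile_ok(capacity, events):
    """events: list of (time, delta) resource changes; True if capacity holds."""
    load, ok = 0, True
    for _, delta in sorted(events, key=lambda e: e[0]):
        load += delta
        ok = ok and load <= capacity
    return ok

# A VM of demand 2 migrates from src (cap. 3) to dst (cap. 4) over [1, 4):
# dst is charged at t=1 (consumer), src is released only at t=4 (producer),
# so the VM is conservatively counted on both servers during the action.
dst_events = [(0, 1), (1, 2)]   # dst already hosts 1 unit, then +2 at start
src_events = [(0, 3), (4, -2)]  # src stays full until the migration ends
print(profile_ok(4, dst_events), profile_ok(3, src_events))  # True True
```

Double-counting the VM during the transition is what the "full resource requirements at transition" abstraction buys: the plan is safe at every instant, not only in the start and end states.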
SIDE CONSTRAINTS 
ban → unary
fence → unary
spread → alldifferent + precedences
lonely → disjoint
capacity → gcc
among → element
gather → allequals
mostly spread → nvalue
17
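To make the mapping concrete, here is one row of the table spelled out as a check (an illustrative sketch with made-up VM and server names): `spread` over a set of replicas reduces to alldifferent on their server assignments; the precedence part, which also keeps replicas apart *during* the reconfiguration, is omitted here.

```python
# 'spread' as alldifferent on server assignments (final state only).
def spread_satisfied(placement, vms):
    servers = [placement[v] for v in vms]
    return len(servers) == len(set(servers))  # alldifferent

placement = {"web1": "S1", "web2": "S2", "db": "S1"}
print(spread_satisfied(placement, ["web1", "web2"]))  # True: distinct servers
print(spread_satisfied(placement, ["web1", "db"]))    # False: both on S1
```

In the CP model the same reduction lets each user constraint reuse an off-the-shelf global constraint and its propagation, rather than ad-hoc code.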
REPAIR 
candidate VMs selected, the others fixed heuristically
resource + placement constraints come with a new service:
return a feasible sub-configuration
18
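A minimal sketch of the repair idea (assumed selection rule, illustrative data): VMs whose current placement violates nothing keep their assignment, and only the remaining candidates are handed to the solver as a sub-problem. Here the candidate set is simply "every VM on an overloaded server".

```python
# Split VMs into heuristically-fixed ones and solver candidates.
def split_candidates(placement, demand, capacity):
    """Return (fixed, candidates): VMs on overloaded servers are candidates."""
    load = {}
    for vm, srv in placement.items():
        load[srv] = load.get(srv, 0) + demand[vm]
    overloaded = {s for s, l in load.items() if l > capacity[s]}
    fixed      = {vm for vm, s in placement.items() if s not in overloaded}
    candidates = {vm for vm, s in placement.items() if s in overloaded}
    return fixed, candidates

placement = {"vm1": "S1", "vm2": "S1", "vm3": "S2"}
demand    = {"vm1": 2, "vm2": 2, "vm3": 1}
capacity  = {"S1": 3, "S2": 3}
fixed, cand = split_candidates(placement, demand, capacity)
print(fixed, cand)  # vm3 stays fixed; vm1 and vm2 go to the solver
```

Shrinking the variable set this way is what makes the approach scale to the thousands of servers of the evaluation, at the price of possibly missing repacking moves involving fixed VMs.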
EVALUATION 
19
PROTOCOL 
500 - 2,500 homogeneous servers 
2,000 - 10,000 heterogeneous VMs 
extra-large/high-memory EC2 instances 
3-tier HA web applications with replicas
50% of the VMs grow by 30% uCPU
4% VM launch, 2% VM stop 
1% server stop 
20
LOAD 
solving time 1,000 servers 
70% = standard average consolidation ratio 
21
SCALABILITY 
solving time 5 VMs : 1 server 
2,000 servers = standard capacity of a container 
22
CONCLUSION
a field to investigate for packing+scheduling
≠ usages to optimize: CPU, energy, revenue
side constraints important for users
Entropy: a CP-based resource manager: fast,
scalable, generic, composable, flexible, easy to
maintain
http://entropy.gforge.inria.fr/
23
PERSPECTIVES
user constraint catalog
routing and application topology constraints
soft/explained user constraints
new results available:
http://sites.google.com/site/hermenierfabien/
24
