Copyright 2016. All Rights Reserved. Not for disclosure without written permission.
  
	
  
	
  
	
Website in a Box
The Next Generation Hosting Platform
Slava Vladyshevsky
Alex Kostsin
  
Table of Contents

PLATFORM OVERVIEW
  INFRASTRUCTURE OVERVIEW
  NETWORK SETUP OVERVIEW
PLATFORM USER ROLES
PLATFORM COMPONENTS
  PLATFORM SERVICES
    Stats Collector
    Stats Database
    Image Registry
    Image Builder
    Deployment Service
    Container Provisioning Service
    Reporting Service
    Persistent Volumes
    Volume Sync-Share Service
    Persistent Database Storage
      Database Driver
      Percona XtraDB Cluster Limitations
    Secure Storage
    Identity Management Service
    Load-Balancer Service
    SCM Service
    Workflow Engine
    SonarQube Service
    Sonar Database
    Sonar Scanner
  PLATFORM INTERFACES
    API Endpoints
    Command Line Interfaces
      Platform CLI
      Docker CLI
    Web Portals
      Stats Visualization Portal
      GitLab Portal
      Sonar Portal
      Platform Orchestration Portal
  OTHER COMPONENTS
    Docker Engine
    Docker Containers
PLATFORM CAPACITY MODEL
PLATFORM SECURITY
  USER NAMESPACE REMAP
  DOCKER BENCH FOR SECURITY
  WEB APPLICATION SECURITY
PLATFORM CHANGE MANAGEMENT
DRUPAL HOSTING
  DRUPAL SITE COMPONENTS
  DRUPAL CONTAINER COMPONENTS
  DRUPAL CONTAINER PERFORMANCE
    Sizing Considerations
    Apache vs. NGINX
    Performance Test
    Process Size Conundrum
  DRUPAL PROJECT CREATION
  DRUPAL WEBSITE DEPLOYMENT
    Web Project Deployment
    Web Container Deployment
    Website Deployment Workflow
  EDITORIAL WORKFLOW
  CONTENT PUBLISHING
ACTIVE DIRECTORY STRUCTURE
GITLAB REPOSITORY STRUCTURE
MANAGEMENT TASKS AND WORKFLOWS
PLATFORM STARTUP
BASE OS IMAGE
  THE OS IMAGE INSIDE CONTAINER
  ONE VS. MULTIPLE APPLICATIONS
  PROCESS SUPERVISOR
  QUICK SUMMARY
STORAGE SCALABILITY IN DOCKER
  LOOP LVM
  DIRECT-LVM
  BTRFS
  OVERLAYFS
  ZFS
CONCLUSION
  
	
  
Figure Register

Figure 1 - Infrastructure Diagram
Figure 2 - Foundation Infrastructure Diagram
Figure 3 - High-level Network Diagram
Figure 4 - Platform Components
Figure 5 - cAdvisor Web UI: CPU usage
Figure 6 - InfluxDB Web Console
Figure 7 - Image Builder UI
Figure 8 - Sonar Project Dashboard
Figure 9 - Sonar Issue Report
Figure 10 - Stats Visualization and Analysis Portal
Figure 11 - GitLab Portal
Figure 12 - Sonar Portal
Figure 13 - Platform Orchestration Portal
Figure 14 - Platform Capacity Model
Figure 15 - Drupal CMS: Configuration Portal
Figure 16 - Drupal Site Components
Figure 17 - Web Container Components
Figure 18 - Stress Test Results
Figure 19 - Drupal Project Creation Process
Figure 20 - Drupal Project Deployment Process
Figure 21 - Website Deployment Workflow
Figure 22 - Editorial Workflow
Figure 23 - Content Publishing Process
Figure 24 - Example: MS Active Directory Structure
  
  
Platform Overview
	
  
This document provides an in-depth overview of the Proof of Concept project, hereinafter POC, for container-based LAMP web hosting. This POC project has been performed to verify technical feasibility and architectural assumptions, as well as to demonstrate our expertise in this domain to the prospective customer. It is assumed that this project, or parts of it, will be adopted and productized.

No clear requirements have been provided. Therefore, the overall design and architectural decisions have been mostly governed by the following assumptions:
  
• The platform must provide fully managed website placeholders that will be populated with customer-provided code and assets;
• The platform must provide a LAMP (Linux, Apache, MySQL, PHP) run-time environment;
• The platform architecture must be similar to the existing Windows hosting platform;
• The platform must guarantee high availability for production workloads;
• The platform must prevent the noisy-neighbor effect, i.e. websites sharing the same infrastructure must not impact each other's performance;
• The platform must support different website sizes and resource allocation profiles;
• The platform must guarantee resources and be able to report on their usage.
  
	
  
From the early project stages it has been assumed that the hosting platform will utilize the Linux container technology popularized by Docker and often referred to as Docker Containers. Docker is an obvious fit for such a hosting platform, since Docker Containers:
  
• Allow for much higher workload density than VMs;
• Provide sufficient workload isolation and containment;
• Enable granular resource management and reporting;
• Are considered the future of PaaS.
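The density, isolation and resource-management points above map directly onto Docker's cgroup-based run-time flags. A minimal sketch; the image name, container name and limit values are illustrative placeholders, not POC settings:

```shell
# Illustrative container launch with hard resource limits.
# "site-example" and httpd:2.4 are placeholders.
#
#   --cpus        caps CPU time (here 1.5 cores)
#   --memory      sets a hard memory limit
#   --memory-swap equal to --memory disallows additional swap
#   --pids-limit  bounds the number of processes inside the container
docker run -d --name site-example \
  --cpus 1.5 --memory 512m --memory-swap 512m --pids-limit 256 \
  httpd:2.4

# One-shot report of per-container CPU, memory, network and block I/O usage:
docker stats --no-stream
```

Since every limit is enforced via host cgroups, the same figures can later be collected and reported per container.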
  
	
  
It soon became apparent that much more than Docker alone is required to meet the platform requirements, and that some additional services and components are essential for providing reliable hosting services.
  
	
  
Over time, the set of Docker containers and the bunch of scripts managing them evolved into a real platform with well-defined services, components and interfaces between them. Operational procedures and workflows have been automated and exposed via different interfaces to enable future integration and instrumentation.
  
	
  
The platform architecture, design approach and processes rely heavily on the Twelve-Factor App principles. For more details see https://12factor.net/.
  
	
  
Infrastructure Overview
  
Originally the platform was built on top of a Kubernetes cluster for simplified container scheduling and orchestration. Due to the lack of expertise in the Support Organization and little acceptance within the account team, this approach was discontinued, and the Platform Infrastructure setup instead followed the existing Windows hosting platform architecture as closely as possible.
  
  
	
  
Figure 1 - Infrastructure Diagram
  
	
  
The POC farm infrastructure mimics the existing web-farm setup for Windows hosting:
  
• All inbound network traffic passes through the CDN/WAF;
• The network is split into two security zones: DMZ and TRUST;
• The front-end services and service components are hosted in the DMZ subnet;
• The back-end and secured components are located in the TRUST subnet;
• Coming from the CDN/WAF, network traffic passes through firewalls and load-balancers;
• Production HTTP/S VIPs pass traffic to an HA pair of web instances;
• Other HTTP/S VIPs, e.g. Staging, pass traffic to a single end-point;
• The TRUST subnet contains the DB servers: a cluster for production workloads and a single instance for staging use;
• All platform services and components run in corresponding containers, with the exception of the DB instances, which run directly on the host OS.
  
	
  
There is an additional shared farm, the so-called Utility or Foundation farm, one per DC, where various utility services shared across multiple farms and websites are hosted. For a production deployment it may be beneficial from a security standpoint to place some foundation services into the TRUST subnet.
  
	
  
  
	
  
Figure 2 - Foundation Infrastructure Diagram
  	
  
It is envisioned that the existing foundation farm will need to be extended with at least two additional systems to provide the required foundation services. This assumes that the rest of the existing foundation services, such as Active Directory, DNS, SMTP, NTP, …, will be shared with the new platform.
  
	
  
Network Setup Overview
  
The diagram below shows a logical view of the hosting network structure. It's worth mentioning that besides the TRUST and DMZ VLANs, Docker adds one more layer of indirection by creating at least one network bridge per Docker host to pass traffic between containers and the external world.
  
	
  
A number of solutions have emerged over the past couple of years, bringing SDN and network virtualization capabilities to the container eco-system. During this POC project we won't be exploring these network abstraction solutions and will use the standard network stack provided by Docker.
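Because the POC stays on Docker's standard network stack, the extra bridge layer can be examined with stock commands; a quick sketch, where the container name and port mapping are illustrative:

```shell
# List the networks present on this Docker host; "bridge" is the
# default per-host bridge mentioned above.
docker network ls

# Show the subnet assigned to the default bridge.
docker network inspect bridge --format '{{json .IPAM.Config}}'

# Containers attach to this bridge by default; outbound traffic is
# NATed through the host, while inbound traffic only reaches the
# container via explicitly published ports (-p host:container).
docker run -d --name web-example -p 8080:80 httpd:2.4
```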
  
	
  
  
	
  
Figure 3 - High-level Network Diagram
  
Platform User Roles
  
The user role definition is tightly bound to the definition scope. The following scopes are defined:
• Platform Scope – platform-wide scope, including all hosted organizations and applications;
• Organization Scope – includes organization-owned objects and applications;
• Application Scope – includes objects and components pertinent to a given application.
  
	
  
Specific user roles and their mapping will be dictated by the particular use-case and the processes accepted within the hosting organization.

For the sake of simplicity we'll assume the following major roles defined in the scope of the proposed hosting platform:
• Authorized User – a user that has passed authentication and has been assigned the corresponding permissions:
  o Administrator – a management user performing administration tasks;
  o Developer – an individual writing and testing the code;
  o Content Manager – an editor, an individual authoring and managing the website content;
• Anonymous User – a website visitor coming from the public Internet.
  
	
  
The Identity Management (IdM) Service performs the mapping between a user identity and its associated roles. This is implemented using LDAP grouping mechanisms.
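As an illustration of such a mapping, a hedged LDIF sketch is shown below. The DNs, group names and schema are assumptions for illustration only; the actual directory layout depends on the specific IdM setup:

```
# Hypothetical group entry mapping user uid=jdoe to the Developer role
# within an "acme" organization scope (all names are illustrative).
dn: cn=acme-developers,ou=groups,dc=example,dc=com
objectClass: groupOfNames
cn: acme-developers
description: Developer role, organization scope "acme"
member: uid=jdoe,ou=people,dc=example,dc=com
```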
  	
  
  
Things to keep in mind:
  
• A user's role depends on the scope: e.g. one user may be a Developer in one organization and act as a Content Manager in another. While this is possible, such cross-organization role assignments are generally discouraged;
• One may differentiate Platform User Roles from Application User Roles for the Applications deployed on the platform. However, both user types are authenticated and authorized using the same IdM Platform Service, so in practice the distinction makes no real difference. For example, Drupal user roles are a subset of platform user roles;
• Both Applications and the Platform currently use the IdM Service; however, this is not a mandatory requirement. Additional or alternative authentication mechanisms may be used as well. For example, many Platform services keep a local user database and local administrative accounts in order to act autonomously in case of an IdM Service failure or other issues;
• A website visitor is not required to pass authentication and is granted the Anonymous User role by default.
  
Platform Components
Below is a high-level diagram of the Platform components. Connectors depict the major¹ communication channels and interactions between services and generally may be read as a "using" statement. The dotted-line connectors show alternative paths.
  
	
  
	
  
	
  
Figure 4 - Platform Components
  
	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  	
  
¹ Major refers to the fact that some dependencies are not shown to avoid diagram clutter. E.g. pretty much all platform components depend on Persistent Volumes, and this is not depicted here.
  
  
Different components are marked with different colors to differentiate their types:
  
• Red components are administrative or management portals;
• Yellow components are Platform Services, generally speaking – containers;
• Blue components are development portals;
• Grey components are general-purpose platform building blocks;
• Green components are hosted website instances the user interacts with.
  
	
  
The following platform Actors are defined:
  
• Admin – platform administrator;
• Dev – website developer;
• Website User – both content manager and public Internet user.
  
	
  
Platform Services
Below is a short overview of the Platform Services. For every Service it provides a description of its role and dependencies, as well as configuration and usage examples.
  
	
  
The service startup instructions in this chapter are provided for demonstration purposes only. Normally services are expected to boot in an automated manner, for example using Docker Compose scripts. By using Compose we can ensure repeatable and consistent configuration, as well as reliable service startup and recovery. See the Platform Startup chapter for additional details.
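As an illustration of such automated startup, a minimal Compose sketch for the stats services might look as follows. The service names, image tags, ports and volume paths below are assumptions for demonstration, not the actual POC configuration:

```
# Hypothetical docker-compose.yml sketch; adjust images, ports and volumes.
version: "2"
services:
  influxdb:
    image: influxdb:0.10
    restart: always
    ports:
      - "8083:8083"
      - "8086:8086"
    volumes:
      - /var/data/influxdb:/influxdb
  cadvisor:
    image: google/cadvisor:v0.24.0
    restart: always
    depends_on:
      - influxdb
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
    command: >-
      -storage_driver=influxdb -storage_driver_db=cadvisor
      -storage_driver_host=influxdb:8086
```

With such a file, `docker-compose up -d` brings both services up in dependency order and restarts them on failure.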
  
Stats Collector
The Platform Stats Collector is a stateless service implemented as a container running on every Docker host, collecting the resource usage stats exposed by the Docker Engine using the Google cAdvisor application: https://github.com/google/cadvisor.
  
	
  
A quote from the project page: "The cAdvisor (Container Advisor) provides container users an understanding of the resource usage and performance characteristics of their running containers. It is a running daemon that collects, aggregates, processes, and exports information about running containers. Specifically, for each container it keeps resource isolation parameters, historical resource usage, and histograms of complete historical resource usage and network statistics. This data may be exported either by container or machine-wide. The cAdvisor has native support for Docker containers and should support just about any other container type out of the box."
  
	
  
The current setup assumes that the Stats Collector is using the Stats DB service for storing metrics collected from the Docker Engine. Therefore the Stats Collector depends on the Stats DB service and the Docker Engine APIs and must be deployed and booted accordingly.
  
	
  
Alternatively, it's possible to use https://github.com/kubernetes/heapster for stats aggregation and resource monitoring in more complex deployments, or to query the Docker API directly if more control or flexibility is required.
  
	
  
Although cAdvisor instances may be accessed directly and provide a Web UI for metric visualization, the more practical approach is to export the collected stats to an external database that may be used for arbitrary data aggregation, reporting and analysis tasks. The cAdvisor does provide multiple storage drivers out of the box. The current implementation is using the InfluxDB time-series database for storing collected measurements.
  
	
  
Below is an example of a chart produced by cAdvisor at runtime. It has quite limited practical use, if any, and is provided for reference purposes only.
  
	
  
  
	
  
Figure 5 - cAdvisor Web UI: CPU usage
  
Below is an example command for running the cAdvisor container:
  
	
  
$ docker run --name=cadvisor --hostname=`hostname` --detach=true --restart=always \
  --cpu-shares 100 --memory 500m --memory-swap 1G --userns=host --publish=8080:8080 \
  --volume=/:/rootfs:ro --volume=/var/run:/var/run:rw --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  google/cadvisor:v0.24.0 \
    -storage_driver=influxdb -storage_driver_db=cadvisor -storage_driver_host=${INFLUXDB_HOST}:8086 \
    -storage_driver_user=${INFLUXDB_RW_USER} -storage_driver_password="${INFLUXDB_RW_PASS}"
  
	
  
The cAdvisor is still an evolving project and, unfortunately, has its own shortcomings; for example, it only accepts configuration values via command-line options. Neither configuration files nor ENV variables are currently supported. One of the issues directly following from this: the DB credentials are passed as command-line parameters in clear text and can be seen in the process list.
  
	
  
There are several things to keep in mind:
  
• Unless the default database scheme and credentials are used, they must be provided as storage driver parameters as well. The database scheme must be created prior to storing collected metrics;
  
  
• The cAdvisor does not store collected metrics for more than 120 sec by default. Therefore, if the database connection is interrupted, the resource metrics are lost. Depending on your specific environment setup and requirements it may be a good idea to review and adjust the default buffering and flushing settings;
  
• A more or less obvious observation: the more containers run on the host, the more resources cAdvisor will consume and the more traffic will flow between the cAdvisor instance and the storage backend. Consequently:
o It's a good idea to limit cAdvisor resource usage to avoid impacting production workloads. On the other hand, pulling the belt too tight may have adverse effects on the metrics collection itself. The constraints provided in the example above are for demonstration purposes only and must be adjusted for the specific setup and environment;
o For busy hosts with high container density it's recommended to adjust cAdvisor buffering, caching and flushing parameters for the best performance. For example: cAdvisor collects metrics over a 1-minute time frame and flushes them in a single transaction. In certain scenarios increasing this time frame may improve performance without impacting monitoring granularity;
  
• The cAdvisor requires elevated permissions (--userns=host), since it accesses some objects in the Docker host namespace;
  
• The cAdvisor project does not enforce security by default, which leaves us with three possible options for running this service. All these options have been explored during the POC project and offer different balances between security and complexity:
  
o Insecure: using default credentials for the storage driver. No additional options required;
o Kind-of-secure: providing storage driver credentials as command-line parameters, so they will show up in the process list;
o Secure: creating a custom build and image for cAdvisor that will handle and pass credentials securely.
  
• It's unlikely that the cAdvisor Web UI itself is going to be used for production deployment monitoring, therefore it's recommended to avoid publishing the cAdvisor Web UI ports;
  
• The cAdvisor, being a part of the Kubernetes project, is quickly evolving and new versions appear quite often. Although a common practice is to use the "latest" image version, it's recommended to standardize on and run a specific cAdvisor version across all deployments for consistent and predictable behavior and results.
  
	
  
Stats Database
All metrics gathered by the Stats Collector service are passed to and persisted by the Stats Database service. This service is implemented as a Docker container located on the utility host in the foundation farm and running the InfluxDB time-series database: https://github.com/influxdata/influxdb.
  
	
  
Depending on specific requirements, different storage back-ends may be used in place of InfluxDB. The choice has been made in favor of InfluxDB for the following reasons:
  
• Simple and self-contained database without external dependencies;
• Purpose-made database for time-series metric storage and querying;
• Supported by and integrated into many modern deployment stacks and platforms;
• Provides several storage engines geared towards real-time data processing;
• REST API driven for management, data ingestion and processing;
• Supports the SQL-like InfluxQL language for querying the database;
• Provides flexible controls and data retention policies;
• Scalable and supports clustering;
  
  
	
  
The Stats Database service indirectly depends on the Image Registry service, since its image is pulled from the registry by the Docker Engine during the service container startup. Other than that, assuming a standalone (non-clustered) deployment, the Stats Database service is self-sufficient and is used by other services and components such as:
  
• The Stats Visualization portal – queries the Stats Database for visualized resource metrics;
• The Reporting service – queries the Stats Database for compiling various usage reports;
• The Stats Collector – periodically stores measurements in the Stats Database.
  
	
  
The InfluxDB also provides a web console for basic management and querying operations.
  
	
  
	
  
	
  
Figure 6 - InfluxDB Web Console
  
Here is an example for running the InfluxDB container:
  
	
  
$ docker run --name=influxdb --detach=true --restart=always \
  --cpu-shares 512 --memory 1G --memory-swap 1G \
  --volume=${VOL_DATA}/influxdb:/influxdb --publish 8083:8083 --publish 8086:8086 \
  --expose 8090 --expose 8099 \
  --env ADMIN_USER="root" --env PRE_CREATE_DB=cadvisor \
  ${REGISTRY}/influxdb
  
	
  
In some cases there may be a need for separate user accounts with varying access levels. A user with write permissions may be used for storing stats in the DB, while a read-only user may be used for reporting and monitoring activities. Let's create users with read and write permissions:
  
	
  
$ cat <<"EOT" | docker exec -i influxdb /usr/bin/influx -username=root -password=root -path -
CREATE DATABASE cadvisor
CREATE USER writer WITH PASSWORD '<writer password>'
CREATE USER reader WITH PASSWORD '<reader password>'
GRANT WRITE ON cadvisor TO writer
GRANT READ ON cadvisor TO reader
EOT
  
	
  
  
	
  
Now, we will list the available databases using the InfluxDB client:
  
	
  
$ echo "show databases" | docker exec -i influxdb /usr/bin/influx -username=root -password=root -path -

Visit https://enterprise.influxdata.com to register for updates, InfluxDB server management, and monitoring.
Connected to http://localhost:8086 version 0.10.3
InfluxDB shell 0.10.3
> name: databases
---------------
name
cadvisor
_internal
	
  
Things to keep in mind:
  
• For the sake of simplicity InfluxDB is deployed as a standalone instance and therefore is not resilient to service failures, resulting in data loss until the service is recovered. It's recommended to deploy an InfluxDB cluster for production deployments;
• The database size on disk will depend on the retention policies and the amount of metrics collected over time. The policies and retention rules will need to be adjusted for production use on a case-by-case basis;
• The service (container) memory consumption will depend on the configured storage engine, the amount of metrics collected and the configuration settings. Those settings will need to be adjusted for production use, keeping in mind resource constraints;
• InfluxDB provides multiple interfaces for monitoring and data querying, including a database client application, client libraries for the most popular languages, as well as a REST API endpoint;
• This project is using a custom-built InfluxDB image to automate and simplify basic setup and management tasks. It may behave differently compared to the default image provided by the vendor.
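A retention rule can be adjusted with a single InfluxQL statement, issued through the same client as the user-creation example above. The policy name and duration below are illustrative assumptions only:

```
CREATE RETENTION POLICY "two_weeks" ON cadvisor DURATION 14d REPLICATION 1 DEFAULT
```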
  
	
  
Image Registry
All container images used by the POC project are stored in the local image repository provided by the Image Registry service. This service is implemented as a Docker container located on the utility host in the foundation farm and running the Docker Distribution application: https://github.com/docker/distribution. Whenever a new container image is built, it is stored in the Image Registry. Whenever a new container is created, its image is pulled from this repository.
  
	
  
More details and examples can be found in the Docker Distribution project documentation at https://github.com/docker/distribution/blob/master/docs/deploying.md.
  
	
  
Being one of the base services, the Image Registry is self-contained and does not depend on other Platform services. At the same time the Image Registry is not used directly by Platform services. Usually it is used indirectly, when the Docker Engine cannot find a required image in the local image storage on a particular host. In this case the image is queried, validated and pulled from the Image Registry.
  
	
  
Here is an example for setting up the Image Registry service. First of all we'll set up certificates. The SSL keys need to be generated only once, but have to be deployed on every Docker host:
  
  
	
  
# executed only once: generating self-signed registry certificate, CN=registry.poc

$ mkdir -p ~/certs
$ openssl req -newkey rsa:4096 -nodes -sha256 -x509 -days 365 \
  -subj "/C=DE/ST=HE/L=Frankfurt/O=VZ/OU=MH/CN=registry.poc/emailAddress=admin@vzpoc.com" \
  -keyout ~/certs/registry.key -out ~/certs/registry.crt
  
	
  
# executed on each Docker host:

# - deploying certificates to the Docker certificate store
$ mkdir -p /etc/docker/certs.d/registry.poc:5000
$ cp certs/registry.crt /etc/docker/certs.d/registry.poc:5000/ca.crt

# - restarting docker to activate certificates
$ systemctl restart docker.service
  
	
  
Next, we'll set up host volumes and configuration for the Image Registry service container:
  
	
  
$ mkdir -p /var/data/registry/{certs,config,data}
$ [ -d ~/certs ] && cp ~/certs/* /var/data/registry/certs
$ cat <<EOT > /var/data/registry/config/config.yml
version: 0.1
log:
  level: info
  formatter: text
  fields:
    service: registry
    environment: production
storage:
  cache:
    layerinfo: inmemory
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: :5000
  tls:
    certificate: /certs/registry.crt
    key: /certs/registry.key
  debug:
    addr: :5001
EOT
  
	
  
Finally, we'll start the registry service and validate that it can be accessed over HTTPS:
  
	
  
# starting Docker container with registry service
$ docker run --name registry --hostname registry.poc --detach=true --restart=always \
  --env REGISTRY_HTTP_TLS_CERTIFICATE=/certs/registry.crt \
  --env REGISTRY_HTTP_TLS_KEY=/certs/registry.key \
  --volume /var/data/registry/certs:/certs:ro \
  --volume /var/data/registry/data:/var/lib/registry:rw \
  --volume /var/data/registry/config:/etc/docker/registry:ro \
  --publish 5000:5000 \
  registry:2.5

# verifying registry is working, registry.poc name should resolve to IP owned by the registry service
$ docker tag busybox registry.poc:5000/poc/busybox:v1
$ docker push registry.poc:5000/poc/busybox:v1
$ curl --cacert ~/certs/registry.crt -X GET https://registry.poc:5000/v2/poc/busybox/tags/list
{"name":"poc/busybox","tags":["v1"]}
  
	
  
  
Things to keep in mind:
  
• Most container images are stored in the locally hosted Image Registry; however, some images are pulled from outside repositories to avoid circular dependencies during service startup:
o The Docker Distribution container image is provided by Docker and pulled from the external registry https://hub.docker.com/r/distribution/registry/
o The Google cAdvisor container image is provided by Google and pulled from the external registry https://hub.docker.com/r/google/cadvisor/
o The GitLab container image is provided by the GitLab community and pulled from the external registry https://hub.docker.com/r/gitlab/gitlab-ce/
  	
  
• For the sake of simplicity the Image Registry service is deployed as a standalone instance and therefore is not resilient to service failures. An HA deployment is recommended for production use;
• The current implementation is not using any authentication or authorization mechanisms, thus allowing any user to access container images. Although this service is only used inside the internal secure perimeter, it's recommended to implement RBAC policies, or at least a strong authentication mechanism, for production deployments;
• Due to security considerations all traffic is encrypted and service access is only possible using the HTTPS protocol as a transport. Depending on security requirements there may be a need to create and sign service SSL keys using a trusted CA. The current implementation is using a self-signed CA and keys. For this to work, those self-signed keys must be added to the Docker certificate store on every Docker host that communicates with the "Image Registry" service;
• Obviously, there is a trade-off with known pros and cons when implementing a local registry compared to an externally hosted container registry. For this project it's been decided to use a local registry; however, nothing prevents using an external Image Registry service, assuming that service integration has been performed and that service availability, security and access issues have been addressed.
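As a sketch of the recommended authentication hardening, Docker Distribution supports htpasswd-based basic auth via its configuration file. The realm name and file path below are assumptions; the htpasswd file would be generated with `htpasswd -Bbn <user> <password>` (bcrypt entries are required) and mounted into the container:

```
# Hypothetical addition to the registry config.yml shown earlier.
auth:
  htpasswd:
    realm: registry-realm
    path: /auth/htpasswd
```

With this in place, `docker login registry.poc:5000` is required before pushing or pulling images.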
  
	
  
Image Builder
This service is implemented as part of Platform management. Currently, new image builds have to be triggered manually after Dockerfiles have been modified; however, nothing speaks against automating this step and triggering an image build upon a certain event, for example a container image code or configuration change.
  
	
  
	
  
Figure 7 - Image Builder UI
  
	
  
  
There are no services depending on the Image Builder. The Image Builder itself directly depends on the SCM service and indirectly on the Image Registry, where freshly built images are pushed to. Obviously, some secrets such as keys and credentials must be used during the container image build stage. There is a nice write-up providing a good summary of the available solutions and options: see http://elasticcompute.io/2016/01/22/build-time-secrets-with-docker-containers/.
  
	
  
Currently, container images can be built in two modes:
  
• Build: the container image is built from scratch and properly tagged;
• Release: after performing the image build, the image undergoes tests and, if successful, is pushed to the image repository, thus becoming available for deployment.
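The two modes can be sketched as a small shell dispatcher. This is a minimal sketch, not the actual Image Builder implementation: the echoed commands stand in for real docker calls, and the test hook and image name are assumptions:

```shell
#!/bin/sh
# Sketch of the Build vs. Release modes; echo is a stand-in for real docker calls.
build_image() { echo "docker build -t $1 ."; }
test_image()  { echo "testing $1"; }          # hypothetical test hook
push_image()  { echo "docker push $1"; }

run_mode() {
  mode=$1; image=$2
  build_image "$image"
  if [ "$mode" = "release" ]; then
    # Release mode: run tests first, push only on success
    test_image "$image" && push_image "$image"
  fi
}

run_mode build   poc/busybox:v1
run_mode release poc/busybox:v1
```

The key design point is that only the release path publishes the image, so an untested image can never reach the repository.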
  
	
  
Things to keep in mind:
  
• Although the container build workflow does include a step for executing tests, currently there are no actual tests provided. Special care should be taken, and container images must be tested manually prior to deploying and using them;
  
• Sometimes, when memory becomes scarce (multiple SonarQube analyses running), the image rebuild process may fail with error messages indicating a lack of memory. This points at some memory leaks in Docker and hopefully will be fixed in upcoming releases. This should not occur, though, in environments with sufficient memory allocation;
  
• The	
   Docker	
   files	
   for	
   images	
   have	
   been	
   built	
   considering	
   image	
   caching,	
   therefore	
   often	
  
image	
   rebuilds	
   must	
   not	
   create	
   significant	
   load.	
   At	
   the	
   same	
   time	
   image	
   caching	
   may	
  
become	
  a	
  source	
  of	
  hard-­‐to-­‐track	
  issues,	
  therefore	
  administrators	
  may	
  need	
  to	
  pay	
  a	
  special	
  
care	
   to	
   the	
   local	
   image	
   store	
   and	
   cached	
   images	
   on	
   the	
   systems	
   where	
   builds	
   are	
  
performed.	
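The caching point above can be illustrated with a hypothetical sketch (not one of the platform's actual Dockerfiles): ordering instructions from least to most frequently changed lets routine rebuilds reuse the cached lower layers, while a stale cached layer — e.g. an old package index — is exactly the kind of hard-to-track issue mentioned.

```dockerfile
FROM alpine:3.4
# Rarely changes -> nearly always served from the build cache
RUN apk add --no-cache nginx php5-fpm
# Changes occasionally
COPY conf/ /etc/nginx/
# Changes on almost every build -> kept last, so only this layer is rebuilt
COPY src/ /var/www/
```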
  
	
  
Deployment Service
By using the Deployment service we can ensure that all projects follow naming, security, configuration and deployment standards and conventions. They can be easily identified, managed and recreated in a standard and repeatable way. See the Drupal Website Deployment chapter for additional details and examples.
  
	
  
All project deployment tasks are handled by this service, namely:
• Checking requested parameters against naming standards;
• Choosing the target location based on user inputs or defaults;
• Validating that the target location is ready for deployment;
• Cloning the requested project version from the code repository;
• Cloning required add-on projects from the code repository;
• Deploying code to the target location;
• Running configuration instructions and setup procedures.
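The first step — checking parameters against naming standards — can be sketched in shell. The pattern below is an assumption for illustration (lower-case alphanumerics and dashes, as in `d7-demo`); the actual standard is defined by the platform configuration.

```shell
#!/bin/sh
# Hypothetical sketch of the naming-standard check; the real rules live in
# the Platform CLI configuration.
valid_site_name() {
  # 3-32 chars, lower-case alphanumerics and dashes, no leading/trailing dash
  echo "$1" | grep -Eq '^[a-z0-9][a-z0-9-]{1,30}[a-z0-9]$'
}

valid_site_name "d7-demo" && echo "d7-demo: accepted"
valid_site_name "D7 Demo" || echo "D7 Demo: rejected"
```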
  
	
  
The Deployment service is completely decoupled from containers or other infrastructure semantics. From a high-level perspective the relationship between the related components can be described as follows:
• The Container Provisioning Service deploys well-defined, pre-configured containers;
• Containers encapsulate applications and are immutable, or read-only. All volatile and mutable objects such as content, log files, temporary files, etc. are persisted on volumes or via other persistence mechanisms such as Database Storage;
• The Deployment Service populates host volumes with application objects such as code, configuration, content, etc. Those host volumes are mapped to container volumes and thus become available to the execution runtime inside the corresponding containers.
	
  
The Deployment service is used by Deployment workflows via corresponding Platform CLI calls. The service itself has several dependencies:
• Secure Storage – used to query various credentials and sensitive information;
• SCM Service – used to clone requested projects and their dependencies;
• Persistent Volumes – used as deployment targets to store project-related objects;
• Persistent Database Storage – may be indirectly used by project setup scripts, for example for creating the database schema for the project or populating required database objects.
  
	
  
Things to keep in mind:
• The Deployment service does not make orchestration decisions and therefore must be provided the target location specification by the upstream caller. This is done on purpose, to keep orchestration logic and mechanisms separate from deployment semantics;
• The Deployment service is a part of the Platform CLI component and as such uses platform configuration, settings and naming standards;
• Since provisioning tasks may involve multiple hosts or be invoked remotely, it is required that password-less (key-based) SSH access is configured between the master and slave nodes;
• The Deployment service does just that – deploys projects to target locations according to well-defined rules and naming standards. It neither cares, nor makes assumptions, about the applications, custom code or content used by applications deployed inside containers, as long as projects follow the defined project structure.
  
	
  
Container Provisioning Service
All container provisioning and de-provisioning operations are handled by this service, which translates requested actions into corresponding Docker commands and API calls. It is still possible to create arbitrary containers using the Docker client or APIs; however, for the sake of consistency this approach is discouraged.
  
	
  
This can be best explained by the following example. Let's provision a new web container using the Docker CLI:
  
	
  
$ docker run --name d7-demo --hostname wbs1 --detach=true --restart=on-failure:5 \
  --security-opt no-new-privileges --cpu-shares 16 --memory 64m --memory-swap 1G \
  --publish 10.169.64.232:8080:80 --publish 10.169.64.232:8443:443 \
  --volume /var/web/stg/root/d7-demo:/var/www --volume /var/web/stg/data/d7-demo:/var/data \
  --volume /var/web/stg/logs/d7-demo:/var/log --volume /var/web/stg/temp/d7-demo:/var/tmp \
  --volume /var/web/stg/cert/d7-demo:/etc/ssl/web \
  --tmpfs /run:rw,nosuid,exec,nodev,mode=755 \
  --tmpfs /tmp:rw,nosuid,noexec,nodev,mode=755 \
  --env-file /opt/deploy/container.env \
  --label container.env=stg --label container.size=small \
  --label container.site=d7-demo --label container.type=web \
  registry.poc:5000/poc/nginx-php-fpm
  
	
  
You may have noticed there are a number of additional options and parameters required by the platform itself, its services and naming standards. Although the Container Provisioning Service makes exactly this same call to the Docker engine, there is a lot more happening, hidden under the hood.
  
	
  
Now, let's provision the same web container using the Container Provisioning Service. In addition to creating the Docker container, it performs the following essential steps:
• Checking the container name against naming standards;
• Checking that no container with such a name is already present;
• Validating the IP address:
  o Checking whether the provided IP belongs to the address pool and whether this IP is not already taken by another container;
  o If no IP address is provided, automatically selecting the next free IP from the pool;
• Checking whether the container host volumes are present and creating them otherwise;
• Adding container labels specifying the web site, its environment, size and container type;
• Adding resource constraints and security-related options;
• Using the given image, or the default one if no container image is specified, for creating the new container.
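The "next free IP" selection step can be sketched as follows. The pool boundaries and the way taken addresses are passed in are assumptions for illustration; the real service derives both from platform configuration and the running containers.

```shell
#!/bin/sh
# Hypothetical sketch: pick the first address in the pool that is not in
# the supplied list of taken addresses.
next_free_ip() {
  prefix="$1"; shift
  used=" $* "
  for host in $(seq 224 254); do
    ip="${prefix}.${host}"
    case "$used" in
      *" $ip "*) continue ;;   # already taken by another container
      *) echo "$ip"; return 0 ;;
    esac
  done
  return 1                     # pool exhausted
}

next_free_ip 10.169.64 10.169.64.224 10.169.64.225
# -> 10.169.64.226
```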
  
	
  
	
  
$ /opt/deploy/web container create --farm poc --env stg --site d7-demo --image nginx-php-fpm
  
web container create: using next free IP: 10.169.64.232
web container create: checking 10.169.64.232 is setup
    inet 10.169.64.232/26 brd 10.169.64.255 scope global secondary enp0s17:
web container create: folder /var/web/stg/root/d7-demo not found, creating
web container create: folder /var/web/stg/data/d7-demo not found, creating
web container create: folder /var/web/stg/logs/d7-demo not found, creating
web container create: folder /var/web/stg/cert/d7-demo not found, creating
web container create: folder /var/web/stg/temp/d7-demo not found, creating
web container create: exporting container ENV variables from /opt/deploy/container.env
web container create: creating container d7-demo
web container create: |-- image-tag: registry.poc:5000/poc/nginx-php-fpm
web container create: |-- resources: small (--cpu-shares 16 --memory 64m --memory-swap 1G)
web container create: |-- published: 10.169.64.232:8080:80
web container create: |-- published: 10.169.64.232:8443:443
web container create: |-- volume: /var/web/stg/cert/d7-demo:/etc/apache2/ssl
web container create: |-- volume: /var/web/stg/logs/d7-demo:/var/log
web container create: |-- volume: /var/web/stg/root/d7-demo:/var/www
web container create: |-- volume: /var/web/stg/data/d7-demo:/var/data
web container create: |-- volume: /var/web/stg/temp/d7-demo:/var/tmp
web container create: |-- volume: tmpfs:/run
web container create: |-- volume: tmpfs:/tmp
web container create: |-- label: container.env=stg
web container create: |-- label: container.size=small
web container create: |-- label: container.site=d7-demo
web container create: `-- label: container.type=web
web container create: started site container
cb68618b84b4d3276a77ebd4a0635c5387a8319f1ffaac3759c74820fa32b258
  
	
  
By using the Container Provisioning service we can ensure that all containers follow naming, security, configuration and resource allocation standards. They can be easily identified, managed and recreated in a standard and repeatable way.
  
	
  
$ /opt/deploy/web container list --farm poc --env stg --format table
web container list:
CONTAINER ID   NAMES     STATUS          ENV   SIZE    PORTS
cb68618b84b4   d7-demo   Up 16 minutes   stg   small   10.1.1.2:8080->80/tcp, 10.1.1.2:8443->443/tcp
c953adf92e09   d7        Up 3 weeks      stg   small   10.1.1.2:8080->80/tcp, 10.1.1.2:8443->443/tcp
  
	
  
The Container Provisioning service is used by Deployment workflows via corresponding Platform CLI calls. The service itself has no specific dependencies and uses the Docker CLI for performing container management operations.
  
	
  
Things to keep in mind:
• The Container Provisioning service does not make orchestration decisions and therefore must be provided the target location specification by the upstream caller. This is done on purpose, to keep orchestration logic and mechanisms separate from deployment semantics;
• The Container Provisioning service is a part of the Platform CLI component and as such uses platform configuration, settings and naming standards;
• Since provisioning tasks may involve multiple hosts or be invoked remotely, it is required that password-less (key-based) SSH access is configured between the master and slave nodes;
• The Container Provisioning service does just that – provisions properly configured containers. It neither considers, nor makes assumptions about, the applications, custom code or content used by applications deployed inside containers;
• The Container Provisioning service is the only component that has to be adjusted if a different mechanism or API has to be used for provisioning containers, for example CoreOS rkt or LXD;
• In case of using orchestration engines such as Kubernetes, the Container Provisioning service can implement a wrapper for the provided provisioning functionality.
  
	
  
Reporting Service
The Reporting service is implemented as a Docker container that runs queries against the Stats Database and compiles reports on aggregated resource usage according to specified conditions and parameters. There are no services depending on the Reporting service. The Reporting service itself depends on the Stats Database for fetching report data.
  
	
  
Persistent Volumes
One of the platform design paradigms is to keep containers immutable, or read-only; all volatile and modified data should be stored outside of the container, on so-called container volumes. Since we want this data to be available between container runs, these volumes must be persistent. There is another benefit to keeping application data and content outside of the container – it allows achieving the best application performance. Since there is no COW (copy-on-write) indirection layer in between, all I/O operations are handled efficiently by the Linux kernel.
  
	
  
Things to keep in mind:
• The current platform design makes no assumptions about the underlying technology and orchestration layer. For the sake of simplicity, container host volumes are used as the persistent volume implementation;
• There are other options to be explored for mapping container volumes to corresponding SAN volumes, NAS volumes or iSCSI targets. This would allow containers to take their volumes along with them if restarted on a different Docker host, thus making containers "mobile" and allowing container migrations across available hosts. These options were not explored during this project; however, using them may be essential when running containers on platforms like Kubernetes.
  
	
  
Volume Sync-Share Service
Horizontal scaling and high availability requirements demand that an application span multiple application instances, or containers for this matter. Although session state is kept outside of the containers, static content still has to be shared between multiple application instances.

Generally speaking, there are two possible ways of resolving this issue: share the file-system or synchronize file-systems. Each solution has its own strong and weak sides. Both options have been explored and considered viable. The choice is really dictated by specific infrastructure, performance and support requirements. The following comparison shall help in selecting the most appropriate option for a specific deployment scenario:
  
Implementation approach
• Shared Content: Centralized storage holding a single file-system, with many nodes performing access.
• Synchronized Content: Share-nothing architecture. Many nodes with multi-master replication between file-systems.

Storage space requirements
• Shared Content: Volume-Size
• Synchronized Content: Volume-Size x N (# of nodes)

Storage throughput
• Shared Content: All nodes share the server network link and are capped by its throughput. One node may saturate the link and degrade performance for others. Limited by single-volume IOPs; quickly degrades with the number of nodes.
• Synchronized Content: Throughput and IOPs scale linearly with the number of nodes.

File-system locking
• Shared Content: File-system locks are maintained to allow concurrent access by multiple nodes to a single object. Can lead to stalled I/O operations and, as a result, to unresponsive applications.
• Synchronized Content: No file-system locks required.

Change propagation
• Shared Content: Instant
• Synchronized Content: Little latency

Implementation complexity
• Shared Content: Low
• Synchronized Content: Moderate

Support complexity
• Shared Content: Moderate
• Synchronized Content: Low

Known limitations
• Shared Content: SendFile kernel support and mmap must be disabled on shared volumes. Orphaned file-system locks may need to be identified and cleaned manually. A storage volume restart may have unpredicted effects on clients; they may need to re-mount the storage. File-system caching may produce inconsistent results across clients.
• Synchronized Content: Large file-system changes may take some time to propagate to all clients. In rare cases a file may be modified in several locations, producing a conflict that has to be resolved either automatically or manually.

Specific application
• Shared Content: NFS 4.x server and clients
• Synchronized Content: SyncThing + inotify
	
  
Given the overview above, one may still wonder which route to choose and whether there is a simple rule of thumb for selecting the most appropriate option. Here we go:

• Implement NFS:
  o If you have a storage array capable of serving files using the NFS 4.x protocol;
  o If your applications don't require high storage throughput and concurrency;
  o If you can tolerate the noisy-neighbors effect at times;
  o If the storage volume size (and/or its cost) is significant;
  o If you already have expertise in house;
  o If other parts of your solution use NFS;
• Implement SyncThing:
  o If you don't have a fault-tolerant NFS server and can't afford one for whatever reason;
  o If your applications require the highest storage throughput and need to scale as they grow;
  o If you absolutely can't tolerate the noisy-neighbors effect or NFS server downtime;
  o If you can tolerate the little latency required to propagate changes;
  o If the storage volume size is small enough to have a redundant copy on every client.
  
	
  
Below is an example of how to start the volume sync service:

$ docker run --name datasync --hostname `hostname` --detach=true --restart=always \
  --cpu-shares 100 --memory 100m \
  --publish 22000:22000 --publish 21027:21027/udp --publish 8384:8384 \
  --volume /var/deploy/prd/data/:/var/sync --volume /var/data/datasync:/etc/syncthing \
  --tmpfs /run:rw,nosuid,nodev,mode=755 --tmpfs /tmp:rw,nosuid,nodev,mode=755 \
  registry.poc:5000/poc/syncthing
  
	
  
This service has to be started on all Docker host nodes that have data volumes which must be kept in sync. After starting, these services have to be introduced to each other, or perform a handshake, and mutual changes have to be allowed between them. It is a one-time configuration.

All file-system changes will be tracked via an inotify subscription, and updated files will be exchanged between nodes using an efficient block exchange protocol similar to BitTorrent. Thus, the change propagation speed grows with the number of nodes participating in the exchange.
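The one-time introduction amounts to each node listing its peers' device IDs in its configuration, whether via the Web UI, the API, or the configuration file directly. A minimal, hypothetical fragment of `/etc/syncthing/config.xml` is sketched below; the device ID, name and address are placeholders, and the exact element layout should be checked against the SyncThing version in use.

```xml
<device id="DEVICE-ID-OF-PEER-NODE" name="web-node-2">
    <address>tcp://10.169.64.233:22000</address>
</device>
<folder id="prd-data" path="/var/sync">
    <!-- listing the peer device here allows mutual changes for this folder -->
    <device id="DEVICE-ID-OF-PEER-NODE"></device>
</folder>
```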
  
	
  
Things to keep in mind:
• SyncThing is a relatively young, actively developing application. There may be side effects that have not been studied yet;
• The SyncThing configuration can be generated from a template and saved to the configuration file. It can also be adjusted using the APIs and the Web UI. Access to the API and Web UI must be appropriately secured;
• The SyncThing protocol ensures quick delta updates and high performance. During the tests, a sync speed of ~100+ MB/s has been measured;
• Although SyncThing can perform dynamic service and network discovery, a static configuration has been used for this project.
  
	
  
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform
Website in a Box or the Next Generation Hosting Platform

  • 1. Website in a Box
The Next Generation Hosting Platform

Slava Vladyshevsky
Alex Kostsin

Copyright 2016. All Rights Reserved. Not for disclosure without written permission.
  • 2. Table of Contents

PLATFORM OVERVIEW
- INFRASTRUCTURE OVERVIEW
- NETWORK SETUP OVERVIEW

PLATFORM USER ROLES

PLATFORM COMPONENTS
- PLATFORM SERVICES
  - Stats Collector
  - Stats Database
  - Image Registry
  - Image Builder
  - Deployment Service
  - Container Provisioning Service
  - Reporting Service
  - Persistent Volumes
  - Volume Sync-Share Service
  - Persistent Database Storage
    - Database Driver
    - Percona XtraDB Cluster Limitations
  - Secure Storage
  - Identity Management Service
  - Load-Balancer Service
  - SCM Service
  - Workflow Engine
  - SonarQube Service
  - Sonar Database
  - Sonar Scanner
- PLATFORM INTERFACES
  - API Endpoints
  - Command Line Interfaces
    - Platform CLI
    - Docker CLI
  - Web Portals
    - Stats Visualization Portal
    - GitLab Portal
    - Sonar Portal
    - Platform Orchestration Portal
- OTHER COMPONENTS
  - Docker Engine
  - Docker Containers

PLATFORM CAPACITY MODEL

PLATFORM SECURITY
- USER NAMESPACE REMAP
- DOCKER BENCH FOR SECURITY
- WEB APPLICATION SECURITY

PLATFORM CHANGE MANAGEMENT

DRUPAL HOSTING
- DRUPAL SITE COMPONENTS
- DRUPAL CONTAINER COMPONENTS
- DRUPAL CONTAINER PERFORMANCE
  - Sizing Considerations
  - Apache vs. NGINX
  - Performance Test
  - Process Size Conundrum
  • 3.  Website  in  a  Box  or  the  Next  Generation  Hosting  Platform   Copyright  2016                                                        All  Rights  Reserved.  Not  for  disclosure  without  written  permission.       3   DRUPAL  PROJECT  CREATION  ................................................................................................................................................................  73   DRUPAL  WEBSITE  DEPLOYMENT  ........................................................................................................................................................  76   Web  Project  Deployment  .................................................................................................................................................................  76   Web  Container  Deployment  ...........................................................................................................................................................  80   Website  Deployment  Workflow  ....................................................................................................................................................  80   EDITORIAL  WORKFLOW  ........................................................................................................................................................................  81   CONTENT  PUBLISHING  ...........................................................................................................................................................................  83   ACTIVE  DIRECTORY  STRUCTURE  ...............................................................................................................................  85   GITLAB  REPOSITORY  STRUCTURE  .............................................................................................................................  
87   MANAGEMENT  TASKS  AND  WORKFLOWS  ...............................................................................................................  88   PLATFORM  STARTUP  .....................................................................................................................................................  91   BASE  OS  IMAGE  .................................................................................................................................................................  98   THE  OS  IMAGE  INSIDE  CONTAINER  ....................................................................................................................................................  98   ONE  VS.  MULTIPLE  APPLICATIONS  ......................................................................................................................................................  99   PROCESS  SUPERVISOR  ............................................................................................................................................................................  99   QUICK  SUMMARY  .................................................................................................................................................................................  100   STORAGE  SCALABILITY  IN  DOCKER  ........................................................................................................................  101   LOOP  LVM  ............................................................................................................................................................................................  102   DIRECT-­‐LVM  ........................................................................................................................................................................................  
102   BTRFS  ...................................................................................................................................................................................................  103   OVERLAYFS  ..........................................................................................................................................................................................  103   ZFS  .........................................................................................................................................................................................................  103   CONCLUSION  ...................................................................................................................................................................  104     Figure  Register     Figure  1  -­‐  Infrastructure  Diagram  ....................................................................................................................................  5   Figure  2  -­‐  Foundation  Infrastructure  Diagram  ...........................................................................................................  6   Figure  3  -­‐  High-­‐level  Network  Diagram  .........................................................................................................................  7   Figure  4  -­‐  Platform  Components  .......................................................................................................................................  8   Figure  5  -­‐  cAdvisor  Web  UI:  CPU  usage  ......................................................................................................................  10   Figure  6  -­‐  InfluxDB  Web  Console  ...................................................................................................................................  
12   Figure  7  -­‐  Image  Builder  UI  ..............................................................................................................................................  15   Figure  8  -­‐  Sonar  Project  Dashboard  .............................................................................................................................  38   Figure  9  -­‐  Sonar  Issue  Report  ..........................................................................................................................................  39   Figure  10  -­‐  Stats  Visualization  and  Analysis  Portal  ...............................................................................................  48   Figure  11  -­‐  GitLab  Portal  ...................................................................................................................................................  49   Figure  12  -­‐  Sonar  Portal  .....................................................................................................................................................  50   Figure  13  -­‐  Platform  Orchestration  Portal  .................................................................................................................  51   Figure  14  –  Platform  Capacity  Model  ...........................................................................................................................  52   Figure  15  -­‐  Drupal  CMS:  Configuration  Portal  .........................................................................................................  60   Figure  16  -­‐  Drupal  Site  Components  ............................................................................................................................  61   Figure  17  -­‐  Web  Container  Components  ....................................................................................................................  
62   Figure  18  -­‐  Stress  Test  Results  ........................................................................................................................................  67   Figure  19  -­‐  Drupal  Project  Creation  Process  ............................................................................................................  73   Figure  20  -­‐  Drupal  Project  Deployment  Process  .....................................................................................................  77   Figure  21  -­‐  Website  Deployment  Workflow  .............................................................................................................  81   Figure  22  -­‐  Editorial  Workflow  .......................................................................................................................................  82   Figure  23  -­‐  Content  Publishing  Process  ......................................................................................................................  84   Figure  24  -­‐  Example:  MS  Active  Directory  Structure  ............................................................................................  85  
Platform Overview

This document provides an in-depth overview of the Proof of Concept project, hereinafter POC, for container-based LAMP web hosting. The POC was performed to verify technical feasibility and architectural assumptions, as well as to demonstrate our expertise in this domain to the prospective customer. It is assumed that this project, or parts of it, will be adopted and productized.

No clear requirements were provided. Therefore, the overall design and architectural decisions have been governed mostly by the following assumptions:
• The platform must provide fully managed website placeholders that will be populated with customer-provided code and assets;
• The platform must provide a LAMP (Linux, Apache, MySQL, PHP) run-time environment;
• The platform architecture must be similar to the existing Windows hosting platform;
• The platform must guarantee high availability for production workloads;
• The platform must prevent the noisy-neighbor effect, i.e. websites sharing the same infrastructure must not impact each other's performance;
• The platform must support different website sizes and resource allocation profiles;
• The platform must guarantee resources and be able to report on their usage.

From the early project stages it was assumed that the hosting platform would utilize the Linux container technology popularized by Docker and often referred to as Docker containers.
Docker is a good fit for such a hosting platform since Docker containers:
• allow for much higher workload density than VMs;
• provide sufficient workload isolation and containment;
• enable granular resource management and reporting;
• are widely considered the future of PaaS.

Soon it became apparent that much more than Docker alone was required to meet the platform requirements, and that additional services and components are essential for providing reliable hosting services.

Over time, the set of Docker containers and the collection of scripts managing them evolved into a real platform with well-defined services, components and interfaces between them. Operational procedures and workflows have been automated and exposed via different interfaces to enable future integration and instrumentation.

The platform architecture, design approach and processes rely heavily on the Twelve-Factor App principles. For more details see https://12factor.net/.

Infrastructure Overview

Originally the platform was built on top of a Kubernetes cluster for simplified container scheduling and orchestration. Due to the lack of expertise in the Support Organization and little acceptance within the account team, this approach was discontinued, and the Platform Infrastructure setup instead followed the existing Windows hosting platform architecture as closely as possible.
Figure 1 - Infrastructure Diagram

The POC farm infrastructure mimics the existing web-farm setup for Windows hosting:
• All inbound network traffic passes through the CDN/WAF;
• The network is split into two security zones: DMZ and TRUST;
• The front-end services and service components are hosted in the DMZ subnet;
• The back-end and secured components are located in the TRUST subnet;
• Coming from the CDN/WAF, network traffic passes through firewalls and load-balancers;
• Production HTTP/S VIPs pass traffic to an HA pair of web instances;
• Other HTTP/S VIPs, e.g. Staging, pass traffic to a single endpoint;
• The TRUST subnet contains the DB servers: a cluster for production workloads and a single instance for staging use;
• All platform services and components run in corresponding containers, with the exception of the DB instances, which run directly on the host OS.

There is an additional shared farm, the so-called Utility or Foundation farm, one per DC, hosting various utility services shared across multiple farms and websites. For production deployment it may be beneficial from a security standpoint to place some foundation services into the TRUST subnet.
Figure 2 - Foundation Infrastructure Diagram

It is envisioned that the existing foundation farm will need to be extended with at least two additional systems to provide the required foundation services. This assumes that the rest of the existing foundation services, such as Active Directory, DNS, SMTP, NTP, etc., will be shared with the new platform.

Network Setup Overview

The diagram below shows a logical view of the hosting network structure. It is worth mentioning that besides the TRUST and DMZ VLANs, Docker adds one more layer of indirection by creating at least one network bridge per Docker host to pass traffic between containers and the external world.

A number of solutions have emerged over the past couple of years bringing SDN and network virtualization capabilities to the container ecosystem. During this POC project we will not be exploring these network abstraction solutions and will use the standard network stack provided by Docker.
Figure 3 - High-level Network Diagram

Platform User Roles

User role definitions are tightly bound to their definition scope. The following scopes are defined:
• Platform Scope – platform-wide scope, including all hosted organizations and applications;
• Organization Scope – includes organization-owned objects and applications;
• Application Scope – includes objects and components pertinent to a given application.

Specific user roles and their mapping will be dictated by the particular use-case and the processes accepted within the hosting organization.

For the sake of simplicity we will assume the following major roles defined in the scope of the proposed hosting platform:
• Authorized User – a user that has passed authentication and has been assigned corresponding permissions:
  o Administrator – a management user performing administration tasks;
  o Developer – an individual writing and testing the code;
  o Content Manager – an editor, an individual authoring and managing the website content;
• Anonymous User – a website visitor coming from the public Internet.

The Identity Management (IdM) Service performs the mapping between a user identity and its associated roles. This is implemented using LDAP grouping mechanisms.
Things to keep in mind:
• User roles depend on the scope, e.g. one user may be a Developer in one organization and act as a Content Manager in another. While this is possible, such cross-organization role assignments are generally discouraged;
• One may differentiate between Platform User Roles and Application User Roles for the applications deployed on the platform. However, both user types are authenticated and authorized using the same IdM Platform Service, so in practice there is no real difference. For example, Drupal user roles are a subset of the platform user roles;
• Both the applications and the platform currently use the IdM Service; however, this is not a mandatory requirement. Additional or alternative authentication mechanisms may be used as well. For example, many platform services have a local user database and local administrative accounts in order to be able to act autonomously in case of an IdM Service failure or other issues;
• A website visitor is not required to pass authentication and is granted the Anonymous User role by default.

Platform Components

Below is a high-level diagram of the platform components. Connectors depict the major¹ communication channels and interactions between services and may generally be read as "using" statements. The dotted-line connectors show alternative paths.
Figure 4 - Platform Components

¹ "Major" refers to the fact that some dependencies are not shown to avoid diagram clutter. E.g. pretty much all platform components depend on Persistent Volumes, and this is not depicted here.
The components are color-coded to differentiate their types:
• Red components are administrative or management portals;
• Yellow components are Platform Services – generally speaking, containers;
• Blue components are development portals;
• Grey components are general-purpose platform building blocks;
• Green components are the hosted website instances the user interacts with.

The following platform actors are defined:
• Admin – platform administrator;
• Dev – website developer;
• Website User – both the content manager and the public Internet user.

Platform Services

Below is a short overview of the Platform Services. For every service it provides a description of its role and dependencies, as well as configuration and usage examples.

The service startup instructions in this chapter are provided for demonstration purposes only. Normally services are expected to boot in an automated manner, for example using Docker Compose scripts. By using Compose we can ensure repeatable and consistent configuration as well as reliable service startup and recovery. See the Platform Startup chapter for additional details.

Stats Collector

The Platform Stats Collector is a stateless service implemented as a container running on every Docker host, collecting the resource usage stats exposed by the Docker Engine using the Google cAdvisor application, https://github.com/google/cadvisor.
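The automated startup mentioned above can be sketched as a Compose definition tying the stats services of this chapter together. The fragment below is illustrative only, not one of the actual platform files; the service names, image references and volume paths mirror the standalone examples in this chapter and would need to be adjusted for a real deployment:

```yaml
# Sketch: illustrative Compose definition for the stats services.
version: "2"
services:
  influxdb:
    image: ${REGISTRY}/influxdb
    restart: always
    ports: ["8083:8083", "8086:8086"]
    volumes: ["${VOL_DATA}/influxdb:/influxdb"]
    environment:
      ADMIN_USER: root
      PRE_CREATE_DB: cadvisor
  cadvisor:
    image: google/cadvisor:v0.24.0
    restart: always
    ports: ["8080:8080"]
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
    # cAdvisor accepts configuration only via command-line flags (see below)
    command: >
      -storage_driver=influxdb -storage_driver_db=cadvisor
      -storage_driver_host=influxdb:8086
```

With such a file in place, `docker-compose up -d` boots both services with a repeatable configuration, and `restart: always` gives basic recovery on failure.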
To quote the project page: "The cAdvisor (Container Advisor) provides container users an understanding of the resource usage and performance characteristics of their running containers. It is a running daemon that collects, aggregates, processes, and exports information about running containers. Specifically, for each container it keeps resource isolation parameters, historical resource usage, and histograms of complete historical resource usage and network statistics. This data may be exported either by container or machine-wide. The cAdvisor has native support for Docker containers and should support just about any other container type out of the box."

The current setup assumes that the Stats Collector uses the Stats DB service for storing the metrics collected from the Docker Engine. Therefore the Stats Collector depends on the Stats DB service and the Docker Engine APIs and must be deployed and booted accordingly.

Alternatively, it is possible to use https://github.com/kubernetes/heapster for stats aggregation and resource monitoring in more complex deployments, or to query the Docker API directly if more control or flexibility is required.

Although cAdvisor instances may be accessed directly and provide a Web UI for metric visualization, the more practical approach is to export the collected stats to an external database that may be used for arbitrary data aggregation, reporting and analysis tasks. cAdvisor provides multiple storage drivers out of the box. The current implementation uses the InfluxDB time-series database for storing collected measurements.

Below is an example of a chart produced by cAdvisor at runtime.
It has quite limited practical use, if any, and is provided for reference purposes only.
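For orientation, the CPU chart is a rate derived from cumulative counters: cAdvisor reports total CPU time consumed in nanoseconds, and utilization over an interval is the counter delta divided by the wall-clock interval. The same conversion applies when processing raw stats yourself. A runnable sketch, with all sample values purely illustrative:

```shell
# Two cumulative CPU-time samples (nanoseconds), taken 10 s apart.
prev=120000000000            # cumulative CPU ns at t0 (illustrative)
curr=121500000000            # cumulative CPU ns at t0 + 10 s (illustrative)
interval_ns=10000000000      # 10 s expressed in nanoseconds

# Utilization = counter delta / wall-clock interval, x100 for percent.
util=$(awk -v p="$prev" -v c="$curr" -v i="$interval_ns" \
  'BEGIN { printf "%.1f", (c - p) / i * 100 }')
echo "CPU utilization: ${util}%"
# prints: CPU utilization: 15.0%
```

The same delta-over-interval arithmetic is what turns any monotonically increasing counter (CPU time, bytes transmitted) into the per-interval rates shown on the charts.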
Figure 5 - cAdvisor Web UI: CPU usage

Below is an example command for running the cAdvisor container:

$ docker run --name=cadvisor --hostname=`hostname` --detach=true --restart=always \
    --cpu-shares 100 --memory 500m --memory-swap 1G --userns=host --publish=8080:8080 \
    --volume=/:/rootfs:ro --volume=/var/run:/var/run:rw --volume=/sys:/sys:ro \
    --volume=/var/lib/docker/:/var/lib/docker:ro \
    google/cadvisor:v0.24.0 \
    -storage_driver=influxdb -storage_driver_db=cadvisor -storage_driver_host=${INFLUXDB_HOST}:8086 \
    -storage_driver_user=${INFLUXDB_RW_USER} -storage_driver_password="${INFLUXDB_RW_PASS}"

cAdvisor is still an evolving project and, unfortunately, has its own shortcomings; for example, it only accepts configuration values via command-line options. Neither configuration files nor ENV variables are currently supported. One issue that follows directly from this: the DB credentials are passed as command-line parameters in clear text and can be seen in the process list.

There are several things to keep in mind:
• Unless the default database schema and credentials are used, they must also be provided as storage driver parameters. The database schema must be created prior to storing collected metrics;
• By default cAdvisor does not buffer collected metrics for more than 120 seconds. Therefore, if the database connection is interrupted, the resource metrics are lost. Depending on your specific environment setup and requirements, it may be a good idea to review and adjust the default buffering and flushing settings;
• A more or less obvious observation: the more containers running on the host, the more resources cAdvisor will consume and the more traffic will flow between the cAdvisor instance and the storage backend. Consequently:
  o It is a good idea to limit cAdvisor's resource usage to avoid impacting production workloads. On the other hand, pulling the belt too tight may have adverse effects on metrics collection itself. The constraints provided in the example above are for demonstration purposes only and must be adjusted for the specific setup and environment;
  o For busy hosts with high container density it is recommended to adjust cAdvisor's buffering, caching and flushing parameters for the best performance. For example, cAdvisor collects metrics over a 1-minute time frame and flushes them in a single transaction.
In certain scenarios increasing this time frame may improve performance without impacting monitoring granularity;
• cAdvisor requires elevated permissions (--userns=host), since it accesses objects in the Docker host namespace;
• The cAdvisor project does not enforce security by default, which leaves us with three possible options for running this service. All of these options have been explored during the POC project and offer different trade-offs between security and complexity:
  o Insecure: use the default credentials for the storage driver. No additional options required;
  o Kind-of-secure: provide the storage driver credentials as command-line parameters, so they will show up in the process list;
  o Secure: create a custom build and image for cAdvisor that handles and passes credentials securely.
• It is unlikely that the cAdvisor Web UI itself will be used for monitoring a production deployment, so it is recommended to avoid publishing the cAdvisor Web UI ports;
• cAdvisor, being part of the Kubernetes project, is evolving quickly and new versions appear quite often. Although a common practice is to use the "latest" image version, it is recommended to standardize on and run a specific cAdvisor version across all deployments for consistent and predictable behavior and results.

Stats Database

All metrics gathered by the Stats Collector service are passed to and persisted by the Stats Database service. This service is implemented as a Docker container located on a utility host in the foundation farm and running the InfluxDB time-series database, https://github.com/influxdata/influxdb.
Depending on specific requirements, different storage back-ends may be used in place of InfluxDB. The choice was made in favor of InfluxDB for the following reasons:
• Simple and self-contained database without external dependencies;
• Purpose-built database for time-series metric storage and querying;
• Supported by and integrated into many modern deployment stacks and platforms;
• Provides several storage engines geared towards real-time data processing;
• REST API driven for management, data ingestion and processing alike;
• Supports the SQL-like InfluxQL language for querying the database;
• Provides flexible controls and data retention policies;
• Scalable and supports clustering.
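To illustrate the SQL-like flavor of InfluxQL mentioned above, a typical aggregation over collected container metrics might look like the statement below. The measurement and tag names are illustrative; the actual names depend on the schema written by the cAdvisor storage driver in use:

```
SELECT mean("value") FROM "cpu_usage_total"
  WHERE "container_name" = 'site-a' AND time > now() - 1h
  GROUP BY time(5m)
```

This returns the 5-minute average CPU-time counter for one container over the last hour, which is the kind of query the Stats Visualization portal and the Reporting service issue against the database.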
The Stats Database service indirectly depends on the Image Registry service, since its image is pulled from the registry by the Docker Engine during service container startup. Other than that, assuming a standalone (non-clustered) deployment, the Stats Database service is self-sufficient and is used by other services and components such as:
• The Stats Visualization portal – queries the Stats Database for visualizing resource metrics;
• The Reporting service – queries the Stats Database for compiling various usage reports;
• The Stats Collector – periodically stores measurements in the Stats Database.

InfluxDB also provides a web console for basic management and querying operations.

Figure 6 - InfluxDB Web Console

Here is an example for running the InfluxDB container:

$ docker run --name=influxdb --detach=true --restart=always \
    --cpu-shares 512 --memory 1G --memory-swap 1G \
    --volume=${VOL_DATA}/influxdb:/influxdb --publish 8083:8083 --publish 8086:8086 \
    --expose 8090 --expose 8099 \
    --env ADMIN_USER="root" --env PRE_CREATE_DB=cadvisor \
    ${REGISTRY}/influxdb

In some cases there may be a need for separate user accounts with varying access levels. A user with write permissions may be used for storing stats in the DB, while a read-only user may be used for reporting and monitoring activities.
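For context, what a writer-credentialed client (such as the Stats Collector) actually sends to InfluxDB is the so-called line protocol: measurement name, tags, field values and a timestamp in a single line per data point. A sketch with purely illustrative host, container and counter values:

```shell
# One line-protocol point: measurement,tag=... field=value timestamp(ns).
# The machine/container/value data below is illustrative.
point="cpu_usage_total,machine=web01,container_name=site-a value=123456789 1461234567000000000"
echo "$point"

# A writing client would POST such lines to the HTTP write endpoint, e.g.:
# curl -XPOST "http://${INFLUXDB_HOST}:8086/write?db=cadvisor" \
#      -u "writer:<writer password>" --data-binary "$point"
```

Understanding this format is useful when debugging ingestion issues, since malformed lines are rejected by the write endpoint with an error describing the offending point.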
Let's create users with read and write permissions:

$ cat <<"EOT" | docker exec -i influxdb /usr/bin/influx -username=root -password=root -path -
CREATE DATABASE cadvisor
CREATE USER writer WITH PASSWORD '<writer password>'
CREATE USER reader WITH PASSWORD '<reader password>'
GRANT WRITE ON cadvisor TO writer
GRANT READ ON cadvisor TO reader
EOT
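A hypothetical usage sketch for the two accounts: the "writer" stores a data point via the InfluxDB HTTP line protocol, and the "reader" queries it back. Host, port and the measurement name are assumptions; the passwords are placeholders exactly as above, and the network calls are guarded so the snippet is inert without curl:

```shell
# Assumed measurement written as the "writer" user (line protocol)
POINT="container_cpu_usage,host=web01 value=0.64"
if command -v curl >/dev/null 2>&1; then
  # write as the account holding WRITE on cadvisor
  curl -s -XPOST "http://localhost:8086/write?db=cadvisor&u=writer&p=<writer password>" \
       --data-binary "${POINT}"
  # read it back as the read-only account
  curl -sG "http://localhost:8086/query?db=cadvisor&u=reader&p=<reader password>" \
       --data-urlencode "q=SELECT * FROM container_cpu_usage LIMIT 1"
fi
```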
Now we will list the available databases using the InfluxDB client:

$ echo "show databases" | docker exec -i influxdb /usr/bin/influx -username=root -password=root -path -
Visit https://enterprise.influxdata.com to register for updates, InfluxDB server management, and monitoring.
Connected to http://localhost:8086 version 0.10.3
InfluxDB shell 0.10.3
> name: databases
---------------
name
cadvisor
_internal

Things to keep in mind:
• For the sake of simplicity InfluxDB is deployed as a standalone instance and is therefore not resilient to service failures, resulting in data loss until the service is recovered. It is recommended to deploy an InfluxDB cluster for production deployments;
• The database size on disk will depend on the retention policies and the amount of metrics collected over time. The policies and retention rules will need to be adjusted for production use on a case-by-case basis;
• The service (container) memory consumption will depend on the configured storage engine, the amount of metrics collected and the configuration settings. Those settings will need to be adjusted for production use, keeping resource constraints in mind;
• InfluxDB provides multiple interfaces for monitoring and data querying, including a database client application, client libraries for the most popular languages, as well as a REST API endpoint;
• This project uses a custom-built InfluxDB image to automate and simplify basic setup and management tasks.
It may behave differently compared to the default image provided by the vendor.

Image Registry
All container images used by the POC project are stored in the local image repository provided by the Image Registry service. This service is implemented as a Docker container located on the utility host in the foundation farm and running the Docker Distribution application (https://github.com/docker/distribution). Whenever a new container image is built, it is stored in the Image Registry. Whenever a new container is created, its image is pulled from this repository.

More details and examples can be found in the Docker Distribution project documentation at https://github.com/docker/distribution/blob/master/docs/deploying.md.

Being one of the base services, the Image Registry is self-contained and does not depend on other Platform services. At the same time the Image Registry is not used directly by Platform services. Usually it is used indirectly, when the Docker Engine cannot find a required image in the local image storage on a particular host. In this case the image is queried, validated and pulled from the Image Registry.

Here is an example for setting up the Image Registry service. First of all, we will set up certificates. The SSL keys need to be generated only once, but have to be deployed on every Docker host:
# executed only once: generating a self-signed registry certificate, CN=registry.poc

$ mkdir -p ~/certs
$ openssl req -newkey rsa:4096 -nodes -sha256 -x509 -days 365 \
   -subj "/C=DE/ST=HE/L=Frankfurt/O=VZ/OU=MH/CN=registry.poc/emailAddress=admin@vzpoc.com" \
   -keyout ~/certs/registry.key -out ~/certs/registry.crt

# executed on each Docker host:

# - deploying certificates to the Docker certificate store
$ mkdir -p /etc/docker/certs.d/registry.poc:5000
$ cp certs/registry.crt /etc/docker/certs.d/registry.poc:5000/ca.crt

# - restarting docker to activate certificates
$ systemctl restart docker.service

Next, we'll set up host volumes and configuration for the Image Registry service container (note the configuration is YAML and must be named config.yml for the registry to pick it up):

$ mkdir -p /var/data/registry/{certs,config,data}
$ [ -d ~/certs ] && cp ~/certs/* /var/data/registry/certs
$ cat <<EOT > /var/data/registry/config/config.yml
version: 0.1
log:
  level: info
  formatter: text
  fields:
    service: registry
    environment: production
storage:
  cache:
    layerinfo: inmemory
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: :5000
  tls:
    certificate: /certs/registry.crt
    key: /certs/registry.key
  debug:
    addr: :5001
EOT

Eventually, we'll start the registry service and validate that it can be accessed over HTTPS:

# starting the Docker container with the registry service
$ docker run --name registry --hostname registry.poc --detach=true --restart=always \
   --env REGISTRY_HTTP_TLS_CERTIFICATE=/certs/registry.crt \
   --env REGISTRY_HTTP_TLS_KEY=/certs/registry.key \
   --volume /var/data/registry/certs:/certs:ro \
   --volume /var/data/registry/data:/var/lib/registry:rw \
   --volume /var/data/registry/config:/etc/docker/registry:ro \
   --publish 5000:5000 \
   registry:2.5

# verifying the registry is working; registry.poc should resolve to the IP owned by the registry service
$ docker tag busybox registry.poc:5000/poc/busybox:v1
$ docker push registry.poc:5000/poc/busybox:v1
$ curl --cacert ~/certs/registry.crt -X GET https://registry.poc:5000/v2/poc/busybox/tags/list
{"name":"poc/busybox","tags":["v1"]}
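Besides the per-repository tag listing shown above, the Distribution v2 API also exposes a catalog endpoint for listing all repositories the registry holds. Host name and certificate path follow the examples above; the call is guarded so the sketch is harmless where the registry is unreachable:

```shell
# Catalog endpoint of the Docker Registry HTTP API v2
CATALOG_URL="https://registry.poc:5000/v2/_catalog"
if command -v curl >/dev/null 2>&1 && [ -f ~/certs/registry.crt ]; then
  # n=100 caps the page size; the API returns {"repositories":[...]}
  curl -s --cacert ~/certs/registry.crt "${CATALOG_URL}?n=100"
fi
```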
Things to keep in mind:
• Most container images are stored in the locally hosted Image Registry; however, some images are pulled from outside repositories to avoid circular dependencies during service startup:
  o The Docker Distribution container image is provided by Docker and pulled from the external registry https://hub.docker.com/r/distribution/registry/
  o The Google cAdvisor container image is provided by Google and pulled from the external registry https://hub.docker.com/r/google/cadvisor/
  o The GitLab container image is provided by the GitLab community and pulled from the external registry https://hub.docker.com/r/gitlab/gitlab-ce/
• For the sake of simplicity the Image Registry service is deployed as a standalone instance and is therefore not resilient to service failures. An HA deployment is recommended for production use;
• The current implementation does not use any authentication or authorization mechanisms, thus allowing any user to access container images. Although this service is only used inside the internal secure perimeter, it is recommended to implement RBAC policies, or at least a strong authentication mechanism, for production deployments;
• Due to security considerations all traffic is encrypted and service access is only possible using the HTTPS protocol as a transport. Depending on security requirements there may be a need to create and sign service SSL keys using a trusted CA. The current implementation uses a self-signed CA and keys.
For this to work, those self-signed keys must be added to the Docker certificate store on every Docker host that communicates with the Image Registry service;
• Obviously, there is a trade-off with known pros and cons when implementing a local registry compared to an externally hosted container registry. For this project it has been decided to use a local registry; however, nothing prevents using an external Image Registry service, assuming that service integration has been performed and that service availability, security and access issues have been addressed.

Image Builder
This service is implemented as part of the Platform management tooling. Currently, new image builds have to be triggered manually after Docker files have been modified; however, nothing prevents automating this step and triggering an image build upon a certain event, for example a change to container image code or configuration.

Figure 7 - Image Builder UI
There are no services depending on the Image Builder. The Image Builder itself directly depends on the SCM service and indirectly on the Image Registry, where freshly built images are pushed. Obviously, some secrets such as keys and credentials must be used during the container image build stage. There is a nice write-up providing a good summary of available solutions and options: http://elasticcompute.io/2016/01/22/build-time-secrets-with-docker-containers/.

Currently, container images can be built in two modes:
• Build: the container image is built from scratch and properly tagged;
• Release: after performing the image build, the image undergoes tests and, if successful, is pushed to the image repository, thus becoming available for deployment.

Things to keep in mind:
• Although the container build workflow does include a step for executing tests, currently there are no actual tests provided. Special care should be taken and container images must be tested manually prior to deploying and using them;
• Sometimes, when memory becomes scarce (e.g. multiple SonarQube analyses running), the image rebuild process may fail with error messages indicating a lack of memory. This suggests memory leaks in Docker and will hopefully be fixed in upcoming releases.
This should not occur in environments with sufficient memory allocation;
• The Docker files for the images have been written with image caching in mind, therefore frequent image rebuilds should not create significant load. At the same time, image caching may become a source of hard-to-track issues, therefore administrators may need to pay special attention to the local image store and cached images on the systems where builds are performed.

Deployment Service
By using the Deployment service we can ensure that all projects follow naming, security, configuration and deployment standards and conventions. They can be easily identified, managed and recreated in a standard and repeatable way. See the Drupal Website Deployment chapter for additional details and examples.

All project deployment tasks are handled by this service, namely:
• Checking requested parameters against naming standards;
• Choosing the target location based on user inputs or defaults;
• Validating that the target location is ready for deployment;
• Cloning the requested project version from the code repository;
• Cloning required add-on projects from the code repository;
• Deploying code to the target location;
• Running configuration instructions and setup procedures.

The Deployment service is completely decoupled from containers and other infrastructure semantics. From a high-level perspective the relationship between the related components can be described as:
• The Container Provisioning Service deploys well-defined, pre-configured containers;
• Containers encapsulate applications and are immutable or read-only.
All volatile and mutable objects such as content, log files, temporary files, etc. are persisted on volumes or via other persistence mechanisms such as Database Storage;
• The Deployment Service populates host volumes with application objects such as code, configuration, content, etc. Those host volumes are mapped to container volumes and thus become available to the execution runtime inside the corresponding containers.
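The deployment task list above can be sketched as a dry-run shell helper. This is a hypothetical illustration, not the actual Deployment service code: the SCM URL, path layout and setup-script name are assumptions, and the helper only echoes the commands it would run:

```shell
# Dry-run sketch of the clone/deploy/setup sequence (assumed names and paths)
deploy_project() {  # usage: deploy_project <site> <version> <target-volume>
  echo "git clone --branch ${2} ssh://git@scm.poc/web/${1}.git /tmp/build/${1}"
  echo "rsync -a --delete /tmp/build/${1}/htdocs/ ${3}/"
  echo "sh /tmp/build/${1}/setup.sh ${3}"
}

deploy_project d7-demo v1.2 /var/web/stg/root/d7-demo
```

Keeping the target volume as an explicit argument mirrors the decoupling described above: the caller, not the service, decides where the project lands.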
The Deployment service is used by Deployment workflows via corresponding Platform CLI calls. The service itself has several dependencies:
• Secure Storage – used to query various credentials and sensitive information;
• SCM Service – used to clone requested projects and their dependencies;
• Persistent Volumes – used as deployment targets to store project-related objects;
• Persistent Database Storage – may be indirectly used by project setup scripts, for example for creating a database schema for the project or populating required database objects.

Things to keep in mind:
• The Deployment service does not make orchestration decisions and therefore must be provided the target location specification by the upstream caller. This is done on purpose, to keep orchestration logic and mechanisms separate from deployment semantics;
• The Deployment service is a part of the Platform CLI component and as such uses platform configuration, settings and naming standards;
• Since provisioning tasks may involve multiple hosts or be invoked remotely, it is required that password-less (key-based) SSH access is configured between the master and slave nodes;
• The Deployment service does just that – deploys projects to target locations according to well-defined rules and naming standards.
It neither cares about nor makes assumptions about the applications, custom code or content used by the applications deployed inside containers, as long as projects follow the defined project structure.

Container Provisioning Service
All container provisioning and de-provisioning operations are handled by this service, which translates requested actions into corresponding Docker commands and API calls. It is still possible to create arbitrary containers using the Docker client or APIs; however, for the sake of consistency this approach is discouraged.

This can be best explained by the following example. Let's provision a new web container using the Docker CLI:

$ docker run --name d7-demo --hostname wbs1 --detach=true --restart=on-failure:5 \
   --security-opt no-new-privileges --cpu-shares 16 --memory 64m --memory-swap 1G \
   --publish 10.169.64.232:8080:80 --publish 10.169.64.232:8443:443 \
   --volume /var/web/stg/root/d7-demo:/var/www --volume /var/web/stg/data/d7-demo:/var/data \
   --volume /var/web/stg/logs/d7-demo:/var/log --volume /var/web/stg/temp/d7-demo:/var/tmp \
   --volume /var/web/stg/cert/d7-demo:/etc/ssl/web \
   --tmpfs /run:rw,nosuid,exec,nodev,mode=755 \
   --tmpfs /tmp:rw,nosuid,noexec,nodev,mode=755 \
   --env-file /opt/deploy/container.env \
   --label container.env=stg --label container.size=small \
   --label container.site=d7-demo --label container.type=web \
   registry.poc:5000/poc/nginx-php-fpm

You may have noticed that there are a number of additional options and parameters required by the platform itself, its services and its naming standards.
Although the Container Provisioning Service makes exactly the same call to the Docker engine, there is a lot more happening, hidden under the hood.
Now, let's provision the same web container using the Container Provisioning Service. In addition to creating the Docker container, it performs the following essential steps:
• Checking the container name against naming standards;
• Checking that no container with such a name is already present;
• Validating the IP address:
  o Checking whether the provided IP belongs to the address pool and whether this IP is not already taken by another container;
  o If no IP address is provided, automatically selecting the next free IP from the pool;
• Checking whether the container host volumes are present and creating them otherwise;
• Adding container labels specifying the web site, its environment, size and container type;
• Adding resource constraints and security-related options;
• Using the given image, or the default one if no container image is specified, for creating the new container.
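Two of the checks above (name validation and free-IP selection) can be sketched in a few lines of shell. This is a hypothetical illustration, not the actual Platform CLI implementation; the naming convention (lowercase alphanumerics and dashes, 3-32 characters) is an assumption:

```shell
# Assumed naming convention check for container/site names
valid_name() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9][a-z0-9-]{2,31}$'
}

# Pick the first pool address not already taken by another container
next_free_ip() {  # usage: next_free_ip "<pool ips>" "<taken ips>"
  local ip
  for ip in $1; do
    case " $2 " in *" $ip "*) ;; *) echo "$ip"; return 0 ;; esac
  done
  return 1  # pool exhausted
}

valid_name d7-demo && next_free_ip "10.169.64.231 10.169.64.232" "10.169.64.231"
# prints 10.169.64.232
```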
$ /opt/deploy/web container create --farm poc --env stg --site d7-demo --image nginx-php-fpm
web container create: using next free IP: 10.169.64.232
web container create: checking 10.169.64.232 is setup
    inet 10.169.64.232/26 brd 10.169.64.255 scope global secondary enp0s17:
web container create: folder /var/web/stg/root/d7-demo not found, creating
web container create: folder /var/web/stg/data/d7-demo not found, creating
web container create: folder /var/web/stg/logs/d7-demo not found, creating
web container create: folder /var/web/stg/cert/d7-demo not found, creating
web container create: folder /var/web/stg/temp/d7-demo not found, creating
web container create: exporting container ENV variables from /opt/deploy/container.env
web container create: creating container d7-demo
web container create: |-- image-tag: registry.poc:5000/poc/nginx-php-fpm
web container create: |-- resources: small (--cpu-shares 16 --memory 64m --memory-swap 1G)
web container create: |-- published: 10.169.64.232:8080:80
web container create: |-- published: 10.169.64.232:8443:443
web container create: |-- volume: /var/web/stg/cert/d7-demo:/etc/apache2/ssl
web container create: |-- volume: /var/web/stg/logs/d7-demo:/var/log
web container create: |-- volume: /var/web/stg/root/d7-demo:/var/www
web container create: |-- volume: /var/web/stg/data/d7-demo:/var/data
web container create: |-- volume: /var/web/stg/temp/d7-demo:/var/tmp
web container create: |-- volume: tmpfs:/run
web container create: |-- volume: tmpfs:/tmp
web container create: |-- label: container.env=stg
web container create: |-- label: container.size=small
web container create: |-- label:
container.site=d7-demo
web container create: |-- label: container.type=web
web container create: started site container
cb68618b84b4d3276a77ebd4a0635c5387a8319f1ffaac3759c74820fa32b258

By using the Container Provisioning service we can ensure that all containers follow naming, security, configuration and resource allocation standards. They can be easily identified, managed and recreated in a standard and repeatable way.

$ /opt/deploy/web container list --farm poc --env stg --format table
web container list:
CONTAINER ID    NAMES      STATUS           ENV  SIZE   PORTS
cb68618b84b4    d7-demo    Up 16 minutes    stg  small  10.1.1.2:8080->80/tcp, 10.1.1.2:8443->443/tcp
c953adf92e09    d7         Up 3 weeks       stg  small  10.1.1.2:8080->80/tcp, 10.1.1.2:8443->443/tcp
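Because every container carries the `container.*` labels shown above, a roughly equivalent listing can be obtained with the plain Docker CLI as well. The filter syntax is standard Docker; only the label names come from this platform, and the call is guarded so the sketch is inert without Docker:

```shell
# List staging web containers by label, the raw-Docker way
FILTER="label=container.env=stg"
if command -v docker >/dev/null 2>&1; then
  docker ps --filter "${FILTER}" \
    --format 'table {{.ID}}\t{{.Names}}\t{{.Status}}\t{{.Ports}}'
fi
```

The Platform CLI wrapper adds the ENV/SIZE columns by reading the corresponding labels, which plain `docker ps` does not surface directly.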
The Container Provisioning service is used by Deployment workflows via corresponding Platform CLI calls. The service itself has no specific dependencies and uses the Docker CLI for performing container management operations.

Things to keep in mind:
• The Container Provisioning service does not make orchestration decisions and therefore must be provided the target location specification by the upstream caller. This is done on purpose, to keep orchestration logic and mechanisms separate from deployment semantics;
• The Container Provisioning service is a part of the Platform CLI component and as such uses platform configuration, settings and naming standards;
• Since provisioning tasks may involve multiple hosts or be invoked remotely, it is required that password-less (key-based) SSH access is configured between the master and slave nodes;
• The Container Provisioning service does just that – provisions properly configured containers. It neither considers nor makes assumptions about the applications, custom code or content used by applications deployed inside containers;
• The Container Provisioning service is the only component that has to be adjusted if a different mechanism or API has to be used for provisioning containers, for example CoreOS rkt or LXD;
• In case of using orchestration engines such as Kubernetes, the Container Provisioning service can implement a wrapper around the provisioning functionality they provide.
Reporting Service
The Reporting service is implemented as a Docker container that runs queries against the Stats Database and compiles reports of aggregated resource usage according to specified conditions and parameters. There are no services depending on the Reporting service. The Reporting service itself depends on the Stats Database for fetching report data.

Persistent Volumes
One of the platform design paradigms is to keep containers immutable or read-only; all volatile and modified data should be stored outside of the container on so-called container volumes. Since we want this data to be available between container runs, these volumes must be persistent. There is another benefit to keeping application data and content outside of the container – it allows achieving the best application performance. Since there is no COW (copy-on-write) indirection layer in between, all I/O operations are handled efficiently by the Linux kernel.

Things to keep in mind:
• The current platform design makes no assumptions about the underlying technology and orchestration layer. For the sake of simplicity, container host volumes are used as the persistent volume implementation;
• There are other options to be explored for mapping container volumes to corresponding SAN volumes, NAS volumes or iSCSI targets. This would allow containers to take their volumes along with them if restarted on a different Docker host, thus making containers "mobile" and allowing container migrations across available hosts. These options were not explored during this project; however, using them may be essential when running containers on platforms like Kubernetes.
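The host-volume layout implied by the provisioning examples (`/var/web/<env>/{root,data,logs,temp,cert}/<site>`) can be created with a small helper. In this sketch the base path is parameterized so the snippet can run anywhere; defaulting it to a temporary directory is an assumption of this illustration, not platform policy:

```shell
# Create the per-site persistent volume directories (sketch; base path assumed)
BASE="${BASE:-/tmp/wbs-demo}"   # the platform itself uses /var/web/<env>

make_site_volumes() {  # usage: make_site_volumes <env> <site>
  local d
  for d in root data logs temp cert; do
    mkdir -p "${BASE}/${1}/${d}/${2}"
  done
}

make_site_volumes stg d7-demo
ls "${BASE}/stg"
```

The Container Provisioning service performs the same creation step automatically when it finds the folders missing, as shown in its log output earlier.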
Volume Sync-Share Service
Horizontal scaling and high availability requirements demand that an application span multiple application instances, or containers for that matter. Although session state is kept outside of containers, static content still has to be shared between multiple application instances.

Generally speaking, there are two possible ways of resolving this issue: share the file-system or synchronize file-systems. Each solution has its own strong and weak sides. Both options have been explored and considered viable. The choice is really dictated by specific infrastructure, performance and support requirements. The following comparison shall help in selecting the most appropriate option for a specific deployment scenario:

Implementation approach
• Shared Content: centralized storage holding a single file-system, with many nodes performing access.
• Synchronized Content: share-nothing architecture; many nodes with multi-master replication between file-systems.

Storage space requirements
• Shared Content: Volume-Size.
• Synchronized Content: Volume-Size x N (# of nodes).

Storage throughput
• Shared Content: all nodes share the server network link and are capped by its throughput; one node may saturate the link and degrade performance for others. Limited by single-volume IOPS; quickly degrades with the number of nodes.
• Synchronized Content: throughput and IOPS scale linearly with the number of nodes.

File-system locking
• Shared Content: file-system locks are maintained to allow concurrent access by multiple nodes to a single object. Can lead to stalled I/O operations and, as a result, to unresponsive applications.
• Synchronized Content: no file-system locks required.

Change propagation
• Shared Content: instant.
• Synchronized Content: little latency.

Implementation complexity
• Shared Content: low.
• Synchronized Content: moderate.

Support complexity
• Shared Content: moderate.
• Synchronized Content: low.

Known limitations
• Shared Content: SendFile kernel support and mmap must be disabled on shared volumes. Orphaned file-system locks may need to be identified and cleaned manually. A storage volume restart may have unpredictable effects on clients; they may need to re-mount the storage. File-system caching may produce inconsistent results across clients.
• Synchronized Content: large file-system changes may take some time to propagate to all clients. In rare cases a file may be modified in several locations, producing a conflict that has to be resolved either automatically or manually.

Specific application
• Shared Content: NFS 4.x server and clients.
• Synchronized Content: SyncThing + inotify.
Given the overview above, one may still wonder which route to choose and whether there is a simple rule of thumb for selecting the most appropriate option. Here we go:

• Implement NFS:
  o If you have a storage array capable of serving files using the NFS 4.x protocol;
  o If your applications don't require high storage throughput and concurrency;
  o If you can tolerate the noisy-neighbor effect at times;
  o If the storage volume size (and/or its cost) is significant;
  o If you already have expertise in house;
  o If other parts of your solution use NFS.
• Implement SyncThing:
  o If you don't have a fault-tolerant NFS server and can't afford one for whatever reason;
  o If your applications require the highest storage throughput and need to scale as they grow;
  o If you absolutely can't tolerate the noisy-neighbor effect or NFS server downtime;
  o If you can tolerate the little latency required to propagate changes;
  o If the storage volume size is small enough to keep a redundant copy on every client.
Below is an example of how to start the volume sync service:

$ docker run --name datasync --hostname `hostname` --detach=true --restart=always \
   --cpu-shares 100 --memory 100m \
   --publish 22000:22000 --publish 21027:21027/udp --publish 8384:8384 \
   --volume /var/deploy/prd/data/:/var/sync --volume /var/data/datasync:/etc/syncthing \
   --tmpfs /run:rw,nosuid,nodev,mode=755 --tmpfs /tmp:rw,nosuid,nodev,mode=755 \
   registry.poc:5000/poc/syncthing

This service has to be started on all Docker host nodes that have data volumes which must be kept in sync. After starting, these services have to be introduced to each other, i.e. perform a handshake, and mutual changes have to be allowed between them. This is a one-time configuration.

All file-system changes are tracked via an inotify subscription, and updated files are exchanged between nodes using an efficient block exchange protocol similar to BitTorrent. Thus, the change propagation speed grows with the number of nodes participating in the exchange.

Things to keep in mind:
• SyncThing is a relatively young, actively developed application. There may be side effects that have not been studied yet;
• The SyncThing configuration can be generated from a template and saved to the configuration file. It can also be adjusted using the APIs and the Web UI. Access to the API and Web UI must be appropriately secured;
• The SyncThing protocol ensures quick delta updates and high performance. During tests, a sync speed of ~100+ MB/s has been measured;
• Although SyncThing can perform dynamic service and network discovery, a static configuration has been used for this project.
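Since the API must be secured anyway, it is also handy for basic health checks. A sketch, with assumptions: the GUI/API port matches the 8384 published above, and the node's API key is read from the SyncThing config volume mapped in the example (`/var/data/datasync`); both the path and the file layout are assumed, and the call is guarded so the snippet is inert elsewhere:

```shell
# Query a SyncThing node's status over its REST API (assumed host paths)
ST_URL="http://localhost:8384/rest/system/status"
if command -v curl >/dev/null 2>&1 && [ -r /var/data/datasync/config.xml ]; then
  # extract the <apikey> element from the node's own configuration
  API_KEY="$(sed -n 's:.*<apikey>\(.*\)</apikey>.*:\1:p' /var/data/datasync/config.xml)"
  curl -s -H "X-API-Key: ${API_KEY}" "${ST_URL}"
fi
```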