Flash-Based Extended Cache for Higher Throughput and Faster Recovery

Woon-hak Kang, Sang-won Lee, and Bongki Moon

12. 9. 19.
Outline
•  Introduction
•  Related work
•  Flash as Cache Extension (FaCE)
   –  Design choices
   –  Two optimizations
•  Recovery in FaCE
•  Performance Evaluation
•  Conclusion
Introduction
•  Flash Memory Solid State Drive (SSD)
   –  NAND flash memory based non-volatile storage
•  Characteristics
   –  No mechanical parts
      •  Low access latency and high random IOPS
   –  Multi-channel and multi-plane
      •  Intrinsic parallelism, high concurrency
   –  No overwriting
      •  Erase before overwrite
      •  Read cost << write cost
   –  Limited life span
      •  Limited # of erasures per flash block

Image from: http://www.legitreviews.com/article/1197/2/
Introduction (2)
•  IOPS (I/Os Per Second) matters in OLTP
•  IOPS/$: SSDs >> HDDs
   –  e.g. SSD 63 (= 28,495 IOPS / $450) vs. HDD 1.7 (= 409 IOPS / $240)
•  GB/$: HDDs >> SSDs
   –  e.g. SSD 0.073 (= 32GB / $440) vs. HDD 0.612 (= 146.8GB / $240)
•  Therefore, it is more sensible to use SSDs to supplement HDDs rather than to replace them
   –  SSDs as a cache between RAM and HDDs
   –  To provide both the performance of SSDs and the capacity of HDDs at as little cost as possible
Introduction (3)
•  A few existing flash-based cache schemes
   –  e.g. Oracle Exadata, IBM, MS
   –  Pages cached in SSDs are overwritten; the write pattern in SSDs is random
•  Write bandwidth disparity in SSDs
   –  e.g. random write (25MB/s = 6,314 x 4KB/s) vs. sequential write (243MB/s)

                     4KB Random Throughput (IOPS)   Sequential Bandwidth (MBPS)   Ratio Sequential/
                         Read          Write            Read          Write        Random write
   SSD mid A            28,495         6,314            251           243              9.85
   SSD mid B            35,601         2,547            259            80              8.04
   HDD Single              409           343            156           154            114.94
   HDD Single (x8)       2,598         2,502            848           843             86.25
Introduction (4)
•  FaCE (Flash as Cache Extension) – main contributions
   –  Write-optimized flash cache scheme: e.g. 3x higher throughput than the existing ones
   –  Faster database recovery support by exploiting the non-volatile cache pages in SSDs for recovery: e.g. 4x faster recovery time

[Diagram: DRAM–SSD–HDD hierarchy. Random reads hit the SSD at low cost; writes to the SSD are sequential (-> high throughput); random writes go to the HDD; the non-volatility of the flash cache enables faster recovery.]
Related work
•  How to adopt SSDs in the DBMS area?

1.  SSD as a faster disk
    –  VLDB '08, Koltsidas et al., "Flashing up the Storage Layer"
    –  VLDB '09, Canim et al., "An Object Placement Advisor for DB2 Using Solid State Storage"
    –  SIGMOD '08, Lee et al., "A Case for Flash Memory SSD in Enterprise Database Applications"

2.  SSD as a DRAM buffer extension
    –  VLDB '10, Canim et al., "SSD Bufferpool Extensions for Database Systems"
    –  SIGMOD '11, Do et al., "Turbocharging DBMS Buffer Pool Using SSDs"
Lazy Cleaning (LC) [SIGMOD '11]
•  Cache on exit
•  Write-back policy
•  LRU-based SSD cache replacement policy
   –  Incurs almost entirely random writes against the SSD
•  No efficient recovery mechanism provided

[Diagram: the RAM buffer (LRU) fetches on miss from the HDD and evicts pages to the flash memory SSD; flash hits serve reads, updates cause random writes against the SSD, and dirty pages are staged out to the HDD.]
FaCE: Design Choices
1.  When to cache pages in SSD?
2.  What pages to cache in SSD?
3.  Sync policy between SSD and HDD
4.  SSD cache replacement policy
Design Choices: When/What/Sync Policy
•  When: on entry vs. on exit
•  What: clean vs. dirty vs. both
•  Sync policy: write-through vs. write-back

   FaCE's choices: on exit; dirty as well as clean pages; write-back sync, for performance

[Diagram: the RAM buffer (LRU) fetches on miss from the HDD; evicted pages enter the flash cache extension; dirty pages are staged out from the flash cache to the HDD.]
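The chosen policies can be sketched as a tiny read path. This is a minimal illustration, not FaCE's actual interface: `ram`, `flash`, and `hdd` are assumed dict-like stores, and `admit_to_flash` is an assumed callback standing in for the flash cache admission.

```python
from collections import OrderedDict

def get_page(page_id, ram, ram_capacity, flash, hdd, admit_to_flash):
    """Sketch of a cache-on-exit read path: the flash cache is consulted
    only after a RAM miss, and pages enter it only when evicted from the
    RAM buffer (clean and dirty alike, under write-back sync)."""
    if page_id in ram:                      # RAM buffer hit
        ram.move_to_end(page_id)            # refresh LRU position
        return ram[page_id]
    page = flash.get(page_id)               # flash cache hit
    if page is None:
        page = hdd[page_id]                 # fetch on miss from the HDD
    ram[page_id] = page
    if len(ram) > ram_capacity:             # RAM buffer full:
        victim = ram.popitem(last=False)    # evict the LRU page and
        admit_to_flash(*victim)             # cache it on exit in flash
    return page
```

Admitting the victim unconditionally reflects the "both clean and dirty" choice; a write-through design would instead flush dirty victims to the HDD at this point.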
Design Choices: SSD Cache Replacement Policy
•  What to do when a page is evicted from the DRAM buffer and the SSD cache is full
•  LRU vs. FIFO (First-In-First-Out)
   –  LRU, write miss: LRU-based victim selection, write back the victim if dirty, and overwrite the old victim page with the new page being evicted
   –  LRU, write hit: overwrite the old copy in the flash cache with the updated page being evicted

[Diagram: under LRU, evictions from the RAM buffer cause random writes against the SSD-based flash cache extension.]
Design Choices: SSD Cache Replacement Policy
•  LRU vs. FIFO (First-In-First-Out)
   –  FIFO: victims are chosen from the rear end of the flash cache: "sequential writes" against the SSD
   –  FIFO, write hit: no additional action is taken, in order not to incur random writes
      •  Multiple versions of a page may exist in the SSD cache

[Diagram: the RAM buffer (LRU) evicts pages into the flash cache extension, which is managed as a Multi-Version FIFO (mvFIFO).]
Write Reduction in mvFIFO
•  Example
   –  Reduces three writes to the HDD to one

[Diagram: multiple versions of page P (P-v1, P-v2, P-v3) accumulate in the flash cache as the page is repeatedly evicted from the RAM buffer; when the victim position is reached, the invalidated versions P-v1 and P-v2 are discarded and only the chosen victim version P-v3 is written back to the HDD.]
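The mvFIFO behavior above can be sketched as a small queue-based cache. This is an illustrative sketch under assumed structures (a deque of page versions and a write-back callback), not FaCE's implementation:

```python
from collections import deque

class MVFIFOCache:
    """Minimal multi-version FIFO (mvFIFO) flash-cache sketch: pages are
    always appended at the rear (sequential SSD writes); victims leave
    from the front; stale older versions are discarded lazily."""

    def __init__(self, capacity, hdd_writer):
        self.capacity = capacity
        self.queue = deque()           # entries: (page_id, version, dirty)
        self.latest = {}               # page_id -> newest version number
        self.hdd_writer = hdd_writer   # callback: write-back to the HDD

    def admit(self, page_id, dirty):
        # Enqueue a new version instead of overwriting in place: no
        # random write against the SSD; the old copy simply becomes stale.
        version = self.latest.get(page_id, 0) + 1
        self.latest[page_id] = version
        self.queue.append((page_id, version, dirty))
        while len(self.queue) > self.capacity:
            self._evict_one()

    def _evict_one(self):
        page_id, version, dirty = self.queue.popleft()
        if self.latest.get(page_id) != version:
            return                     # invalidated version: discard, no I/O
        del self.latest[page_id]
        if dirty:
            self.hdd_writer(page_id)   # only the newest dirty copy hits the HDD
```

Admitting the same dirty page three times and then evicting it yields a single HDD write, which is exactly the three-to-one reduction in the example above.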
Design Choices: SSD Cache Replacement Policy
•  LRU vs. FIFO

                                   LRU        FIFO
   Write pattern                   Random     Sequential
   Write performance               Low        High
   # of copies per page            Single     Multiple
   Space utilization               High       Low
   Hit ratio & write reduction     High       Low

•  Trade-off: hit ratio vs. write performance
   –  The write performance benefit of FIFO >> the performance gain from the higher hit ratio of LRU
mvFIFO: Two Optimizations
•  Group Replacement (GR)
   –  Multiple pages are replaced in a group in order to exploit the internal parallelism in modern SSDs
   –  Replacement depth is limited by the parallelism size (channels x planes)
   –  GR can improve SSD I/O throughput
•  Group Second Chance (GSC)
   –  GR + second chance
   –  If a victim candidate page is valid and referenced, re-enqueue it to the SSD cache
      •  A variant of the "clock" replacement algorithm for FaCE
   –  GSC can achieve a higher hit ratio and more write reductions
Group Replacement (GR)
•  Single group read from the SSD (64/128 pages)
•  Batched random writes to the HDD
•  Single group write to the SSD

[Diagram: when the flash cache becomes full, a group of victim pages is read from the SSD and their valid and dirty flags are checked in RAM; dirty pages are written back to the HDD; the RAM buffer fetches on miss from the HDD and evicts pages to the flash cache extension.]
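A sketch of the GR eviction step, with assumed callbacks (`ssd_group_read`, `hdd_write_batch`) standing in for the batched device I/O:

```python
from collections import deque

def group_replace(queue, latest, group_size, ssd_group_read, hdd_write_batch):
    """Group Replacement (GR) sketch: instead of evicting one victim at a
    time, pull a whole group off the front of the FIFO queue with one
    large SSD read, then write all valid dirty pages back to the HDD in
    one batch, letting both devices exploit their internal parallelism.
    `group_size` would be bounded by the SSD's channels x planes."""
    group = [queue.popleft() for _ in range(min(group_size, len(queue)))]
    ssd_group_read(group)                  # one group read from the SSD
    dirty_victims = []
    for page_id, version, dirty in group:
        if latest.get(page_id) != version:
            continue                       # stale version: just discard
        del latest[page_id]                # valid victim leaves the cache
        if dirty:
            dirty_victims.append(page_id)
    hdd_write_batch(dirty_victims)         # batched write-back to the HDD
    return dirty_victims
```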
Group Second Chance (GSC)
•  GR + second chance

[Diagram: when the flash cache becomes full, victim candidates are checked in RAM; the valid and dirty flags are examined, and a page whose reference bit is on is given a second chance and re-enqueued instead of being evicted.]
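The second-chance variant can be sketched as follows (again with assumed structures; `write_back` is a stand-in callback for the HDD write):

```python
from collections import deque

def group_second_chance(queue, latest, ref_bits, group_size, write_back):
    """Group Second Chance (GSC) sketch: like GR, but a valid victim
    candidate whose reference bit is set gets the bit cleared and is
    re-enqueued at the rear (a clock-style second chance) instead of
    being evicted; this keeps hot pages in the flash cache."""
    freed = 0
    while freed < group_size and queue:
        page_id, version, dirty = queue.popleft()
        if latest.get(page_id) != version:
            freed += 1                               # stale copy: reclaim slot
            continue
        if ref_bits.get(page_id):
            ref_bits[page_id] = False                # clear the reference bit
            queue.append((page_id, version, dirty))  # second chance: re-enqueue
            continue
        del latest[page_id]                          # evict the valid victim
        if dirty:
            write_back(page_id)                      # write back only if dirty
        freed += 1
```

Because every re-enqueue clears the page's reference bit, a page can be spared at most once per pass, so the loop always terminates.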
Recovery Issues in SSD Cache
•  With the write-back sync policy, many recent copies of data pages are kept in the SSD, not in the HDD
•  Therefore, the database in the HDD is in an inconsistent state after a system failure

[Diagram: after a crash, the SSD mapping information (metadata) in RAM is lost; the flash cache extension holds the new version of page P while the HDD holds only the old version, leaving the HDD in an inconsistent state.]
Recovery Issues in SSD Cache
•  With the write-back sync policy, many recent copies of data pages are kept in the SSD, not in the HDD
•  Therefore, the database in the HDD is in an inconsistent state after a system failure
•  In this situation, one recovery approach with a flash cache is to view the database in the hard disk as the only persistent DB [SIGMOD '11]
   –  Periodically checkpoint updated pages from the SSD cache, as well as the DRAM buffer, to the HDD

[Diagram: checkpointing every updated page from the flash cache back to the persistent DB on the HDD incurs an excessive checkpoint cost.]
Recovery Issues in SSD Cache (2)
•  Fortunately, because SSDs are non-volatile, pages cached in the SSD survive a system failure
•  But the SSD mapping information is gone
•  Two approaches for recovering the metadata:
   1.  Rebuild the lost metadata by scanning all the pages cached in the SSD (naive approach) – time-consuming scanning
   2.  Write metadata persistently whenever it changes [DaMoN '11] – run-time overhead for managing metadata persistently

[Diagram: after a crash, the mapping can be restored either by fully scanning the flash cache extension or by flushing every metadata update; the HDD (persistent DB) still holds only the old version of page P while the flash cache holds the new one.]
Recovery in FaCE
•  Metadata checkpointing
   –  Because a data page entering the SSD cache is written to the rear in chronological order, the metadata can be written regularly in a single large mapping segment

[Diagram: the mapping segment is checkpointed periodically to the SSD; on recovery after a crash, only that segment and the page info written since the last checkpoint need to be scanned to rebuild the mapping, rather than the whole flash cache.]
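The recovery step can be sketched as follows, under assumed structures (a page-to-slot dict for the mapping, and a chronological list of the cache entries written after the last checkpoint); FaCE's on-SSD layout is more involved:

```python
def recover_mapping(checkpointed_mapping, tail_pages):
    """FaCE-style recovery sketch: start from the mapping saved at the
    last metadata checkpoint, then replay only the tail of the FIFO
    flash cache -- the (slot, page_id) pairs written after that
    checkpoint -- in chronological order, so a newer cached copy of a
    page supersedes an older one. No full SSD scan is needed."""
    mapping = dict(checkpointed_mapping)    # page_id -> SSD slot at checkpoint
    for slot, page_id in tail_pages:        # chronological scan of the tail
        mapping[page_id] = slot             # later entry wins (newest version)
    return mapping
```

This is why the FIFO write order matters for recovery: the tail is contiguous and ordered, so the scan is short and the "newest version wins" rule falls out of replay order.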
Experimental Set-Up
•  FaCE implementation in PostgreSQL
   –  3 functions in the buffer manager: bufferAlloc(), getFreeBuffer(), bufferSync()
   –  2 functions in bootstrap for recovery: startupXLOG(), initBufferPool()
•  Experiment setup
   –  CentOS Linux
   –  Intel Core i7-860 2.8 GHz (quad core) and 4GB DRAM
   –  Disks: 8 RAIDed 15k rpm Seagate SAS HDDs (146.8GB)
   –  SSD: Samsung MLC (256GB)
•  Workloads
   –  TPC-C with 500 warehouses (50GB) and 50 concurrent clients
   –  BenchmarkSQL
Transaction Throughput

[Chart: transactions per minute vs. |Flash cache|/|Database| (4–28%), comparing HDD only, SSD only, LC, FaCE-basic, FaCE+GR, and FaCE+GSC. Relative to HDD only, FaCE+GSC reaches up to 3.9x, FaCE+GR 3.1x, FaCE-basic 2.6x, and LC 1.5x; at larger cache sizes the FaCE variants surpass even SSD only.]
Hit Ratio, Write Reduction, and I/O Throughput

[Chart: flash cache hit ratio (%) vs. flash cache size (2GB–10GB) for LC, FaCE-basic, FaCE+GR, and FaCE+GSC. LC shows the highest hit ratio, followed by FaCE+GSC, then FaCE-basic and FaCE+GR.]
Hit Ratio, Write Reduction, and I/O Throughput

[Chart: flash cache hit ratio and write reduction ratio by the flash cache (%) vs. flash cache size (2GB–10GB) for LC, FaCE-basic, FaCE+GR, and FaCE+GSC.]
Hit Ratio, Write Reduction, and I/O Throughput

[Chart: hit ratio, write reduction ratio, and throughput of 4KB-page I/O vs. flash cache size for LC, FaCE-basic, FaCE+GR, and FaCE+GSC; FaCE+GSC and FaCE+GR achieve the highest 4KB-page I/O throughput.]
                          12000	
  




                                                                                                                                                                                            Throughput	
  (4KB)
                85	
                                                                                                                                                                                              10000	
  
  Hit	
  ra=o	
  (%)




                                                                                                                            Throughput	
  (4KB)
                                                                                           Ra=o(%)	
  

                                                                                                                                                  10000	
  
                80	
  8000	
                                                                             70	
                                                                                                      8000	
  
                                                                                                           FaCE-­‐basic
                                                                                                                      8000	
  
                75	
  6000	
                                                                                                                                                                                       6000	
  
                                                                                                         60	
  
                                                                                                                                                   6000	
  
                70	
  4000	
                                                                                                                                                                                       4000	
  
                                                                                                                                                   4000	
  
                65	
  2000	
                                                                LC 50	
                                                                                                                2000	
  
                                                                                                                                                   2000	
  
                60	
                                                                                     40	
                                                                                                            0	
  
                             0	
  
                              2GB	
           4GB	
        6GB	
      8GB	
     10GB	
                            2GB	
               4GB	
     0	
   6GB	
  8GB	
   10GB	
  
                                                                                                                                                  6GB	
                                                                          2GB	
       4GB	
     6GB	
   8GB	
       10GB	
  
                                                          2GB	
                                 4GB	
                                                                                              8GB	
                                                 10GB	
  
                                               Flash	
  cache	
  size                                                                    Flash	
  cache	
  size 4GB	
   6GB	
  
                                                                                                                                                          2GB	
                                 8GB	
   10GB	
                                Flash	
  cache	
  size
                                                                                                                                                    Flash	
  cache	
  size
                                                                                                                                                                         Flash	
  cache	
  size
                  LC	
             FaCE	
               FaCE+GR	
           FaCE+GSC	
                   LC	
       FaCE	
                          FaCE+GR	
            FaCE+GSC	
                                 LC	
          FaCE	
           FaCE+GR	
           FaCE+GSC	
  
                                                                                                         LC	
        FaCE	
                                FaCE+GR	
         FaCE+GSC	
  
                                                                                                                                                    LC	
        FaCE	
       FaCE+GR	
                            FaCE+GSC	
  

                                                                                                                                                                                                                                                                                 31
Recovery Performance
•  4.4x faster recovery than the HDD-only approach

   [Chart: restart time breakdown — HDD only: redo time 823;
   FaCE: metadata recovery 2 + redo time 186]

                                                                          32
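The slide's speedup can be checked from its own breakdown: with FaCE, restart is metadata recovery for the flash cache plus a shorter redo pass (hot pages survive in the non-volatile flash cache), versus a full redo on the HDD-only configuration. A minimal sketch of the arithmetic, using the numbers reported on the slide (time units are as reported on the slide, assumed uniform):

```python
# Restart-time comparison using the figures on the recovery slide.
hdd_only_redo = 823      # redo time, HDD-only buffer manager
face_metadata = 2        # rebuilding flash-cache metadata in FaCE
face_redo = 186          # shorter redo pass: hot pages survive on flash

face_total = face_metadata + face_redo
speedup = hdd_only_redo / face_total
print(f"{speedup:.1f}x faster recovery")  # -> 4.4x faster recovery
```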
Contents
•  Introduction
•  Related work
•  Flash as Cache Extension (FaCE)
   –  Design choice
   –  Two optimizations

•  Recovery in FaCE
•  Performance Evaluation
•  Conclusion

                                                                          33
Conclusion
•  We presented a low-overhead caching method
   called FaCE that utilizes flash memory as an
   extension to a DRAM buffer for a recoverable
   database.
•  FaCE maximizes the I/O throughput of a flash
   caching device by turning small random writes
   into large sequential ones.
•  Also, FaCE takes advantage of the non-volatility
   of flash memory to accelerate system restart
   after a failure.
                                                                          34
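The random-to-sequential write conversion summarized in the conclusion can be sketched as an append-only FIFO log on the flash cache: pages evicted from the DRAM buffer are staged and flushed as one large sequential write. A minimal Python sketch (class and parameter names are illustrative, not from the paper; FaCE's actual group size and FIFO management differ):

```python
from collections import deque

GROUP_SIZE = 4  # pages batched into one sequential write (illustrative)

class FlashCacheLog:
    """Toy sketch of FaCE-style caching: pages evicted from the DRAM
    buffer are staged in memory and appended to the flash cache in FIFO
    order as one large sequential write, replacing many small random
    in-place writes with a single multi-page write."""

    def __init__(self):
        self.staging = []          # pages awaiting a group write
        self.flash_log = deque()   # append-only FIFO log on the flash cache
        self.flash_writes = 0      # write calls actually issued to flash

    def evict_from_dram(self, page_id, data):
        self.staging.append((page_id, data))
        if len(self.staging) >= GROUP_SIZE:
            # one sequential multi-page write instead of GROUP_SIZE random ones
            self.flash_log.extend(self.staging)
            self.flash_writes += 1
            self.staging.clear()

cache = FlashCacheLog()
for i in range(8):
    cache.evict_from_dram(i, b"...")
# 8 evicted pages reach flash via only 2 sequential writes
```

The FIFO log also explains why the flash-resident pages are easy to find again after a crash: the log is written sequentially, so its valid region can be rediscovered without scanning the whole device.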
QnA

Epistemic Interaction - tuning interfaces to provide information for AI support
Alan Dix
 
Transcript: Selling digital books in 2024: Insights from industry leaders - T...
Transcript: Selling digital books in 2024: Insights from industry leaders - T...Transcript: Selling digital books in 2024: Insights from industry leaders - T...
Transcript: Selling digital books in 2024: Insights from industry leaders - T...
BookNet Canada
 
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
DanBrown980551
 
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
Sri Ambati
 
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdfFIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
FIDO Alliance
 
A tale of scale & speed: How the US Navy is enabling software delivery from l...
A tale of scale & speed: How the US Navy is enabling software delivery from l...A tale of scale & speed: How the US Navy is enabling software delivery from l...
A tale of scale & speed: How the US Navy is enabling software delivery from l...
sonjaschweigert1
 
Climate Impact of Software Testing at Nordic Testing Days
Climate Impact of Software Testing at Nordic Testing DaysClimate Impact of Software Testing at Nordic Testing Days
Climate Impact of Software Testing at Nordic Testing Days
Kari Kakkonen
 
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdfFIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance
 
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...
SOFTTECHHUB
 
Essentials of Automations: The Art of Triggers and Actions in FME
Essentials of Automations: The Art of Triggers and Actions in FMEEssentials of Automations: The Art of Triggers and Actions in FME
Essentials of Automations: The Art of Triggers and Actions in FME
Safe Software
 
UiPath Community Day Dubai: AI at Work..
UiPath Community Day Dubai: AI at Work..UiPath Community Day Dubai: AI at Work..
UiPath Community Day Dubai: AI at Work..
UiPathCommunity
 
The Metaverse and AI: how can decision-makers harness the Metaverse for their...
The Metaverse and AI: how can decision-makers harness the Metaverse for their...The Metaverse and AI: how can decision-makers harness the Metaverse for their...
The Metaverse and AI: how can decision-makers harness the Metaverse for their...
Jen Stirrup
 
State of ICS and IoT Cyber Threat Landscape Report 2024 preview
State of ICS and IoT Cyber Threat Landscape Report 2024 previewState of ICS and IoT Cyber Threat Landscape Report 2024 preview
State of ICS and IoT Cyber Threat Landscape Report 2024 preview
Prayukth K V
 
Elizabeth Buie - Older adults: Are we really designing for our future selves?
Elizabeth Buie - Older adults: Are we really designing for our future selves?Elizabeth Buie - Older adults: Are we really designing for our future selves?
Elizabeth Buie - Older adults: Are we really designing for our future selves?
Nexer Digital
 
Le nuove frontiere dell'AI nell'RPA con UiPath Autopilot™
Le nuove frontiere dell'AI nell'RPA con UiPath Autopilot™Le nuove frontiere dell'AI nell'RPA con UiPath Autopilot™
Le nuove frontiere dell'AI nell'RPA con UiPath Autopilot™
UiPathCommunity
 
Generative AI Deep Dive: Advancing from Proof of Concept to Production
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionGenerative AI Deep Dive: Advancing from Proof of Concept to Production
Generative AI Deep Dive: Advancing from Proof of Concept to Production
Aggregage
 
Quantum Computing: Current Landscape and the Future Role of APIs
Quantum Computing: Current Landscape and the Future Role of APIsQuantum Computing: Current Landscape and the Future Role of APIs
Quantum Computing: Current Landscape and the Future Role of APIs
Vlad Stirbu
 

Recently uploaded (20)

Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
 
PCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase TeamPCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase Team
 
UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4
 
Epistemic Interaction - tuning interfaces to provide information for AI support
Epistemic Interaction - tuning interfaces to provide information for AI supportEpistemic Interaction - tuning interfaces to provide information for AI support
Epistemic Interaction - tuning interfaces to provide information for AI support
 
Transcript: Selling digital books in 2024: Insights from industry leaders - T...
Transcript: Selling digital books in 2024: Insights from industry leaders - T...Transcript: Selling digital books in 2024: Insights from industry leaders - T...
Transcript: Selling digital books in 2024: Insights from industry leaders - T...
 
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
 
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
 
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdfFIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
 
A tale of scale & speed: How the US Navy is enabling software delivery from l...
A tale of scale & speed: How the US Navy is enabling software delivery from l...A tale of scale & speed: How the US Navy is enabling software delivery from l...
A tale of scale & speed: How the US Navy is enabling software delivery from l...
 
Climate Impact of Software Testing at Nordic Testing Days
Climate Impact of Software Testing at Nordic Testing DaysClimate Impact of Software Testing at Nordic Testing Days
Climate Impact of Software Testing at Nordic Testing Days
 
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdfFIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
 
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...
 
Essentials of Automations: The Art of Triggers and Actions in FME
Essentials of Automations: The Art of Triggers and Actions in FMEEssentials of Automations: The Art of Triggers and Actions in FME
Essentials of Automations: The Art of Triggers and Actions in FME
 
UiPath Community Day Dubai: AI at Work..
UiPath Community Day Dubai: AI at Work..UiPath Community Day Dubai: AI at Work..
UiPath Community Day Dubai: AI at Work..
 
The Metaverse and AI: how can decision-makers harness the Metaverse for their...
The Metaverse and AI: how can decision-makers harness the Metaverse for their...The Metaverse and AI: how can decision-makers harness the Metaverse for their...
The Metaverse and AI: how can decision-makers harness the Metaverse for their...
 
State of ICS and IoT Cyber Threat Landscape Report 2024 preview
State of ICS and IoT Cyber Threat Landscape Report 2024 previewState of ICS and IoT Cyber Threat Landscape Report 2024 preview
State of ICS and IoT Cyber Threat Landscape Report 2024 preview
 
Elizabeth Buie - Older adults: Are we really designing for our future selves?
Elizabeth Buie - Older adults: Are we really designing for our future selves?Elizabeth Buie - Older adults: Are we really designing for our future selves?
Elizabeth Buie - Older adults: Are we really designing for our future selves?
 
Le nuove frontiere dell'AI nell'RPA con UiPath Autopilot™
Le nuove frontiere dell'AI nell'RPA con UiPath Autopilot™Le nuove frontiere dell'AI nell'RPA con UiPath Autopilot™
Le nuove frontiere dell'AI nell'RPA con UiPath Autopilot™
 
Generative AI Deep Dive: Advancing from Proof of Concept to Production
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionGenerative AI Deep Dive: Advancing from Proof of Concept to Production
Generative AI Deep Dive: Advancing from Proof of Concept to Production
 
Quantum Computing: Current Landscape and the Future Role of APIs
Quantum Computing: Current Landscape and the Future Role of APIsQuantum Computing: Current Landscape and the Future Role of APIs
Quantum Computing: Current Landscape and the Future Role of APIs
 

[G2] FaCE (DEVIEW 2012)

  • 1. Flash-Based Extended Cache for Higher Throughput and Faster Recovery
Woon-hak Kang, Sang-won Lee, and Bongki Moon
12. 9. 19.
  • 2. Outline: Introduction; Related work; Flash as Cache Extension (FaCE): design choices, two optimizations; Recovery in FaCE; Performance Evaluation; Conclusion
  • 3. Outline: Introduction; Related work; Flash as Cache Extension (FaCE): design choices, two optimizations; Recovery in FaCE; Performance Evaluation; Conclusion
  • 4. Introduction
Flash Memory Solid State Drive (SSD): NAND flash memory based non-volatile storage.
Characteristics:
– No mechanical parts: low access latency and high random IOPS
– Multi-channel and multi-plane: intrinsic parallelism, high concurrency
– No overwriting: erase-before-overwrite, so read cost << write cost
– Limited life span: bounded # of erasures per flash block
Image from: http://www.legitreviews.com/article/1197/2/
  • 5. Introduction (2)
IOPS (I/Os Per Second) matters in OLTP.
– IOPS/$: SSDs >> HDDs, e.g. SSD 63 (= 28,495 IOPS / $450) vs. HDD 1.7 (= 409 IOPS / $240)
– GB/$: HDDs >> SSDs, e.g. SSD 0.073 (= 32GB / $440) vs. HDD 0.617 (= 146.8GB / $240)
Therefore, it is more sensible to use SSDs to supplement HDDs rather than to replace them: SSDs act as a cache between RAM and HDDs, providing both the performance of SSDs and the capacity of HDDs at as little cost as possible.
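The cost-efficiency ratios on this slide can be rechecked from the raw prices and figures quoted there; the helper name below is illustrative, not from the talk.

```python
# Recompute the slide's cost-efficiency ratios from its own raw numbers.

def per_dollar(value, price_usd):
    """Value delivered per dollar spent."""
    return value / price_usd

ssd_iops_per_dollar = per_dollar(28_495, 450)   # slide: ~63 IOPS/$
hdd_iops_per_dollar = per_dollar(409, 240)      # slide: ~1.7 IOPS/$

ssd_gb_per_dollar = per_dollar(32, 440)         # slide: ~0.073 GB/$
hdd_gb_per_dollar = per_dollar(146.8, 240)      # slide: ~0.6 GB/$

print(f"IOPS/$: SSD {ssd_iops_per_dollar:.1f} vs. HDD {hdd_iops_per_dollar:.1f}")
print(f"GB/$:   SSD {ssd_gb_per_dollar:.3f} vs. HDD {hdd_gb_per_dollar:.3f}")
```

The SSD wins IOPS per dollar by well over an order of magnitude, while the HDD wins capacity per dollar, which is exactly the gap the caching design exploits.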
  • 6. Introduction (3)
A few existing flash-based cache schemes (e.g. Oracle Exadata, IBM, MS): pages cached in SSDs are overwritten in place, so the write pattern on the SSD is random.
Write bandwidth disparity in SSDs: e.g. random write (25MB/s = 6,314 x 4KB/s) vs. sequential write (243MB/s).
– SSD mid A: 28,495 / 6,314 random 4KB IOPS (read/write); 251 / 243 MB/s sequential (read/write); sequential/random write ratio 9.85
– SSD mid B: 35,601 / 2,547 IOPS; 259 / 80 MB/s; ratio 8.04
– HDD single: 409 / 343 IOPS; 156 / 154 MB/s; ratio 114.94
– HDD single (x8): 2,598 / 2,502 IOPS; 848 / 843 MB/s; ratio 86.25
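The table's sequential/random write ratio can be rederived from the raw numbers: the random-write IOPS figure times the 4KB page size gives an effective random-write bandwidth, which is then compared with the sequential-write bandwidth. A quick check (assuming 1MB = 1024KB, which reproduces the table's values):

```python
# Rederive the "Ratio Sequential/Random write" column of the slide's table.
PAGE_KB = 4

def seq_over_random(seq_write_mbps, random_write_iops):
    random_mbps = random_write_iops * PAGE_KB / 1024  # 4KB pages -> MB/s
    return seq_write_mbps / random_mbps

ratio_ssd_a = seq_over_random(243, 6_314)   # table: 9.85
ratio_ssd_b = seq_over_random(80, 2_547)    # table: 8.04
ratio_hdd   = seq_over_random(154, 343)     # table: 114.94
```

The ratio is far larger on HDDs, but even on SSDs sequential writes are roughly 8-10x cheaper than random ones, which is the disparity FaCE targets.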
  • 7. Introduction (4)
FaCE (Flash as Cache Extension): main contributions
– A write-optimized flash cache scheme, e.g. 3x higher throughput than the existing ones
– Faster database recovery by exploiting the non-volatile cached pages in the SSD, e.g. 4x faster recovery time
Diagram: random reads from the SSD come at low cost; writes to the SSD are sequential (high throughput); the non-volatility of the flash cache enables faster recovery; random reads and writes go to the HDD.
  • 8. Contents: Introduction; Related work; Flash as Cache Extension (FaCE): design choices, two optimizations; Recovery in FaCE; Performance Evaluation; Conclusion
  • 9. Related Work
How to adopt SSDs in the DBMS area?
1. SSD as a faster disk:
– VLDB '08, Koltsidas et al., "Flashing Up the Storage Layer"
– VLDB '09, Canim et al., "An Object Placement Advisor for DB2 Using Solid State Storage"
– SIGMOD '08, Lee et al., "A Case for Flash Memory SSD in Enterprise Database Applications"
2. SSD as a DRAM buffer extension:
– VLDB '10, Canim et al., "SSD Bufferpool Extensions for Database Systems"
– SIGMOD '11, Do et al., "Turbocharging DBMS Buffer Pool Using SSDs"
  • 10. Lazy Cleaning (LC) [SIGMOD '11]
– Cache on exit, with a write-back policy
– LRU-based SSD cache replacement policy, which incurs almost random writes against the SSD
– No efficient recovery mechanism provided
Diagram: the RAM buffer (LRU) evicts pages into the SSD flash cache (flash hits; random writes); pages are fetched on miss from the HDD, and dirty pages are staged out to the HDD.
  • 11. Contents: Introduction; Related work; Flash as Cache Extension (FaCE): design choices, two optimizations; Recovery in FaCE; Performance Evaluation; Conclusion
  • 12. FaCE: Design Choices
1. When to cache pages in the SSD?
2. What pages to cache in the SSD?
3. Sync policy between SSD and HDD
4. SSD cache replacement policy
  • 13. Design Choices: When/What/Sync Policy
– When: on entry vs. on exit. FaCE caches on exit, for dirty pages as well as clean pages.
– What: clean vs. dirty vs. both.
– Sync policy: write-through vs. write-back. FaCE uses write-back sync for performance.
Diagram: the RAM buffer (LRU) evicts pages into the Flash as Cache Extension; pages are fetched on miss from the HDD, and dirty pages are staged out to the HDD.
  • 14. Design Choices: SSD Cache Replacement Policy
What to do when a page is evicted from the DRAM buffer and the SSD cache is full? LRU vs. FIFO (First-In-First-Out). Under LRU:
– Write miss: LRU-based victim selection, write-back if the victim is dirty, then overwrite the old victim page with the new page being evicted.
– Write hit: overwrite the old copy in the flash cache with the updated page being evicted.
Both cases incur random writes against the SSD.
  • 15. Design Choices: SSD Cache Replacement Policy
LRU vs. FIFO (First-In-First-Out). Under FIFO:
– Victims are chosen from the rear end of the flash cache, yielding "sequential writes" against the SSD.
– Write hit: no additional action is taken, in order not to incur random writes, so multiple versions of a page may coexist in the SSD cache.
This is the Multi-Version FIFO (mvFIFO).
  • 16. Write Reduction in mvFIFO
Example: three evictions of page P enqueue versions P-v1, P-v2, and P-v3 into the flash cache. At replacement time the invalidated versions (P-v1, P-v2) are simply discarded, and only the latest valid version (P-v3) is chosen as the victim and written back to the HDD, reducing three HDD writes to one.
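The write-reduction example above can be sketched as a toy multi-version FIFO. The class and method names below are illustrative only, not from the FaCE paper or its PostgreSQL implementation.

```python
from collections import deque

class MvFifoCache:
    """Toy multi-version FIFO (mvFIFO) flash cache.

    Evicted DRAM pages are always appended to the rear of the queue, so
    SSD writes stay sequential; an older copy of the same page is merely
    invalidated, never overwritten in place. Victims are taken from the
    other end, and only the latest (valid) dirty version is staged out.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()     # entries: (page_id, version, dirty)
        self.latest = {}         # page_id -> newest version number
        self.hdd_writes = 0      # pages actually staged out to the HDD

    def cache_on_exit(self, page_id, dirty=True):
        """Append a new version of a page evicted from the DRAM buffer."""
        version = self.latest.get(page_id, 0) + 1
        self.latest[page_id] = version
        if len(self.queue) >= self.capacity:
            self.replace_one()
        self.queue.append((page_id, version, dirty))

    def replace_one(self):
        """Take one victim from the oldest end of the FIFO."""
        page_id, version, dirty = self.queue.popleft()
        if version == self.latest[page_id] and dirty:
            self.hdd_writes += 1   # latest valid dirty copy: write back
        # invalidated older versions are simply discarded
```

Caching three successive versions of page P and then draining the queue results in a single HDD write, matching the slide's example.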
  • 17. Design Choices: SSD Cache Replacement Policy
LRU vs. FIFO:
– Write pattern: random (LRU) vs. sequential (FIFO)
– Write performance: low (LRU) vs. high (FIFO)
– # of copies per page: single (LRU) vs. multiple (FIFO)
– Space utilization: high (LRU) vs. low (FIFO)
– Hit ratio & write reduction: high (LRU) vs. low (FIFO)
Trade-off: hit ratio vs. write performance. The write-performance benefit of FIFO far outweighs the performance gain from the higher hit ratio of LRU.
  • 18. mvFIFO: Two Optimizations
– Group Replacement (GR): multiple pages are replaced as a group in order to exploit the internal parallelism of modern SSDs; the replacement depth is limited by the parallelism size (channels x planes). GR improves SSD I/O throughput.
– Group Second Chance (GSC): GR plus a second chance; if a victim candidate page is valid and referenced, it is re-enqueued into the SSD cache. This is a variant of the "clock" replacement algorithm adapted to FaCE. GSC achieves a higher hit ratio and more write reduction.
  • 19. Group Replacement (GR)
When the flash cache becomes full: perform a single group read from the SSD (64/128 pages), check the valid and dirty flags, batch the random writes to the HDD, and perform a single group write to the SSD.
Diagram: 1. fetch on miss from the HDD into the RAM buffer (LRU); 2. evict into the Flash as Cache Extension.
  • 20. Group Second Chance (GSC)
GR plus a second chance: when the flash cache becomes full, check each victim candidate's valid and dirty flags and its reference bit; if the reference bit is on, give the page a second chance by re-enqueueing it.
Diagram: 1. fetch on miss from the HDD into the RAM buffer (LRU); 2. evict into the Flash as Cache Extension.
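One GSC victim pass over the oldest end of the FIFO can be sketched as below. The record layout and function name are assumptions for illustration; the paper's actual bookkeeping lives in the PostgreSQL buffer manager.

```python
from collections import deque

def group_second_chance(queue, latest, group_size):
    """Toy Group Second Chance (GSC) pass over a FIFO flash cache.

    queue  : deque of dicts {page, version, dirty, ref}, oldest on the
             left, new arrivals appended on the right
    latest : page -> newest version number (older versions are invalid)

    Valid pages whose reference bit is set get a second chance: they are
    re-enqueued at the rear with the bit cleared. Valid, unreferenced
    dirty pages are collected for one batched write-back to the HDD.
    Invalidated old versions and clean unreferenced pages are dropped.
    """
    hdd_batch = []
    for _ in range(min(group_size, len(queue))):
        entry = queue.popleft()
        if entry["version"] != latest[entry["page"]]:
            continue                         # invalidated version: discard
        if entry["ref"]:
            entry["ref"] = False             # second chance: re-enqueue
            queue.append(entry)
        elif entry["dirty"]:
            hdd_batch.append(entry["page"])  # stage out in one batch
    return hdd_batch
```

Replacing a whole group at once lets the batched HDD write-backs and the single sequential SSD write exploit the device parallelism GR was introduced for.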
  • 21. Contents: Introduction; Related work; Flash as Cache Extension (FaCE): design choices, two optimizations; Recovery in FaCE; Performance Evaluation; Conclusion
  • 22. Recovery Issues in SSD Cache
With the write-back sync policy, many recent copies of data pages are kept in the SSD, not in the HDD. Therefore, the database in the HDD is in an inconsistent state after a system failure.
Diagram: the SSD holds the new version of page P, while its mapping information (metadata) lives in RAM and is lost at a crash; the HDD holds only the old version of page P, i.e. an inconsistent state.
  • 23. Recovery Issues in SSD Cache
With the write-back sync policy, many recent copies of data pages are kept in the SSD, not in the HDD; the database in the HDD is therefore inconsistent after a system failure. In this situation, one recovery approach with a flash cache is to view the database on the hard disk as the only persistent DB [SIGMOD '11]: periodically checkpoint updated pages from the SSD cache, as well as from the DRAM buffer, to the HDD. This incurs an excessive checkpoint cost.
  • 24. Recovery Issues in SSD Cache (2)
Fortunately, because SSDs are non-volatile, pages cached in the SSD remain alive even after a system failure; only the SSD mapping information is gone. Two approaches to recovering the metadata:
1. Rebuild the lost metadata by scanning all pages cached in the SSD (naive approach): time-consuming full scanning.
2. Write the metadata persistently whenever it changes [DaMoN '11]: run-time overhead for managing metadata persistently (a flush on every update).
  • 25. Recovery in FaCE
Metadata checkpointing: because a data page entering the SSD cache is written to the rear in chronological order, the metadata can be written regularly as a single large mapping segment (64K of page info). Recovery after a crash then amounts to scanning the metadata segments checkpointed to the SSD.
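Because pages enter the cache strictly in FIFO order, recovery can restore the last checkpointed mapping segment and then replay only the cache tail written after it, rather than scanning the whole SSD. A minimal sketch under that assumption; `recover_mapping` is a hypothetical helper, not the paper's code.

```python
def recover_mapping(checkpointed, appended_since_checkpoint):
    """Rebuild the SSD page-mapping table after a crash.

    checkpointed : page -> ssd_slot mapping read back from the last
                   metadata segment checkpointed to the SSD
    appended_since_checkpoint : (page, ssd_slot) pairs obtained by
                   scanning only the cache region written after that
                   checkpoint, in arrival (FIFO) order
    """
    mapping = dict(checkpointed)
    for page, slot in appended_since_checkpoint:
        mapping[page] = slot   # later FIFO entries supersede earlier ones
    return mapping
```

The FIFO ordering is what makes the replay safe: the newest entry for a page is always the last one encountered in the scan.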
  • 26. Contents: Introduction; Related work; Flash as Cache Extension (FaCE): design choices, two optimizations; Recovery in FaCE; Performance Evaluation; Conclusion
  • 27. Experimental Set-Up
FaCE implementation in PostgreSQL:
– 3 functions in the buffer manager: bufferAlloc(), getFreeBuffer(), bufferSync()
– 2 functions in bootstrap, for recovery: startupXLOG(), initBufferPool()
Experiment setup:
– CentOS Linux; Intel Core i7-860 2.8GHz (quad core); 4GB DRAM
– Disks: 8 RAIDed 15k-rpm Seagate SAS HDDs (146.8GB)
– SSD: Samsung MLC (256GB)
Workloads: TPC-C with 500 warehouses (50GB) and 50 concurrent clients, driven by BenchmarkSQL.
  • 28. Transaction Throughput
Chart: transactions per minute vs. |flash cache|/|database| (4%-28%) for HDD only, SSD only, LC, FaCE, FaCE+GR, and FaCE+GSC. Relative to HDD only, LC reaches about 1.5x, FaCE-basic about 2.1x-2.6x, FaCE+GR up to 3.1x, and FaCE+GSC up to 3.9x.
  • 29. Hit Ratio, Write Reduction, and I/O Throughput
Chart: flash cache hit ratio (%) vs. flash cache size (2GB-10GB) for LC, FaCE, FaCE+GR, and FaCE+GSC; LC shows the highest hit ratio, followed by FaCE+GSC, FaCE-basic, and FaCE+GR.
  • 30. Hit Ratio, Write Reduction, and I/O Throughput
Chart: write reduction ratio by flash cache (%) vs. flash cache size (2GB-10GB) for LC, FaCE, FaCE+GR, and FaCE+GSC.
  • 31. Hit Ratio, Write Reduction, and I/O Throughput
Chart: throughput of 4KB-page I/O vs. flash cache size (2GB-10GB) for LC, FaCE, FaCE+GR, and FaCE+GSC; FaCE+GSC is highest, followed by FaCE+GR and FaCE-basic, with LC lowest.
  • 32. Recovery Performance
4.4x faster recovery than the HDD-only approach. Metadata recovery: 2; redo times: 823 (HDD only) vs. 186 (FaCE).
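The 4.4x figure follows from the two redo times quoted on the slide, assuming 823 is the HDD-only redo time and 186 the FaCE redo time (the slide transcription leaves that pairing implicit):

```python
# Sanity check of the quoted recovery speedup from the slide's redo times.
hdd_only_redo = 823
face_redo = 186
speedup = hdd_only_redo / face_redo
print(f"{speedup:.1f}x")
```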
  • 33. Contents: Introduction; Related work; Flash as Cache Extension (FaCE): design choices, two optimizations; Recovery in FaCE; Performance Evaluation; Conclusion
  • 34. Conclusion
We presented a low-overhead caching method called FaCE that utilizes flash memory as an extension to the DRAM buffer for a recoverable database. FaCE maximizes the I/O throughput of a flash caching device by turning small random writes into large sequential ones. FaCE also takes advantage of the non-volatility of flash memory to accelerate system restart after a failure.
  • 35. Q&A