Bit-isle's three years footprint with Ceph
Ikuo Kumagai – Bit-isle Equinix Inc.
ビットアイル・エクイニクス株式会社 (Bit-isle Equinix, Inc.) – Company Overview
■ Capital: 3,569 million yen
■ Representative: Representative Director 古田敬
■ Established: June 14, 2000
■ Address: T Building, 2-2-28 Higashi-Shinagawa, Shinagawa-ku, Tokyo 140-0002
  Tel: 03-5805-8151 (main) / Fax: 03-3474-5540
■ URL: http://www.bit-isle.jp
■ Shareholder: QAON G.K. (Equinix Group)
■ Main services: data center services, cloud services, operation services, system integration
■ Group companies:
Open my Stack
•Ikuo Kumagai (@kumagai19o)
 • Blog: Bit-isle R&D institute blog (Japanese only)
Career
• 2012~ OpenStack R&D & providing hosted private OpenStack
• 2011~ Developing a cloud system based on VMware and BIG-IP
• 2007~ Designing a financial system infrastructure
• 2006~ Developing a job scheduler package
• 2004~ Production control system developer
• 2001~ Visual Basic/Java programmer
Bit-isle's OpenStack History
2011
• POC with Midokura
2012/2013
• POC (Essex)
• Swift β (in-house)
• RD-Cloud service (Folsom, in-house)
• POC of Ceph
2014
•RD-Cloud-1 service (Havana, in-house)
•OpenStack training (in-house)
2015
•OpenStack training (external)
•RD-Cloud-2 service (Juno, in-house)
•Joined OpenStack Days Tokyo 2015
•Gave a session at OpenStack Summit Tokyo
•Started hosting a customer's OpenStack cloud (Kilo)
2016
Our Ceph Environment and History
• Our Ceph history has run almost in parallel with our OpenStack history since 2013.
  Each environment went through a POC period.

Timeline (2013–2016):
• RD-Cloud-1 for Develop (POC/Testing): OpenStack Havana & Ceph Dumpling
• RD-Cloud-2 (for Staging & Develop): OpenStack Juno & Ceph Giant
• Customer-Cloud for Production: OpenStack Kilo & Ceph Hammer
Table of Contents
•Why do we choose and use Ceph?
•The three environments in operation
•Things that I am thinking now
Why do we choose and use Ceph?
Why OpenStack users need Ceph storage
• Ceph is popular enough in the OpenStack community.
This is why we need Ceph
•We need storage that does not saturate on IOPS.
‣An LVM Cinder backend can be saturated by a single workload.
•We have no budget and no storage appliance, but we do have some servers.
•We also do not have enough engineers to operate troublesome storage.
•Even so, we want to provide an OpenStack environment for our employees.
First POC
•Points
‣Deploy, basic storage features
‣Cooperation with OpenStack (a pool/auth sketch follows this slide)
‣Fault tolerance

Hardware:
Type | Qty | Spec | Notes
Ceph OSD | 3 | CPU 12 core, Memory 96 GB, 512 GB HDD ×4, 10G NIC ×2 | HDD usage: ×1 for journal, ×3 for OSDs
MON/MDS | 1 | CPU 12 core, Memory 96 GB, 10G NIC ×2 | ceph-mds and cinder-volume on the same node (OpenStack volume & image)
OpenStack Compute | 2 | CPU 12 core, Memory 96 GB, 10G NIC ×2 |

[Diagram: three MON+OSD nodes (MON ×3, OSD.0–8) joined by a 10 Gbps Ceph public NW and a 10 Gbps Ceph cluster NW; two Nova Compute nodes and the volume/image/MDS node attach via the OpenStack internal network (100 Mbps)]
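As a concrete illustration of the "cooperation with OpenStack" point, below is a minimal sketch of the usual RBD pool and cephx key setup from the upstream Ceph/OpenStack integration guide. The pool names, PG counts, and capability strings are illustrative assumptions; the deck does not show the exact configuration used in this POC.

    #!/usr/bin/env python3
    """Minimal sketch (assumed values): create the RBD pools Cinder and
    Glance will use, plus a cephx key, per the usual integration steps."""
    import subprocess

    def ceph(*args):
        subprocess.check_call(["ceph", *args])

    ceph("osd", "pool", "create", "volumes", "128")  # Cinder volumes
    ceph("osd", "pool", "create", "images", "128")   # Glance images

    # Key for cinder-volume; capabilities follow the upstream
    # "Block Devices and OpenStack" guide.
    ceph("auth", "get-or-create", "client.cinder",
         "mon", "allow r",
         "osd", "allow class-read object_prefix rbd_children, "
                "allow rwx pool=volumes, allow rwx pool=images")

cinder-volume and glance-api would then point their rbd drivers at these pools in their respective configuration files.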
Result of First POC
•Deploy / Fine (by ceph-deploy)
•Fault tolerance / Very good (requires a 10GbE network; a probe sketch follows this slide)
•Cooperation with OpenStack / Very good
•Operability / Very good (no rebalance operation needed)
•Performance
‣Parallelism / Very good
‣Peak performance / not as high as expected
‣This result is good enough to use Ceph for our test environment.
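The fault-tolerance rating came from pulling OSDs out of the cluster and watching it recover. The test harness is not in the deck; a hypothetical minimal probe, assuming the ceph CLI and an admin keyring on the observing machine, could look like this:

    #!/usr/bin/env python3
    """Poll cluster health until it returns to HEALTH_OK, printing a
    timestamped trace. Stop an OSD by hand first, then time the recovery
    (hypothetical sketch)."""
    import json
    import subprocess
    import time

    def cluster_health():
        out = subprocess.check_output(["ceph", "status", "--format", "json"])
        health = json.loads(out)["health"]
        # The key name differs across releases (assumption: handle both).
        return health.get("overall_status") or health.get("status")

    while True:
        status = cluster_health()
        print(time.strftime("%H:%M:%S"), status)
        if status == "HEALTH_OK":
            break
        time.sleep(10)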
About the three environments in operation
RD-Cloud-1 (for Develop) – outline
‣OpenStack Havana & Ceph Dumpling
‣12 OpenStack Compute nodes
‣3-node Ceph cluster, 4.5 TB raw / 2 replicas – effective 2.2 TB (see the capacity sketch after this slide)
‣All HDD / 10G network for the cluster network only
‣Hand-built (ceph-deploy) on Ubuntu 13.10

[Diagram: 3 OpenStack Controllers and 12 OpenStack Compute nodes reach the cluster over a 1 Gbps Ceph public network; 3 Ceph nodes, each running a MON and 3× 0.5 TB HDD OSDs with a SATA HDD journal, interconnect over a 10 Gbps Ceph cluster network]
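The "effective" figures quoted for all three environments follow directly from dividing raw capacity by the replica count; the toy calculation below reproduces them (actual usable space is a little lower once headroom is reserved):

    # Usable capacity of a replicated Ceph pool is roughly raw / replicas.
    def effective_tb(raw_tb: float, replicas: int) -> float:
        return raw_tb / replicas

    print(effective_tb(4.5, 2))    # RD-Cloud-1: 2.25, quoted as ~2.2 TB
    print(effective_tb(15.0, 3))   # RD-Cloud-2: 5 TB
    print(effective_tb(180.0, 3))  # Customer production: 60 TB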
RD-Cloud-1 (for Develop) – Point
•Design
‣Adding a Compute node must not require a 10GbE NIC/port.
‣Storage performance is not so important, but not stopping is.
‣It is also important that no data is lost.
•Result
‣No data lost
‣No I/O stops
‣Almost operation-free
‣Performance is not good.
RD-Cloud-2 (for Staging & Develop) – outline
‣OpenStack Juno & Ceph Giant
‣10 OpenStack Compute nodes
‣5-node Ceph cluster, 15 TB raw / 3 replicas – effective 5 TB
‣SATA SSD for journal, HDD for OSDs / 10G network for the cluster network only
‣Hand-built (ceph-deploy) on CentOS 7 (a bootstrap sketch follows this slide)

[Diagram: 3 OpenStack Controllers and 10 OpenStack Compute nodes on a 1 Gbps Ceph public network; 5 Ceph nodes, each running a MON and 3× 1 TB HDD OSDs with a SATA SSD journal, interconnect over a 10 Gbps Ceph cluster network]
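Both RD-Cloud clusters were hand-built with ceph-deploy, though the exact commands are not in the deck. A sketch of the usual ceph-deploy flow of that era, for a layout like RD-Cloud-2's (hostnames and device paths are assumptions):

    #!/usr/bin/env python3
    """Hypothetical bootstrap of a 5-node cluster: three HDD OSDs per
    node, journals on partitions of a shared SATA SSD (/dev/sde)."""
    import subprocess

    NODES = ["ceph1", "ceph2", "ceph3", "ceph4", "ceph5"]  # assumed names

    def run(*cmd):
        print("+", " ".join(cmd))
        subprocess.check_call(cmd)

    run("ceph-deploy", "new", *NODES)            # write initial ceph.conf
    run("ceph-deploy", "install", *NODES)        # install ceph packages
    run("ceph-deploy", "mon", "create-initial")  # form the monitor quorum

    for node in NODES:
        # sdb..sdd carry data; journal partitions live on the SSD.
        for i, disk in enumerate(["sdb", "sdc", "sdd"], start=1):
            run("ceph-deploy", "osd", "prepare", f"{node}:{disk}:/dev/sde{i}")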
RD-Cloud-2 (for Staging & Develop) – point
•Design
‣Performance should be better than RD-Cloud-1.
•Result
‣IOPS results are below (a driver sketch follows the table).

Size | seq-read | seq-write | rand-read | rand-write
4k | 13,721 | 4,243 | 13,538 | 1,063
8k | 12,635 | 3,701 | 12,294 | 1,009
16k | 6,830 | 2,827 | 6,831 | 877
32k | 3,516 | 2,135 | 3,431 | 655
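Numbers like these are typically collected with fio, one run per block size and access pattern. The actual job parameters are not in the deck; a minimal driver under assumed settings (libaio, direct I/O, 60-second timed runs against a file on an RBD-backed volume) might look like:

    #!/usr/bin/env python3
    """Run fio across block sizes/patterns and print IOPS (hypothetical
    parameters; assumes fio is installed and /mnt/test sits on Ceph)."""
    import json
    import subprocess

    for bs in ["4k", "8k", "16k", "32k"]:
        for rw in ["read", "write", "randread", "randwrite"]:
            out = subprocess.check_output([
                "fio", "--name=bench", "--filename=/mnt/test/fio.dat",
                "--rw=" + rw, "--bs=" + bs, "--size=1G",
                "--ioengine=libaio", "--direct=1",
                "--runtime=60", "--time_based",
                "--output-format=json",
            ])
            job = json.loads(out)["jobs"][0]
            side = "read" if rw.endswith("read") else "write"
            print(bs, rw, round(job[side]["iops"]))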
Customer's Production – outline
‣OpenStack Kilo & Ceph Hammer
‣30 OpenStack Compute nodes
‣30 Ceph cluster nodes, 180 TB raw / 3 replicas – effective 60 TB
‣Both roles deployed on the same physical servers
‣PCIe SSD journals / 40G network for everything
‣Deployed with Juju/MAAS
Customer's Production – Basic Structure
•Network devices
‣1 × 40G network for all service traffic
‣1 × 1G network for IPMI
•OpenStack nodes
‣Control and NW nodes (3 in total)
‣Compute and Storage nodes (30 in total)
•Deployment node
‣Juju/MAAS server

[Diagram: a router joins the OpenStack segment and the IPMI segment; on them sit the 3 CTRL/NW nodes, the 30 Compute/OSD nodes, and the MAAS/Juju deployment node]
Customer's Production – detail
•Resource servers (hyper-converged: hypervisor, OpenStack components, and Ceph share each node over 40 Gb Ethernet)

Server fundamentals:
Server: HP ProLiant DL360 Gen9
CPU: E5-2690v3 2.60 GHz, 1P/12C × 2
HDD: SAS 1 TB × 2 (RAID1, for OS); SAS 1 TB × 6 (RAID0, for OSDs)
Memory: 96 GB per node
PCIe SSD: Fusion-io ioMemory 1.6 TB × 1, for journal
40 Gbps NIC: Mellanox ConnectX-3 Pro

[Diagram: each resource server runs KVM guests alongside its OpenStack components and HDD-backed OSDs journaled on the PCIe SSD; three of the nodes also run a Ceph MON; all nodes share the 40 Gb Ethernet Ceph cluster network]
Deployed by Juju/MAAS
‣Deployed by Juju/MAAS
 • Same as the other OpenStack components
 • Parameters are set by Juju charms (with Canonical support); a sketch follows
- reason -
‣To avoid depending on individual skill.
‣To reduce operating cost when hardware fails.
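The juju commands themselves are not listed in the deck. As a rough sketch, deploying and parameterising the Ceph charms looks like the following; the charm names and option keys follow the upstream OpenStack charms, and the juju version, unit counts, and device layout here are assumptions:

    #!/usr/bin/env python3
    """Hypothetical Juju driving to stand up Ceph on MAAS-managed machines."""
    import subprocess

    def juju(*args):
        subprocess.check_call(["juju", *args])

    juju("deploy", "ceph-mon", "-n", "3")   # monitors on the CTRL/NW nodes
    juju("deploy", "ceph-osd", "-n", "30")  # OSDs on compute/storage nodes

    # Point the OSD charm at the data disks and the PCIe-SSD journal
    # (option keys as in the upstream ceph-osd charm).
    juju("config", "ceph-osd",
         "osd-devices=/dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg",
         "osd-journal=/dev/fioa")

    juju("add-relation", "ceph-osd", "ceph-mon")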
Customer's Production – point
•Design
‣Designed like hyper-converged
 • To save on the number of servers.
•Performance
‣fio results are below (summary across 100 parallel VMs)

Total across 100 VMs (IOPS):
Block size | SeqRead | SeqWrite | RandRead | RandWrite
4k | 333,286 | 43,216 | 211,394 | 31,121
8k | 333,255 | 50,218 | 223,061 | 30,274
16k | 295,515 | 46,719 | 220,171 | 17,791
32k | 212,678 | 52,005 | 179,464 | 14,457

Per VM (IOPS):
Block size | SeqRead | SeqWrite | RandRead | RandWrite
4k | 3,333 | 432 | 2,114 | 311
8k | 3,333 | 502 | 2,231 | 303
16k | 2,955 | 467 | 2,202 | 178
32k | 2,127 | 520 | 1,795 | 145
Customer's Production – problem
•A problem arose when deep-scrub concentrated on 9,000 of the cluster's 19,000 PGs at once.
‣VM I/O stopped temporarily
 • Scrub traffic temporarily hit 600 ksps (kilo-sectors/sec, ≈ 300 MB/sec)
 • We wrote a script that runs deep-scrub on our own schedule (sketch below).
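The scheduling script itself is not included in the deck; the sketch below reconstructs the idea: run from cron, deep-scrub only a small batch of the PGs whose last deep-scrub stamp is oldest, so the load never concentrates. Field names follow the JSON output of "ceph pg dump"; in practice "osd deep scrub interval" would also be raised so Ceph's built-in scheduler stays out of the way.

    #!/usr/bin/env python3
    """Hypothetical deep-scrub scheduler: scrub the BATCH stalest PGs."""
    import json
    import subprocess

    BATCH = 20  # deep-scrub at most this many PGs per cron run

    out = subprocess.check_output(["ceph", "pg", "dump", "--format", "json"])
    data = json.loads(out)
    # Older releases put pg_stats at the top level, newer ones under
    # "pg_map" (assumption: adjust for your release).
    pgs = data.get("pg_map", data)["pg_stats"]

    pgs.sort(key=lambda pg: pg["last_deep_scrub_stamp"])  # stalest first
    for pg in pgs[:BATCH]:
        subprocess.check_call(["ceph", "pg", "deep-scrub", pg["pgid"]])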
Things that I am thinking now
Current Worries
•Capex
‣A production Ceph cluster is cheap enough, but…
 • That presumes you already have plenty of servers, SSDs, and network gear above 10GbE, plus the space (racks and power).
•Opex
‣A Ceph cluster needs little operating effort, but…
 • That presumes hardware failures never happen and software troubles never appear.
‣If a customer's initial size is well served by a conventional storage appliance, should we still recommend Ceph to them?
My wishes
•Start small, scale limitlessly
‣In both size and IOPS
‣In both Capex (especially initial cost) and Opex
 • Appliance storage is too expensive for startup users.
 • But it becomes necessary once they grow big.
•Isolate the impact of Ceph processes from KVM
‣For the hyper-converged use case.
•Ceph should be more popular.
‣Operating Ceph storage is currently not easy for our members.
•Ceph storage status monitoring services
ビットアイル・エクイニクス株式会社 (Bit-isle Equinix, Inc.)
TEL 03-5805-8154 / FAX 03-3474-5538 / URL http://www.bit-isle.jp/