iSCSI Target Support for Ceph
Lin Pei Feng
pflin@suse.com
Ceph Day Shanghai 2015-10-18
2
Ceph RADOS Block Device (RBD)
Introduction
• Block device backed by RADOS objects
‒ Objects replicated across Ceph OSDs
• Thin provisioned
• Online resize
• Snapshots and clones
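Each of these features maps directly onto the rbd command line. A minimal sketch (pool and image names are illustrative, and a running Ceph cluster with an 'rbd' pool is assumed):

```shell
# Create a thin-provisioned 10 GiB image in the 'rbd' pool
rbd create rbd/archive --size 10240

# Grow it online to 20 GiB
rbd resize rbd/archive --size 20480

# Take a snapshot, protect it, then clone it
rbd snap create rbd/archive@snap1
rbd snap protect rbd/archive@snap1
rbd clone rbd/archive@snap1 rbd/archive-clone
```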
3
Ceph RADOS Block Device (RBD)
Limitation
• Linux kernel or librbd clients
‒ Usage restricted to a subset of operating systems and applications
Ceph Storage Cluster (RADOS)
Librados
Ceph Block Device
(RBD)
Virtual Disk
Host/VM (Linux)
4
Ceph RADOS Block Device (RBD)
Idea
• Heterogeneous OS access
• Easy to use
Ceph Storage Cluster (RADOS)
Librados
Ceph Block Device
(RBD)
Virtual Disk
Linux Windows Unix VMware
6
iSCSI gateway for Ceph RBD
• Expose benefits of Ceph RBD to other systems
‒ No requirement for Ceph-aware applications or operating systems
• Standardised interface
‒ Mature and trusted protocol (RFC 3720)
• iSCSI initiator implementations are widespread
‒ Provided with most modern operating systems
Client
(iSCSI initiator)
Gateway
(iSCSI target) Ceph Cluster
7
iSCSI Target
• LIO – Linux IO Target
• In-kernel SCSI target implementation
‒ Support for a number of SCSI transports
‒ iSCSI, as well as iSER, FC and others
‒ Pluggable storage backend
‒ iblock, as well as fileio, pscsi and others
• Flexible configuration
‒ targetcli utility
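By way of orientation, a single-node LIO setup through targetcli looks roughly like the following sketch. The rbd backstore used here ships with the SUSE kernel rather than the stock upstream LIO tree, and its create arguments are shown by analogy with iblock, so treat the exact syntax as an assumption; the IQN and image names are illustrative:

```shell
# Map the image so it appears under /dev/rbd/<pool>/<image>
rbd map rbd/archive

# Create an rbd backstore object backed by the mapped device
targetcli /backstores/rbd create name=archive dev=/dev/rbd/rbd/archive

# Create an iSCSI target and export the backstore as a LUN
targetcli /iscsi create iqn.2003-01.org.linux-iscsi.generic.x86:sn.abcdefghijk
targetcli /iscsi/iqn.2003-01.org.linux-iscsi.generic.x86:sn.abcdefghijk/tpg1/luns \
    create /backstores/rbd/archive
```

lrbd (below) automates exactly this kind of sequence across multiple gateway nodes.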
8
LIO + RBD iSCSI gateway
Overview
OSD1 OSD2 OSD3 OSD4
LIO iSCSI
Target
RBD
Image
iSCSI
Initiator
LIO rbd
backstore
rbd.ko
9
RBD
Image
LIO + RBD Cluster iSCSI gateway
LIO iSCSI
Target
LIO iSCSI
Target
RBD
module
RBD
module
iSCSI
Initiator
OSD1 OSD2 OSD3 OSD4
10
LIO + RBD Cluster iSCSI gateway
• LIO target configured with iSCSI transport fabric
• RBD backstore module
‒ Translates SCSI IO into Ceph OSD requests
‒ Special handling of operations that require exclusive device access
‒ Atomic COMPARE AND WRITE
‒ SCSI reservations
‒ LUN reset
• lrbd: multi-node configuration utility
‒ Applies iSCSI target configuration across multiple gateways via targetcli
11
LIO + RBD iSCSI gateway
Cluster support
• Allows for initiator access via redundant paths
‒ iSCSI gateway node with multiple network adapters
‒ Protection from network adapter failure
‒ Multiple iSCSI gateways exporting same RBD image
‒ Protection from entire gateway failure
• Initiator responsible for utilisation of redundant paths
‒ Available paths advertised in iSCSI discovery exchange
‒ May choose to round-robin IO or failover/failback
12
iSCSI Initiators
• open-iscsi
‒ Default iSCSI initiator shipped with SLES 10 and later
‒ Multipath supported in combination with dm-multipath
• Microsoft iSCSI initiator
‒ Installed by default from Windows Server 2008 and later
‒ Supports MPIO in recent versions
• VMware ESX
‒ Concurrent clustered filesystem (VMFS) access from multiple initiators
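On the Linux side, connecting to a gateway with open-iscsi takes two commands: discover, then log in. A sketch against the single-gateway example below (portal address and IQN are the ones used in that example):

```shell
# Discover targets advertised by the gateway portal
iscsiadm -m discovery -t sendtargets -p 172.16.1.16

# Log in to the discovered target
iscsiadm -m node \
    -T iqn.2003-01.org.linux-iscsi.generic.x86:sn.abcdefghijk \
    -p 172.16.1.16 --login

# The exported LUN now appears as an ordinary local SCSI disk
lsscsi
```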
13
A Simple iSCSI gateway for Ceph RBD
14
lrbd configuration
{
  "pools": [
    {
      "pool": "rbd",
      "gateways": [
        {
          "host": "igw1",
          "tpg": [
            {
              "portal": "portal1",
              "image": "archive"
            }
          ]
        }
      ]
    }
  ],
  "targets": [
    {
      "host": "igw1",
      "target": "iqn.2003-01.org.linux-iscsi.generic.x86:sn.abcdefghijk"
    }
  ],
  "portals": [
    {
      "name": "portal1",
      "addresses": [ "172.16.1.16" ]
    }
  ],
  "auth": [
    {
      "host": "igw1",
      "authentication": "tpg",
      "tpg": {
        "userid": "common1",
        "password": "pass1"
      }
    }
  ]
}
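The sections of an lrbd config cross-reference each other: every portal named in a tpg entry must be defined under "portals", and every gateway host needs a matching "targets" entry. A small sketch of such a sanity check (this helper is ours, not part of lrbd, and it only covers the fields used in the example above):

```python
import json


def check_lrbd_config(cfg):
    """Check the cross-references in an lrbd-style config dict.

    Illustrative only; returns a list of error strings, empty if the
    portal and target references line up.
    """
    portals = {p["name"] for p in cfg.get("portals", [])}
    hosts_with_target = {t["host"] for t in cfg.get("targets", []) if "host" in t}
    errors = []
    for pool in cfg.get("pools", []):
        for gw in pool.get("gateways", []):
            host = gw.get("host")
            if host and host not in hosts_with_target:
                errors.append(f"gateway host {host!r} has no entry in 'targets'")
            for tpg in gw.get("tpg", []):
                portal = tpg.get("portal")
                if portal and portal not in portals:
                    errors.append(f"portal {portal!r} not defined in 'portals'")
    return errors


# The single-gateway example from above
cfg = json.loads("""
{ "pools": [ { "pool": "rbd",
    "gateways": [ { "host": "igw1",
      "tpg": [ { "portal": "portal1", "image": "archive" } ] } ] } ],
  "targets": [ { "host": "igw1",
      "target": "iqn.2003-01.org.linux-iscsi.generic.x86:sn.abcdefghijk" } ],
  "portals": [ { "name": "portal1", "addresses": [ "172.16.1.16" ] } ],
  "auth": [ { "host": "igw1", "authentication": "tpg",
      "tpg": { "userid": "common1", "password": "pass1" } } ] }
""")
print(check_lrbd_config(cfg))  # an empty list means the references line up
```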
15
Apply configuration
• # lrbd -f ~/simple.json
• # lrbd
• # targetcli ls
o- / ..................................................................... [...]
o- backstores .......................................................... [...]
| o- fileio ............................................... [0 Storage Object]
| o- iblock ............................................... [0 Storage Object]
| o- pscsi ................................................ [0 Storage Object]
| o- rbd .................................................. [1 Storage Object]
| | o- archive .............................. [/dev/rbd/rbd/archive activated]
| o- rd_mcp ............................................... [0 Storage Object]
o- ib_srpt ....................................................... [0 Targets]
o- iscsi .......................................................... [1 Target]
| o- iqn.2003-01.org.linux-iscsi.generic.x86:sn.abcdefghijk .......... [1 TPG]
| o- tpg1 ........................................................ [enabled]
| o- acls ........................................................ [1 ACL]
| | o- iqn.1996-04.de.suse:01:e6ca28cc9f20 ................ [1 Mapped LUN]
| | o- mapped_lun0 ......................................... [lun0 (rw)]
| o- luns ........................................................ [1 LUN]
| | o- lun0 ......................... [rbd/archive (/dev/rbd/rbd/archive)]
| o- portals .................................................. [1 Portal]
| o- 172.16.1.16:3260 .............................. [OK, iser disabled]
o- loopback ...................................................... [0 Targets]
o- qla2xxx ....................................................... [0 Targets]
o- tcm_fc ........................................................ [0 Targets]
o- vhost ......................................................... [0 Targets]
16
Simple gateway for Ceph RBD
17
Multiple iSCSI gateways for Ceph RBD
18
lrbd configuration
"pools": [
{
"pool": "rbd",
"gateways": [
{
"target": "iqn.2003-01.org.linux-
iscsi.igw.x86:sn.redundant",
"tpg": [
{
"image": "city"
}
]
}
]
}
]
"targets": [
{
"hosts": [
{ "host": "igw1", "portal": "portal1" },
{ "host": "igw2", "portal": "portal2" }
],
"target": "iqn.2003-01.org.linux-iscsi.igw.x86:sn.redundant"
}
],
"portals": [
{
"name": "portal1",
"addresses": [ "172.16.1.16"]
},
{
"name": "portal2",
"addresses": [ "172.16.1.17" ]
}
],
"auth": [
{
"authentication": "none",
"target": "iqn.2003-01.org.linux-
iscsi.igw.x86:sn.redundant"
}
],
19
Apply configuration
• # targetcli ls
o- / ..................................................................... [...]
o- backstores .......................................................... [...]
| o- fileio ............................................... [0 Storage Object]
| o- iblock ............................................... [0 Storage Object]
| o- pscsi ................................................ [0 Storage Object]
| o- rbd .................................................. [1 Storage Object]
| | o- city .................................... [/dev/rbd/rbd/city activated]
| o- rd_mcp ............................................... [0 Storage Object]
o- ib_srpt ....................................................... [0 Targets]
o- iscsi .......................................................... [1 Target]
| o- iqn.2003-01.org.linux-iscsi.igw.x86:sn.redundant ............... [2 TPGs]
| o- tpg1 ........................................................ [enabled]
| | o- acls ....................................................... [0 ACLs]
| | o- luns ........................................................ [1 LUN]
| | | o- lun0 ............................... [rbd/city (/dev/rbd/rbd/city)]
| | o- portals .................................................. [1 Portal]
| | o- 172.16.1.16:3260 .............................. [OK, iser disabled]
| o- tpg2 ....................................................... [disabled]
| o- acls ....................................................... [0 ACLs]
| o- luns ........................................................ [1 LUN]
| | o- lun0 ............................... [rbd/city (/dev/rbd/rbd/city)]
| o- portals .................................................. [1 Portal]
| o- 172.16.1.17:3260 .............................. [OK, iser disabled]
o- loopback ...................................................... [0 Targets]
o- qla2xxx ....................................................... [0 Targets]
o- tcm_fc ........................................................ [0 Targets]
o- vhost ......................................................... [0 Targets]
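From the initiator side, discovery against either gateway returns both portals (one per TPG), and dm-multipath can then bind the two sessions into a single device. A sketch (device names and output depend on the initiator host):

```shell
# Discovery should list both portals for the redundant target
iscsiadm -m discovery -t sendtargets -p 172.16.1.16

# Log in to all discovered paths for the target
iscsiadm -m node \
    -T iqn.2003-01.org.linux-iscsi.igw.x86:sn.redundant --login

# With dm-multipath running, both paths appear under one map
multipath -ll
```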
20
Multiple gateways for Ceph RBD
DEMO
Thank you.
22
Questions?
For more, visit:
https://github.com/SUSE/lrbd/wiki
Corporate Headquarters
Maxfeldstrasse 5
90409 Nuremberg
Germany
+49 911 740 53 0 (Worldwide)
www.suse.com
Join us on:
www.opensuse.org
23
Unpublished Work of SUSE LLC. All Rights Reserved.
This work is an unpublished work and contains confidential, proprietary and trade secret information of SUSE LLC.
Access to this work is restricted to SUSE employees who have a need to know to perform tasks within the scope of
their assignments. No part of this work may be practiced, performed, copied, distributed, revised, modified, translated,
abridged, condensed, expanded, collected, or adapted without the prior written consent of SUSE.
Any use or exploitation of this work without authorization could subject the perpetrator to criminal and civil liability.
General Disclaimer
This document is not to be construed as a promise by any participating company to develop, deliver, or market a
product. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making
purchasing decisions. SUSE makes no representations or warranties with respect to the contents of this document,
and specifically disclaims any express or implied warranties of merchantability or fitness for any particular purpose. The
development, release, and timing of features or functionality described for SUSE products remains at the sole
discretion of SUSE. Further, SUSE reserves the right to revise this document and to make changes to its content, at
any time, without obligation to notify any person or entity of such revisions or changes. All SUSE marks referenced in
this presentation are trademarks or registered trademarks of Novell, Inc. in the United States and other countries. All
third-party trademarks are the property of their respective owners.

[0 Targets] o- tcm_fc ........................................................ [0 Targets] o- vhost ......................................................... [0 Targets]
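Note how lrbd created both TPGs on this gateway but enabled only the one backed by the local portal (tpg1 for 172.16.1.16; on igw2 the roles are reversed). The helper below reproduces that host-to-portal mapping from the "targets" and "portals" sections; the enable/disable rule is inferred from the targetcli output above, not from lrbd documentation.

```python
def local_portal_addresses(config, hostname):
    """Return the portal addresses a given gateway host serves locally."""
    portals = {p["name"]: p["addresses"] for p in config["portals"]}
    addresses = []
    for target in config["targets"]:
        for entry in target.get("hosts", []):
            if entry["host"] == hostname:
                addresses.extend(portals[entry["portal"]])
    return addresses

# The "targets" and "portals" sections from the redundant example above.
config = {
    "targets": [
        {
            "hosts": [
                {"host": "igw1", "portal": "portal1"},
                {"host": "igw2", "portal": "portal2"},
            ],
            "target": "iqn.2003-01.org.linux-iscsi.igw.x86:sn.redundant",
        }
    ],
    "portals": [
        {"name": "portal1", "addresses": ["172.16.1.16"]},
        {"name": "portal2", "addresses": ["172.16.1.17"]},
    ],
}

print(local_portal_addresses(config, "igw1"))  # ['172.16.1.16']
print(local_portal_addresses(config, "igw2"))  # ['172.16.1.17']
```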
  • 20. DEMO
  • 21. 22 Thank you.
  Questions?
  For more, visit: https://github.com/SUSE/lrbd/wiki
  • 22. 23 Corporate Headquarters
  Maxfeldstrasse 5
  90409 Nuremberg
  Germany
  +49 911 740 53 0 (Worldwide)
  www.suse.com
  Join us on: www.opensuse.org
  • 23. Unpublished Work of SUSE LLC. All Rights Reserved.

  This work is an unpublished work and contains confidential, proprietary and trade secret information of SUSE LLC. Access to this work is restricted to SUSE employees who have a need to know to perform tasks within the scope of their assignments. No part of this work may be practiced, performed, copied, distributed, revised, modified, translated, abridged, condensed, expanded, collected, or adapted without the prior written consent of SUSE. Any use or exploitation of this work without authorization could subject the perpetrator to criminal and civil liability.

  General Disclaimer

  This document is not to be construed as a promise by any participating company to develop, deliver, or market a product. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. SUSE makes no representations or warranties with respect to the contents of this document, and specifically disclaims any express or implied warranties of merchantability or fitness for any particular purpose. The development, release, and timing of features or functionality described for SUSE products remains at the sole discretion of SUSE. Further, SUSE reserves the right to revise this document and to make changes to its content, at any time, without obligation to notify any person or entity of such revisions or changes. All SUSE marks referenced in this presentation are trademarks or registered trademarks of Novell, Inc. in the United States and other countries. All third-party trademarks are the property of their respective owners.