Create 5GB LVM storage and configure iSCSI target
Storage Configuration
Here, we will create a 5 GB LVM logical volume on the target server to use as shared storage for
clients. First, list the disks attached to the target server with the command below. If you want to
use a whole disk for LVM, skip the partitioning step.
[root@server ~]# fdisk -l | grep -i sd
Output:
Disk /dev/sda: 107.4 GB, 107374182400 bytes, 209715200 sectors
/dev/sda1 * 2048 1026047 512000 83 Linux
/dev/sda2 1026048 209715199 104344576 8e Linux LVM
Disk /dev/sdb: 10.7 GB, 10737418240 bytes, 20971520 sectors
The output above shows that the system has a 10 GB disk (/dev/sdb). We will create a 5GB
partition on that disk and use it for LVM.
[root@server ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x173dfa4d.
Command (m for help): n --> New partition
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p --> Primary partition
Partition number (1-4, default 1): 1 --> Partition number
First sector (2048-20971519, default 2048): --> Press Enter for the default
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519): +5G --> Enter the size
Partition 1 of type Linux and of size 5 GiB is set
Command (m for help): t --> Change the partition type
Selected partition 1
Hex code (type L to list all codes): 8e --> 8e is the Linux LVM type
Changed type of partition 'Linux' to 'Linux LVM'
Command (m for help): w --> Write changes and exit
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
Create the LVM physical volume, volume group, and logical volume on the /dev/sdb1 partition (replace /dev/sdb1 with your partition name).
[root@server ~]# pvcreate /dev/sdb1
[root@server ~]# vgcreate vg_iscsi /dev/sdb1
[root@server ~]# lvcreate -l 100%FREE -n lv_iscsi vg_iscsi
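The three LVM commands above can also be wrapped in a small re-runnable script. A minimal sketch, using this article's vg_iscsi/lv_iscsi names; it only creates each object if it does not already exist, and simply reports when the LVM tools are unavailable:

```shell
#!/bin/bash
# Sketch: create the PV/VG/LV for the iSCSI backing store, skipping
# any step that was already completed on a previous run.
PART=/dev/sdb1        # replace with your partition
VG=vg_iscsi
LV=lv_iscsi
LVDEV=/dev/$VG/$LV    # device path used later as the targetcli backstore

if command -v pvcreate >/dev/null 2>&1; then
    pvs "$PART"   >/dev/null 2>&1 || pvcreate "$PART"
    vgs "$VG"     >/dev/null 2>&1 || vgcreate "$VG" "$PART"
    lvs "$VG/$LV" >/dev null 2>&1 || lvcreate -l 100%FREE -n "$LV" "$VG"
else
    echo "LVM tools not installed; nothing done"
fi
echo "backing store device: $LVDEV"
```

After running it, `pvs`, `vgs`, and `lvs` should each list the new objects.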
Configure iSCSI target
You can create the target either with or without authentication; this article covers both
scenarios, and it is up to you to decide which one suits your environment.
Here, we will configure the iSCSI target without CHAP authentication.
Install the targetcli package on the server.
[root@server ~]# yum install targetcli -y
Once the package is installed, run the command below to enter the interactive targetcli
prompt.
[root@server ~]# targetcli
Warning: Could not load preferences file /root/.targetcli/prefs.bin.
targetcli shell version 2.1.fb41
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.
>
Now use the existing logical volume (/dev/vg_iscsi/lv_iscsi) as a block-type backing store for a
storage object named "scsi_disk1_server".
/> cd backstores/block
/backstores/block> create scsi_disk1_server /dev/vg_iscsi/lv_iscsi
Created block storage object scsi_disk1_server using /dev/vg_iscsi/lv_iscsi.
Create a target.
/backstores/block> cd /iscsi
/iscsi> create iqn.2016-02.local.itzgeek.server:disk1
Created target iqn.2016-02.local.itzgeek.server:disk1.
Created TPG 1.
Global pref auto_add_default_portal=true
Created default portal listening on all IPs (0.0.0.0), port 3260.
/iscsi>
Create an ACL for the client machine (this is the IQN the client will use to connect).
/iscsi> cd /iscsi/iqn.2016-02.local.itzgeek.server:disk1/tpg1/acls
/iscsi/iqn.20...sk1/tpg1/acls> create iqn.2016-02.local.itzgeek.server:node1node2
Created Node ACL for iqn.2016-02.local.itzgeek.server:node1node2
Create a LUN under the target. The LUN should use the backing storage object named
"scsi_disk1_server" created earlier.
/iscsi/iqn.20...er:disk1/tpg1> cd /iscsi/iqn.2016-02.local.itzgeek.server:disk1/tpg1/luns
/iscsi/iqn.20...sk1/tpg1/luns> create /backstores/block/scsi_disk1_server
Created LUN 0.
Created LUN 0->0 mapping in node ACL iqn.2016-02.local.itzgeek.server:node1node2
Verify the target server configuration.
/iscsi/iqn.20.../tpg1/portals> cd /
/> ls
o- / ......................................................................... [...]
  o- backstores .............................................................. [...]
  | o- block .................................................. [Storage Objects: 1]
  | | o- scsi_disk1_server ... [/dev/vg_iscsi/lv_iscsi (5.0GiB) write-thru activated]
  | o- fileio ................................................. [Storage Objects: 0]
  | o- pscsi .................................................. [Storage Objects: 0]
  | o- ramdisk ................................................ [Storage Objects: 0]
  o- iscsi ............................................................ [Targets: 1]
  | o- iqn.2016-02.local.itzgeek.server:disk1 ............................ [TPGs: 1]
  |   o- tpg1 ................................................... [gen-acls, no-auth]
  |     o- acls .......................................................... [ACLs: 1]
  |     | o- iqn.2016-02.local.itzgeek.server:node1node2 ........... [Mapped LUNs: 1]
  |     |   o- mapped_lun0 ................... [lun0 block/scsi_disk1_server (rw)]
  |     o- luns .......................................................... [LUNs: 1]
  |     | o- lun0 .................. [block/scsi_disk1_server (/dev/vg_iscsi/lv_iscsi)]
  |     o- portals .................................................... [Portals: 1]
  |       o- 0.0.0.0:3260 ..................................................... [OK]
  o- loopback ......................................................... [Targets: 0]
Save and exit from the target CLI.
/> saveconfig
Last 10 configs saved in /etc/target/backup.
Configuration saved to /etc/target/saveconfig.json
/> exit
Global pref auto_save_on_exit=true
Last 10 configs saved in /etc/target/backup.
Configuration saved to /etc/target/saveconfig.json
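The interactive session above can also be scripted: targetcli accepts a single command per invocation, so the same target can be built non-interactively. A hedged sketch using this article's names (the default portal on 0.0.0.0:3260 is created automatically when auto_add_default_portal is true, as shown in the interactive session); it prints a note instead when targetcli is not installed:

```shell
#!/bin/bash
# Sketch: the same targetcli steps as the interactive session above,
# one command per invocation.
IQN=iqn.2016-02.local.itzgeek.server:disk1
ACL=iqn.2016-02.local.itzgeek.server:node1node2
BACKSTORE=scsi_disk1_server
DEV=/dev/vg_iscsi/lv_iscsi

if command -v targetcli >/dev/null 2>&1; then
    targetcli /backstores/block create "$BACKSTORE" "$DEV"
    targetcli /iscsi create "$IQN"
    targetcli "/iscsi/$IQN/tpg1/acls" create "$ACL"
    targetcli "/iscsi/$IQN/tpg1/luns" create "/backstores/block/$BACKSTORE"
    targetcli saveconfig
else
    echo "targetcli not installed; commands shown for reference"
fi
```

Running `targetcli ls` afterwards should show the same tree as the interactive verification.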
Enable and restart the target service.
[root@server ~]# systemctl enable target.service
[root@server ~]# systemctl restart target.service
Configure the firewall to allow iSCSI traffic.
[root@server ~]# firewall-cmd --permanent --add-port=3260/tcp
[root@server ~]# firewall-cmd --reload
Configure Initiator
Now it's time to configure a client machine to use the created target as storage. Install the
below package on the client machine (node1).
[root@node1 ~]# yum install iscsi-initiator-utils -y
Edit the initiatorname.iscsi file.
[root@node1 ~]# vi /etc/iscsi/initiatorname.iscsi
Add the iSCSI initiator name (it must match the ACL created on the target).
InitiatorName=iqn.2016-02.local.itzgeek.server:node1node2
Discover the target using the below command.
[root@node1 ~]# iscsiadm -m discovery -t st -p 192.168.12.20
Output:
192.168.12.20:3260,1 iqn.2016-02.local.itzgeek.server:disk1
Restart and enable the initiator service.
[root@node1 ~]# systemctl restart iscsid.service
[root@node1 ~]# systemctl enable iscsid.service
Login to the discovered target.
[root@node1 ~]# iscsiadm -m node -T iqn.2016-02.local.itzgeek.server:disk1 -p 192.168.12.20 -l
Output:
Logging in to [iface: default, target: iqn.2016-02.local.itzgeek.server:disk1, portal: 192.168.12.20,3260] (multiple)
Login to [iface: default, target: iqn.2016-02.local.itzgeek.server:disk1, portal: 192.168.12.20,3260] successful.
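After a successful login you can confirm the session from the initiator side with `iscsiadm -m session`. A small sketch that degrades gracefully when the tools are missing or no session exists:

```shell
# Sketch: list active iSCSI sessions on the initiator.
SESSION_MODE="session"
if command -v iscsiadm >/dev/null 2>&1; then
    # -P 1 raises the print level to include session state details
    iscsiadm -m "$SESSION_MODE" -P 1 || echo "no active sessions"
else
    echo "iscsi-initiator-utils not installed"
fi
```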
Create File System on iSCSI Disk
After logging in to the discovered target, have a look at the messages file. You will find
output similar to the below, which shows the name of the new disk.
[root@node1 ~]# cat /var/log/messages
Feb 23 14:54:47 node2 kernel: sd 34:0:0:0: [sdb] 10477568 512-byte logical
blocks: (5.36 GB/4.99 GiB)
Feb 23 14:54:47 node2 kernel: sd 34:0:0:0: [sdb] Write Protect is off
Feb 23 14:54:47 node2 kernel: sd 34:0:0:0: [sdb] Write cache: disabled, read
cache: enabled, doesn't support DPO or FUA
Feb 23 14:54:48 node2 kernel: sdb: unknown partition table
Feb 23 14:54:48 node2 kernel: sd 34:0:0:0: [sdb] Attached SCSI disk
Feb 23 14:54:48 node2 iscsid: Could not set session2 priority. READ/WRITE throughout and latency could be affected.
Feb 23 14:54:48 node2 iscsid: Connection2:0 to [target: iqn.2016-02.local.itzgeek.server:disk1, portal: 192.168.12.20,3260] through [iface: default] is operational now
List the attached disks.
[root@node1 ~]# cat /proc/partitions
Output:
major minor #blocks name
8 0 104857600 sda
8 1 512000 sda1
8 2 104344576 sda2
11 0 1048575 sr0
253 0 2113536 dm-0
253 1 52428800 dm-1
253 2 49799168 dm-2
8 16 5238784 sdb
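Instead of grepping /var/log/messages, the iSCSI disk can also be identified through the stable links udev creates under /dev/disk/by-path; for iSCSI LUNs the link name embeds the portal and IQN (the exact layout below is an assumption, e.g. ip-192.168.12.20:3260-iscsi-...-lun-0). A sketch:

```shell
# Sketch: resolve iSCSI by-path links to their kernel block devices.
BYPATH=/dev/disk/by-path
if ls "$BYPATH"/*iscsi* >/dev/null 2>&1; then
    for link in "$BYPATH"/*iscsi*; do
        printf '%s -> %s\n' "$link" "$(readlink -f "$link")"
    done
else
    echo "no iSCSI by-path links found"
fi
```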
Format the new disk (for the sake of this article, I formatted the whole disk instead of creating
a partition).
[root@node1 ~]# mkfs.xfs /dev/sdb
Output:
meta-data=/dev/sdb isize=256 agcount=8, agsize=163712 blks
= sectsz=512 attr=2, projid32bit=1
= crc=0
data = bsize=4096 blocks=1309696, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=0
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
Mount the disk.
[root@node1 ~]# mount /dev/sdb /mnt
Verify the disk is mounted using the below command.
[root@node1 ~]# df -hT
Output:
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/centos-root xfs 50G 955M 50G 2% /
devtmpfs devtmpfs 908M 0 908M 0% /dev
tmpfs tmpfs 914M 54M 861M 6% /dev/shm
tmpfs tmpfs 914M 8.5M 905M 1% /run
tmpfs tmpfs 914M 0 914M 0% /sys/fs/cgroup
/dev/mapper/centos-home xfs 48G 33M 48G 1% /home
/dev/sda1 xfs 497M 97M 401M 20% /boot
/dev/sdb xfs 5.0G 33M 5.0G 1% /mnt
Automount iSCSI storage
To mount the iSCSI storage automatically at every reboot, you need to make an entry in the
/etc/fstab file.
Before updating the /etc/fstab file, get the UUID of the iSCSI disk using the following command.
Replace /dev/sdb with your iSCSI disk name.
[root@node1 ~]# blkid /dev/sdb
Output:
/dev/sdb: LABEL="/" UUID="9df472f4-1b0f-41c0-a6eb-89574d2caee3" TYPE="xfs"
Now, edit the /etc/fstab file.
[root@node1 ~]# vi /etc/fstab
Add an entry like the one below; the _netdev option ensures the filesystem is mounted only after the network (and the iSCSI session) is up.
#
# /etc/fstab
# Created by anaconda on Tue Jan 30 02:14:21 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=c7469f92-75ec-48ac-b42d-d5b89ab75b39 /     xfs  defaults  0 0
UUID=9df472f4-1b0f-41c0-a6eb-89574d2caee3 /mnt  xfs  _netdev   0 0
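The /mnt line can be generated from blkid output rather than typed by hand. A minimal sketch; it only prints the entry so you can review it before appending to /etc/fstab, and substitutes a placeholder when blkid cannot read the device:

```shell
# Sketch: build the fstab entry for the iSCSI disk from its UUID.
DEV=/dev/sdb                                   # replace with your iSCSI disk
UUID=$(blkid -s UUID -o value "$DEV" 2>/dev/null || true)
[ -n "$UUID" ] || UUID="<uuid-from-blkid>"     # placeholder if blkid fails
# _netdev defers mounting until the network and iSCSI session are up
ENTRY="UUID=$UUID /mnt xfs _netdev 0 0"
echo "$ENTRY"      # review, then append to /etc/fstab
```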
Remove iSCSI storage
In case you want to detach the added disk, follow this procedure: unmount it, then log out of the target.
[root@node1 ~]# umount /mnt/
[root@node1 ~]# iscsiadm -m node -T iqn.2016-02.local.itzgeek.server:disk1 -p 192.168.12.20 -u
Output:
Logging out of session [sid: 1, target: iqn.2016-02.local.itzgeek.server:disk1, portal: 192.168.12.20,3260]
Logout of [sid: 1, target: iqn.2016-02.local.itzgeek.server:disk1, portal: 192.168.12.20,3260] successful.
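After logging out, the initiator still keeps a node record for the target and will log back in when the iscsi services restart; deleting the record with `iscsiadm -o delete` completes the removal. A sketch using this article's target and portal:

```shell
# Sketch: log out of the target and delete its node record so the
# initiator does not reconnect on the next service start.
TARGET=iqn.2016-02.local.itzgeek.server:disk1
PORTAL=192.168.12.20
if command -v iscsiadm >/dev/null 2>&1; then
    iscsiadm -m node -T "$TARGET" -p "$PORTAL" -u        || true  # logout
    iscsiadm -m node -T "$TARGET" -p "$PORTAL" -o delete || true  # remove record
else
    echo "iscsi-initiator-utils not installed"
fi
```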
That’s All.

More Related Content

What's hot

Rh202 q&a-demo-cert magic
Rh202 q&a-demo-cert magicRh202 q&a-demo-cert magic
Rh202 q&a-demo-cert magicEllina Beckman
 
ZFS and MySQL on Linux, the Sweet Spots
ZFS and MySQL on Linux, the Sweet SpotsZFS and MySQL on Linux, the Sweet Spots
ZFS and MySQL on Linux, the Sweet SpotsJervin Real
 
Veritas Software Foundations
Veritas Software FoundationsVeritas Software Foundations
Veritas Software Foundations.Gastón. .Bx.
 
Linux io-stack-diagram v1.0
Linux io-stack-diagram v1.0Linux io-stack-diagram v1.0
Linux io-stack-diagram v1.0bsd free
 
RH-302 Exam-Red Hat Certified Engineer on Redhat Enterprise Linux 4 (Labs)
RH-302 Exam-Red Hat Certified Engineer on Redhat Enterprise Linux 4 (Labs)RH-302 Exam-Red Hat Certified Engineer on Redhat Enterprise Linux 4 (Labs)
RH-302 Exam-Red Hat Certified Engineer on Redhat Enterprise Linux 4 (Labs)Isabella789
 

What's hot (6)

Rh202 q&a-demo-cert magic
Rh202 q&a-demo-cert magicRh202 q&a-demo-cert magic
Rh202 q&a-demo-cert magic
 
ZFS and MySQL on Linux, the Sweet Spots
ZFS and MySQL on Linux, the Sweet SpotsZFS and MySQL on Linux, the Sweet Spots
ZFS and MySQL on Linux, the Sweet Spots
 
Veritas Software Foundations
Veritas Software FoundationsVeritas Software Foundations
Veritas Software Foundations
 
Linux io-stack-diagram v1.0
Linux io-stack-diagram v1.0Linux io-stack-diagram v1.0
Linux io-stack-diagram v1.0
 
RH-302 Exam-Red Hat Certified Engineer on Redhat Enterprise Linux 4 (Labs)
RH-302 Exam-Red Hat Certified Engineer on Redhat Enterprise Linux 4 (Labs)RH-302 Exam-Red Hat Certified Engineer on Redhat Enterprise Linux 4 (Labs)
RH-302 Exam-Red Hat Certified Engineer on Redhat Enterprise Linux 4 (Labs)
 
4.4 manage disk quotas
4.4 manage disk quotas4.4 manage disk quotas
4.4 manage disk quotas
 

Similar to Create 5GB LVM storage and configure iSCSI target

Connect dell equallogic storage to linux instance
Connect dell equallogic storage to linux instanceConnect dell equallogic storage to linux instance
Connect dell equallogic storage to linux instanceSaeed Siddik
 
openbsd-as-nas.pdf
openbsd-as-nas.pdfopenbsd-as-nas.pdf
openbsd-as-nas.pdfssuserabc40f
 
Lacie Cloud Box data recovery with Linux
Lacie Cloud Box data recovery with LinuxLacie Cloud Box data recovery with Linux
Lacie Cloud Box data recovery with LinuxJordi Clopés Esteban
 
OpenNebulaConf 2016 - Storage Hands-on Workshop by Javier Fontán, OpenNebula
OpenNebulaConf 2016 - Storage Hands-on Workshop by Javier Fontán, OpenNebulaOpenNebulaConf 2016 - Storage Hands-on Workshop by Javier Fontán, OpenNebula
OpenNebulaConf 2016 - Storage Hands-on Workshop by Javier Fontán, OpenNebulaOpenNebula Project
 
Analyze corefile and backtraces with GDB for Mysql/MariaDB on Linux - Nilanda...
Analyze corefile and backtraces with GDB for Mysql/MariaDB on Linux - Nilanda...Analyze corefile and backtraces with GDB for Mysql/MariaDB on Linux - Nilanda...
Analyze corefile and backtraces with GDB for Mysql/MariaDB on Linux - Nilanda...Mydbops
 
Logical volume manager xfs
Logical volume manager xfsLogical volume manager xfs
Logical volume manager xfsSarwar Javaid
 
Linux lv ms step by step
Linux lv ms step by stepLinux lv ms step by step
Linux lv ms step by stepsudakarman
 
Inspection and maintenance tools (Linux / OpenStack)
Inspection and maintenance tools (Linux / OpenStack)Inspection and maintenance tools (Linux / OpenStack)
Inspection and maintenance tools (Linux / OpenStack)Gerard Braad
 
Containers with systemd-nspawn
Containers with systemd-nspawnContainers with systemd-nspawn
Containers with systemd-nspawnGábor Nyers
 
Bringing up Android on your favorite X86 Workstation or VM (AnDevCon Boston, ...
Bringing up Android on your favorite X86 Workstation or VM (AnDevCon Boston, ...Bringing up Android on your favorite X86 Workstation or VM (AnDevCon Boston, ...
Bringing up Android on your favorite X86 Workstation or VM (AnDevCon Boston, ...Ron Munitz
 
Docker and friends at Linux Days 2014 in Prague
Docker and friends at Linux Days 2014 in PragueDocker and friends at Linux Days 2014 in Prague
Docker and friends at Linux Days 2014 in Praguetomasbart
 
Faq on SnapDrive for UNIX NetApp
Faq on SnapDrive for UNIX NetAppFaq on SnapDrive for UNIX NetApp
Faq on SnapDrive for UNIX NetAppAshwin Pawar
 
101 2.1 design hard disk layout
101 2.1 design hard disk layout101 2.1 design hard disk layout
101 2.1 design hard disk layoutAcácio Oliveira
 
qemu + gdb: The efficient way to understand/debug Linux kernel code/data stru...
qemu + gdb: The efficient way to understand/debug Linux kernel code/data stru...qemu + gdb: The efficient way to understand/debug Linux kernel code/data stru...
qemu + gdb: The efficient way to understand/debug Linux kernel code/data stru...Adrian Huang
 
SiteGround Tech TeamBuilding
SiteGround Tech TeamBuildingSiteGround Tech TeamBuilding
SiteGround Tech TeamBuildingMarian Marinov
 
GlusterFS Update and OpenStack Integration
GlusterFS Update and OpenStack IntegrationGlusterFS Update and OpenStack Integration
GlusterFS Update and OpenStack IntegrationEtsuji Nakai
 

Similar to Create 5GB LVM storage and configure iSCSI target (20)

Connect dell equallogic storage to linux instance
Connect dell equallogic storage to linux instanceConnect dell equallogic storage to linux instance
Connect dell equallogic storage to linux instance
 
openbsd-as-nas.pdf
openbsd-as-nas.pdfopenbsd-as-nas.pdf
openbsd-as-nas.pdf
 
Lacie Cloud Box data recovery with Linux
Lacie Cloud Box data recovery with LinuxLacie Cloud Box data recovery with Linux
Lacie Cloud Box data recovery with Linux
 
OpenNebulaConf 2016 - Storage Hands-on Workshop by Javier Fontán, OpenNebula
OpenNebulaConf 2016 - Storage Hands-on Workshop by Javier Fontán, OpenNebulaOpenNebulaConf 2016 - Storage Hands-on Workshop by Javier Fontán, OpenNebula
OpenNebulaConf 2016 - Storage Hands-on Workshop by Javier Fontán, OpenNebula
 
Mac os x mount ntfs
Mac os x mount ntfsMac os x mount ntfs
Mac os x mount ntfs
 
Analyze corefile and backtraces with GDB for Mysql/MariaDB on Linux - Nilanda...
Analyze corefile and backtraces with GDB for Mysql/MariaDB on Linux - Nilanda...Analyze corefile and backtraces with GDB for Mysql/MariaDB on Linux - Nilanda...
Analyze corefile and backtraces with GDB for Mysql/MariaDB on Linux - Nilanda...
 
Logical volume manager xfs
Logical volume manager xfsLogical volume manager xfs
Logical volume manager xfs
 
Linux lv ms step by step
Linux lv ms step by stepLinux lv ms step by step
Linux lv ms step by step
 
Inspection and maintenance tools (Linux / OpenStack)
Inspection and maintenance tools (Linux / OpenStack)Inspection and maintenance tools (Linux / OpenStack)
Inspection and maintenance tools (Linux / OpenStack)
 
Linux Kernel Debugging
Linux Kernel DebuggingLinux Kernel Debugging
Linux Kernel Debugging
 
Containers with systemd-nspawn
Containers with systemd-nspawnContainers with systemd-nspawn
Containers with systemd-nspawn
 
Bringing up Android on your favorite X86 Workstation or VM (AnDevCon Boston, ...
Bringing up Android on your favorite X86 Workstation or VM (AnDevCon Boston, ...Bringing up Android on your favorite X86 Workstation or VM (AnDevCon Boston, ...
Bringing up Android on your favorite X86 Workstation or VM (AnDevCon Boston, ...
 
Docker and friends at Linux Days 2014 in Prague
Docker and friends at Linux Days 2014 in PragueDocker and friends at Linux Days 2014 in Prague
Docker and friends at Linux Days 2014 in Prague
 
Faq on SnapDrive for UNIX NetApp
Faq on SnapDrive for UNIX NetAppFaq on SnapDrive for UNIX NetApp
Faq on SnapDrive for UNIX NetApp
 
101 2.1 design hard disk layout
101 2.1 design hard disk layout101 2.1 design hard disk layout
101 2.1 design hard disk layout
 
Real time systems
Real time systemsReal time systems
Real time systems
 
qemu + gdb: The efficient way to understand/debug Linux kernel code/data stru...
qemu + gdb: The efficient way to understand/debug Linux kernel code/data stru...qemu + gdb: The efficient way to understand/debug Linux kernel code/data stru...
qemu + gdb: The efficient way to understand/debug Linux kernel code/data stru...
 
SiteGround Tech TeamBuilding
SiteGround Tech TeamBuildingSiteGround Tech TeamBuilding
SiteGround Tech TeamBuilding
 
Xen time machine
Xen time machineXen time machine
Xen time machine
 
GlusterFS Update and OpenStack Integration
GlusterFS Update and OpenStack IntegrationGlusterFS Update and OpenStack Integration
GlusterFS Update and OpenStack Integration
 

More from Md Shihab

Rhel 7 root password reset
Rhel 7 root password resetRhel 7 root password reset
Rhel 7 root password resetMd Shihab
 
RedHat/CentOs Commands for administrative works
RedHat/CentOs Commands for administrative worksRedHat/CentOs Commands for administrative works
RedHat/CentOs Commands for administrative worksMd Shihab
 
How to transfer core mode into gui in RedHat/centOs
How to transfer core mode into gui in RedHat/centOsHow to transfer core mode into gui in RedHat/centOs
How to transfer core mode into gui in RedHat/centOsMd Shihab
 
Assignment on windows firewall
Assignment on windows firewallAssignment on windows firewall
Assignment on windows firewallMd Shihab
 
Assignment on high availability(clustering)
Assignment on high availability(clustering)Assignment on high availability(clustering)
Assignment on high availability(clustering)Md Shihab
 

More from Md Shihab (13)

Samba
SambaSamba
Samba
 
Nfs
NfsNfs
Nfs
 
Maria db
Maria dbMaria db
Maria db
 
Mail
MailMail
Mail
 
Dns
DnsDns
Dns
 
Dhcp
DhcpDhcp
Dhcp
 
Boot
BootBoot
Boot
 
Rhel 7 root password reset
Rhel 7 root password resetRhel 7 root password reset
Rhel 7 root password reset
 
Easy vlsm
Easy vlsmEasy vlsm
Easy vlsm
 
RedHat/CentOs Commands for administrative works
RedHat/CentOs Commands for administrative worksRedHat/CentOs Commands for administrative works
RedHat/CentOs Commands for administrative works
 
How to transfer core mode into gui in RedHat/centOs
How to transfer core mode into gui in RedHat/centOsHow to transfer core mode into gui in RedHat/centOs
How to transfer core mode into gui in RedHat/centOs
 
Assignment on windows firewall
Assignment on windows firewallAssignment on windows firewall
Assignment on windows firewall
 
Assignment on high availability(clustering)
Assignment on high availability(clustering)Assignment on high availability(clustering)
Assignment on high availability(clustering)
 

Recently uploaded

"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...Fwdays
 
Install Stable Diffusion in windows machine
Install Stable Diffusion in windows machineInstall Stable Diffusion in windows machine
Install Stable Diffusion in windows machinePadma Pradeep
 
Dev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebDev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebUiPathCommunity
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationRidwan Fadjar
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubKalema Edgar
 
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024BookNet Canada
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsSergiu Bodiu
 
Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Enterprise Knowledge
 
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 3652toLead Limited
 
My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024The Digital Insurer
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsRizwan Syed
 
Vertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsVertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsMiki Katsuragi
 
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr LapshynFwdays
 
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek SchlawackFwdays
 
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...shyamraj55
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationSlibray Presentation
 
"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr BaganFwdays
 
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Mark Simos
 
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Patryk Bandurski
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Scott Keck-Warren
 

Recently uploaded (20)

"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
 
Install Stable Diffusion in windows machine
Install Stable Diffusion in windows machineInstall Stable Diffusion in windows machine
Install Stable Diffusion in windows machine
 
Dev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebDev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio Web
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 Presentation
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding Club
 
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platforms
 
Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024
 
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
 
My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL Certs
 
Vertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsVertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering Tips
 
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
 
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
 
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck Presentation
 
"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan
 
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
 
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024
 

Create 5GB LVM storage and configure iSCSI target

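As a quick sanity check, the `+5G` answer given in the fdisk dialog above can be verified with plain shell arithmetic, assuming fdisk's default 512-byte sectors and the default first sector of 2048:

```shell
# How many 512-byte sectors make up a 5 GiB partition,
# and where does a partition starting at sector 2048 end?
sectors=$(( 5 * 1024 * 1024 * 1024 / 512 ))
first=2048
last=$(( first + sectors - 1 ))
echo "$sectors sectors, last sector $last"
# prints: 10485760 sectors, last sector 10487807
```

Note that the block device the client eventually sees is slightly smaller than 5 GiB (the kernel log later reports 10477568 sectors), since LVM reserves some space in the volume group for its own metadata.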
Configure iSCSI target

Now you have the option of creating the target either with or without authentication. In this article, you can find steps for both scenarios; it is up to you to decide which one is suitable for your environment. Here, we will configure the iSCSI target without CHAP authentication.

Install the targetcli package on the server.

[root@server ~]# yum install targetcli -y

Once you have installed the package, enter the below command to get an interactive iSCSI CLI prompt.

[root@server ~]# targetcli
Warning: Could not load preferences file /root/.targetcli/prefs.bin.
targetcli shell version 2.1.fb41
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.
/>

Now use the existing logical volume (/dev/vg_iscsi/lv_iscsi) as a block-type backing store for a storage object named "scsi_disk1_server".

/> cd backstores/block
/backstores/block> create scsi_disk1_server /dev/vg_iscsi/lv_iscsi
Created block storage object scsi_disk1_server using /dev/vg_iscsi/lv_iscsi.

Create a target.

/backstores/block> cd /iscsi
/iscsi> create iqn.2016-02.local.itzgeek.server:disk1
Created target iqn.2016-02.local.itzgeek.server:disk1.
Created TPG 1.
Global pref auto_add_default_portal=true
Created default portal listening on all IPs (0.0.0.0), port 3260.
/iscsi>

Create an ACL for the client machine (this is the IQN that clients will use to connect).

/iscsi> cd /iscsi/iqn.2016-02.local.itzgeek.server:disk1/tpg1/acls
/iscsi/iqn.20...sk1/tpg1/acls> create iqn.2016-02.local.itzgeek.server:node1node2
Created Node ACL for iqn.2016-02.local.itzgeek.server:node1node2

Create a LUN under the target. The LUN should use the previously created backing storage object named "scsi_disk1_server".
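Both names above follow the iSCSI qualified name convention from RFC 3720: `iqn.YYYY-MM.reversed.domain:identifier`. A rough format check can catch typos before a name is pasted into targetcli; this helper is a sketch of mine, not part of the original article:

```shell
# check_iqn: rough format check for iSCSI qualified names (illustrative only,
# not a full RFC 3720 validator)
check_iqn() {
  echo "$1" | grep -Eq '^iqn\.[0-9]{4}-[0-9]{2}\.[a-z0-9.-]+(:[^ ]+)?$' \
    && echo "valid" || echo "invalid"
}

check_iqn iqn.2016-02.local.itzgeek.server:disk1   # prints: valid
check_iqn iqn.2016-2.local.itzgeek.server:disk1    # prints: invalid (month must be two digits)
```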
/iscsi/iqn.20...er:disk1/tpg1> cd /iscsi/iqn.2016-02.local.itzgeek.server:disk1/tpg1/luns
/iscsi/iqn.20...sk1/tpg1/luns> create /backstores/block/scsi_disk1_server
Created LUN 0.
Created LUN 0->0 mapping in node ACL iqn.2016-02.local.itzgeek.server:node1node2

Verify the target server configuration.

/iscsi/iqn.20.../tpg1/portals> cd /
/> ls
o- / ........................................................................ [...]
  o- backstores ............................................................. [...]
  | o- block .................................................. [Storage Objects: 1]
  | | o- scsi_disk1_server .. [/dev/vg_iscsi/lv_iscsi (5.0GiB) write-thru activated]
  | o- fileio ................................................. [Storage Objects: 0]
  | o- pscsi .................................................. [Storage Objects: 0]
  | o- ramdisk ................................................ [Storage Objects: 0]
  o- iscsi ............................................................ [Targets: 1]
  | o- iqn.2016-02.local.itzgeek.server:disk1 ............................ [TPGs: 1]
  |   o- tpg1 ................................................. [gen-acls, no-auth]
  |     o- acls ......................................................... [ACLs: 1]
  |     | o- iqn.2016-02.local.itzgeek.server:node1node2 ........ [Mapped LUNs: 1]
  |     |   o- mapped_lun0 .............. [lun0 block/scsi_disk1_server (rw)]
  |     o- luns ......................................................... [LUNs: 1]
  |     | o- lun0 ........... [block/scsi_disk1_server (/dev/vg_iscsi/lv_iscsi)]
  |     o- portals ................................................... [Portals: 1]
  |       o- 0.0.0.0:3260 ................................................... [OK]
  o- loopback ......................................................... [Targets: 0]

Save the configuration and exit from the target CLI.

/> saveconfig
Last 10 configs saved in /etc/target/backup.
Configuration saved to /etc/target/saveconfig.json
/> exit
Global pref auto_save_on_exit=true
Last 10 configs saved in /etc/target/backup.
Configuration saved to /etc/target/saveconfig.json

Enable and restart the target service.

[root@server ~]# systemctl enable target.service
[root@server ~]# systemctl restart target.service

Configure the firewall to allow iSCSI traffic.

[root@server ~]# firewall-cmd --permanent --add-port=3260/tcp
[root@server ~]# firewall-cmd --reload

Configure Initiator

Now it is time to configure a client machine to use the created target as storage. Install the below package on the client machine (node1).

[root@node1 ~]# yum install iscsi-initiator-utils -y

Edit the initiatorname.iscsi file.

[root@node1 ~]# vi /etc/iscsi/initiatorname.iscsi

Add the iSCSI initiator name. It must match the ACL created earlier on the target.

InitiatorName=iqn.2016-02.local.itzgeek.server:node1node2

Discover the target using the below command.

[root@node1 ~]# iscsiadm -m discovery -t st -p 192.168.12.20

Output:

192.168.12.20:3260,1 iqn.2016-02.local.itzgeek.server:disk1
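Each line of the discovery output has the form `portal,tpg target-iqn`. When scripting the login step, the fields can be pulled apart with awk; a small sketch using the output shown above:

```shell
# Split an iscsiadm discovery line into its portal and target IQN fields
discovery="192.168.12.20:3260,1 iqn.2016-02.local.itzgeek.server:disk1"
portal=$(echo "$discovery" | awk '{split($1, a, ","); print a[1]}')
target=$(echo "$discovery" | awk '{print $2}')
echo "portal=$portal target=$target"
# prints: portal=192.168.12.20:3260 target=iqn.2016-02.local.itzgeek.server:disk1

# These variables could then feed the login command, e.g.:
# iscsiadm -m node -T "$target" -p "${portal%:*}" -l
```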
Restart and enable the initiator service.

[root@node1 ~]# systemctl restart iscsid.service
[root@node1 ~]# systemctl enable iscsid.service

Log in to the discovered target.

[root@node1 ~]# iscsiadm -m node -T iqn.2016-02.local.itzgeek.server:disk1 -p 192.168.12.20 -l

Output:

Logging in to [iface: default, target: iqn.2016-02.local.itzgeek.server:disk1, portal: 192.168.12.20,3260] (multiple)
Login to [iface: default, target: iqn.2016-02.local.itzgeek.server:disk1, portal: 192.168.12.20,3260] successful.

Create File System on iSCSI Disk

After logging in to the discovered target, have a look at the messages file. You will find output similar to the below, from which you can find the name of the new disk.

[root@node1 ~]# cat /var/log/messages

Output:

Feb 23 14:54:47 node2 kernel: sd 34:0:0:0: [sdb] 10477568 512-byte logical blocks: (5.36 GB/4.99 GiB)
Feb 23 14:54:47 node2 kernel: sd 34:0:0:0: [sdb] Write Protect is off
Feb 23 14:54:47 node2 kernel: sd 34:0:0:0: [sdb] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Feb 23 14:54:48 node2 kernel: sdb: unknown partition table
Feb 23 14:54:48 node2 kernel: sd 34:0:0:0: [sdb] Attached SCSI disk
Feb 23 14:54:48 node2 iscsid: Could not set session2 priority. READ/WRITE throughout and latency could be affected.
Feb 23 14:54:48 node2 iscsid: Connection2:0 to [target: iqn.2016-02.local.itzgeek.server:disk1, portal: 192.168.12.20,3260] through [iface: default] is operational now

List the attached disks.

[root@node1 ~]# cat /proc/partitions

Output:

major minor  #blocks  name
   8        0  104857600 sda
   8        1     512000 sda1
   8        2  104344576 sda2
  11        0    1048575 sr0
 253        0    2113536 dm-0
 253        1   52428800 dm-1
 253        2   49799168 dm-2
   8       16    5238784 sdb

Format the new disk (for the sake of this article, I have formatted the whole disk instead of creating a partition).

[root@node1 ~]# mkfs.xfs /dev/sdb

Output:

meta-data=/dev/sdb               isize=256    agcount=8, agsize=163712 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0
data     =                       bsize=4096   blocks=1309696, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Mount the disk.

[root@node1 ~]# mount /dev/sdb /mnt

Verify that the disk is mounted using the below command.

[root@node1 ~]# df -hT

Output:

Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        50G  955M   50G   2% /
devtmpfs                devtmpfs  908M     0  908M   0% /dev
tmpfs                   tmpfs     914M   54M  861M   6% /dev/shm
tmpfs                   tmpfs     914M  8.5M  905M   1% /run
tmpfs                   tmpfs     914M     0  914M   0% /sys/fs/cgroup
/dev/mapper/centos-home xfs        48G   33M   48G   1% /home
/dev/sda1               xfs       497M   97M  401M  20% /boot
/dev/sdb                xfs       5.0G   33M  5.0G   1% /mnt

Automount iSCSI storage

To automount the iSCSI storage on every reboot, you need to make an entry in the /etc/fstab file. Before updating /etc/fstab, get the UUID of the iSCSI disk using the following command. Replace /dev/sdb with your iSCSI disk name.
[root@node1 ~]# blkid /dev/sdb

Output:

/dev/sdb: UUID="c7469f92-75ec-48ac-b42d-d5b89ab75b39" TYPE="xfs"

Now, edit the /etc/fstab file.

[root@node1 ~]# vi /etc/fstab

Make an entry something like the below. The _netdev mount option is important here: it tells the system to mount this filesystem only after the network (and therefore the iSCSI session) is up.

#
# /etc/fstab
# Created by anaconda on Tue Jan 30 02:14:21 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=9df472f4-1b0f-41c0-a6eb-89574d2caee3 /    xfs defaults 0 0
UUID=c7469f92-75ec-48ac-b42d-d5b89ab75b39 /mnt xfs _netdev  0 0

Remove iSCSI storage

In case you want to detach the added disk, follow this procedure (unmount, then log out of the session).

[root@node1 ~]# umount /mnt/
[root@node1 ~]# iscsiadm -m node -T iqn.2016-02.local.itzgeek.server:disk1 -p 192.168.12.20 -u

Output:

Logging out of session [sid: 1, target: iqn.2016-02.local.itzgeek.server:disk1, portal: 192.168.12.20,3260]
Logout of [sid: 1, target: iqn.2016-02.local.itzgeek.server:disk1, portal: 192.168.12.20,3260] successful.

That's All.
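As a closing aside, the automount fstab entry can be generated straight from blkid's output, which avoids copy-paste mistakes with long UUIDs. A minimal sketch; the sample line mirrors the format shown above and the mount point `/mnt` matches the article:

```shell
# Build a _netdev fstab line from a blkid-style output line
line='/dev/sdb: UUID="c7469f92-75ec-48ac-b42d-d5b89ab75b39" TYPE="xfs"'
uuid=$(echo "$line" | sed -n 's/.*UUID="\([^"]*\)".*/\1/p')
fstype=$(echo "$line" | sed -n 's/.*TYPE="\([^"]*\)".*/\1/p')
echo "UUID=$uuid /mnt $fstype _netdev 0 0"
# prints: UUID=c7469f92-75ec-48ac-b42d-d5b89ab75b39 /mnt xfs _netdev 0 0
```

On a live system the `line` variable would come from `blkid /dev/sdb` itself rather than a hard-coded string.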