IBM iSeries LPAR migration from VSCSI to NPIV
Purpose
This technical document shows an example of migrating an IBM i client partition of the IBM
PowerVM Virtual I/O Server from a virtual SCSI to an NPIV attachment.
It is intended to help the Unix team successfully plan and perform an IBM i partition migration from
virtual SCSI to NPIV on Power systems.
This document assumes that the SAN is supported, NPIV is enabled on the SAN switch, the disks are
NPIV capable, the required IBM i OS licenses are available, and a supported tape library is configured
for the IBM iSeries client partition of the IBM Power system.
For confirmation of SAN-related items contact the SAN team; for IBM i OS items contact the iSeries
team. This document covers only the steps performed from the Power system side.
1. Considerations for using N_Port ID Virtualization
1.1 Virtual SCSI vs. N_Port ID Virtualization
While for virtual SCSI (VSCSI) the Virtual I/O Server performs generic SCSI device emulation,
with NPIV the Virtual I/O Server simply acts as a Fibre Channel pass-through. Compared to VSCSI, an
N_Port ID Virtualization (NPIV) storage environment typically requires less effort to configure and
maintain, because no multi-path device driver, no virtual target device creation, and no administration
of corresponding volume device mappings are required on the Virtual I/O Server.
NPIV allows the IBM i client partition to see its storage devices with their machine type/model
and all device characteristics via virtual Fibre Channel adapters, as if the partition were natively
attached to the storage devices. Some licensed programs on the IBM i client partition do not support
virtual SCSI attached devices because they require knowledge of the hardware device characteristics.
1.2 Migration Planning Considerations
Consider the following when migrating an IBM i client partition of the IBM PowerVM Virtual I/O
Server from VSCSI to NPIV:
• Existing volumes on the SAN created for IBM i VSCSI attachment can be re-used for NPIV
attachment.
• Each virtual Fibre Channel client adapter for IBM i supports up to 64 LUNs, versus up to 16 LUNs
supported for each VSCSI client adapter.
• The remapping of the volumes from a VSCSI to an NPIV configuration has to be performed
while the IBM i partition is powered off, i.e. heterogeneous multipathing that simultaneously uses
VSCSI and NPIV is not supported in an IBM i partition.
2. Overview of Migration Steps
This section contains an overview of the migration steps we used to migrate our IBM i partition
from VSCSI to NPIV attachment of the IBM storage. This migration procedure is certainly not the
only way in which the migration can be performed. However, it serves its purpose well by
minimizing the required IBM i partition downtime and minimizing the risk of a failed migration: the
original VSCSI configuration is retained as far as possible to allow an easy step back, if required, until
the targeted NPIV configuration has been verified to work successfully.
1. Verifying all prerequisites for IBM i NPIV storage attachment are fulfilled.
2. Changing the IBM i partition profile to remove the VSCSI client adapters and add virtual Fibre
Channel client adapters.
3. Changing the Virtual I/O Server's partition configuration (current configuration and profile) by
dynamically adding the virtual Fibre Channel server adapters required for NPIV.
4. Enabling NPIV on the SAN switch ports.
5. Mapping the virtual Fibre Channel server adapters to physical Fibre Channel adapters.
6. Adding new switch zones for the new virtual Fibre Channel adapters from the IBM i client partition.
7. Creating a new host object for the IBM i partition on the storage system.
8. Powering down the IBM i partition.
9. Mapping existing volumes to the new host object used for IBM i NPIV attachment.
10. Powering on the IBM i partition and verifying that everything works as expected with NPIV.
After NPIV has been verified to work successfully:
11. Removing the VSCSI disk resources from the Virtual I/O Servers.
12. Removing the VSCSI server adapters from the Virtual I/O Servers.
13. Deleting the IBM i partition’s old profile.
14. Requesting the SAN team to unshare the old VSCSI disk resources.
3. Performing the Migration
In this section we describe the detailed steps required to perform the migration of our IBM i
client partition, hosted by two Virtual I/O Servers, from a VSCSI to an NPIV attachment.
3.1. Verifying all prerequisites for IBM i NPIV storage attachment are fulfilled
a. Supported 8 Gb Fibre Channel adapters (see reference [2]).
b. NPIV capable SAN switch.
c. IBM i OS level IBM i 7.1 TR6 or later.
d. Virtual I/O Server 2.2.2.1 (fix pack 26) or later.
e. HMC V7R3.5.0 or later (the Virtual I/O Server and HMC levels can be checked as shown at the
end of this section).
f. Latest Power Systems firmware recommended.
g. On the Virtual I/O Server run the chkdev command for each hdisk used for IBM i to be
migrated to NPIV attachment:
$ chkdev -dev <diskname>
Example:
$ chkdev -dev hdisk15
NAME:                 hdisk15
IDENTIFIER:           3321360050768028086EDD00000000000002104214503IBMfcp
PHYS2VIRT_CAPABLE:    NA
VIRT2NPIV_CAPABLE:    YES
VIRT2PHYS_CAPABLE:    YES
Ensure VIRT2NPIV_CAPABLE is YES. If it is not, e.g. because a PVID has been assigned,
as is the case when using logical volumes, the migration is not possible without a complete
SAVE and RESTORE of the IBM i partition.
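As a quick check for the Virtual I/O Server and HMC levels (items d and e above), the ioslevel
command can be run on each Virtual I/O Server and the lshmc -V command on the HMC command
line. The version value shown below is only illustrative for an assumed environment:
$ ioslevel
2.2.2.1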
3.2. Changing the IBM i partition profile to remove the VSCSI client adapters and add
virtual Fibre Channel client adapters
a. As a safety measure, to be able to easily revert to using VSCSI in case anything
goes wrong with our NPIV setup, we create a new partition profile for the NPIV configuration by
copying our IBM i partition’s existing profile used for VSCSI.
We select the partition on the HMC, choose from the context menu Configuration →
Manage Profiles, select the currently used profile, and choose from the menu Actions → Copy,
specifying a new profile name such as “default_NPIV” as shown in Figure 1.
Figure 1: Copying the existing IBM i partition profile
b. Within the Manage Profiles dialog we click on the newly created profile
“default_NPIV” to open it for editing and select the Virtual Adapters tab as shown in Figure 2.
Figure 2: IBM i partition profile with existing virtual SCSI adapters
c. We delete the VSCSI client adapters in slots 11 and 12 that we used for the VSCSI disk attachment
by selecting them and choosing from the menu Actions → Delete. We keep the VSCSI adapter in
slot 10, which is used for a virtual DVD drive that we want to retain independently of the SAN storage
attachment migration to NPIV.
d. For the NPIV attachment we create two corresponding virtual Fibre Channel (VFC)
client adapters in slots 13 and 14 by selecting Actions → Create Virtual Adapter → Fibre Channel
Adapter for each. We use different slot numbers for VFC than for VSCSI so that we can easily revert to
the VSCSI adapter configuration until we are fully assured that the desired NPIV configuration works
successfully.
We associate the adapter with the odd slot ID 13 with server adapter ID 13 on our 2nd Virtual I/O
Server, and the adapter with the even slot ID 14 with server adapter ID 14 on our 1st Virtual I/O
Server. The resulting virtual adapter configuration for our IBM i partition is shown in Figure 3. It
provides us with the desired IBM i multi-pathing configuration for NPIV across two redundant Virtual
I/O Servers.
Figure 3: IBM i partition profile with removed VSCSI and added VFC adapters
e. We click OK on the Virtual Adapters tab to save the changes to the partition profile.
After returning to the dialog window with the list of managed profiles, we select the partition profile
again and select the Tagged I/O tab, which now allows us to select the previously created virtual Fibre
Channel adapter for the load source as shown in Figure 4. This configuration change also needs to be
saved by clicking OK.
Figure 4: IBM i I/O tagging for the load source using virtual Fibre Channel
3.3. Changing the Virtual I/O Servers’ partition configuration (current configuration and
profile) by dynamically adding the virtual Fibre Channel server adapters required for NPIV
a. For both of our Virtual I/O Server partitions we dynamically add a virtual Fibre
Channel server adapter by selecting the Virtual I/O Server partition, choosing from the context
menu Dynamic Logical Partitioning → Virtual Adapters, and then selecting from the menu Actions →
Create Virtual Adapter → Fibre Channel Adapter as shown in Figure 5.
Figure 5: Dynamically adding a VFC adapter to a Virtual I/O Server partition
b. The resulting virtual adapter configuration for our two Virtual I/O Server partitions is shown
in Figure 6.
Figure 6: Virtual I/O Server virtual adapter configuration with added VFC adapters
c. To make sure the dynamically added virtual Fibre Channel adapters are retained
after a Virtual I/O Server shutdown, we save the current configuration of each Virtual I/O Server into
its partition profile by selecting the Virtual I/O Server and choosing from the context menu
Configuration → Save Current Configuration as shown in Figure 7.
Figure 7: Saving the Virtual I/O Server current configuration to its profile
4. Enabling NPIV on the SAN switch
We need to contact the SAN team to enable NPIV on the SAN switch ports. On the Virtual I/O Server
the lsnports command can be used to display information for all NPIV capable physical ports.
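The lsnports output below is only an illustrative sketch with an assumed adapter location code: a
value of 1 in the fabric column indicates that the attached switch port supports NPIV, and the aports
column shows how many virtual ports are still available on the physical port:
$ lsnports
name     physloc                        fabric tports aports swwpns awwpns
fcs0     U78AA.001.WZSXXXX-P1-C2-T1          1     64     62   2048   2044
fcs1     U78AA.001.WZSXXXX-P1-C2-T2          1     64     64   2048   2048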
5. Mapping the virtual Fibre Channel server adapters to physical Fibre Channel adapters
a. The virtual Fibre Channel adapters can easily be mapped to the physical Fibre Channel adapters
owned by the Virtual I/O Servers using the Virtual Storage Management function of the HMC GUI by
selecting the physical server and choosing Configuration → Virtual Resources → Virtual Storage
Management from the Tasks panel.
In the Virtual Storage Management dialog we select the corresponding Virtual I/O Server from the
drop-down list and click Query to retrieve its configuration information as shown in Figure 8.
Figure 8: Retrieving Virtual I/O Server configuration information
b. To modify the virtual Fibre Channel port connections we select the Virtual Fibre Channel tab,
select the physical FC adapter port fcs0 to which we want to map the virtual Fibre Channel adapter
for our IBM i client partition, and click Modify partition connections as shown in Figure 9.
Figure 9: Virtual Storage Management virtual Fibre Channel adapter connections
c. In the Modify Virtual Fibre Channel Partition Assignment dialog we select our IBM i client
partition i7PFE2 and click OK as shown in Figure 10.
Figure 10: Selecting the virtual FC adapter to be mapped to the physical port
We repeat the above steps for mapping the virtual Fibre Channel adapter to a physical Fibre Channel
adapter port correspondingly for our 2nd Virtual I/O Server.
As an alternative to the Virtual Storage Management function of the HMC GUI, the virtual to
physical FC adapter mappings can be created from the Virtual I/O Server command line using the
vfcmap command as described below.
On the Virtual I/O Servers the dynamically added virtual Fibre Channel server adapter should show
up as an available vfchostX resource, as shown in the lsdev command output below for the virtual
Fibre Channel server adapter in slot 14:
$ lsdev | grep vfchost
vfchost0 Available Virtual FC Server Adapter
On each Virtual I/O Server we map each virtual Fibre Channel server adapter vfchostX to a
physical Fibre Channel port fcsX. The lsnports command can be used to list the NPIV capable physical
Fibre Channel ports, with the “aports” column showing the number of available virtual ports on each
physical port, while the actual mapping is done using the vfcmap command as shown below:
$ vfcmap -vadapter vfchost0 -fcp fcs0
Our newly created mapping of the virtual Fibre Channel server adapter “vfchost0” to the physical port
“fcs0” is shown in the “lsmap -all -npiv” command output below:
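The original screen capture is not reproduced here; the following is only an illustrative sketch of the
lsmap -all -npiv output for such a mapping, with hypothetical location codes and client details:
$ lsmap -all -npiv
Name          Physloc                            ClntID ClntName       ClntOS
------------- ---------------------------------- ------ -------------- -------
vfchost0      U8205.E6C.10XXXXX-V1-C14                3 i7PFE2         IBM i

Status:LOGGED_IN
FC name:fcs0                    FC loc code:U78AA.001.WZSXXXX-P1-C2-T1
Ports logged in:2
Flags:a<LOGGED_IN,STRIP_MERGE>
VFC client name:DC01            VFC client DRC:U8205.E6C.10XXXXX-V3-C14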
6. Adding new switch zones for the new virtual Fibre Channel adapters from the IBM i client
partition
a. To create the SAN switch zoning for the new virtual Fibre Channel adapters we first need to
retrieve the virtual WWPNs of each virtual Fibre Channel client adapter of the IBM i client partition.
On the HMC we select the IBM i client partition, choosing from the context menu Configuration →
Manage Profiles. In the Manage Profiles dialog we click on the profile “default_NPIV” we created for
the NPIV configuration, and in the Virtual Adapters tab we click on each virtual Fibre Channel adapter
we created, noting down both virtual WWPNs together with the slot number, as shown in Figure 11.
Figure 11: IBM i virtual Fibre Channel adapter WWPNs
Using the WWPNs from Figure 11, we request the SAN team to create the zoning for the new virtual
Fibre Channel adapter WWPNs.
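As an alternative to collecting the WWPNs from the GUI, they can also be listed from the HMC
command line. The sketch below is only illustrative: the managed system name Server-8205, the
Virtual I/O Server names, and the WWPN values are placeholders, and the lssyscfg command prints
the virtual_fc_adapters attribute of the NPIV profile, which contains both WWPNs per adapter:
lssyscfg -r prof -m Server-8205 --filter "lpar_names=i7PFE2,profile_names=default_NPIV" -F virtual_fc_adapters
13/client/2/VIOS2/13/c0507604xxxx0016,c0507604xxxx0017/1,14/client/1/VIOS1/14/c0507604xxxx0018,c0507604xxxx0019/1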
7. Powering down the IBM i partition
Now we power down the IBM i partition, engage the SAN team, and share the LUN IDs of the
volumes that need to be remapped from VSCSI to NPIV.
Note: The remapping of the volumes from a VSCSI to an NPIV configuration has to be performed
while the IBM i partition is powered off, i.e. heterogeneous multipathing that simultaneously uses
VSCSI and NPIV is not supported in an IBM i partition.
8. Powering on the IBM i partition and verifying if everything works as expected with NPIV
a. From the HMC we activate our IBM i client partition again using the new partition profile
“default_NPIV” which we created for the NPIV configuration as shown in Figure 12.
Figure 12: Activating the IBM i partition with the new profile for NPIV
Once the partition has been activated with the NPIV profile, we check with the iSeries team that
everything is working as expected. Only after confirmation from the iSeries team do we proceed with
the steps below.
9. Removing the virtual SCSI disk resources from the Virtual I/O Servers
a. Since we have successfully migrated the IBM i volumes from VSCSI to NPIV attachment,
we remove the virtual target devices and corresponding hdisk devices on each Virtual I/O Server used
for hosting our IBM i client partition using the rmdev command as follows:
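The original command output is not reproduced here. The sketch below assumes a virtual target
device named vtscsi0 backed by hdisk15; the actual device names on each Virtual I/O Server should
first be verified with lsmap before removing them:
$ lsmap -vadapter vhost0
$ rmdev -dev vtscsi0
vtscsi0 deleted
$ rmdev -dev hdisk15
hdisk15 deleted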
10. Removing the virtual SCSI server adapters from the Virtual I/O Servers
a. We dynamically remove the virtual SCSI server adapters previously used for serving the virtual
SCSI LUNs to our IBM i client partition from each Virtual I/O Server by selecting the Virtual I/O
Server partition on the HMC and choosing from the context menu Dynamic Logical Partitioning →
Virtual Adapters. In the Virtual Adapters dialog we select the virtual SCSI server adapter(s) to be
deleted, choose from the menu Actions → Delete as shown in Figure 13, and click OK.
Figure 13: Dynamically removing the unused VSCSI adapters from the Virtual I/O Servers
b. To apply the virtual SCSI adapter deletion to the partition profile as well, we save the
current configuration of our Virtual I/O Servers by selecting each Virtual I/O Server partition hosting
our IBM i client partition on the HMC, choosing from the context menu Configuration → Save
Current Configuration, selecting the appropriate (default) profile, and clicking OK as shown in
Figure 14.
Figure 14: Saving the Virtual I/O Server’s current configuration into its profile
11. Deleting the IBM i partition’s old profile
12. Removing the Virtual I/O Servers’ host objects on the storage system – if not used anymore
for other purposes
13. Removing the non-reporting virtual SCSI resources from the IBM i partition
References
[1] IBM developerWorks for IBM i
https://www.ibm.com/developerworks/ibmi/
[2] IBM PowerVM Virtualization Introduction and Configuration
http://www.redbooks.ibm.com/redpieces/abstracts/sg247940.html?Open
[3] IBM System Storage SAN Volume Controller and IBM Storwize V7000 Command-Line
Interface User's Guide
http://www-01.ibm.com/support/docview.wss?uid=ssg1S7003983