IBM iSeries LPAR migration from VSCSI to NPIV
Purpose
This technical document shows an example of migrating an IBM i client partition of the IBM
PowerVM Virtual I/O Server from a virtual SCSI (VSCSI) to an NPIV attachment.
It is intended to help the Unix team successfully plan and perform an IBM i partition migration
from virtual SCSI to NPIV on Power Systems.
This document assumes that the SAN is supported, that NPIV is enabled on the SAN switch, that the
disks are NPIV capable, that the required IBM i OS licenses are available, and that a supported
tape library is configured for the IBM iSeries client partition of the IBM Power system.
For SAN-related confirmation, contact the SAN team; for IBM i OS questions, contact the iSeries
team. This document covers only the steps performed from the Power Systems side.
1. Considerations for using N_Port ID Virtualization
1.1 Virtual SCSI vs. N_Port ID Virtualization
While for virtual SCSI (VSCSI) the Virtual I/O Server performs generic SCSI device emulation,
with NPIV the Virtual I/O Server simply acts as a Fibre Channel pass-through. Compared to VSCSI, an
N_Port ID Virtualization (NPIV) storage environment typically requires less effort to configure
and maintain, because no multipath device driver, virtual target device creation, or
administration of corresponding volume device mappings is required on the Virtual I/O Server.
NPIV allows the IBM i client partition to see its storage devices with their machine type/model
and all device characteristics via virtual Fibre Channel adapters, as if it were natively attached
to the storage devices. Some licensed programs on the IBM i client partition do not support virtual
SCSI attached devices because they require knowledge of the hardware device characteristics.
1.2 Migration Planning Considerations
Consider the following for migrating an IBM i client partition of the IBM PowerVM Virtual I/O
Server from VSCSI to NPIV:
• Existing volumes on the SAN created for IBM i VSCSI attachment can be re-used for NPIV
attachment.
• Each virtual Fibre Channel client adapter for IBM i supports up to 64 LUNs vs. up to 16
LUNs supported for each VSCSI client adapter.
• The remapping of the volumes from a VSCSI to an NPIV configuration has to be performed
while the IBM i partition is powered off, i.e. heterogeneous multipathing that simultaneously
uses VSCSI and NPIV is not supported in an IBM i partition.
2. Overview of Migration Steps
This section contains an overview of the migration steps we used to migrate our IBM i partition
from VSCSI to NPIV attachment of the IBM storage. This migration procedure is certainly not the
only way in which this migration can be performed. However, it serves its purpose well by
minimizing the required IBM i partition downtime and reducing the risk of a failed migration:
the original VSCSI configuration is retained as far as possible to allow an easy step-back, if
required, until the targeted NPIV configuration has been verified to work successfully.
1. Verifying all prerequisites for IBM i NPIV storage attachment are fulfilled.
2. Changing the IBM i partition profile to remove the VSCSI client adapters and add virtual Fibre
Channel client adapters.
3. Changing the Virtual I/O Servers' partition configuration (current configuration and profile)
by dynamically adding the virtual Fibre Channel server adapters required for NPIV.
4. Enabling NPIV on the SAN switch ports.
5. Mapping the virtual Fibre Channel server adapters to physical Fibre Channel adapters.
6. Adding new switch zones for the new virtual Fibre Channel adapters from the IBM i client partition.
7. Creating a new host object for the IBM i partition on the storage system.
8. Powering down the IBM i partition.
9. Mapping existing volumes to the new host object used for IBM i NPIV attachment.
10. Powering on the IBM i partition and verifying that everything works as expected with NPIV.
After NPIV has been verified to work successfully:
11. Removing the VSCSI disk resources from the Virtual I/O Servers.
12. Removing the VSCSI server adapters from the Virtual I/O Servers.
13. Deleting the IBM i partition’s old profile.
14. Requesting the SAN team to unshare the old VSCSI disk resources.
3. Performing the Migration
In this section we describe the detailed steps required to perform the migration of our IBM i
client partition, hosted by two Virtual I/O Servers, from a VSCSI to an NPIV attachment.
3.1. Verifying all prerequisites for IBM i NPIV storage attachment are fulfilled
a. Supported 8 Gb Fibre Channel adapters (see reference [2]).
b. NPIV-capable SAN switch.
c. IBM i OS level IBM i 7.1 TR6 or later.
d. Virtual I/O Server 2.2.2.1 (fix pack 26) or later (see the verification sketch after this list).
e. HMC V7R3.5.0 or later.
f. The latest Power Systems firmware is recommended.
g. On the Virtual I/O Server run the chkdev command for each hdisk used for IBM i to be
migrated to NPIV attachment:
$ chkdev -dev <diskname>
Example:
$ chkdev -dev hdisk15
NAME:                 hdisk15
IDENTIFIER:           3321360050768028086EDD00000000000002104214503IBMfcp
PHYS2VIRT_CAPABLE:    NA
VIRT2NPIV_CAPABLE:    YES
VIRT2PHYS_CAPABLE:    YES
Ensure VIRT2NPIV_CAPABLE is YES. If it is not, for example because a PVID has been
assigned (as when using logical volumes), migration is not possible without a complete
SAVE and RESTORE of the IBM i partition.
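For prerequisites d and e, the installed levels can be checked directly. A minimal sketch; the
version string shown is illustrative:
$ ioslevel
2.2.2.1
On the HMC command line, lshmc -V displays the HMC version and release.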
3.2. Changing the IBM i partition profile to remove the VSCSI client adapters and add
virtual Fibre Channel client adapters
a. As a safety measure, to be able to easily revert back to using VSCSI in case anything
goes wrong with our NPIV setup, we create a new partition profile for the NPIV configuration by
copying our IBM i partition's existing profile used for VSCSI.
On the HMC we select the partition, choose Configuration → Manage Profiles from the
context menu, select the currently used profile, and choose Actions → Copy from the menu,
specifying a new profile name such as "default_NPIV", as shown in Figure 1.
Figure 1: Copying the existing IBM i partition profile
b. Within the Manage Profiles dialog we click on the newly created profile
"default_NPIV" to open it for editing and select the Virtual Adapters tab as shown in Figure 2.
Figure 2: IBM i partition profile with existing VSCSI adapters
c. We delete the VSCSI client adapters in slots 11 and 12 that we used for VSCSI disk
attachment by selecting them and choosing Actions → Delete from the menu. We keep the VSCSI
adapter in slot 10, which serves a virtual DVD drive that we want to retain independently of the
SAN storage migration to NPIV.
d. For the NPIV attachment we create two corresponding virtual Fibre Channel (VFC)
client adapters, in slots 13 and 14, by selecting Actions → Create Virtual Adapter → Fibre Channel
Adapter. We use different slot numbers for VFC than for VSCSI so that we can easily revert to the
VSCSI adapter configuration until we are fully assured that the desired NPIV configuration works
successfully.
We associate the adapter with the odd slot ID 13 with server adapter ID 13 on our 2nd
Virtual I/O Server, and the adapter with the even slot ID 14 with server adapter ID 14 on our 1st
Virtual I/O Server. The resulting virtual adapter configuration for our IBM i partition is shown
in Figure 3. It provides the desired IBM i multipathing configuration for NPIV across two
redundant Virtual I/O Servers.
Figure 3: IBM i partition profile with removed VSCSI and added VFC adapters
e. We click OK on the Virtual Adapters tab to save the changes to the partition profile.
Back in the dialog window with the list of managed profiles, we select the partition profile
again and select the Tagged I/O tab, which now allows us to select the previously created virtual
Fibre Channel adapter for the load source as shown in Figure 4. This configuration change also
needs to be saved by clicking OK. (A command-line equivalent of this profile change is sketched
after Figure 4.)
Figure 4: IBM i I/O tagging for the load source using virtual Fibre Channel
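For reference, the same profile change can also be scripted from the HMC command line with the
chsyscfg command. This is a minimal sketch only, assuming the documented virtual_fc_adapters
attribute format (slot/adapter_type/remote_lpar_id/remote_lpar_name/remote_slot/wwpns/is_required);
the managed system name Server-8233 is hypothetical and the exact quoting depends on your shell:
chsyscfg -r prof -m Server-8233 -i 'name=default_NPIV,lpar_name=i7PFE2,
"virtual_fc_adapters=13/client/2/VIOS2/13//1,14/client/1/VIOS1/14//1"'
Leaving the wwpns field empty lets the HMC generate the virtual WWPN pair automatically.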
3.3. Changing the Virtual I/O Servers’ partition configuration (current configuration and
profile) by dynamically adding the virtual Fibre Channel server adapters required for NPIV
a. For both of our Virtual I/O Server partitions we dynamically add a virtual Fibre
Channel server adapter by selecting the Virtual I/O Server partition, choosing Dynamic Logical
Partitioning → Virtual Adapters from the context menu, and then selecting Actions → Create Virtual
Adapter → Fibre Channel Adapter, as shown in Figure 5.
Figure 5: Dynamically adding a VFC adapter to a Virtual I/O Server partition
b. The resulting virtual adapter configuration for our two Virtual I/O Server partitions is shown
in Figure 6.
Figure 6: Virtual I/O Server virtual adapter configuration with added VFC adapters
c. To make sure the dynamically added virtual Fibre Channel adapters are retained after
a Virtual I/O Server shutdown, we save the current configuration of each Virtual I/O Server in its
partition profile by selecting the Virtual I/O Server and choosing Configuration → Save Current
Configuration from the context menu, as shown in Figure 7. (A command-line sketch of these two
changes follows Figure 7.)
Figure 7: Saving the Virtual I/O Server current configuration to its profile
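Both changes can also be made from the HMC command line. A minimal sketch, where the managed
system name Server-8233, the VIOS partition name VIOS1, and the profile name default are
placeholders for your environment:
chhwres -r virtualio --rsubtype fc -m Server-8233 -o a -p VIOS1 -s 14 -a "adapter_type=server,remote_lpar_name=i7PFE2,remote_slot_num=14"
mksyscfg -r prof -m Server-8233 -o save -p VIOS1 -n default --force
The chhwres command performs the DLPAR add of the VFC server adapter, and mksyscfg -o save
overwrites the named profile with the partition's current configuration.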
4. Enabling NPIV on the SAN switch
We need to contact the SAN team to enable NPIV on the SAN switch ports. On the Virtual I/O
Server, the lsnports command displays information for all NPIV-capable physical ports.
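Representative lsnports output (the device name, location code, and counts are illustrative); the
aports column shows how many virtual ports are still available on each physical port:
$ lsnports
name   physloc                     fabric tports aports swwpns awwpns
fcs0   U78A0.001.DNWHZS4-P1-C4-T1       1     64     64   2048   2047
A fabric value of 1 indicates that the attached switch port supports NPIV.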
5. Mapping the virtual Fibre Channel server adapters to physical Fibre Channel adapters
a. The virtual Fibre Channel adapters can easily be mapped to the physical Fibre Channel adapters
owned by the Virtual I/O Servers using the Virtual Storage Management function of the HMC GUI, by
selecting the physical server and choosing Configuration → Virtual Resources → Virtual Storage
Management from the Tasks panel.
In the Virtual Storage Management dialog we select the corresponding Virtual I/O Server from the
drop-down list and click Query to retrieve its configuration information, as shown in Figure 8.
Figure 8: Retrieving Virtual I/O Server configuration information
b. To modify the virtual Fibre Channel port connections we select the Virtual Fibre Channel tab,
select the physical FC adapter port fcs0 to which we want to map the virtual Fibre Channel adapter
for our IBM i client partition, and click Modify partition connections, as shown in Figure 9.
Figure 9: Virtual Storage Management virtual Fibre Channel adapter connections
c. In the Modify Virtual Fibre Channel Partition Assignment dialog we select our IBM i client
partition i7PFE2 and click OK, as shown in Figure 10.
Figure 10: Selecting the virtual FC adapter to be mapped to the physical port
We repeat the above steps for mapping the virtual Fibre Channel adapter to a physical Fibre
Channel adapter port correspondingly for our 2nd Virtual I/O Server.
As an alternative to the Virtual Storage Management function of the HMC GUI, the virtual-to-
physical FC adapter mappings can be created from the Virtual I/O Server command line using the
vfcmap command, as described below:
On the Virtual I/O Servers, the dynamically added virtual Fibre Channel server adapter should show
up as an available vfchostX resource, as shown in the lsdev command output below for the virtual
Fibre Channel server adapter in slot 14:
$ lsdev | grep vfchost
vfchost0 Available Virtual FC Server Adapter
On each Virtual I/O Server we map each virtual Fibre Channel server adapter vfchostX to a
physical Fibre Channel port fcsX. The lsnports command can be used to list the NPIV-capable
physical Fibre Channel ports, with the "aports" column showing the available virtual ports of each
physical port, while the actual mapping is done using the vfcmap command as shown below:
$ vfcmap -vadapter vfchost0 -fcp fcs0
The newly created mapping of the virtual Fibre Channel server adapter vfchost0 to the physical
port fcs0 is shown in the "lsmap -all -npiv" command output below:
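Representative "lsmap -all -npiv" output; the location codes and client ID are illustrative. Note
that the status remains NOT_LOGGED_IN until the client partition is activated with its virtual
Fibre Channel adapter:
$ lsmap -all -npiv
Name          Physloc                            ClntID ClntName       ClntOS
------------- ---------------------------------- ------ -------------- -------
vfchost0      U8205.E6C.06A12BR-V1-C14                3

Status:NOT_LOGGED_IN
FC name:fcs0                    FC loc code:U78A0.001.DNWHZS4-P1-C4-T1
Ports logged in:0
Flags:4<NOT_LOGGED>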
6. Adding new switch zones for the new virtual Fibre Channel adapters from the IBM i client
partition
a. To create the SAN switch zoning for the new virtual Fibre Channel adapters, we first need to
retrieve the virtual WWPNs from each virtual Fibre Channel client adapter of the IBM i client
partition. On the HMC we select the IBM i client partition and choose Configuration → Manage
Profiles from the context menu. In the Manage Profiles dialog we click on the profile
"default_NPIV" we created for the NPIV configuration; in the Virtual Adapters tab we click on each
virtual Fibre Channel adapter we created, noting down both virtual WWPNs together with the slot
number, as shown in Figure 11.
Figure 11: IBM i virtual Fibre Channel adapter WWPNs
Using the WWPNs from Figure 11, request the SAN team to perform the zoning for the new virtual
Fibre Channel adapters.
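The WWPNs can also be retrieved from the HMC command line; a sketch assuming the hypothetical
managed system name Server-8233:
lssyscfg -r prof -m Server-8233 --filter "lpar_names=i7PFE2,profile_names=default_NPIV" -F virtual_fc_adapters
Each adapter entry in the output includes the slot number and its pair of virtual WWPNs.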
7. Powering down the IBM i partition
Now we engage the SAN team and share the LUN IDs that need to be remapped from the VSCSI
attachment to the new NPIV host object.
Note: The remapping of the volumes from a VSCSI to an NPIV configuration has to be performed
while the IBM i partition is powered off, i.e. heterogeneous multipathing that simultaneously
uses VSCSI and NPIV is not supported in an IBM i partition.
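The partition is normally powered down gracefully from IBM i itself (PWRDWNSYS); the state can
then be confirmed from the HMC command line. A sketch with the hypothetical managed system name
Server-8233:
lssyscfg -r lpar -m Server-8233 --filter "lpar_names=i7PFE2" -F name,state
The partition should report the state "Not Activated" before the SAN team remaps the volumes.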
8. Powering on the IBM i partition and verifying if everything works as expected with NPIV
a. From the HMC we activate our IBM i client partition again using the new partition profile
“default_NPIV” which we created for the NPIV configuration as shown in Figure 12.
Figure 12: Activating the IBM i partition with the new profile for NPIV
Once the partition is activated with the NPIV profile, check with the iSeries team that everything
is working correctly. After confirmation from the iSeries team, follow the steps below.
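The activation can also be issued from the HMC command line; a sketch with the hypothetical
managed system name Server-8233:
chsysstate -r lpar -m Server-8233 -o on -n i7PFE2 -f default_NPIV
The -f option selects the new "default_NPIV" profile for this activation.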
9. Removing the virtual SCSI disk resources from the Virtual I/O Servers
a. Since we successfully migrated the IBM i volumes from VSCSI attachment to NPIV attachment,
we remove the virtual target devices and corresponding hdisk devices on each Virtual I/O Server
used for hosting our IBM i client partition, using the rmdev command as follows:
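A minimal sketch with illustrative device names; the virtual target device is removed first, then
its backing hdisk:
$ rmdev -dev vtscsi0
$ rmdev -dev hdisk15
Repeat for every virtual target device and hdisk that was used for the migrated IBM i volumes, on
both Virtual I/O Servers.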
10. Removing the virtual SCSI server adapters from the Virtual I/O Servers
a. We dynamically remove the virtual SCSI server adapters from each Virtual I/O Server that we
previously used for serving the virtual SCSI LUNs to our IBM i client partition, by selecting the
Virtual I/O Server partition on the HMC and choosing Dynamic Logical Partitioning → Virtual
Adapters from the context menu. In the Virtual Adapters dialog we select the virtual SCSI server
adapter(s) to be deleted, choose Actions → Delete from the menu as shown in Figure 13, and click
OK.
Figure 13: Dynamically removing the unused VSCSI adapters from the Virtual I/O Servers
b. To apply the virtual SCSI adapter deletion to the partition profile as well, we save the
current configuration of our Virtual I/O Servers by selecting each Virtual I/O Server partition
hosting our IBM i client partition on the HMC, choosing Configuration → Save Current
Configuration from the context menu, selecting the appropriate (default) profile, and clicking
OK, as shown in Figure 14.
Figure 14: Saving the Virtual I/O Server’s current configuration into its profile
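The DLPAR removal can likewise be scripted from the HMC command line; a sketch with hypothetical
names (Server-8233, VIOS1) and the VSCSI server adapter in slot 11:
chhwres -r virtualio --rsubtype scsi -m Server-8233 -o r -p VIOS1 -s 11
mksyscfg -r prof -m Server-8233 -o save -p VIOS1 -n default --force
The second command saves the current configuration into the profile, matching the GUI step above.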
11. Deleting the IBM i partition’s old profile
12. Removing the Virtual I/O Servers' host objects on the storage system, if they are no longer
used for other purposes
13. Removing the non-reporting virtual SCSI resources from the IBM i partition
References
[1] IBM developerWorks for IBM i
https://www.ibm.com/developerworks/ibmi/
[2] IBM PowerVM Virtualization Introduction and Configuration
http://www.redbooks.ibm.com/redpieces/abstracts/sg247940.html?Open
[3] IBM System Storage SAN Volume Controller and IBM Storwize V7000 Command-Line
Interface User's Guide
http://www-01.ibm.com/support/docview.wss?uid=ssg1S7003983