Advanced Topics
VxVM
A layered volume is a virtual VERITAS Volume Manager
object that is built on top of other volumes. The layered
volume structure tolerates failure better and has greater
redundancy than the standard volume structure. For
example, in a striped-mirror layered volume, each
mirror (plex) covers a smaller area of storage space, so
recovery is quicker than with a standard mirrored
volume.
Layered volumes
Raid (0+1) – Mirrored-Stripe
[Diagram: a volume with two plexes (Plex1, Plex2); data is striped across the subdisks (sd1, sd2) within each plex, and the two plexes mirror each other.]
The logical objects layered volume and layered plex are used for more efficient I/O.
The primary reason for using a mirrored-stripe volume is to gain the performance offered by striping and the availability offered by mirroring.
Striping is done across the subdisks within each plex, and mirroring is done between the plexes. If a subdisk in one plex fails, the data remains available from the other plex.
Limitations:
Mirrored-stripe volumes suffer the high cost of mirroring: they require twice the disk space of non-redundant volumes.
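As a sketch of how such a layout is created in one step (assuming a disk group mydg with enough disks; the volume name is illustrative):

```shell
# Create a 2 GB mirrored-stripe (RAID 0+1) volume with 2 columns;
# vxassist mirrors the striped plex automatically for this layout.
vxassist -g mydg make mirstripevol 2g layout=mirror-stripe ncol=2
```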
Raid (1+0) – Striped-Mirror
[Diagram: a volume with one striped plex; each column of the stripe is a layered subdisk that is itself a mirrored layered volume (a layered plex over subdisks sd1, sd2). Striping happens at the top level; mirroring happens at the subdisk level.]
Mirroring is done at the subdisk level. For efficient access, layered volumes and layered plexes are used.
Striped-mirror volumes have the performance and reliability advantages of mirrored-stripe volumes, but can tolerate a higher percentage of disk drive failures without data loss. Striped-mirror volumes also have a quick recovery time after a disk drive failure, because only a single stripe column must be resynchronized instead of an entire mirror.
Limitations:
Striped-mirror volumes suffer the high cost of mirroring: they require twice the disk space of non-redundant volumes.
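The equivalent striped-mirror (RAID 1+0) layout can also be requested in one step; a sketch, assuming disk group mydg (names are illustrative):

```shell
# Create a 2 GB striped-mirror (RAID 1+0) volume with 2 columns;
# VxVM builds the underlying layered volumes and plexes automatically.
vxassist -g mydg make strmirvol 2g layout=stripe-mirror ncol=2
```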
VxVM Daemons
vxconfigd – the VxVM configuration daemon
vxsvc – the VEA service daemon
vxconfigbackupd – backs up the disk group configuration to /etc/vx/cbr/bk
vxrelocd – hot relocation
vxnotify – reports disk configuration changes managed by vxconfigd
vxcached – manages cache volumes associated with space-optimized snapshots
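A quick way to confirm that the configuration daemon is running:

```shell
# Report the operating mode of vxconfigd;
# "mode: enabled" is the normal healthy state.
vxdctl mode
```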
Online relayout
 Online relayout allows you to convert between storage layouts in VxVM, with
uninterrupted data access. Typically, you would do this to change the
redundancy or performance characteristics of a volume. VxVM adds
redundancy to storage either by duplicating the data (mirroring) or by adding
parity (RAID-5). Performance characteristics of storage in VxVM can be
changed by changing the striping parameters, which are the number of columns
and the stripe width.
 Limitations of online relayout:
 Log plexes cannot be transformed.
 Volume snapshots cannot be taken when there is an online relayout operation running
on the volume.
 Online relayout cannot create a non-layered mirrored volume in a single step.
 It always creates a layered mirrored volume even if you specify a non-layered mirrored
layout, such as mirror-stripe or mirror-concat. Use the vxassist convert command to
turn the layered mirrored volume that results from a relayout into a non-layered
volume.
 The usual restrictions apply for the minimum number of physical disks that are required
to create the destination layout. For example, mirrored volumes require at least as
many disks as mirrors, striped and RAID-5 volumes require at least as many disks as
columns, and striped-mirror volumes require at least as many disks as columns
multiplied by mirrors.
 To be eligible for layout transformation, the plexes in a mirrored volume must have
identical stripe widths and numbers of columns. Relayout is not possible unless you
make the layouts of the individual plexes identical.
 Online relayout involving RAID-5 volumes is not supported for shareable disk groups in
a cluster environment.
 Online relayout cannot transform sparse plexes, nor can it make any plex sparse. (A
sparse plex is a plex that is not the same size as the volume, or that has regions that are
not mapped to any subdisk.)
 The number of mirrors in a mirrored volume cannot be changed using relayout.
 Only one relayout may be applied to a volume at a time.
Performing online relayout
 # vxassist [-b] [-g diskgroup] relayout volume [layout=layout]  [relayout_options]
 If specified, the -b option makes relayout of the volume a background task.
 The following destination layout configurations are supported:
 concat-mirror – concatenated-mirror
 concat – concatenated
 nomirror – concatenated
 nostripe – concatenated
 raid5 – RAID-5 (not supported for shared disk groups)
 span – concatenated
 stripe – striped
For example, the following command changes a
concatenated volume, vol02, in disk group, mydg, to a
striped volume with the default number of columns, 2, and
default stripe unit size, 64 kilobytes:
# vxassist -g mydg relayout vol02 layout=stripe
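While a relayout runs, its progress can be monitored; a sketch using the same example volume:

```shell
# Show the status of the relayout operation on vol02
vxrelayout -g mydg status vol02

# List all running VxVM tasks (a relayout appears as a task)
vxtask list
```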
Hot-relocation
 Hot-relocation is a feature that allows a system to react automatically to I/O
failures on redundant objects (mirrored or RAID-5 volumes) in VxVM and
restore redundancy and access to those objects. VxVM detects I/O failures on
objects and relocates the affected subdisks. The subdisks are relocated to disks
designated as spare disks or to free space within the disk group. VxVM then
reconstructs the objects that existed before the failure and makes them
accessible again. When a partial disk failure occurs (that is, a failure affecting
only some subdisks on a disk), redundant data on the failed portion of the disk
is relocated. Existing volumes on the unaffected portions of the disk remain
accessible.
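Hot-relocation prefers disks that are designated as spares; a disk can be marked as a spare with vxedit (mydg01 is illustrative):

```shell
# Mark mydg01 as a hot-relocation spare in disk group mydg
vxedit -g mydg set spare=on mydg01

# Verify: the disk now shows the "spare" flag in the status column
vxdisk list
```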
Discovering and configuring newly
added disk devices
 The vxdiskconfig utility scans and configures new disk devices attached to the host, disk
devices that become online, or fibre channel devices that are zoned to host bus adapters
connected to this host. The command calls platform specific interfaces to configure new
disk devices and brings them under control of the operating system. It scans for disks
that were added since VxVM’s configuration daemon was last started. These disks are
then dynamically configured and recognized by VxVM.
 # vxdctl -f enable
 # vxdisk -f scandisks
 However, a complete scan is initiated if the system configuration has been modified
 by changes to:
 Installed array support libraries.
 The devices that are listed as being excluded from use by VxVM.
 DISKS (JBOD), SCSI3, or foreign device definitions.
To list the targets
To list the devices configured from
a Host Bus Adapter
To add an unsupported disk array
to the DISKS category
To verify that the DMP paths are recognized, use the vxdmpadm
getdmpnode command as shown in the following sample output for
the example array:
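Hedged examples of the commands behind the headings above (controller and enclosure names are illustrative, and exact subcommands vary by VxVM release):

```shell
# List all controllers (HBAs) known to DMP
vxdmpadm listctlr all

# List the devices configured from a given Host Bus Adapter
vxdmpadm getsubpaths ctlr=c2

# Add an unsupported JBOD disk array to the DISKS category
# (the vendor ID comes from the array documentation)
vxddladm addjbod vid=SEAGATE

# Verify that the DMP paths are recognized for the new enclosure
vxdmpadm getdmpnode enclosure=enc0
```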
To change the disk-naming scheme
 Select Change the disk naming scheme from the vxdiskadm main menu to change the disk-naming scheme that you want VxVM to use. When prompted, enter y to change the naming scheme. This restarts the vxconfigd daemon to bring the new disk-naming scheme into effect. Alternatively, you can change the naming scheme from the command line.
 Use the following command to select enclosure-based naming:
 # vxddladm set namingscheme=ebn [persistence={yes|no}] 
[use_avid=yes|no] [lowercase=yes|no]
 Use the following command to select operating system-based naming:
 # vxddladm set namingscheme=osn [persistence={yes|no}] 
[lowercase=yes|no]
The optional persistence argument allows you to select whether
the names of disk devices that are displayed by VxVM remain
unchanged after disk hardware has been reconfigured and the
system rebooted. By default, enclosure-based naming is
persistent. Operating system-based naming is not persistent by
default.
To remove the error state for simple or
nopriv disks in the boot disk group
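The usual tool for this is vxdarestore, which restores simple or nopriv disks in the boot disk group that went into the error state (for example, after a naming-scheme change); a sketch:

```shell
# Restore simple/nopriv disks in the boot disk group that are in
# the error state; run after correcting the cause (e.g. naming scheme)
/etc/vx/bin/vxdarestore
```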
Removing and replacing disks
 A replacement disk should have the same disk geometry as the disk that failed.
That is, the replacement disk should have the same bytes per sector, sectors per
track, tracks per cylinder and sectors per cylinder, same number of cylinders,
and the same number of accessible cylinders.
 You can use the prtvtoc command to obtain disk information.
To replace a disk
Replacing a failed or removed disk
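Disk replacement is normally driven through the vxdiskadm menu; a sketch of the sequence (menu wording and item numbers vary by release):

```shell
# Start the menu-driven disk administration tool
vxdiskadm
# Select "Remove a disk for replacement" for the failing disk, then,
# once the new disk is physically installed,
# select "Replace a failed or removed disk".

# Afterwards, recover redundancy on the affected volumes in background
vxrecover -g mydg -b
```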
Dynamic new LUN addition to a
new target ID
In this case, a new group of LUNS is mapped to the host by
multiple HBA ports.
An OS device scan is issued for the LUNs to be recognized and
added to DMP control.
The high-level procedure and the VxVM commands are generic.
However, the OS commands may vary for Solaris versions.
To perform online LUN addition
To clean up the device tree after
you remove LUNs
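A sketch of the cleanup on Solaris after LUN removal (OS commands differ between Solaris versions):

```shell
# Remove stale entries from the OS device tree
devfsadm -Cv

# Make VxVM rescan devices and drop the removed LUNs
vxdctl enable
vxdisk scandisks
```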
Dynamic Multipathing
 How DMP works
 The Dynamic Multipathing (DMP) feature of Veritas Volume Manager (VxVM)
provides greater availability, reliability and performance by using path failover and load
balancing. This feature is available for multiported disk arrays from various vendors.
 Multiported disk arrays can be connected to host systems through multiple paths. To detect the various paths to a disk, DMP uses a mechanism that is specific to each supported array type. DMP can also differentiate between different enclosures of a supported array type that are connected to the same host system.
 The multipathing policy used by DMP depends on the characteristics of the disk array.
DMP supports the following standard
array types:
 Active/Active (A/A) :
 Allows several paths to be used concurrently for I/O. Such arrays allow DMP to
provide greater I/O throughput by balancing the I/O load uniformly across the
multiple paths to the LUNs. In the event that one path fails, DMP automatically
routes I/O over the other available paths.
 Asymmetric Active/Active (A/A-A):
 A/A-A or Asymmetric Active/Active arrays can be accessed through secondary
storage paths with little performance degradation. Usually an A/A-A array behaves
like an A/P array rather than an A/A array. However, during failover, an A/A-A
array behaves like an A/A array.
 Active/Passive (A/P):
Allows access to its LUNs (logical units; real disks or virtual disks created using
hardware) via the primary (active) path on a single controller (also known as an
access port or a storage processor) during normal operation.
Active/Passive in explicit failover mode or non-autotrespass
mode (A/P-F):
The appropriate command must be issued to the array to make the LUNs fail
over to the secondary path.
Active/Passive with LUN group failover (A/P-G):
 For Active/Passive arrays with LUN group failover (A/P-G arrays), a group of LUNs
that are connected through a controller is treated as a single failover entity. Unlike
A/P arrays, failover occurs at the controller level, and not for individual LUNs. The
primary and secondary controllers are each connected to a separate group of LUNs. If
a single LUN in the primary controller’s LUN group fails, all LUNs in that group fail
over to the secondary controller.
 Concurrent Active/Passive (A/P-C)
 Concurrent Active/Passive in explicit failover mode or non-autotrespass mode (A/PF-C)
 Concurrent Active/Passive with LUN group failover (A/PG-C)
 Variants of the A/P, A/P-F, and A/P-G array types that support concurrent I/O and load
balancing by having multiple primary paths into a controller. This functionality is provided by a
controller with multiple ports, or by the insertion of a SAN hub or switch between an array and
a controller. Failover to the secondary (passive) path occurs only if all the active primary paths
fail.
How DMP represents multiple physical
paths to a disk as one node
Example of multipathing for a disk
enclosure in a SAN environment
Displaying the paths to a disk
 The vxdisk command is used to display the multipathing information for a
particular metadevice. The metadevice is a device representation of a particular
physical disk having multiple physical paths from one of the system’s HBA
controllers. In VxVM, all the physical disks in the system are represented as
metadevices with one or more physical paths.
To view multipathing information for a
particular metadevice
Retrieving information about a DMP
node
Displaying the members of a LUN group
To list all subpaths known to DMP:
You can use getsubpaths to obtain information about all
the paths that are connected to a particular HBA controller:
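Hedged examples for the headings above (device, node, and controller names are illustrative):

```shell
# Multipathing information for one metadevice
vxdisk list c2t0d0s2

# Information about the DMP node for a given path
vxdmpadm getdmpnode nodename=c2t0d0s2

# Members of the LUN group that a DMP node belongs to
vxdmpadm getlungroup dmpnodename=c2t0d0s2

# All subpaths known to DMP, or only those on one HBA controller
vxdmpadm getsubpaths
vxdmpadm getsubpaths ctlr=c2
```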
Displaying information about
controllers
Displaying HBA details
 The vxdmpadm getctlr command displays HBA vendor details and the
Controller ID. For iSCSI devices, the Controller ID is the IQN or IEEE-format
based name. For FC devices, the Controller ID is the WWN. Because the
WWN is obtained from ESD, this field is blank if ESD is not running. ESD is a
daemon process used to notify DDL about occurrence of events. The WWN
shown as ‘Controller ID’ maps to the WWN of the HBA port associated with
the host controller.
Examples of using the vxdmpadm
iostat command
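A typical statistics-gathering session looks like this (a sketch; the memory size is illustrative):

```shell
# Start gathering I/O statistics
vxdmpadm iostat start memory=4096

# Display accumulated statistics for all DMP nodes and their paths
vxdmpadm iostat show all

# Stop gathering when finished
vxdmpadm iostat stop
```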
Understanding the Plex State
Cycle
Additional Plex States
Displaying plex information
Listing plexes helps identify free plexes for building volumes.
Use the plex (-p) option to the vxprint command to list
information about all plexes. To display detailed information
about all plexes in the system, use the following command:
# vxprint -lp
To display detailed information about a specific plex, use the
following command:
# vxprint [-g diskgroup] -l plex
The -t option prints a single line of information about the
plex. To list free plexes, use the following command:
# vxprint -pt
Plex states
Plex condition flags
Plex kernel states
The plex kernel state indicates the accessibility of the plex to
the volume driver which monitors it.
No user intervention is required to set these states; they are
maintained internally. On a system that is operating
properly, all plexes are enabled.
Attaching and associating plexes
A plex becomes a participating plex for a volume by
attaching it to a volume. (Attaching a plex associates it with
the volume and enables the plex for use.) To attach a plex to
an existing volume, use the following command:
# vxplex [-g diskgroup] att volume plex
Example:
# vxplex -g mydg att vol01 vol01-02
If the volume does not already exist, a plex (or multiple
plexes) can be associated with the volume when it is created
using the following command:
 # vxmake [-g diskgroup] -U usetype vol volume plex=plex1[,plex2...]
For example, to create a mirrored, fsgen-type volume
named home, and to associate two existing plexes named
home-1 and home-2 with home, use the following
command:
# vxmake -g mydg -U fsgen vol home plex=home-1,home-2
Taking plexes offline
To take a plex OFFLINE so that repair or maintenance can be
performed on the physical disk containing subdisks of that plex, use the
following command:
# vxmend [-g diskgroup] off plex
If a disk has a head crash, put all plexes that have associated subdisks on
the affected disk OFFLINE. For example, if plexes vol01-02 and vol02-
02 in the disk group, mydg, had subdisks on a drive to be repaired, use
the following command to take these plexes offline:
# vxmend -g mydg off vol01-02 vol02-02
This command places vol01-02 and vol02-02 in the OFFLINE state, and
they remain in that state until it is changed. The plexes are not
automatically recovered on rebooting the system.
Detaching plexes
To temporarily detach one data plex in a mirrored volume,
use the following command:
# vxplex [-g diskgroup] det plex
For example, to temporarily detach a plex named vol01-02
in the disk group, mydg, and place it in maintenance mode,
use the following command:
# vxplex -g mydg det vol01-02
Reattaching plexes
 When a disk has been repaired or replaced and is again ready for use, the plexes
 must be put back online (plex state set to ACTIVE). To set the plexes to
ACTIVE, use one of the following procedures depending on the state of the
volume.
 ■ If the volume is currently ENABLED, use the following command to reattach
the plex:
 # vxplex [-g diskgroup] att volume plex ...
 For example, for a plex named vol01-02 on a volume named vol01 in the disk
 group, mydg, use the following command:
 # vxplex -g mydg att vol01 vol01-02
 As when returning an OFFLINE plex to ACTIVE, this command starts to
recover the contents of the plex and, after the revive is complete, sets the plex
utility state to ACTIVE.
If the volume is not in use (not ENABLED), use the
following command to re-enable the plex for use:
# vxmend [-g diskgroup] on plex
For example, to re-enable a plex named vol01-02 in the disk
group, mydg, enter:
# vxmend -g mydg on vol01-02
Listing Unstartable Volumes
An unstartable volume can be incorrectly configured or have
other errors or conditions that prevent it from being started.
To display unstartable volumes, use the vxinfo command.
This displays information about the accessibility and usability
of volumes:
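For example (the disk group name is illustrative; unstartable volumes are reported as Unstartable in the state column):

```shell
# Report accessibility and usability of all volumes in mydg
vxinfo -g mydg
```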
How to recover and start a Veritas Volume
Manager logical volume where the volume is
DISABLED ACTIVE and has a plex that is
DISABLED RECOVER
 # vxprint -ht -g testdg
DG NAME NCONFIG NLOG MINORS GROUP-ID
DM NAME DEVICE TYPE PRIVLEN PUBLEN STATE
RV NAME RLINK_CNT KSTATE STATE PRIMARY DATAVOLS SRL
RL NAME RVG KSTATE STATE REM_HOST REM_DG REM_RLNK
V NAME RVG KSTATE STATE LENGTH USETYPE PREFPLEX RDPOL
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE
SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
SV NAME PLEX VOLNAME NVOLLAYR LENGTH [COL/]OFF AM/NM MODE
dg testdg default default 84000 970356463.1203.alu
dm testdg01 c1t4d0s2 sliced 2179 8920560 -
dm testdg02 c1t6d0s2 sliced 2179 8920560 -
v test - DISABLED ACTIVE 17840128 fsgen - SELECT
pl test-01 test DISABLED RECOVER 17841120 CONCAT - RW
sd testdg01-01 test-01 testdg01 0 8920560 0 c1t4d0 ENA
sd testdg02-01 test-01 testdg02 0 8920560 8920560 c1t6d0 ENA
Change the plex test-01 to the DISABLED STALE state:
# vxmend -g diskgroup fix stale <plex_name>
For example:
# vxmend -g testdg fix stale test-01
 # vxprint -ht -g testdg
DG NAME NCONFIG NLOG MINORS GROUP-ID
DM NAME DEVICE TYPE PRIVLEN PUBLEN STATE
RV NAME RLINK_CNT KSTATE STATE PRIMARY DATAVOLS SRL
RL NAME RVG KSTATE STATE REM_HOST REM_DG REM_RLNK
V NAME RVG KSTATE STATE LENGTH USETYPE PREFPLEX RDPOL
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE
SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
SV NAME PLEX VOLNAME NVOLLAYR LENGTH [COL/]OFF AM/NM MODE
dg testdg default default 84000 970356463.1203.alu
dm testdg01 c1t4d0s2 sliced 2179 8920560 -
dm testdg02 c1t6d0s2 sliced 2179 8920560 -
v test - DISABLED ACTIVE 17840128 fsgen - SELECT
pl test-01 test DISABLED STALE 17841120 CONCAT - RW
sd testdg01-01 test-01 testdg01 0 8920560 0 c1t4d0 ENA
sd testdg02-01 test-01 testdg02 0 8920560 8920560 c1t6d0 ENA
Change the plex test-01 to the DISABLED CLEAN state:
# vxmend -g diskgroup fix clean <plex_name>
For example:
# vxmend -g testdg fix clean test-01
# vxprint -ht -g testdg
DG NAME NCONFIG NLOG MINORS GROUP-ID
DM NAME DEVICE TYPE PRIVLEN PUBLEN STATE
RV NAME RLINK_CNT KSTATE STATE PRIMARY DATAVOLS SRL
RL NAME RVG KSTATE STATE REM_HOST REM_DG REM_RLNK
V NAME RVG KSTATE STATE LENGTH USETYPE PREFPLEX RDPOL
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE
SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
SV NAME PLEX VOLNAME NVOLLAYR LENGTH [COL/]OFF AM/NM MODE
dg testdg default default 84000 970356463.1203.alu
dm testdg01 c1t4d0s2 sliced 2179 8920560 -
dm testdg02 c1t6d0s2 sliced 2179 8920560 -
v test - DISABLED ACTIVE 17840128 fsgen - SELECT
pl test-01 test DISABLED CLEAN 17841120 CONCAT - RW
sd testdg01-01 test-01 testdg01 0 8920560 0 c1t4d0 ENA
sd testdg02-01 test-01 testdg02 0 8920560 8920560 c1t6d0 ENA
Start the volume test:
# vxvol -g diskgroup start <volume>
For example:
# vxvol -g testdg start test
# vxprint -ht -g testdg
DG NAME NCONFIG NLOG MINORS GROUP-ID
DM NAME DEVICE TYPE PRIVLEN PUBLEN STATE
RV NAME RLINK_CNT KSTATE STATE PRIMARY DATAVOLS SRL
RL NAME RVG KSTATE STATE REM_HOST REM_DG REM_RLNK
V NAME RVG KSTATE STATE LENGTH USETYPE PREFPLEX RDPOL
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE
SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
SV NAME PLEX VOLNAME NVOLLAYR LENGTH [COL/]OFF AM/NM MODE
dg testdg default default 84000 970356463.1203.alu
dm testdg01 c1t4d0s2 sliced 2179 8920560 -
dm testdg02 c1t6d0s2 sliced 2179 8920560 -
v test - ENABLED ACTIVE 17840128 fsgen - SELECT
pl test-01 test ENABLED ACTIVE 17841120 CONCAT - RW
sd testdg01-01 test-01 testdg01 0 8920560 0 c1t4d0 ENA
sd testdg02-01 test-01 testdg02 0 8920560 8920560 c1t6d0 ENA
Recovering an unstartable
volume with a disabled plex in
the RECOVER state
 To recover an unstartable volume with a disabled plex in the RECOVER state
 Use the following command to force the plex into the OFFLINE state:
 # vxmend [-g diskgroup] -o force off plex
 Place the plex into the STALE state using this command:
 # vxmend [-g diskgroup] on plex
 If there are other ACTIVE or CLEAN plexes in the volume, use the following
command to reattach the plex to the volume:
 # vxplex [-g diskgroup] att volume plex
 If the volume is already enabled, resynchronization of the plex is started
immediately.
 If there are no other clean plexes in the volume, use this command to make the
plex DISABLED and CLEAN:
 # vxmend [-g diskgroup] fix clean plex
 If the volume is not already enabled, use the following command to start it, and
perform any resynchronization of the plexes in the background:
 # vxvol [-g diskgroup] -o bg start volume
 If the data in the plex was corrupted, and the volume has no ACTIVE or CLEAN
redundant plexes from which its contents can be resynchronized, it must be
restored from a backup or from a snapshot image.
Clearing the failing flag on a
disk
If I/O errors are intermittent rather than persistent, Veritas Volume
Manager sets the failing flag on a disk, rather than detaching the disk.
Such errors can occur due to the temporary removal of a cable,
controller faults, a partially faulty LUN in a disk array, or a disk with a
few bad sectors or tracks.
If the hardware fault is not with the disk itself (for example, it is caused
by problems with the controller or the cable path to the disk), you can
use the vxedit command to unset the failing flag after correcting the
source of the
I/O error.
Warning: Do not unset the failing flag if the reason for the I/O errors is
unknown. If the disk hardware truly is failing, and the flag is cleared,
there is a risk of data loss.
 To clear the failing flag on a disk
1. Use the vxdisk list command to find out which disks are failing:
# vxdisk list
DEVICE TYPE DISK GROUP STATUS
hdisk10 auto:simple mydg01 mydg online
hdisk11 auto:simple mydg02 mydg online failing
hdisk12 auto:simple mydg03 mydg online
. . .
2. Use the vxedit set command to clear the flag for each disk that is marked as
failing (in this example, mydg02):
# vxedit set failing=off mydg02
3. Use the vxdisk list command to verify that the failing flag has been cleared:
# vxdisk list
DEVICE TYPE DISK GROUP STATUS
hdisk10 auto:simple mydg01 mydg online
hdisk11 auto:simple mydg02 mydg online
hdisk12 auto:simple mydg03 mydg online
Veritas Unstartable Volume
In this example of VXVM 4.0 on a Solaris 8 system, an array
was temporarily unavailable, causing problems with a file
system whose two plexes resided on the array.
Veritas Unstartable Volume
 bash-2.03# cd /files04
bash: cd: /files04: I/O error
The volume was in DISABLED ACTIVE state, and both plexes were in DISABLED RECOVER state.
v vol04 - DISABLED ACTIVE 29360128 fsgen - SELECT
pl vol04-01 vol04 DISABLED RECOVER 29367434 STRIPE 3/128 RW
sd appsdg01-04 vol04-01 cs_array07-f0 8392167 2797389 0/0 c1t0d0 ENA
sd appsdg07-01 vol04-01 cs_array03-f2 0 5594778 0/2797389 c4t2d0 ENA
sd appsdg07-04 vol04-01 cs_array03-f2 11189556 1396899 0/8392167 c4t2d0 ENA
sd appsdg02-04 vol04-01 cs_array07-f1 8392167 2797389 1/0 c1t1d0 ENA
sd appsdg10-02 vol04-01 cs_array06-f1 2797389 5594778 1/2797389 c5t1d0 ENA
sd appsdg10-05 vol04-01 cs_array06-f1 13986945 1396899 1/8392167 c5t1d0 ENA
sd appsdg03-04 vol04-01 cs_array07-f2 8392167 2797389 2/0 c1t2d0 ENA
sd appsdg11-02 vol04-01 cs_array06-f2 8392167 6991677 2/2797389 c5t2d0 ENA
pl vol04-02 vol04 DISABLED RECOVER 29367434 STRIPE 3/128 RW
sd appsdg04-02 vol04-02 cs_array07-f3 2797389 2797389 0/0 c1t3d0 ENA
sd appsdg04-05 vol04-02 cs_array07-f3 0 2797389 0/2797389 c1t3d0 ENA
sd appsdg04-06 vol04-02 cs_array07-f3 16784334 894159 0/5594778 c1t3d0 ENA
 We confirmed that the storage array was available to the operating system.
# luxadm probe
Found Enclosure(s):
...
SENA               Name:cs_array06   Node WWN:5080020000038ba8  
  Logical Path:/dev/es/ses6
  Logical Path:/dev/es/ses7
 # luxadm display cs_array06
SLOT  FRONT DISKS (Node WWN)        REAR DISKS (Node WWN)
0     On (O.K.) 2000002037094289    On (O.K.) 200000203709422e
1     On (O.K.) 2000002037093aaf    On (O.K.) 2000002037094220
2     On (O.K.) 200000203709410b    On (O.K.) 2000002037093ddd
3     On (O.K.) 2000002037094254    On (O.K.) 200000203709422b
4     On (O.K.) 20000020370940da    On (O.K.) 2000002037094247
5     Not Installed                 Not Installed
6     On (O.K.) 2000002037093df0    On
 # vxdisk list
...
- - cs_array06-f0 appsdg failed was:c5t0d0s2
- - cs_array06-f1 appsdg failed was:c5t1d0s2
- - cs_array06-f2 appsdg failed was:c5t2d0s2
- - cs_array06-f3 appsdg failed was:c5t3d0s2
- - cs_array06-r4 appsdg failed spare was:c5t20d0s2
- - cs_array06-f4 appsdg failed was:c5t4d0s2
# cd /usr/lib/vxvm/bin
# ./vxreattach c5t0d0s2
# ./vxreattach c5t1d0s2
# ./vxreattach c5t2d0s2
# ./vxreattach c5t3d0s2
# ./vxreattach c5t20d0s2
# ./vxreattach c5t4d0s2
 We then followed the "Recovering an Unstartable Volume with a Disabled Plex in the RECOVER
State" procedure in the Volume Manager Troubleshooting Guide.
1. Force plex vol04-01 into the OFFLINE state.
# vxmend -g appsdg -o force off vol04-01
2. Place plex vol04-01 into the STALE state.
# vxmend -g appsdg on vol04-01
3. There are no other clean plexes in the volume, so make plex vol04-01 DISABLED and
CLEAN.
# vxmend -g appsdg fix clean vol04-01
4. Start the volume, and perform resynchronization of the plexes in the background.
# vxvol -g appsdg -o bg start vol04
At this point, the file system is unmounted, checked for file system consistency, and remounted.
# umount /files04
# mount /files04
UX:vxfs mount: ERROR: V-3-21268: /dev/vx/dsk/appsdg/vol04
is corrupted. needs checking
# fsck -F vxfs /dev/vx/rdsk/appsdg/vol04
log replay in progress
replay complete - marking super-block as CLEAN
# mount /files04
Restarting a Disabled Volume
If a disk failure caused a volume to be disabled, you must
restore the volume from a backup after replacing the failed
disk. Any volumes that are listed as Unstartable must be
restarted using the vxvol command before restoring their
contents from a backup. For example, to restart the volume
mkting so that it can be restored from backup, use the
following command:
# vxvol start mkting
Recovering a Mirrored
Volume
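Recovery of mirrored volumes is normally done with vxrecover; a sketch, assuming disk group mydg (the volume name is illustrative):

```shell
# Recover (resynchronize) all volumes in mydg,
# running the resynchronization in the background
vxrecover -g mydg -b

# Or recover a single volume
vxrecover -g mydg -b vol01
```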
Backing Up a Disk Group
Configuration
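The backup side uses vxconfigbackup, which writes to /etc/vx/cbr/bk by default (the same path noted for vxconfigbackupd earlier); a sketch:

```shell
# Back up the configuration of one disk group to an explicit directory
/etc/vx/bin/vxconfigbackup -l /etc/vx/cbr/bk mydg

# With no disk group argument, all disk groups are backed up
/etc/vx/bin/vxconfigbackup
```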
Restoring a Disk Group
Configuration
The following command performs a precommit analysis of
the state of the disk group configuration, and reinstalls the
disk headers where these have become corrupted:
# /etc/vx/bin/vxconfigrestore -p [-l directory]
{diskgroup | dgid}
The disk group can be specified either by name or by ID.

Vx vm

  • 1.
  • 2.
    A layered volumeis a virtual VERITAS Volume Manager object that is built on top of other volumes. The layered volume structure tolerates failure better and has greater redundancy than the standard volume structure. For example, in a striped-mirror layered volume, each mirror (plex) covers a smaller area of storage space, so recovery is quicker than with a standard mirrored volume.
  • 3.
  • 4.
    Raid (0 +1)– Mirrored-Stripe Vol Plex1 Plex2 Layered Vol Layered Plex Layered Vol Layered Plex Sd1 sd2sd2 sd1 mirroring striping striping
  • 5.
    The logical objectslayered volume and layered plex are used for more efficient I/O. The primary reason for using mirrored stripe volume is to gain the performance offered by striping and the availability offered by mirroring. Here between subdisks, striping happens and between plexes mirroring will happens. If one subdisk goes down in one plex, data can be redundant from another plex. Limitations:sd2 Mirror-striped volumes suffers the high cost of mirroring and requires twice the disk drive space of Non-Redundant volumes.
  • 6.
    Raid (1 +0)– Striped-Mirror Vol Layered subdisk Layered Sub disk Layered Vol Layered Plex Layered Vol Layered Plex Sd1 sd2 Plex Layered Plex Sd1 sd2 Layered Plex striping mirroring mirroring
  • 7.
    At the subdisklevel mirroring will happens For the efficient access we are using layered volumes and layered plexes. Striped mirror volumes have the performance and reliability advantages of a mirrored stripe volumes. But can tolerate a high percentage of disk drive failures without data loss. Stripe-mirrored volumes also have a quick recovery time after a disk drive failure. Because only a single stripe must be resynchronized instead of on entire mirror. Limitations: Striped-mirror volumes suffers the high cost of mirroring requires twice the disk drive space of a non-redundant volumes.
  • 8.
    VxVM Daemons Vxconfigd Vxsvc Vxconfigbackupd :/etc/vx/cbr/bk Vxrelocd – hot relocation Vxnotify – disk configuration changes managed by Vxconfigd Vxcached – manages cached volumes associated with space optimized snapshots.
  • 9.
    Online relayout  Onlinerelayout allows you to convert between storage layouts in VxVM, with uninterrupted data access. Typically, you would do this to change the redundancy or performance characteristics of a volume. VxVM adds redundancy to storage either by duplicating the data (mirroring) or by adding parity (RAID-5). Performance characteristics of storage in VxVM can be changed by changing the striping parameters, which are the number of columns and the stripe width.  Limitations of online relayout:
  • 10.
    Limitations of onlinerelayout  Log plexes cannot be transformed.  Volume snapshots cannot be taken when there is an online relayout operation running on the volume.  Online relayout cannot create a non-layered mirrored volume in a single step.  It always creates a layered mirrored volume even if you specify a non-layered mirrored layout, such as mirror-stripe or mirror-concat. Use the vxassist convert command to turn the layered mirrored volume that results from a relayout into a non-layered volume.  The usual restrictions apply for the minimum number of physical disks that are required to create the destination layout. For example, mirrored volumes require at least as many disks as mirrors, striped and RAID-5 volumes require at least as many disks as columns, and striped-mirror volumes require at least as many disks as columns multiplied by mirrors.
  • 11.
     To beeligible for layout transformation, the plexes in a mirrored volume must have identical stripe widths and numbers of columns. Relayout is not possible unless you make the layouts of the individual plexes identical.  Online relayout involving RAID-5 volumes is not supported for shareable disk groups in a cluster environment.  Online relayout cannot transform sparse plexes, nor can it make any plex sparse. (A sparse plex is a plex that is not the same size as the volume, or that has regions that are not mapped to any subdisk.)  The number of mirrors in a mirrored volume cannot be changed using relayout.  Only one relayout may be applied to a volume at a time.
  • 12.
Performing online relayout
# vxassist [-b] [-g diskgroup] relayout volume [layout=layout] [relayout_options]
If specified, the -b option makes relayout of the volume a background task.
The following destination layout configurations are supported:
concat-mirror   concatenated-mirror
concat          concatenated
nomirror        concatenated
nostripe        concatenated
raid5           RAID-5 (not supported for shared disk groups)
span            concatenated
stripe          striped
For example, the following command changes a concatenated volume, vol02, in disk group mydg, to a striped volume with the default number of columns, 2, and the default stripe unit size, 64 kilobytes:
# vxassist -g mydg relayout vol02 layout=stripe
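As a sanity check on what the striped destination layout does with data, the sketch below (illustrative only, not VxVM code) maps a logical byte offset to a (column, offset-within-column) pair for the 2-column, 64 KB stripe-unit layout used in the example:

```python
# Illustrative sketch: how a striped layout with ncol columns and a
# fixed stripe unit maps a logical byte offset onto its columns.

STRIPE_UNIT = 64 * 1024  # default stripe unit size from the example

def stripe_map(logical_offset, ncol=2, unit=STRIPE_UNIT):
    """Map a logical offset to (column, offset within that column)."""
    stripe_index = logical_offset // unit   # which stripe unit, overall
    column = stripe_index % ncol            # units rotate across columns
    row = stripe_index // ncol              # full stripes already written
    return column, row * unit + logical_offset % unit

# The first 64 KB goes to column 0, the next 64 KB to column 1, and so on.
print(stripe_map(0))             # → (0, 0)
print(stripe_map(64 * 1024))     # → (1, 0)
print(stripe_map(128 * 1024))    # → (0, 65536)
```

The same arithmetic explains why striped and RAID-5 destination layouts need at least as many disks as columns.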
Hot-relocation
Hot-relocation is a feature that allows a system to react automatically to I/O failures on redundant objects (mirrored or RAID-5 volumes) in VxVM and restore redundancy and access to those objects. VxVM detects I/O failures on objects and relocates the affected subdisks. The subdisks are relocated to disks designated as spare disks or to free space within the disk group. VxVM then reconstructs the objects that existed before the failure and makes them accessible again. When a partial disk failure occurs (that is, a failure affecting only some subdisks on a disk), redundant data on the failed portion of the disk is relocated. Existing volumes on the unaffected portions of the disk remain accessible.
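The placement preference described above (designated spares first, then free space in the disk group) can be sketched as follows. This is a toy model with hypothetical names, not VxVM's internal logic:

```python
# Toy sketch of hot-relocation's placement decision: affected subdisks
# are moved to designated spare disks first, then to disks with free
# space elsewhere in the disk group. Names are hypothetical.

def plan_relocation(failed_subdisks, spare_disks, free_disks):
    """Return a {subdisk: target disk} plan; None means no space was found."""
    targets = list(spare_disks) + list(free_disks)  # spares take priority
    plan = {}
    for sd in failed_subdisks:
        plan[sd] = targets.pop(0) if targets else None
    return plan

# Two subdisks on a failed disk; one spare disk and one disk with free space.
plan = plan_relocation(["mydg01-01", "mydg01-02"], ["mydg05"], ["mydg06"])
print(plan)  # → {'mydg01-01': 'mydg05', 'mydg01-02': 'mydg06'}
```

If neither spares nor free space are available (plan entry is None), redundancy is not restored, which matches vxrelocd's behaviour of notifying the administrator instead.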
Discovering and configuring newly added disk devices
The vxdiskconfig utility scans and configures new disk devices attached to the host, disk devices that become online, or fibre channel devices that are zoned to host bus adapters connected to this host. The command calls platform-specific interfaces to configure new disk devices and bring them under the control of the operating system. It scans for disks that were added since VxVM's configuration daemon was last started. These disks are then dynamically configured and recognized by VxVM.
# vxdctl -f enable
# vxdisk -f scandisks
However, a complete scan is initiated if the system configuration has been modified by changes to:
■ Installed array support libraries.
■ The devices that are listed as being excluded from use by VxVM.
■ DISKS (JBOD), SCSI3, or foreign device definitions.
To list the targets
To list the devices configured from a Host Bus Adapter
To add an unsupported disk array to the DISKS category
To verify that the DMP paths are recognized, use the vxdmpadm getdmpnode command as shown in the following sample output for the example array:
To change the disk-naming scheme
Select Change the disk naming scheme from the vxdiskadm main menu to change the disk-naming scheme that you want VxVM to use. When prompted, enter y to change the naming scheme. This restarts the vxconfigd daemon to bring the new disk-naming scheme into effect. Alternatively, you can change the naming scheme from the command line.
Use the following command to select enclosure-based naming:
# vxddladm set namingscheme=ebn [persistence={yes|no}] [use_avid=yes|no] [lowercase=yes|no]
Use the following command to select operating system-based naming:
# vxddladm set namingscheme=osn [persistence={yes|no}] [lowercase=yes|no]
The optional persistence argument allows you to select whether the names of disk devices that are displayed by VxVM remain unchanged after disk hardware has been reconfigured and the system rebooted. By default, enclosure-based naming is persistent. Operating system-based naming is not persistent by default.
To remove the error state for simple or nopriv disks in the boot disk group
Removing and replacing disks
A replacement disk should have the same disk geometry as the disk that failed. That is, the replacement disk should have the same bytes per sector, sectors per track, tracks per cylinder and sectors per cylinder, the same number of cylinders, and the same number of accessible cylinders. You can use the prtvtoc command to obtain disk information.
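The geometry-matching requirement can be sketched as a simple field-by-field comparison. The field names mirror what prtvtoc reports; the sample values are made up for illustration:

```python
# Illustrative sketch: a replacement disk must match the failed disk on
# every geometry field reported by prtvtoc. Sample values are invented.

GEOMETRY_FIELDS = ("bytes_per_sector", "sectors_per_track",
                   "tracks_per_cylinder", "sectors_per_cylinder",
                   "cylinders", "accessible_cylinders")

def geometry_matches(old, new):
    """True only if every geometry field is identical on both disks."""
    return all(old[f] == new[f] for f in GEOMETRY_FIELDS)

failed = dict(bytes_per_sector=512, sectors_per_track=248,
              tracks_per_cylinder=19, sectors_per_cylinder=4712,
              cylinders=7508, accessible_cylinders=7506)
replacement = dict(failed)  # identical geometry

print(geometry_matches(failed, replacement))  # → True
```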
Replacing a failed or removed disk
Dynamic new LUN addition to a new target ID
In this case, a new group of LUNs is mapped to the host by multiple HBA ports. An OS device scan is issued for the LUNs to be recognized and added to DMP control. The high-level procedure and the VxVM commands are generic. However, the OS commands may vary for Solaris versions.
To perform online LUN addition
To clean up the device tree after you remove LUNs
Dynamic Multipathing
How DMP works
The Dynamic Multipathing (DMP) feature of Veritas Volume Manager (VxVM) provides greater availability, reliability and performance by using path failover and load balancing. This feature is available for multiported disk arrays from various vendors.
Multiported disk arrays can be connected to host systems through multiple paths. To detect the various paths to a disk, DMP uses a mechanism that is specific to each supported array type. DMP can also differentiate between different enclosures of a supported array type that are connected to the same host system.
The multipathing policy used by DMP depends on the characteristics of the disk array.
DMP supports the following standard array types:
■ Active/Active (A/A): Allows several paths to be used concurrently for I/O. Such arrays allow DMP to provide greater I/O throughput by balancing the I/O load uniformly across the multiple paths to the LUNs. In the event that one path fails, DMP automatically routes I/O over the other available paths.
■ Asymmetric Active/Active (A/A-A): A/A-A arrays can be accessed through secondary storage paths with little performance degradation. Usually an A/A-A array behaves like an A/P array rather than an A/A array. However, during failover, an A/A-A array behaves like an A/A array.
■ Active/Passive (A/P): Allows access to its LUNs (logical units; real disks or virtual disks created using hardware) via the primary (active) path on a single controller (also known as an access port or a storage processor) during normal operation.
■ Active/Passive in explicit failover mode or non-autotrespass mode (A/P-F): The appropriate command must be issued to the array to make the LUNs fail over to the secondary path.
■ Active/Passive with LUN group failover (A/P-G): For Active/Passive arrays with LUN group failover (A/PG arrays), a group of LUNs that are connected through a controller is treated as a single failover entity. Unlike A/P arrays, failover occurs at the controller level, and not for individual LUNs. The primary and secondary controllers are each connected to a separate group of LUNs. If a single LUN in the primary controller's LUN group fails, all LUNs in that group fail over to the secondary controller.
■ Concurrent Active/Passive (A/P-C), Concurrent Active/Passive in explicit failover mode or non-autotrespass mode (A/PF-C), and Concurrent Active/Passive with LUN group failover (A/PG-C): Variants of the A/P, A/P-F and A/PG array types that support concurrent I/O and load balancing by having multiple primary paths into a controller. This functionality is provided by a controller with multiple ports, or by the insertion of a SAN hub or switch between an array and a controller. Failover to the secondary (passive) path occurs only if all the active primary paths fail.
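The difference between A/A load balancing and A/P primary-then-failover behaviour can be sketched in a few lines. This is an illustrative model, not DMP's actual implementation; the path records and names are invented:

```python
# Illustrative sketch: path selection for A/A (balance across all healthy
# paths) versus A/P (use a primary path; fail over to a secondary only
# when no primary is healthy). Not DMP's real algorithm.

def pick_path(paths, kind, counter):
    """Pick a path name; counter drives simple round-robin for A/A."""
    healthy = [p for p in paths if p["ok"]]
    if not healthy:
        raise IOError("all paths failed")
    if kind == "A/A":
        return healthy[counter % len(healthy)]["name"]  # round-robin balancing
    # A/P: prefer primary paths; use a secondary only if no primary is healthy
    primaries = [p for p in healthy if p["primary"]]
    return (primaries or healthy)[0]["name"]

paths = [{"name": "c1t0d0", "ok": True, "primary": True},
         {"name": "c2t0d0", "ok": True, "primary": False}]
print(pick_path(paths, "A/A", 0))   # → c1t0d0
print(pick_path(paths, "A/A", 1))   # → c2t0d0
paths[0]["ok"] = False              # primary path fails
print(pick_path(paths, "A/P", 0))   # → c2t0d0 (failover)
```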
How DMP represents multiple physical paths to a disk as one node
Example of multipathing for a disk enclosure in a SAN environment
Displaying the paths to a disk
The vxdisk command is used to display the multipathing information for a particular metadevice. The metadevice is a device representation of a particular physical disk having multiple physical paths from one of the system's HBA controllers. In VxVM, all the physical disks in the system are represented as metadevices with one or more physical paths.
To view multipathing information for a particular metadevice
Displaying the members of a LUN group
To list all subpaths known to DMP:
You can use getsubpaths to obtain information about all the paths that are connected to a particular HBA controller:
Displaying HBA details
The vxdmpadm getctlr command displays HBA vendor details and the Controller ID. For iSCSI devices, the Controller ID is the IQN or IEEE-format based name. For FC devices, the Controller ID is the WWN. Because the WWN is obtained from ESD, this field is blank if ESD is not running. ESD is a daemon process used to notify DDL about the occurrence of events. The WWN shown as 'Controller ID' maps to the WWN of the HBA port associated with the host controller.
Examples of using the vxdmpadm iostat command
Displaying plex information
Listing plexes helps identify free plexes for building volumes. Use the plex (-p) option to the vxprint command to list information about all plexes.
To display detailed information about all plexes in the system, use the following command:
# vxprint -lp
To display detailed information about a specific plex, use the following command:
# vxprint [-g diskgroup] -l plex
The -t option prints a single line of information about the plex. To list free plexes, use the following command:
# vxprint -pt
Plex kernel states
The plex kernel state indicates the accessibility of the plex to the volume driver, which monitors it. No user intervention is required to set these states; they are maintained internally. On a system that is operating properly, all plexes are enabled.
Attaching and associating plexes
A plex becomes a participating plex for a volume by attaching it to a volume. (Attaching a plex associates it with the volume and enables the plex for use.) To attach a plex to an existing volume, use the following command:
# vxplex [-g diskgroup] att volume plex
Example:
# vxplex -g mydg att vol01 vol01-02
If the volume does not already exist, a plex (or multiple plexes) can be associated with the volume when it is created using the following command:
# vxmake [-g diskgroup] -U usetype vol volume plex=plex1[,plex2...]
For example, to create a mirrored, fsgen-type volume named home, and to associate two existing plexes named home-1 and home-2 with home, use the following command:
# vxmake -g mydg -U fsgen vol home plex=home-1,home-2
Taking plexes offline
To take a plex OFFLINE so that repair or maintenance can be performed on the physical disk containing subdisks of that plex, use the following command:
# vxmend [-g diskgroup] off plex
If a disk has a head crash, put all plexes that have associated subdisks on the affected disk OFFLINE. For example, if plexes vol01-02 and vol02-02 in the disk group, mydg, had subdisks on a drive to be repaired, use the following command to take these plexes offline:
# vxmend -g mydg off vol01-02 vol02-02
This command places vol01-02 and vol02-02 in the OFFLINE state, and they remain in that state until it is changed. The plexes are not automatically recovered on rebooting the system.
Detaching plexes
To temporarily detach one data plex in a mirrored volume, use the following command:
# vxplex [-g diskgroup] det plex
For example, to temporarily detach a plex named vol01-02 in the disk group, mydg, and place it in maintenance mode, use the following command:
# vxplex -g mydg det vol01-02
Reattaching plexes
When a disk has been repaired or replaced and is again ready for use, the plexes must be put back online (plex state set to ACTIVE). To set the plexes to ACTIVE, use one of the following procedures depending on the state of the volume.
■ If the volume is currently ENABLED, use the following command to reattach the plex:
# vxplex [-g diskgroup] att volume plex ...
For example, for a plex named vol01-02 on a volume named vol01 in the disk group, mydg, use the following command:
# vxplex -g mydg att vol01 vol01-02
As when returning an OFFLINE plex to ACTIVE, this command starts to recover the contents of the plex and, after the revive is complete, sets the plex utility state to ACTIVE.
■ If the volume is not in use (not ENABLED), use the following command to re-enable the plex for use:
# vxmend [-g diskgroup] on plex
For example, to re-enable a plex named vol01-02 in the disk group, mydg, enter:
# vxmend -g mydg on vol01-02
Listing Unstartable Volumes
An unstartable volume can be incorrectly configured or have other errors or conditions that prevent it from being started. To display unstartable volumes, use the vxinfo command. This displays information about the accessibility and usability of volumes:
How to recover and start a Veritas Volume Manager logical volume where the volume is DISABLED ACTIVE and has a plex that is DISABLED RECOVER

# vxprint -ht -g testdg
DG NAME         NCONFIG      NLOG     MINORS   GROUP-ID
DM NAME         DEVICE       TYPE     PRIVLEN  PUBLEN   STATE
RV NAME         RLINK_CNT    KSTATE   STATE    PRIMARY  DATAVOLS  SRL
RL NAME         RVG          KSTATE   STATE    REM_HOST REM_DG    REM_RLNK
V  NAME         RVG          KSTATE   STATE    LENGTH   USETYPE   PREFPLEX RDPOL
PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE
SV NAME         PLEX         VOLNAME  NVOLLAYR LENGTH   [COL/]OFF AM/NM    MODE

dg testdg       default      default  84000    970356463.1203.alu

dm testdg01     c1t4d0s2     sliced   2179     8920560  -
dm testdg02     c1t6d0s2     sliced   2179     8920560  -

v  test         -            DISABLED ACTIVE   17840128 fsgen     -        SELECT
pl test-01      test         DISABLED RECOVER  17841120 CONCAT    -        RW
sd testdg01-01  test-01      testdg01 0        8920560  0         c1t4d0   ENA
sd testdg02-01  test-01      testdg02 0        8920560  8920560   c1t6d0   ENA
Change the plex test-01 to the DISABLED STALE state:
# vxmend -g diskgroup fix stale <plex_name>
For example:
# vxmend -g testdg fix stale test-01
# vxprint -ht -g testdg
DG NAME         NCONFIG      NLOG     MINORS   GROUP-ID
DM NAME         DEVICE       TYPE     PRIVLEN  PUBLEN   STATE
RV NAME         RLINK_CNT    KSTATE   STATE    PRIMARY  DATAVOLS  SRL
RL NAME         RVG          KSTATE   STATE    REM_HOST REM_DG    REM_RLNK
V  NAME         RVG          KSTATE   STATE    LENGTH   USETYPE   PREFPLEX RDPOL
PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE
SV NAME         PLEX         VOLNAME  NVOLLAYR LENGTH   [COL/]OFF AM/NM    MODE

dg testdg       default      default  84000    970356463.1203.alu

dm testdg01     c1t4d0s2     sliced   2179     8920560  -
dm testdg02     c1t6d0s2     sliced   2179     8920560  -

v  test         -            DISABLED ACTIVE   17840128 fsgen     -        SELECT
pl test-01      test         DISABLED STALE    17841120 CONCAT    -        RW
sd testdg01-01  test-01      testdg01 0        8920560  0         c1t4d0   ENA
sd testdg02-01  test-01      testdg02 0        8920560  8920560   c1t6d0   ENA
Change the plex test-01 to the DISABLED CLEAN state:
# vxmend -g diskgroup fix clean <plex_name>
For example:
# vxmend -g testdg fix clean test-01
# vxprint -ht -g testdg
DG NAME         NCONFIG      NLOG     MINORS   GROUP-ID
DM NAME         DEVICE       TYPE     PRIVLEN  PUBLEN   STATE
RV NAME         RLINK_CNT    KSTATE   STATE    PRIMARY  DATAVOLS  SRL
RL NAME         RVG          KSTATE   STATE    REM_HOST REM_DG    REM_RLNK
V  NAME         RVG          KSTATE   STATE    LENGTH   USETYPE   PREFPLEX RDPOL
PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE
SV NAME         PLEX         VOLNAME  NVOLLAYR LENGTH   [COL/]OFF AM/NM    MODE

dg testdg       default      default  84000    970356463.1203.alu

dm testdg01     c1t4d0s2     sliced   2179     8920560  -
dm testdg02     c1t6d0s2     sliced   2179     8920560  -

v  test         -            DISABLED ACTIVE   17840128 fsgen     -        SELECT
pl test-01      test         DISABLED CLEAN    17841120 CONCAT    -        RW
sd testdg01-01  test-01      testdg01 0        8920560  0         c1t4d0   ENA
sd testdg02-01  test-01      testdg02 0        8920560  8920560   c1t6d0   ENA
Start the volume test:
# vxvol -g diskgroup start <volume>
For example:
# vxvol -g testdg start test
# vxprint -ht -g testdg
DG NAME         NCONFIG      NLOG     MINORS   GROUP-ID
DM NAME         DEVICE       TYPE     PRIVLEN  PUBLEN   STATE
RV NAME         RLINK_CNT    KSTATE   STATE    PRIMARY  DATAVOLS  SRL
RL NAME         RVG          KSTATE   STATE    REM_HOST REM_DG    REM_RLNK
V  NAME         RVG          KSTATE   STATE    LENGTH   USETYPE   PREFPLEX RDPOL
PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE
SV NAME         PLEX         VOLNAME  NVOLLAYR LENGTH   [COL/]OFF AM/NM    MODE

dg testdg       default      default  84000    970356463.1203.alu

dm testdg01     c1t4d0s2     sliced   2179     8920560  -
dm testdg02     c1t6d0s2     sliced   2179     8920560  -

v  test         -            ENABLED  ACTIVE   17840128 fsgen     -        SELECT
pl test-01      test         ENABLED  ACTIVE   17841120 CONCAT    -        RW
sd testdg01-01  test-01      testdg01 0        8920560  0         c1t4d0   ENA
sd testdg02-01  test-01      testdg02 0        8920560  8920560   c1t6d0   ENA
Recovering an unstartable volume with a disabled plex in the RECOVER state
To recover an unstartable volume with a disabled plex in the RECOVER state:
1. Use the following command to force the plex into the OFFLINE state:
# vxmend [-g diskgroup] -o force off plex
2. Place the plex into the STALE state using this command:
# vxmend [-g diskgroup] on plex
3. If there are other ACTIVE or CLEAN plexes in the volume, use the following command to reattach the plex to the volume:
# vxplex [-g diskgroup] att volume plex
If the volume is already enabled, resynchronization of the plex is started immediately.
If there are no other clean plexes in the volume, use this command to make the plex DISABLED and CLEAN:
# vxmend [-g diskgroup] fix clean plex
4. If the volume is not already enabled, use the following command to start it, and perform any resynchronization of the plexes in the background:
# vxvol [-g diskgroup] -o bg start volume
If the data in the plex was corrupted, and the volume has no ACTIVE or CLEAN redundant plexes from which its contents can be resynchronized, it must be restored from a backup or from a snapshot image.
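The recovery procedure walks the plex through a fixed sequence of states. A simplified state-machine model of those transitions (an illustrative sketch, not VxVM code) makes the allowed sequence explicit:

```python
# Simplified model of the plex-state transitions used by the recovery
# procedure: vxmend -o force off, vxmend on, vxmend fix clean / vxplex
# att, then vxvol start. Illustrative only, not VxVM internals.

TRANSITIONS = {
    ("RECOVER", "force_off"): "OFFLINE",
    ("OFFLINE", "on"):        "STALE",
    ("STALE",   "fix_clean"): "CLEAN",   # when no other clean plex exists
    ("STALE",   "attach"):    "ACTIVE",  # resync from an ACTIVE/CLEAN plex
    ("CLEAN",   "start"):     "ACTIVE",  # vxvol start enables the volume
}

def apply_steps(state, steps):
    """Apply each recovery step in order; KeyError means an invalid step."""
    for step in steps:
        state = TRANSITIONS[(state, step)]
    return state

# The sequence used when the volume has no other clean plex:
print(apply_steps("RECOVER", ["force_off", "on", "fix_clean", "start"]))
# → ACTIVE
```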
Clearing the failing flag on a disk
If I/O errors are intermittent rather than persistent, Veritas Volume Manager sets the failing flag on a disk, rather than detaching the disk. Such errors can occur due to the temporary removal of a cable, controller faults, a partially faulty LUN in a disk array, or a disk with a few bad sectors or tracks. If the hardware fault is not with the disk itself (for example, it is caused by problems with the controller or the cable path to the disk), you can use the vxedit command to unset the failing flag after correcting the source of the I/O error.
Warning: Do not unset the failing flag if the reason for the I/O errors is unknown. If the disk hardware truly is failing, and the flag is cleared, there is a risk of data loss.
To clear the failing flag on a disk
1. Use the vxdisk list command to find out which disks are failing:
# vxdisk list
DEVICE       TYPE         DISK     GROUP  STATUS
hdisk10      auto:simple  mydg01   mydg   online
hdisk11      auto:simple  mydg02   mydg   online failing
hdisk12      auto:simple  mydg03   mydg   online
.
.
.
2. Use the vxedit set command to clear the flag for each disk that is marked as failing (in this example, mydg02):
# vxedit set failing=off mydg02
3. Use the vxdisk list command to verify that the failing flag has been cleared:
# vxdisk list
DEVICE       TYPE         DISK     GROUP  STATUS
hdisk10      auto:simple  mydg01   mydg   online
hdisk11      auto:simple  mydg02   mydg   online
hdisk12      auto:simple  mydg03   mydg   online
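When many disks are involved, it helps to pull the failing disk names out of the `vxdisk list` output programmatically. A small sketch (the sample text mirrors the listing above; the column parsing is an assumption about the whitespace-separated format):

```python
# Illustrative sketch: extract the disk media names marked "failing"
# from vxdisk list output. Parsing assumes whitespace-separated columns.

SAMPLE = """\
DEVICE       TYPE         DISK     GROUP  STATUS
hdisk10      auto:simple  mydg01   mydg   online
hdisk11      auto:simple  mydg02   mydg   online failing
hdisk12      auto:simple  mydg03   mydg   online
"""

def failing_disks(listing):
    """Return the DISK column value for every row whose STATUS includes 'failing'."""
    disks = []
    for line in listing.splitlines()[1:]:   # skip the header row
        fields = line.split()
        if "failing" in fields[4:]:         # STATUS may span several words
            disks.append(fields[2])         # the DISK (disk media) name
    return disks

print(failing_disks(SAMPLE))  # → ['mydg02']
```

Each name returned could then be fed to `vxedit set failing=off <disk>` as shown in the procedure.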
Veritas Unstartable Volume
In this example of VxVM 4.0 on a Solaris 8 system, an array was temporarily unavailable, causing problems with a file system whose two plexes resided on the array.
bash-2.03# cd /files04
bash: cd: /files04: I/O error

The volume was in DISABLED ACTIVE state, and both plexes were in DISABLED RECOVER state.

v  vol04        -        DISABLED ACTIVE   29360128 SELECT -     fsgen
pl vol04-01     vol04    DISABLED RECOVER  29367434 STRIPE 3/128 RW
sd appsdg01-04  vol04-01 cs_array07-f0 8392167  2797389 0/0       c1t0d0 ENA
sd appsdg07-01  vol04-01 cs_array03-f2 0        5594778 0/2797389 c4t2d0 ENA
sd appsdg07-04  vol04-01 cs_array03-f2 11189556 1396899 0/8392167 c4t2d0 ENA
sd appsdg02-04  vol04-01 cs_array07-f1 8392167  2797389 1/0       c1t1d0 ENA
sd appsdg10-02  vol04-01 cs_array06-f1 2797389  5594778 1/2797389 c5t1d0 ENA
sd appsdg10-05  vol04-01 cs_array06-f1 13986945 1396899 1/8392167 c5t1d0 ENA
sd appsdg03-04  vol04-01 cs_array07-f2 8392167  2797389 2/0       c1t2d0 ENA
sd appsdg11-02  vol04-01 cs_array06-f2 8392167  6991677 2/2797389 c5t2d0 ENA
pl vol04-02     vol04    DISABLED RECOVER  29367434 STRIPE 3/128 RW
sd appsdg04-02  vol04-02 cs_array07-f3 2797389  2797389 0/0       c1t3d0 ENA
sd appsdg04-05  vol04-02 cs_array07-f3 0        2797389 0/2797389 c1t3d0 ENA
sd appsdg04-06  vol04-02 cs_array07-f3 16784334 894159  0/5594778 c1t3d0 ENA
We confirmed that the storage array was available to the operating system.

# luxadm probe
Found Enclosure(s):
...
SENA   Name:cs_array06   Node WWN:5080020000038ba8
  Logical Path:/dev/es/ses6
  Logical Path:/dev/es/ses7

# luxadm display cs_array06
SLOT  FRONT DISKS (Node WWN)      REAR DISKS (Node WWN)
0     On (O.K.) 2000002037094289  On (O.K.) 200000203709422e
1     On (O.K.) 2000002037093aaf  On (O.K.) 2000002037094220
2     On (O.K.) 200000203709410b  On (O.K.) 2000002037093ddd
3     On (O.K.) 2000002037094254  On (O.K.) 200000203709422b
4     On (O.K.) 20000020370940da  On (O.K.) 2000002037094247
5     Not Installed               Not Installed
6     On (O.K.) 2000002037093df0  On
# vxdisk list
...
-  -  cs_array06-f0  appsdg  failed        was:c5t0d0s2
-  -  cs_array06-f1  appsdg  failed        was:c5t1d0s2
-  -  cs_array06-f2  appsdg  failed        was:c5t2d0s2
-  -  cs_array06-f3  appsdg  failed        was:c5t3d0s2
-  -  cs_array06-r4  appsdg  failed spare  was:c5t20d0s2
-  -  cs_array06-f4  appsdg  failed        was:c5t4d0s2

# cd /usr/lib/vxvm/bin
# ./vxreattach c5t0d0s2
# ./vxreattach c5t1d0s2
# ./vxreattach c5t2d0s2
# ./vxreattach c5t3d0s2
# ./vxreattach c5t20d0s2
# ./vxreattach c5t4d0s2
We then followed the "Recovering an Unstartable Volume with a Disabled Plex in the RECOVER State" procedure in the Volume Manager Troubleshooting Guide.
1. Force plex vol04-01 into the OFFLINE state.
# vxmend -g appsdg -o force off vol04-01
2. Place plex vol04-01 into the STALE state.
# vxmend -g appsdg on vol04-01
3. There are no other clean plexes in the volume, so make plex vol04-01 DISABLED and CLEAN.
# vxmend -g appsdg fix clean vol04-01
4. Start the volume, and perform resynchronization of the plexes in the background.
# vxvol -g appsdg -o bg start vol04
At this point, the file system is unmounted, checked for file system consistency, and remounted.
# umount /files04
# mount /files04
UX:vxfs mount: ERROR: V-3-21268: /dev/vx/dsk/appsdg/vol04 is corrupted. needs checking
# fsck -F vxfs /dev/vx/rdsk/appsdg/vol04
log replay in progress
replay complete - marking super-block as CLEAN
# mount /files04
Restarting a Disabled Volume
If a disk failure caused a volume to be disabled, you must restore the volume from a backup after replacing the failed disk. Any volumes that are listed as Unstartable must be restarted using the vxvol command before restoring their contents from a backup. For example, to restart the volume mkting so that it can be restored from backup, use the following command:
# vxvol start mkting
Backing Up a Disk Group Configuration
Restoring a Disk Group Configuration
The following command performs a precommit analysis of the state of the disk group configuration, and reinstalls the disk headers where these have become corrupted:
# /etc/vx/bin/vxconfigrestore -p [-l directory] {diskgroup | dgid}
The disk group can be specified either by name or by ID.