• symantec
VERITAS Storage Foundation 5.0 for UNIX: Maintenance
100-002353-B
• COURSE DEVELOPERS Gail Adey, Bill Gerrits. TECHNICAL CONTRIBUTORS AND REVIEWERS Jade Arrington, Margy Cassidy, Roy Freeman, Joe Gallagher, Bruce Garner, Tomer Gurantz, Bill Havey, Gene Henriksen, Gerald Jackson, Raymond Karns, Bill Lehman, Bob Lucas, Daravone Manikhong, Christian Rabanus, Dave Rogers, Kleber Saldanha, Albrecht Scriba, Micha Simoni, Ananda Sirisena, Pete Toemmes.

Copyright © 2006 Symantec Corporation. All rights reserved. Symantec, the Symantec Logo, and VERITAS are trademarks or registered trademarks of Symantec Corporation or its affiliates in the U.S. and other countries. Other names may be trademarks of their respective owners.

THIS PUBLICATION IS PROVIDED "AS IS" AND ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT, ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO BE LEGALLY INVALID. SYMANTEC CORPORATION SHALL NOT BE LIABLE FOR INCIDENTAL OR CONSEQUENTIAL DAMAGES IN CONNECTION WITH THE FURNISHING, PERFORMANCE, OR USE OF THIS PUBLICATION. THE INFORMATION CONTAINED HEREIN IS SUBJECT TO CHANGE WITHOUT NOTICE.

No part of the contents of this book may be reproduced or transmitted in any form or by any means without the written permission of the publisher.

VERITAS Storage Foundation 5.0 for UNIX: Maintenance
August 2006 Printing
Symantec Corporation
20330 Stevens Creek Blvd.
Cupertino, CA 95014
http://www.symantec.com
• Table of Contents

Course Introduction
  VERITAS Volume Manager Maintenance Tasks  Intro-2
  VERITAS Storage Foundation Curriculum  Intro-3

Lesson 1: Maintaining Data Consistency
  Resynchronization Operations  1-3
  Interpreting State Information for VxVM Objects  1-9
  Modifying VxVM Object States  1-18

Lesson 2: Managing Devices Within the VxVM Architecture
  Managing Components in the VxVM Architecture  2-3
  Discovering Disk Devices  2-13
  Managing Multiple Paths to Disk Devices  2-16

Lesson 3: Encapsulation and Rootability
  Placing the Boot Disk Under VxVM Control  3-3
  Creating an Alternate Boot Disk  3-18
  Removing the Boot Disk from VxVM Control  3-22

Lesson 4: Troubleshooting the Boot Process
  Operating System Boot Processes  4-3
  Troubleshooting the Boot Process  4-4
  Recovering the Boot Disk Group  4-15

Lesson 5: Volume Maintenance
  Changing the Volume Layout  5-3
  Managing Volume Tasks  5-12
  Analyzing Volume Configurations with Storage Expert  5-21

Lesson 6: Performance Monitoring
  Storage Performance Analysis Process  6-3
  VxVM Performance Monitoring Tools and Techniques  6-7

Lesson 7: Point-in-Time Copies
  What Is a Point-in-Time Copy?  7-3
  Types of PITC Solutions in Storage Foundation  7-8
  Creating and Managing Volume Snapshots  7-17
  Using Volume Snapshots for Off-Host Processing  7-27
  Creating and Managing Storage Checkpoints  7-31

Lesson 8: Other Enterprise Features Overview
  What Is Dynamic Storage Tiering?  8-3
  What Is Intelligent Storage Provisioning?  8-10
  What Is the Storage Foundation Management Server?  8-18
• Appendix A: Lab Exercises
  Lab 1: Maintaining Data Consistency  A-3
  Lab 2: Managing Devices Within the VxVM Architecture  A-13
  Lab 3: Encapsulation and Rootability  A-25
  Lab 4: Troubleshooting the Boot Process  A-35
  Lab 5: Volume Maintenance  A-43
  Lab 6: Performance Monitoring  A-49
  Lab 7: Point-in-Time Copies  A-63

Appendix B: Lab Solutions
  Lab 1 Solutions: Maintaining Data Consistency  B-3
  Lab 2 Solutions: Managing Devices Within the VxVM Architecture  B-21
  Lab 3 Solutions: Encapsulation and Rootability  B-37
  Lab 4 Solutions: Troubleshooting the Boot Process  B-51
  Lab 5 Solutions: Volume Maintenance  B-65
  Lab 6 Solutions: Performance Monitoring  B-75
  Lab 7 Solutions: Point-in-Time Copies  B-97

Appendix C: Boot Processes and VxVM Start-Up Scripts
  VxVM and the Solaris Boot Process  C-2
  VxVM and the HP-UX Boot Process  C-15

Index
    • Course Introduction
• VERITAS Volume Manager Maintenance Tasks

VxVM Maintenance:
• Device Management
  - How does VxVM integrate into my system architecture?
  - How do I discover new devices?
  - How do I manage dynamic multipathing?
• Recovery Management
  - How can I recover critical data?
  - How do I resolve disk failure?
  - How do I recover a plex?
  - How do I recover the boot disk?
• Performance Management
  - How can I accelerate access to critical data?
  - Where are the performance problems?
  - How is my hardware affecting performance?
  - How do I tune VxVM and optimize I/O?

Before you perform any maintenance tasks, you should understand the VxVM architecture and how to manage devices connected to your system. When you encounter a problem on a system running VERITAS Volume Manager, you must know how to accurately identify the problem and select the appropriate solution. By learning how to use VxVM recovery tools and apply recovery techniques in appropriate ways, you can troubleshoot problems that may occur in your environment and minimize the loss of critical data.

A variety of factors, such as hardware, location of data on drives, and the application I/O profile, can impact the performance of VERITAS Volume Manager (VxVM). The performance management techniques discussed in this training enable you to identify and remove performance bottlenecks without disrupting users, and to accelerate access to critical information.
• Storage Foundation Curriculum Path

(Diagram: the curriculum path runs from VERITAS Storage Foundation for UNIX: Fundamentals to VERITAS Storage Foundation for UNIX: Maintenance, which together make up the VERITAS Storage Foundation for UNIX curriculum.)

VERITAS Storage Foundation Curriculum

VERITAS Storage Foundation for UNIX: Maintenance training is designed to provide you with comprehensive instruction on making the most of VERITAS Storage Foundation.
• VERITAS Storage Foundation for UNIX: Maintenance Overview

• Lesson 1: Maintaining Data Consistency
• Lesson 2: Managing Devices Within the VxVM Architecture
• Lesson 3: Encapsulation and Rootability
• Lesson 4: Troubleshooting the Boot Process
• Lesson 5: Volume Maintenance
• Lesson 6: Performance Monitoring
• Lesson 7: Point-in-Time Copies
• Lesson 8: Other Enterprise Features Overview

This training provides instruction on device management, troubleshooting, recovery, and performance monitoring for users of VERITAS Storage Foundation.

Objectives

After completing this course, you will be able to:
• Interpret plex, volume, and kernel states; fix plex and volume failures by using VxVM tools; and resolve data consistency problems by analyzing plex states.
• Describe the VxVM architecture and manage the device discovery layer and dynamic multipathing feature.
• Place the root disk under VxVM control and mirror the root disk.
• Identify boot processes, debug VxVM during system start-up, and resolve boot disk problems.
• Reconfigure volumes online and use the Storage Expert utility to analyze volume configurations.
• Monitor VxVM performance and identify how volume configurations contribute to performance optimization.
• Create and manage instant volume snapshots and storage checkpoints.
• Describe Dynamic Storage Tiering (DST), Intelligent Storage Provisioning (ISP), and the Storage Foundation Management Server (SFMS).
• Course Resources

• Lab Exercises (Appendix A)
• Lab Solutions (Appendix B)
• Boot Processes and VxVM Start-Up Scripts (Appendix C)

Additional Course Resources

Appendix A: Lab Exercises
This section contains hands-on exercises that enable you to practice the concepts and procedures presented in the lessons.

Appendix B: Lab Solutions
This section contains detailed solutions to the lab exercises for each lesson.

Appendix C: Boot Processes and VxVM Start-Up Scripts
This section contains a summary of the scripts involved in VxVM startup.
• Typographic Conventions Used in This Course

The following tables describe the typographic conventions used in this course.

Typographic Conventions in Text and Commands

Convention: Courier New, bold
  Element: Command input, both syntax and examples
  Examples:
    To display the robot and drive configuration: tpconfig -d
    To display disk information: vxdisk -o alldgs list

Convention: Courier New, plain
  Element: Command output; command names, directory names, file names, path names, user names, passwords, and URLs when used within regular text paragraphs
  Examples:
    In the output:
      protocol_minimum: 40
      protocol_maximum: 60
      protocol_current: 0
    Locate the altnames directory.
    Go to http://www.symantec.com.
    Enter the value 300.
    Log on as user1.

Convention: Courier New, italic (bold or plain)
  Element: Variables in command syntax, input, and output. Variables in command input are italic, plain; variables in command output are italic, bold.
  Examples:
    To install the media server: /cdrom_directory/install
    To access a manual page: man command_name
    To display detailed information for a disk: vxdisk -g disk_group list disk_name

Typographic Conventions in Graphical User Interface Descriptions

Convention: Arrow
  Element: Menu navigation paths
  Example: Select File-->Save.

Convention: Initial capitalization
  Element: Buttons, menus, windows, options, and other interface elements
  Examples: Select the Next button. Open the Task Status window. Remove the checkmark from the Print File check box.

Convention: Quotation marks
  Element: Interface elements with long names
  Example: Select the "Include subvolumes in object view window" check box.
    • Lesson 1 Maintaining Data Consistency
• Lesson Introduction

• Lesson 1: Maintaining Data Consistency (this lesson)
• Lesson 2: Managing Devices Within the VxVM Architecture
• Lesson 3: Encapsulation and Rootability
• Lesson 4: Troubleshooting the Boot Process
• Lesson 5: Volume Maintenance
• Lesson 6: Performance Monitoring
• Lesson 7: Point-in-Time Copies
• Lesson 8: Other Enterprise Features Overview

Lesson Topics and Objectives

After completing this lesson, you will be able to:
Topic 1: Resynchronization Operations - Describe mirror resynchronization processes.
Topic 2: Interpreting State Information for VxVM Objects - Interpret plex state and condition flags, volume states, and kernel states.
Topic 3: Modifying VxVM Object States - Fix plex and volume failures by using VxVM tools.
• Resynchronization Operations

Resynchronization is the process of ensuring that after a system crash:
• All mirrors in a volume contain exactly the same data.
• Data and parity in RAID-5 volumes agree.

(Diagram: after a crash, the questions are: Did all writes complete? Do all mirrors contain the same data?)

Types of mirror resynchronization:
• Atomic-copy resynchronization
• Read-writeback resynchronization

What Is Resynchronization?

Resynchronization is the process of ensuring that, after a system crash:
- All mirrors in mirrored volumes contain exactly the same data.
- Data and parity in RAID-5 volumes agree.

Data is written to the mirrors of a volume in parallel. If a system crash occurs before all the individual writes complete, some writes may complete while other writes do not. This system crash can cause two reads from the same region of the volume to return different results if different mirrors are used to satisfy the read request. In the case of RAID-5 volumes, two reads returning different results can lead to parity corruption and incorrect data reconstruction.

VxVM uses volume resynchronization processes to ensure that all copies of the data match exactly. VxVM records when a volume is first written to and marks it as dirty. When a volume is closed by all processes or stopped cleanly by the administrator, all writes have been completed, and Volume Manager removes the dirty flag for the volume. Only volumes that are marked dirty when the system reboots require resynchronization.

Not all volumes require resynchronization after a system failure. Volumes that were never written or that had no active I/O when the system failure occurred do not require resynchronization. The volume is completely accessible during the two modes of resynchronization.
• Atomic-Copy Resynchronization

Atomic-copy resynchronization involves the sequential writing of all blocks of a volume to a plex. This type of resynchronization is used in:
• Adding a new plex (mirror)
• Reattaching a detached plex (mirror) to a volume
• Online reconfiguration operations:
  - Moving a plex
  - Copying a plex
  - Creating a snapshot
  - Moving a subdisk

Atomic-copy resynchronization refers to the sequential writing of all blocks of the volume to a plex. This operation is used anytime a new mirror is added to a volume, or an existing mirror is in stale mode and has to be resynchronized.

Atomic-Copy Resynchronization Process
1 The plex being copied to is set to a write-only state.
2 A read thread is started on the whole volume. (Every block is read internally.)
3 Blocks are written from the "good" plex to the stale or new plex.
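To see an atomic-copy resynchronization in action, here is a minimal sketch (the disk group datadg and volume vol01 are placeholder names): attaching a new mirror starts the sequential copy, and the running synchronization task can be observed with vxtask.

    # Attach a new mirror to vol01; VxVM sequentially copies every
    # block of the volume to the new, write-only plex.
    vxassist -g datadg mirror vol01

    # While the copy runs, list active VxVM tasks to watch progress.
    vxtask list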
• Read-Writeback Resynchronization

Read-writeback resynchronization makes all plexes identical by alternately copying regions between plexes. This type of resynchronization is used in:
• Recovery of mirrors after a system crash
• Growing a volume

In this type of resynchronization:
• Mirrors marked ACTIVE remain ACTIVE, and the volume is placed in the SYNC state.
• An internal read thread is started. Blocks are read from the plex specified in the read policy, and the data is written to the other plexes.
• Upon completion, the SYNC flag is turned off.

Read-writeback resynchronization is a process of ensuring that two plexes have the same content. Because the application must ensure that all writes are completed, the application must fix any writes that are not completed. The responsibility of VxVM is to guarantee that the mirrors have the same data. A database (as an application) usually does this by writing the original data back to the disk. A file system checks to ensure that all of its structures are intact. The applications using the file system must do their own checking.

Read-Writeback Resynchronization Process
1 All plexes that were ACTIVE at the time of the crash are set to the ACTIVE state again, and the volume is placed in the SYNC state (or the NEEDSYNC state if the disk group has more than one volume).
2 An internal read thread is started to read the entire volume, and blocks are read from whatever plex is in the read policy and are written back to the other plexes. Because the default read policy is Select, and this chooses Round Robin over Preferred, blocks are read from one plex and written to another, alternately.
3 When the resynchronization process is complete, the SYNC flag is turned off (set to ACTIVE).

User-initiated reads are also written to the other plexes in the volume but otherwise have no effect on the internal read thread.
• Impact of Resynchronization

Resynchronization takes time and impacts performance. To minimize this performance impact, VxVM provides the following solutions:
• Dirty region logging for mirrored volumes
• RAID-5 logging for RAID-5 volumes
• FastResync for mirrored and snapshot volumes

Minimizing the Impact of Resynchronization

The process of resynchronization can impact system performance and can take time. To minimize the performance impact of resynchronization, VxVM provides:
- Dirty region logging for mirrored volumes
- RAID-5 logging for RAID-5 volumes
- FastResync for mirrored and snapshot volumes

The FastResync option requires the FlashSnap license.
• Dirty Region Logging

• For mirrored volumes with logging enabled, DRL speeds plex resynchronization. Only regions that are dirty need to be resynchronized after a crash.
• If you resize a volume, the log size does not change. To resize the log, you must delete the log and add it back after resizing the volume.

You were introduced to dirty region logging (DRL) when you created a volume with a log. This section describes how dirty region logging works.

How Does DRL Work?

DRL logically divides a volume into a set of consecutive regions and keeps track of the regions to which writes occur. A log is maintained that contains a status bit representing each region of the volume. For any write operation to the volume, the regions being written are marked dirty in the log before the data is written. If a write causes a log region to become dirty when it was previously clean, the log is synchronously written to disk before the write operation can occur. On system restart, VxVM recovers only those regions of the volume that are marked as dirty in the dirty region log.

Log subdisks store the dirty region log of a volume that has DRL enabled. Only one log subdisk can exist per plex. Multiple log subdisks can be used to mirror the dirty region log. If a plex contains a log subdisk and no data subdisks, it is called a log plex.

Only a limited number of bits can be marked dirty in the log at any time. The dirty bit for a region is not cleared immediately after writing the data to the region. Instead, it remains marked as dirty until the corresponding volume region becomes the least recently used.
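As a short sketch of the log-resizing note above (datadg and vol01 are placeholder names), a DRL log plex is removed and re-added with standard vxassist operations:

    # Add a dirty region log plex to a mirrored volume.
    vxassist -g datadg addlog vol01 logtype=drl

    # The log does not grow with the volume; to resize it, remove
    # the log, resize the volume, and then add the log back.
    vxassist -g datadg remove log vol01
    vxassist -g datadg addlog vol01 logtype=drl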
• Dirty Region Logging: Example

(Diagram: the volume's DRL holds an active bitmap and a recovery bitmap; the figure shows the bit patterns in each bitmap before and after a crash.)

Dirty Region Log Size

In the dirty region log:
• A small number of bytes of the DRL are reserved for internal use.
• The remaining bytes are used for the DRL bitmap.
• The bytes are divided into two bitmaps: an active bitmap and a recovery bitmap.
• Each bit in the active bitmap maps to a single region of the volume.
• A maximum of 2048 dirty regions per system is allowed by default.

How the Bitmaps Are Used in Dirty Region Logging
1 Both bitmaps are zeroed when the volume is started initially, after a clean shutdown. As regions transition to dirty, the corresponding bits are set before the writes to the volume occur.
2 If the system crashes, the active map is OR'd with the recovery map. Mirror resynchronization is now limited to the dirty bits in the recovery map. The active map is simultaneously reset, and normal volume I/O is permitted.

Usage of two bitmaps in this way allows VxVM to handle multiple system crashes.
• Why Is It Important to Understand Object States?

• VxVM uses plex and volume states or condition flags to decide:
  - If a volume can be started
  - If a mirrored volume needs any synchronization
  - Which copy of data (plex) is to be used as the source if synchronization is required
  - If I/O is allowed to different copies of data (plexes)
• You may need to manipulate plex or volume states under certain conditions:
  - To recover volumes if one or more failures prevent VxVM from taking automatic recovery actions
  - To recover from situations where interrupted configuration tasks leave volumes in unusable states
  - To take specific copies of data out of the I/O path for maintenance purposes
  - To use disk group configuration backups to create specific configurations

Interpreting State Information for VxVM Objects

VxVM uses plex and volume states or condition flags to decide which operations can be performed. You may need to manipulate plex or volume states under certain conditions.
• How Volumes Are Created

vxassist is a top-down utility (you only specify the properties of the volume you want to create) that creates volumes bottom-up:
1. Create subdisks.
2. Associate subdisks to plexes.
3. Associate plexes to a volume.
4. Initialize the volume's plexes.
5. Start the volume.

In order to troubleshoot and solve problems associated with mirrors, you must understand how volumes are created. The vxassist utility is a top-down utility, which means that you specify only the properties of the volume that you want to create. However, vxassist actually creates the volumes using a bottom-up approach, which means that subdisks are created first and used to build volumes.

To create a volume, vxassist follows this process:
1 Determine where you will place the data and create subdisks on the appropriate disk drives.
2 Create mirrors and associate each of the subdisks to the mirrors that will be used in the volume.
3 Create the volume and associate the mirrors to the volume. The result is a volume with one or more plexes.
4 Initialize the volume's plexes by selecting the plex that represents the data for the volume. You perform this action by using the vxvol init command. Initializing a volume is similar to using a low-level format command on a disk drive: it states how to navigate to the data. (By default, vxassist creates both plexes as having the data and copies them together using read-writeback synchronization.)
5 Start the volume. Starting a volume involves enabling the area that the volume represents on disk, and enabling its object in the disk group configuration database, to accept user and system I/O.
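A short sketch of that flow (datadg, vol01, and the size are placeholder values): one top-down vxassist command performs all the bottom-up steps, and vxprint then displays the resulting object hierarchy.

    # vxassist creates the subdisks, associates them to plexes,
    # associates the plexes to a volume, initializes the plexes,
    # and starts the volume, all from one specification.
    vxassist -g datadg make vol01 1g layout=mirror nmirror=2

    # Show the bottom-up result: v (volume), pl (plex), sd (subdisk).
    vxprint -g datadg -ht vol01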
• Identifying Plex Problems

To identify and solve plex problems, use the following information:
• Plex states
• Volume states
• Plex kernel states
• Volume kernel states
• Object condition flags

Commands to display plex, volume, and kernel states:
vxprint -g diskgroup -ht [volume_name]
vxinfo -p -g diskgroup [volume]

You can use the STATE fields in the output of the vxprint and vxinfo commands to determine that a problem has occurred, and to assist in determining how to fix the problem. VxVM displays state information for:
- Plex states
- Volume states
- Plex kernel states
- Volume kernel states

The plex and volume state fields are not always accurate, because administrators can change them. However, kernel state flags are absolute; that is, only VxVM can change them. Therefore, kernel state flags are always accurate.

A particular plex state does not necessarily mean that the data is good or bad. The plex state represents VxVM's perception of the data in a plex. VxVM is usually conservative; that is, if VxVM detects that data is not synchronized, then the plex states are set accordingly.
    • symantec. Displaying Object States vo I volOl plex voIDl-Q! plex volOl-02 Esgen ACTIn ACTIVE Sta.rted vxinfo -p -g datadg volOl vxprint -g datadg -ht volOl v NAME PVG!VSET/CO KSTATE STATE LENGTH READPOL PREFPLEX UTYPE PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NeQL/WID MODE so NAME PLEX DISK DISKOFFS LENGTH [COLI) OFF DEVICE MODE sv NAME PLEX VOLNAME NVOLLA YR LENGTH tcor./: aFF !>.MINH MODE v voiOl ENABLED ACTIW 204800 SELECT fsgen pi 10101-01 volO! ENABLED ACTIV!: 205200 CONCAT RW ad datadgOl 01 voIOl-O! datadgOl 0 205200 0 diskO 1 ENA p I volOl-02 voiOl ENABLED ACTIVE 205200 CONCAT RW sd datadg02-01 volOl-02 datadg02 0 205200 0 diskO 2 ENA Example of vxinfo and vxprint If you do not specify the volume name on the command line for the vxprint or vxinfo commands, information on all the volume, within the specified disk group is displayed. 1-12 VERITAS Storage Foundation 5.0 for UNIX: Maintenance
• Plex States and Condition Flags

(Diagram: plex state transitions among EMPTY, CLEAN (SNAPDONE), and ACTIVE (SNAPDONE). vxvol start takes P1 and P2 from DISABLED/CLEAN to ENABLED/ACTIVE, and the volume from DISABLED/CLEAN to ENABLED/ACTIVE; vxvol stop reverses the transition. Key: P1 = first plex state, P2 = second plex state, [V] = volume state.)

Interpreting Plex States

Plex States

EMPTY: When you create a volume, all of the plexes and the volume are set to the EMPTY state. This state indicates that you have not yet defined which plex has the good data (CLEAN), and which plex does not have the good data (STALE). You can only achieve the EMPTY state by creating a new volume using vxmake, or by using related administrative commands.

CLEAN: The CLEAN state is normal and indicates that the plex has a copy of the data that represents the volume. CLEAN also means that the volume is not started and is not currently able to handle I/O (by the administrator's control).

ACTIVE: The ACTIVE state is the same as CLEAN, but the volume is or was currently started, and the volume is or was able to perform I/O.

SNAPDONE: The SNAPDONE state is the same as ACTIVE or CLEAN, but SNAPDONE is a plex that has been synchronized with the volume as a result of a vxassist snapstart operation. After a reboot or a manual start of the volume, a plex in the SNAPDONE state still exists. It is persistent.
• Plex States and Condition Flags

(Diagram: additional plex states STALE (SNAPATT), OFFLINE, and TEMP. Key: P1 = first plex state, P2 = second plex state, [V] = volume state.)

STALE: The STALE state indicates that VxVM detects that the data in the plex is not synchronized with the data in the CLEAN plexes. This state is usually caused by taking the plex offline (I/O can still be going to the other plexes, making them unsynchronized) or by a disk failure, which means that the plex was not updated when new writes came into the volume.

SNAPATT: The SNAPATT state indicates that the object is a snapshot that is currently being synchronized but does not yet have a complete copy of the data.

OFFLINE: The OFFLINE state indicates that the administrator has issued the vxmend off command on the plex. The plex does not participate in any I/O when it is offline, so actively writing to the volume causes the contents to become outdated. When the administrator brings the plex back online using the vxmend on command, the plex changes to the STALE state.

TEMP: The TEMP state flags (TEMP, TEMPRM, TEMPRMSD) usually indicate that the data was never a copy of the volume's data, and it is recommended that you not use these plexes. These temporary states indicate that the plex is currently involved in a synchronization operation with the volume.
• Note: If the volume is nonredundant at the time that you reattach the drive, the plex state changes from NODEVICE to RECOVER instead of IOFAIL.

Condition Flags

If a plex is not synchronized with the volume and VxVM has information about why it is not synchronized, then a condition flag is displayed. Multiple condition flags can be set on the same plex at the same time. Only the most informative flags are displayed in the state field of the vxprint output. For example, if a disk fails during an I/O operation, the NODEVICE, IOFAIL, and RECOVER flags are all set for the plex, but only the NODEVICE flag is displayed in the state field.

NODEVICE: NODEVICE indicates that the disk drive below the plex has failed.

REMOVED: REMOVED has the same meaning as NODEVICE, but the system administrator has requested that the device appear to have failed (for example, by using the vxdiskadm option, "Remove a disk for replacement").

IOFAIL: IOFAIL is similar to NODEVICE, but it indicates that an unrecoverable failure occurred on the device, and VxVM has not yet verified whether the disk is actually bad. (I/O to both the public and the private regions must fail to change the state from IOFAIL to NODEVICE.)

RECOVER: The RECOVER flag is set on a plex when two conditions are met:
- A failed disk has been fixed (by using vxreattach or the vxdiskadm option, "Replace a failed or removed disk").
- The plex was in the ACTIVE state prior to the failure.
This flag indicates that even after fixing the volume, additional action may be required. The data may be lost and must be recovered from backup, or the administrator must verify that the data on the disk is current by using utilities provided by the application that uses that volume.
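The reattach step mentioned above can be run directly from the command line. A hedged sketch follows (the device name c1t1d0s2 is a placeholder, and the flags are taken from the vxreattach manual page):

    # Check whether the repaired disk can be reattached to its
    # disk group, without actually doing it.
    vxreattach -c c1t1d0s2

    # Reattach the disk; -r also attempts to recover stale plexes
    # on the reattached disk.
    vxreattach -r c1t1d0s2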
• Volume States

• EMPTY, CLEAN, ACTIVE: These volume states have the same meanings as they do for plexes.
• NEEDSYNC: This state is the same as SYNC, but the internal read thread has not been started.
• SYNC: Plexes are involved in read-writeback or RAID-5 parity synchronization.
• NODEVICE: None of the plexes have currently accessible disk devices underneath the volume.

Interpreting Volume States

EMPTY, CLEAN, and ACTIVE: The EMPTY, CLEAN, and ACTIVE volume states have the same meanings as they do for plexes.

NEEDSYNC: The NEEDSYNC volume state is the same as SYNC, but the internal read thread has not been started. This state exists so that volumes that use the same disk are not synchronized at the same time, and head thrashing is avoided.

SYNC: The SYNC volume state indicates that the plexes are involved in read-writeback or RAID-5 parity synchronization:
- Each time that a read occurs from a plex, it is written back to all the other plexes that are in the ACTIVE state.
- An internal read thread is started to read the entire volume (or, after a system crash, only the dirty regions if dirty region logging (DRL) is being used), forcing the data to be synchronized completely.
- On a RAID-5 volume, the presence of a RAID-5 log decreases the time of a SYNC operation.
- Starting an empty mirrored volume by using the vxvol start command places the volume in SYNC mode.

NODEVICE: The NODEVICE volume state indicates that none of the plexes have currently accessible disk devices underneath the volume.
• Kernel States

Kernel states represent VxVM's ability to transfer I/O to the volume or plex.
• ENABLED: The object can transfer both system I/O and user I/O.
• DETACHED: The object can transfer system I/O, but not user I/O (maintenance mode).
• DISABLED: No I/O can be transferred.

Interpreting Kernel States

Kernel states represent VxVM's ability to transfer I/O to the object:
- Volume kernel state: VxVM's ability to transfer I/O to the volume
- Plex kernel state: VxVM's ability to transfer I/O to the plex

ENABLED: The ENABLED kernel state indicates that the object is currently able to transfer system I/O to the private region and user I/O to the public region.

DETACHED: The DETACHED kernel state indicates that the object can currently transfer system I/O, but not user I/O. This state is also considered the maintenance mode where internal plex operations and ioctl functions are accepted.

DISABLED: The DISABLED state is the offline state for the volume or the plex. When an object is in this state, no I/O is transferred.
• Example Scenarios

• SCENARIO 1:
  - You are planning to go through an upgrade procedure that may corrupt data. You want to keep one safe copy in case things go wrong.
  - You do not have enough space to add snapshot volumes, so you take one plex out of the I/O path during the upgrade.
• SCENARIO 2:
  - You are mirroring critical application data across disk arrays in multiple sites.
  - A disaster first causes your remote site to be temporarily disconnected and then causes the primary copy to be lost permanently before you can recover from the temporary failure.
  - You now have the option of using the remote site data, which is several minutes old, or recovering from last night's backup.

Modifying VxVM Object States

Example Scenarios

Determine the best action for the scenarios that are described on the slide.
• Solving Plex Problems

Commands used to fix plex problems include:
• vxrecover
• vxvol -f start
• vxmend fix
• vxmend off | on

Resolving Plex Problems

When resolving disk and plex problems, after you fix the underlying disk drives by using the disk commands, you must fix plex problems by using the following commands:
- vxrecover
- vxvol -f start
- vxmend fix
- vxmend off | on
• Recovering Volumes

vxrecover -g diskgroup -s [volume]
• Recovers and resynchronizes all plexes in a started volume
• Runs the vxvol start and vxplex att commands (and sometimes vxvol resync)
• Works in normal situations
• Resynchronizes all volumes that need recovery if a volume name is not included
• Examples:
  vxrecover -s
  vxrecover -s vol01

The vxrecover Command

When vxrecover is executed, VxVM notes the state of the plexes in the volume. If both ACTIVE and STALE plexes exist, the ACTIVE plexes issue unconditional block writes over the STALE plexes. If there are only ACTIVE plexes, the read-writeback copy procedure is performed.

Recovery is performed only on volumes that require recovery (such as volumes marked as dirty before a sudden system failure). During the recovery process, the volume remains online and started. When the synchronization process is complete, the volume and all of its plexes are ACTIVE and ENABLED.

Running vxrecover without specifying a volume name can cause a synchronization operation to be started in parallel on all volumes that need recovery. One synchronization operation runs on each drive (if necessary), and volumes on different drives are synchronized in parallel. Synchronization can affect system performance. If you have many volumes that need to be recovered, you may prefer to:
1 Start the volumes without recovery by using vxrecover -sn.
  Note: The -s option is only used when the volume is stopped.
2 Recover individual volumes, or recover all of the volumes when I/O traffic is low, by using vxrecover.

Note: As long as one CLEAN or ACTIVE, non-volatile plex (a plex with no flags set) is available inside a volume, you can start the volume using that plex. The administrator can recover any other plexes in the volume immediately, or defer recovery to a later time.
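A minimal sketch of that two-step approach (the disk group datadg and volume vol01 are placeholder names, and the -n and -b flags are taken from the vxrecover options):

    # Step 1: start all stopped volumes immediately, deferring plex
    # resynchronization (-n performs no recovery operations).
    vxrecover -g datadg -sn

    # Step 2: later, when I/O traffic is low, resynchronize one
    # volume at a time in the background (-b).
    vxrecover -g datadg -b vol01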
• vxvol -f start

vxvol -g diskgroup -f start volume_name
• This command ignores problems with the volume and starts the volume.
• Only use this command on nonredundant volumes. If this command is used on redundant volumes, data can be corrupted unless all mirrors have the same data.
• Example:
  vxvol -g datadg -f start vol01

The vxvol start Command

If a volume does not start with this command, it usually indicates that there is a problem with the underlying plexes.

Forcing a Volume to Start

If you add the -f flag, VxVM ignores the underlying problem and forces the volume to start. When you force a volume to start:
- If all plexes have the same state, then read-writeback synchronization is performed.
- If the plexes do not have the same state, then atomic-copy resynchronization is performed.

Caution: Force-starting a volume can have catastrophic results. Use extreme caution when force-starting a mirrored volume after a disk failure and replacement. Forcing a mirrored volume to start can unconditionally synchronize the volume using a read-writeback method of alternating between plex blocks. NULL plex blocks may overwrite good data in the volume, corrupting the data. Only perform a forced start on a nonredundant volume.
• Modifying Plex and Volume States Manually

• The volume that the plex is associated with must be in DISABLED mode to modify the plex state.
• You may need to move a plex to the STALE state as an intermediate step before changing to the CLEAN or ACTIVE state.
• Use this command as a last resort if none of the other recovery options help.

vxmend -g diskgroup fix option object
Options: stale, clean, active, empty (empty is only used on a volume)

Examples:
vxmend -g datadg fix stale vol01-01
vxmend -g datadg fix clean vol01-01

The vxmend Command

To manually reset or change the state of a plex or volume, you can use the vxmend fix command. Use this command if you know more about a plex's data than VxVM does. You can only set plex states with vxmend fix when the host volume or the plex is stopped.

Caution: Use caution and discretion when issuing the vxmend fix command and its options. The vxmend fix command changes states set and cleared automatically by the vxconfigd daemon. If used incorrectly, this command can make the plex, its volume, and its data inaccessible, and you may have to restore the data from backup.
• Taking Plexes Out of the I/O Path

When analyzing plexes, you can temporarily take plexes offline while validating the data in another plex.
• To take a plex offline, use the command:
  vxmend -g diskgroup off plex_name
• To take the plex out of the offline state, use:
  vxmend -g diskgroup on plex_name

vxmend off | on

When analyzing plex problems to determine which plex has the correct data, you may need to take some plexes offline temporarily while you are testing a particular plex by using this command.
• Example: If the Good Plex Is Not Known

The volume is disabled and not startable, and you do not know what happened. There are no CLEAN plexes.

P1 vol01-01: DISABLED/STALE
P2 vol01-02: DISABLED/STALE

To resolve:
1. Take all but one plex offline and set that plex to CLEAN.
2. Run vxrecover -s.
3. Verify data on the volume.
4. Run vxvol stop.
5. Repeat for each plex until you identify the plex with the good data.

Analyzing Plex Problems

Example: If the Good Plex Is Not Known

What if both plexes are in the STALE state? Regardless of what happened to the plexes or the disks underneath, it is not safe to guess which plex has the more recent (or good) data and start the volume. If you are not sure which plex has good data, then the safest solution is to test each plex one by one:
1 Take all but one plex offline and set that plex to CLEAN.
2 Run vxrecover -s.
3 Verify data on the volume. Mount the file system as read-only so you do not have to run a file system check.
4 Run vxvol stop.
5 Repeat for each plex until you identify the plex with the good data.

This process requires step-by-step attention to all volume and plex object details. Use vxprint -ht to monitor any volume and plex state changes that occur as a result of your vxmend commands.

Without a method to test the validity of the data, you must restore the data from backup. For example, if your application is starting, can you guarantee that the data it contains is the most recent? With a file system, is fsck enough to guarantee that the data in a file is there? Even if you can mount the file system, you can lose the data in some files in the process.
• If the Good Plex Is Not Known: Example

In the example, you can resolve the problem by using the following commands.

Set the good plex to CLEAN:
vxmend -g diskgroup off vol01-02
vxmend -g diskgroup fix clean vol01-01

Verify that data is on the plex by using the volume:
vxrecover -s vol01

vxvol -g diskgroup stop vol01
vxmend -g diskgroup -o force off vol01-01   (last clean plex in the volume)
vxmend -g diskgroup on vol01-02
vxmend -g diskgroup fix clean vol01-02

Verify that data is on the plex by using the volume:
vxrecover -s vol01

If the current plex (vol01-02) has the correct data:
vxmend -g diskgroup on vol01-01
vxrecover vol01

If vol01-01 had the correct data:
vxvol -g diskgroup stop vol01
vxmend -g diskgroup fix stale vol01-02
vxmend -g diskgroup on vol01-01
vxmend -g diskgroup fix clean vol01-01
vxrecover -s vol01
• Lesson Summary

• Key Points
  This lesson described mirror resynchronization processes. This lesson also introduced the various states in which Volume Manager objects, such as volumes and plexes, can exist and described the tools that you can use to solve problems related to data consistency by analyzing and changing these states.
• Reference Materials
  - VERITAS Volume Manager Administrator's Guide
  - VERITAS Volume Manager Troubleshooting Guide

Lab 1: Maintaining Data Consistency

In this lab, you practice recovering from a variety of plex problem scenarios, and optionally, observe the benefits of a dirty region log during a system crash. To investigate and practice recovery techniques, you will use a set of interactive lab scripts.

For Lab Exercises, see Appendix A. For Lab Solutions, see Appendix B.

Labs and solutions for this lesson are located on the following pages:
- Appendix A provides complete lab instructions: "Lab 1: Maintaining Data Consistency," page A-3.
- Appendix B provides complete lab instructions and solutions: "Lab 1 Solutions: Maintaining Data Consistency," page B-3.
    • Lesson 2 Managing Devices Within the VxVM Architecture
• Lesson Introduction

• Lesson 1: Maintaining Data Consistency
• Lesson 2: Managing Devices Within the VxVM Architecture (this lesson)
• Lesson 3: Encapsulation and Rootability
• Lesson 4: Troubleshooting the Boot Process
• Lesson 5: Volume Maintenance
• Lesson 6: Performance Monitoring
• Lesson 7: Point-in-Time Copies
• Lesson 8: Other Enterprise Features Overview

Lesson Topics and Objectives

After completing this lesson, you will be able to:
Topic 1: Managing Components in the VxVM Architecture - Manage components of the VxVM architecture, including the VxVM configuration database, the configuration daemon, and volboot.
Topic 2: Discovering Disk Devices - Describe the VxVM device discovery function.
Topic 3: Managing Multiple Paths to Disk Devices - Describe how dynamic multipathing works with active/active and active/passive disk arrays; prevent multipathing for a specific device; disable a specific I/O path; and control the DMP restore daemon.
• VxVM Architecture

(Diagram: User Applications, File System, Operating System, Block Device Switch (dsk), Character Device Switch (rdsk), VxVM, and the VxVM configuration databases.)

Managing Components in the VxVM Architecture

VxVM Architecture

VxVM is a device driver that is placed between the UNIX operating system and the SCSI device drivers. When VxVM is running, UNIX invokes the VxVM device drivers instead of the SCSI device drivers. VxVM determines which SCSI drives are involved in the requested I/O and delivers the I/O request to the drives.

VxVM Daemons

VxVM relies on the following constantly running daemons for its operation:
• vxconfigd: The VxVM configuration daemon maintains disk and group configurations, communicates configuration changes to the kernel, and modifies configuration information stored on disks. When a system is booted, the vxdctl enable command is automatically executed to start vxconfigd. VxVM reads the /etc/vx/volboot file to determine disk ownership and automatically imports disk groups owned by the host.
• vxiod: The VxVM I/O daemon provides extended I/O operations without blocking calling processes. Several vxiod daemons are usually started at boot time, and they continue to run at all times.
• vxrelocd: vxrelocd is the hot-relocation daemon that monitors events that affect data redundancy. If redundancy failures are detected, vxrelocd automatically relocates affected data from mirrored or RAID-5 subdisks to spare disks or other free space within the disk group.
• VxVM Configuration Database

• Contains all disk, volume, plex, and subdisk configuration records
• Is stored in the private region of a VxVM disk
• Is replicated to maintain a copy on multiple disks in a disk group
  - VxVM maintains an appropriate number of active copies per disk group.
  - Copies are stored across enclosures to maximize redundancy.
• Is updated by the vxconfigd process

The VxVM configuration database stores all disk, volume, plex, and subdisk configuration records. The vxconfig device (/dev/vx/config) is the interface through which all changes to the volume driver state are performed. This device can only be opened by one process at a time, and the initial volume configuration is downloaded into the kernel through this device.

The configuration database is stored in the private region of a VxVM disk. Each disk that has a private region holds an entire copy of the configuration database for the disk group. The size of the configuration database for a disk group is limited by the size of the smallest copy of the configuration database on any of its member disks.

The VxVM configuration is replicated within the disk group to protect against loss of the configuration in case of physical disk failure. vxconfigd actively monitors five or more copies of the configuration database for each disk group. VxVM balances their locations based on the number of controllers, targets, and disks in the disk group. With VxVM 3.2 and later, VxVM configuration copies are placed across the enclosures spanned by a disk group to ensure maximum redundancy across enclosures.

The vxconfigd configuration daemon is the process that updates the configuration through the vxconfig device. The vxconfigd daemon was designed to be the sole and exclusive owner of this device.
• Displaying VxVM Configuration Database Information

vxdg list acctdg

Group:     acctdg
dgid:      1023996467.1130.trainsr
import-id: 0.1129
copies:    ...
config:    seqno=... permlen=48144 free=... templen=... loglen=7296
config disk c1t0d0s2  copy 1 len=48144 state=clean online
config disk c1t1d0s2  copy 1 len=48144 state=clean online
config disk c1t2d0s2  copy 1 len=48144 state=clean online
config disk c1t3d0s2  copy 1 len=48144 state=clean disabled
config disk c1t9d0s2  copy 1 len=48144 state=clean disabled
config disk c1t10d0s2 copy 1 len=48144 state=clean online
config disk c1t11d0s2 copy 1 len=48144 state=clean online
log disk c1t0d0s2 copy 1 len=7296
log disk c1t1d0s2 copy 1 len=7296

(Callouts on the slide point out the configuration database size on the config lines and the copies that are not active.)

Displaying Disk Group Configuration Data

To display the status of the configuration database for a disk group:
vxdg list diskgroup

If no disk group is specified, information from all disk groups is displayed in an abbreviated format. When you specify a disk group, a longer format is used to display the status of the disk group and its configuration.

In the example, five disks have active configuration databases (online), and two disks do not have an active copy of the data (disabled). The configuration database for a disk group is the size of the smallest private region in the disk group.

Log entries are on all disks that have databases. The log is used by the VxVM kernel to keep the state of the drives accurate if the database cannot be kept accurate (for example, if the configuration daemon is stopped).

By default, for each disk group, VxVM maintains a minimum of five active database copies on the same controller. In most cases, VxVM also attempts to alternate active copies with inactive copies. In the example, c1t3d0 and c1t9d0 are disabled. If different controllers are represented on the disks in the same disk group, VxVM maintains a minimum of two active copies per controller.

In the output, the configuration database size (permlen=) is next to a field named free=. The free= field can be used to check how fast the configuration database is filling up so that action can be taken before the disk group runs out of database space.
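A quick way to watch that field over time (the disk group name acctdg is a placeholder):

    # Print only the summary line containing permlen= and free=;
    # a steadily shrinking free= value warns that the disk group
    # is running out of configuration database space.
    vxdg list acctdg | grep 'config:'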
• Displaying Disk Header Information

The terms displayed in the output of vxdisk list include:

Device:     Full UNIX device name of the disk
devicetag:  Device name used by VxVM to refer to the physical disk
type:       Method of placing the disk under VxVM control
hostid:     Name of the system that manages the disk group (if blank, no host is currently controlling this group)
disk:       VM disk media name and internal ID
group:      Disk group name and internal ID
info:       Disk format, private region offset, and partition numbers for public and private regions
flags:      Settings that describe status and options for the disk
pubpaths:   Paths for block and character device files of the public region of the disk
iosize:     The iosize range that the disk accepts
version:    Version number of the header format
public, private: Partition (slice) number, offset from the beginning of the partition, length of the partition, and disk offset
• Defined regions (vxdisk list output, continued):

config   priv 000048-000239 [000192]: copy=01 offset=000000 enabled
config   priv 000256-048207 [047952]: copy=01 offset=000192 enabled
log      priv 048208-055503 [007296]: copy=01 offset=000000 enabled
lockrgn  priv 055504-055647 [000144]: part=0 offset=000000

Multipathing information:
numpaths: 2
c1t0d0s2 state=enabled
c2t0d0s2 state=disabled

(Callouts on the slide point out the location of the configuration database copies, logs, and lock region, and the last update to the private region, with the header at sector 0 and an offset to the header copy at sector 240.)

The following is a continuation of the vxdisk list output descriptions:

update:   Date, time, and sequence number of the last update to the private region
ssb:      Serial split brain detection information
headers:  Offsets to the two copies of the private region header
configs:  Number of configuration database copies kept in the private region
logs:     Number of kernel logs kept in the private region
Defined regions: Location of configuration databases, kernel logs, and lock regions in the private region. Because the database or logs can be split, there can be multiple pieces. The offset is the starting location within the private region where a piece of the database begins, and copy represents the copy of the database to which the piece belongs.
Multipathing information: If dynamic multipathing is enabled and there are multiple paths to the disk, this item shows information about the paths and their status.
• VxVM Disk Types and Formats

• auto: Automatically configured by VxVM
  - auto:cdsdisk (default for 4.x and higher)
  - auto:simple
  - auto:hpdisk (only on HP-UX)
  - auto:none
• cdsdisk: Public and private regions are contiguous on the same partition and suitable for moving between different operating systems.
• sliced: Public and private regions are on separate partitions.
• hpdisk: HP-UX-specific disk format used for the system disk and before version 4.x.
• none: There are no public and private regions.

Notes:
- You can change the default format by using the vxdiskadm option "Change/Display the default disk layouts" or in /etc/vx/disk.
- Non-boot sliced disks can be converted to CDS disks by using the vxcdsconvert command.

Disk Types and Disk Formats

Disk types and formats include:
• auto indicates that when the vxconfigd daemon has been started, VxVM automatically configures a disk access record for the disk based on a list of known disk device addresses obtained from the operating system. Auto-configured disks are displayed with their type and qualified by their format. For example, auto:cdsdisk indicates an auto-configured disk that is formatted as a cross-platform data sharing (CDS) disk that is suitable for moving between different operating systems. This is the default format for most disks on a system, but not for boot, root, or swap disks. If a disk is automatically configured by VxVM as a simple or sliced disk, you see disk types and formats such as auto:simple and auto:sliced. auto:none indicates that the disk is not formatted for VxVM.
• cdsdisk indicates that the public and private regions are contiguous on the same partition and suitable for moving between different operating systems.
• sliced indicates that the public and private regions are separate partitions.
• none indicates that there are no public or private regions on the disk.
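To check how disks are currently formatted (the device name below is a placeholder), the type: and info: fields of vxdisk list report the type and format directly:

    # The TYPE column shows the type:format pair, such as auto:cdsdisk.
    vxdisk -o alldgs list

    # For a single disk, the type: and info: lines carry the same detail.
    vxdisk list c1t1d0s2 | egrep 'type:|info:'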
• VxVM Configuration Daemon

vxconfigd:
• Maintains the configuration database
• Synchronizes changes between multiple requests, based on a database transaction model:
  - All utilities make changes through vxconfigd.
  - Utilities identify resources needed at the start of the transaction.
  - Transactions are serialized, as needed.
  - Changes are immediately reflected in all copies.
• Does not interfere with access to data on disk
• Must be running for changes to be made to the configuration database

If vxconfigd is not running, VxVM operates, but configuration changes are not allowed and queries of the database are not possible.

The VxVM configuration daemon must be running in order for configuration changes to be made to the VxVM configuration database. If vxconfigd is not running, VxVM operates properly, but configuration changes are not allowed and queries of the database are not possible.

The vxconfigd daemon synchronizes multiple requests and incorporates configuration changes based on a database transaction model:
- All utilities make changes through vxconfigd.
- Utilities must identify all resources needed at the start of a transaction.
- Transactions are serialized, as needed.
- Changes are immediately reflected in all copies of the configuration database.

The vxconfigd daemon does not interfere with user or operating system access to data on disk.

Controlling the VxVM Configuration Daemon
vxconfigd Modes

• vxconfigd reads the kernel log to determine current states of VxVM components and updates the configuration database.
• Kernel logs are updated even if vxconfigd is not running. For example, upon startup, vxconfigd reads the kernel log and determines that a volume needs to be resynchronized.
• vxconfigd operates in one of three modes:
  - Enabled: Normal operating state
  - Disabled: Most operations not allowed
  - Booted: Part of normal system startup while acquiring the boot disk group

Enabled
Enabled is the normal operating mode in which most configuration operations are allowed. Disk groups are imported, and VxVM begins to manage device nodes stored in /dev/vx/dsk and /dev/vx/rdsk.

Disabled
In the disabled mode, most operations are not allowed. vxconfigd does not retain configuration information for the imported disk groups and does not maintain the volume and plex device directories. Certain failures, most commonly the loss of all disks or configuration copies in the boot disk group, cause vxconfigd to enter the disabled state automatically.

Booted
The booted mode is part of normal system startup, prior to checking the root file system. The booted mode imports the boot disk group and waits for a request to enter the enabled mode. Volume device node directories are not maintained, because it may not be possible to write to the root file system.
Managing the VxVM Configuration Daemon

Use vxdctl to control vxconfigd:

vxdctl mode       Displays vxconfigd status
vxdctl enable     Enables vxconfigd
vxdctl disable    Disables vxconfigd
vxdctl stop       Stops vxconfigd
vxdctl -k stop    Sends a kill -9 to vxconfigd
vxconfigd         Starts vxconfigd
vxdctl license    Checks licensing
vxdctl support    Displays version information

The vxdctl Utility
vxconfigd is invoked by startup scripts during the boot procedure. To manage some aspects of vxconfigd, you can use the vxdctl utility.

Displaying vxconfigd Status
To determine whether the configuration daemon is enabled, you use the vxdctl mode command. This command displays the status of the configuration daemon. If the configuration daemon is not running, it must be started in order to make configuration changes. Disk failures are also configuration changes, but there is another way of tracking them if the daemon is down (kernel logs).

Enabling vxconfigd
If vxconfigd is running, but not enabled, you use vxdctl enable to enable the configuration daemon. This command forces the configuration daemon to read all the disk drives in the system and to set up its tables to reflect each known drive. When a drive fails and the administrator fixes the drive, this command enables VxVM to recognize the drive.

Disabling vxconfigd
To prevent configuration changes from occurring, you can disable the daemon by using vxdctl disable.

vxconfigd records all commands executed, whether through the VEA or the CLI. These logs are stored in /var/adm/vx. The veacmdlog file records VEA commands, and the cmdlog file records CLI commands.
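For example, a quick check-and-restore cycle of the daemon state might look like the following session (a sketch; the mode strings follow the vxdctl mode output convention):

# vxdctl mode
mode: enabled
# vxdctl disable
# vxdctl mode
mode: disabled
# vxdctl enable

Disabling the daemon blocks configuration changes during maintenance; enabling it rescans the disks and resumes normal operation.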
The volboot File

/etc/vx/volboot contains:
• The host ID that is used by VxVM to establish ownership of physical disks
• The values of defaultdg and bootdg, if these values were set by the user

Caution: Do not edit volboot, or its checksum is invalidated.

To display the contents of volboot:   vxdctl list
To change the host ID in volboot:     vxdctl hostid newhostid
                                      vxdctl enable
To re-create volboot:                 vxdctl init hostid

Note: The hostid field in /etc/vx/volboot is not returned by the UNIX hostid command, but rather by the hostname command.

This host ID is used to ensure that two or more hosts that can access disks on a shared SCSI bus do not interfere with each other in their use of those disks. The host ID is important in the generation of unique ID strings that are used internally for stamping disks and disk groups. The volboot file also contains the name of the system-wide default disk group, if this has been configured. If the boot disk is under VxVM control, the volboot file also contains the name of the boot disk group to which the boot disk belongs.

Caution: Never edit the volboot file manually. If you do so, its checksum is invalidated.

Managing the volboot File

Viewing the Contents of volboot
To view the decoded contents of the volboot file:

# vxdctl list
volboot file
version: 3/1
seqno:   0.1
cluster protocol version: 70
hostid:  train1
Discovering Disk Devices

Device discovery is the process of locating and identifying disks attached to a host, and it occurs automatically whenever you add a new disk array. The Device Discovery Layer (DDL) operates between the user process level (vxdiskconfig on Solaris only, vxdisk scandisks, and vxdctl enable) and the kernel process level (the VxVM kernel).

What Is Device Discovery?
Device discovery is the process of locating and identifying the disks that are accessible to a host. VxVM 3.2 and later features, such as dynamic multipathing (DMP), depend on device discovery. Device discovery enables you to dynamically add support for disk arrays from a variety of vendors without rebooting the system.

Discovering and Configuring Disk Devices
To dynamically discover new devices, use the vxdiskconfig utility. This utility scans for disks that were added since VxVM's configuration daemon was last started and dynamically configures the disks to be recognized by VxVM. The vxdiskconfig utility invokes OS utilities, such as devfsadm on Solaris, to ensure that the OS recognizes the disks. vxdiskconfig then invokes vxdctl enable, which rebuilds volume node directories and the DMP internal database to reflect the new state of the system.

DDL enables VxVM to use more descriptive names when using enclosure-based naming, for example, emc0_1 rather than Disk_1.

Note: The vxdiskconfig utility does not exist on HP-UX.
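On a platform without vxdiskconfig, a minimal discovery sequence after attaching new storage might look like this (a sketch; run the platform's own device scan first if the OS has not yet created device nodes):

# vxdctl enable
# vxdisk list

The vxdctl enable step rebuilds the device list and the DMP database; newly discovered but uninitialized disks then appear in vxdisk list with a status such as online invalid.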
Adding Disk Array Support

To add support for a new type of disk array, add vendor-supplied libraries. For example:

pkgadd -d /cdrom/pkgdir SEAGTda           (Solaris)
swinstall -s /cdrom/depotdir SEAGTda      (HP-UX)
installp -ac /cdrom/pkgfile SEAGTda       (AIX)
rpm -ihv /cdrom/pkgdir/SEAGTda.rpm        (Linux)

Then scan for new devices:

vxdctl enable

This command invokes vxconfigd to scan all disk devices, update the device list, and reconfigure DMP. You do not need to reboot the host.

Note: VxVM supports many arrays "out-of-the-box." See vxddladm listsupport for a complete list.

Adding Support for a New Disk Array
With VxVM version 3.1 and later, to add support for a new type of disk array that is developed by a third-party vendor, you must add vendor-supplied libraries by using platform-specific package installation commands. The new disk array does not need to be connected to the system when the package is installed. You may need to scan for new devices by issuing platform-specific commands. Then run vxdctl enable to ensure that VxVM updates the device list.

Removing Support for a Disk Array
To remove support for a disk array, you remove the vendor-supplied library package by using the OS-specific command. For example, to remove support for the SEAGTda disk array:

Solaris:   pkgrm SEAGTda
HP-UX:     swremove SEAGTda
AIX:       installp -u SEAGTda
Linux:     rpm -ev SEAGTda

If the arrays remain physically connected to the host after support has been removed, they are listed in the OTHER_DISKS category, and the volumes remain available.
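To confirm which array support libraries are already present before installing a vendor package, you can query the DDL; the specific library name in the second command is illustrative:

# vxddladm listsupport
# vxddladm listsupport libname=libvxseagt.so

The first form lists all supported array libraries; the second shows the vendor and product IDs handled by one library.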
Partial Device Discovery

You can use the vxdisk scandisks command to scan part of the OS device tree, as follows:

• Discover newly added devices previously unknown to VxVM:
  vxdisk scandisks new
• Discover fabric devices:
  vxdisk scandisks fabric
• Scan for specific devices:
  vxdisk scandisks device=c1t1d0,c2t2d0
• Scan for all devices except those that are listed:
  vxdisk scandisks !device=c1t1d0,c2t2d0
• Scan for devices that are connected to logical or physical controllers:
  vxdisk scandisks ctlr=c1,c2
• Discover devices that are connected to the specified physical controller:
  vxdisk scandisks pctlr=/pci@1f,4000/scsi@3/

VxVM supports partial device discovery, where you can include or exclude sets of disks, or disks attached to controllers, from the discovery process. Partial device discovery reduces redundant discovery operations by scanning only a part of the OS device tree. The vxdisk scandisks command rescans the devices in the OS device tree and triggers a DMP reconfiguration. You can specify parameters to vxdisk scandisks to implement partial device discovery.
Managing Multiple Paths to Disk Devices

What Is Dynamic Multipathing?
DMP is the method that VxVM uses to manage two or more hardware paths to a single drive.

(Figure: a host connected through two Fibre Channel switches to a multiported enclosure. Without DMP, the same disk appears as disk15 or disk27, depending on the path; DMP maps both paths to a single DMP metanode.)

The dynamic multipathing (DMP) feature of VxVM provides greater reliability and performance for your system by enabling path failover and load balancing.

Dynamic multipathing is the method that VxVM uses to manage two or more hardware paths directing I/O to a single drive. VxVM arbitrarily selects one of the two names and creates a single device entry, and then transfers data across both paths to spread the I/O. VxVM detects multipath systems by using the universal world-wide device identifiers (WWD IDs) and manages multipath targets, such as disk arrays, which define policies for using more than one path.

Benefits of DMP
Benefits of DMP include:
• High availability: DMP provides greater reliability using a path failover mechanism. When one connection to a disk is lost, the system continues to access the critical data over the other sound connections to the disk until you replace the failed path.
• Improved performance: DMP provides greater I/O throughput by balancing the I/O load uniformly across multiple I/O paths to the disk device.
Types of Multiported Arrays

• Active/Active: All paths are active; used for load balancing and path failover.
• Active/Passive: One path is active (primary); the other is passive (secondary) and is used for path failover only.

What Is a Multiported Disk Array?
A multiported disk array is an array that can be connected to host systems through multiple paths. The two basic types of multiported disk arrays are:
• Active/active disk arrays
• Active/passive disk arrays
For each supported array type, VxVM uses a multipathing policy that is based on the characteristics of the disk array.

Active/Active Disk Arrays
Active/active disk arrays permit several paths to be used concurrently for I/O. With these arrays, DMP provides greater I/O throughput by balancing the I/O load uniformly across the multiple paths to the disk devices. If one connection to an array is lost, DMP automatically routes I/O over the other available connections to the array.

Active/Passive Disk Arrays
Active/passive disk arrays permit only one path at a time to be used for I/O. The path that is used for I/O is called the active path, or primary path. An alternate path, or secondary path, is configured for use in the event that the primary path fails. If the primary path to the array is lost, DMP automatically routes I/O over the secondary path or other available primary paths.
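To see how DMP has classified the attached arrays, you can list the enclosures; the output includes each enclosure's array type (A/A or A/P) and status:

# vxdmpadm listenclosure all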
Setting I/O Policies and Path Attributes

To change the I/O policy for balancing the I/O load across multiple paths to a disk array or enclosure:

vxdmpadm setattr enclosure enc_name iopolicy=policy

where policy is one of: adaptive, balanced, minimumq, priority, round-robin, or singleactive.

To set path attributes for a disk array or enclosure:

vxdmpadm setattr path path_name pathtype=type

where type is one of: active, primary, nomanual, secondary, nopreferred, standby, or preferred.

Setting the I/O Policy for an Enclosure
After analyzing statistics, you can use the vxdmpadm setattr command with the iopolicy option to change the I/O policy for balancing the I/O load across multiple paths to a disk array or enclosure. You can set policies for an enclosure (for example, HDS01), for all enclosures of a particular type (for example, HDS), or for all enclosures of a particular array type (A/A for active/active, or A/P for active/passive).

• adaptive automatically determines the paths that have the least delay and schedules I/O on paths that are expected to carry a higher load.
• balanced takes the track cache into consideration when balancing I/O across paths.
• minimumq sends I/O on paths that have the minimum number of I/O requests in the queue. This is suitable for low-end disks or JBODs where a significant track cache does not exist.
• priority assigns the path with the highest load-carrying capacity as the priority path.
• round-robin sets a simple round-robin policy for I/O.
• singleactive channels I/O through the single active path.

To display the current I/O policy:

vxdmpadm getattr enclosure enc_name iopolicy
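For example, to switch an enclosure to the minimumq policy and confirm the change (HDS01 is the example enclosure name used above):

# vxdmpadm setattr enclosure HDS01 iopolicy=minimumq
# vxdmpadm getattr enclosure HDS01 iopolicy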
Setting Path Attributes
You can set the following attributes of the paths to an enclosure or disk array by using the command:

vxdmpadm setattr path path_name pathtype=type

• active changes a standby path to active.
• nomanual restores the original primary or secondary attributes of a path.
• nopreferred restores the normal priority of the path.
• preferred [priority=N] specifies a preferred path and optionally assigns a priority value to it. This indicates a path that is able to carry a higher I/O load. The priority value must be an integer greater than or equal to 1. Larger priority values indicate a greater load-carrying capacity.
  Note: Marking a path as a preferred path does not change its I/O load balancing policy.
• primary assigns a primary path for an active/passive disk array.
• secondary assigns a secondary path for an active/passive disk array.
• standby marks a path as not available for normal I/O scheduling. This path is only invoked if there are no active paths available for I/O.

See the VERITAS Volume Manager Administrator's Guide and the vxdmpadm(1M) manual page for more information.
Displaying I/O Statistics for Paths

1. Enable the gathering of statistics:
   vxdmpadm iostat start [memory=size]
2. Reset the I/O counters to zero:
   vxdmpadm iostat reset
3. Display the accumulated statistics for all paths:
   vxdmpadm iostat show all

cpu usage = 7952us    per cpu memory = 8192b
                OPERATIONS           BYTES           AVG TIME(ms)
PATHNAME     READS   WRITES     READS    WRITES    READS      WRITES
c0t0d0        1088        0    557056         0   0.009542   0.000000
c2t118d0        87        0     44544         0   0.001194   0.000000
c3t118d0         0        0         0         0   0.000000   0.000000

• The displayed statistics can be filtered by path name, DMP node name, and enclosure name.
• You can also specify the number of times to display the statistics and the time interval.

You can use the vxdmpadm iostat command to gather and display I/O statistics for a specified DMP node, enclosure, or path. The statistics that are displayed are the CPU usage and amount of memory per CPU used to accumulate statistics, the number of read and write operations, the number of blocks read and written, and the average time in milliseconds per read and write operation.

The interval and count attributes may be used to specify the interval in seconds between displays of the I/O statistics and the number of lines to be displayed. The actual interval may be smaller than the value specified if insufficient memory is available to record the statistics.
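For example, to sample the statistics three times at five-second intervals, a sketch using the interval and count attributes described above:

# vxdmpadm iostat show all interval=5 count=3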
Preventing DMP for a Device

If an array cannot support DMP, you can prevent multipathing for the device by using vxdiskadm:

  Prevent multipathing/Suppress devices from VxVM's view
  Allow multipathing/Unsuppress devices from VxVM's view
  List currently suppressed/non-multipathed devices

Warning: If you do not prevent DMP for unsupported arrays:
• Commands like vxdisk list show duplicate sets of disks as ONLINE, even though only one path is used for I/O.
• Disk failures can be represented incorrectly.

Preventing Multipathing for a Device
If you have an array that cannot support the use of DMP, or if you want to use Sun's Alternate Pathing driver with VxVM, you can suppress DMP for some or all devices by using the vxdiskadm menu. Suppressing DMP for a device prevents multipathing without removing the DMP layer.

It is important for you to suppress DMP for devices that do not support DMP. If you do not prevent DMP for unsupported arrays:
• A VxVM command, such as vxdisk list, shows duplicated sets of disks as ONLINE for each path, even though the command is only using one path for I/O.
• Disk failures can be represented or displayed incorrectly by VxVM if DMP is running with an unsupported, unsuppressed array.

To manage the devices that participate in DMP, you can use vxdiskadm.
Preventing DMP for a Device (continued)

When you select the option to prevent multipathing in the vxdiskadm main menu, you have these choices:

  Suppress all paths through a controller from VxVM's view
  Suppress a path from VxVM's view
  Suppress disks from VxVM's view by specifying a VID:PID combination
  Suppress all but one path to a disk
  Prevent multipathing of all disks on a controller by VxVM
  Prevent multipathing of a disk by VxVM
  Prevent multipathing of disks by specifying a VID:PID combination
  List currently suppressed/non-multipathed devices

Similar choices exist when you reinclude devices for DMP.

Excluding Devices from Multipathing
When you select the option to prevent multipathing in the vxdiskadm main menu, the Exclude Devices submenu is displayed. Both of the following options send the command vxdmpadm disable to the kernel:
• The option "Suppress all paths through a controller from VxVM's view" continues to allow the I/O to use both paths internally. After a reboot, vxdisk list does not show the suppressed disks.
• "Prevent multipathing of all disks on a controller by VxVM" does not allow the I/O to use internal multipathing. The vxdisk list command shows all disks as ONLINE. This option has no effect on arrays that are not performing dynamic multipathing or that do not support VxVM DMP.

Including Devices for Multipathing
For previously excluded devices, if you later decide that you want to reinclude the device in multipathing, select the vxdiskadm option "Allow multipathing/Unsuppress devices from VxVM's view." A similar set of options is available in the Include Devices submenu.
Enabling or Disabling I/O to a Controller

You can disable I/O to a controller to perform maintenance, for example:
• To replace a system board
• To test path failover

Use the following commands:
• To disable I/O to a particular controller:
  vxdmpadm disable ctlr=ctlr_name
• To disable I/O to a particular enclosure:
  vxdmpadm disable enclosure=enc_name
• To reenable I/O to a particular controller:
  vxdmpadm enable ctlr=ctlr_name

In VEA: Select Actions->Disable (or Actions->Enable) and complete the associated dialog box.

By disabling I/O to a host disk controller, you can prevent DMP from issuing I/O through a specified controller. You can disable I/O to a controller to perform maintenance on disk arrays or controllers attached to the host. For example, when replacing a system board, you can stop all I/O to the disk controllers connected to the board before you detach the board.

For active/active disk arrays, when you disable I/O to one active path, all I/O shifts to other active paths. For active/passive disk arrays, when you disable I/O to one active path, all I/O shifts to a secondary path or to an active primary path on another controller.

You cannot disable the last enabled path to the root disk. On HP-UX, you cannot disable the last enabled path to any other disk without using the -f (force) option.

When you disable I/O to a controller, disk, or path, you override the DMP restore daemon's ability to reset the path to ENABLED. When you enable I/O to a controller:
• For active/active disk arrays, the controller is used again for load balancing.
• For active/passive disk arrays, the operation results in failback of I/O to the primary path.

Enabling or Disabling a Controller: VEA
To disable or enable a controller in VEA, select the controller, select Actions->Disable or Actions->Enable, and complete the associated dialog box.
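A typical maintenance sequence around a board replacement might look like this (the controller name is illustrative):

# vxdmpadm disable ctlr=c2
  ... perform the hardware maintenance ...
# vxdmpadm enable ctlr=c2

Disabling the controller shifts I/O to the remaining paths; enabling it returns the controller to service, with load balancing or failback resuming as appropriate for the array type.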
Controlling Automatic Restore Processes

The DMP restore daemon is an internal process that monitors DMP paths and automatically enables paths that were previously disabled due to hardware failures after the paths are back online.

To check its status:

# vxdmpadm stat restored
The number of daemons running: 1
The interval of daemon: 300
The policy of daemon: check_disabled

Starting the DMP Restore Daemon
To start the DMP restore daemon, you use the start restore option of the vxdmpadm command:

vxdmpadm start restore [interval=interval] [policy=check_disabled|check_all]

The restore daemon analyzes the health of paths every interval seconds. The default interval is 300 seconds. Decreasing the interval can adversely affect performance.

You can specify one of two types of policies:
• check_disabled (the default): The restore daemon checks the health of paths that were previously disabled due to hardware failures and revives them if they are back online.
• check_all: The restore daemon analyzes all paths in the system, revives the paths that are back online, and disables the paths that are inaccessible.

To change daemon properties:
• Stop the DMP restore daemon:
  vxdmpadm stop restore
• Restart the daemon with new attributes:
  vxdmpadm start restore interval=400 policy=check_all
Lesson Summary

• Key Points
  This lesson described components in the VxVM architecture and the device discovery process and described how to administer dynamic multipathing.
• Reference Materials
  - VERITAS Volume Manager Administrator's Guide
  - VERITAS Storage Foundation Release Notes
  - VERITAS Volume Manager Hardware Notes

Lab 2: Managing Devices Within the VxVM Architecture
In this lab, you explore the VxVM tools used to manage the device discovery layer (DDL) and dynamic multipathing (DMP). The objective of this exercise is to make you familiar with the commands used to administer multipathed disks.

Labs and solutions for this lesson are located on the following pages:
• Appendix A provides complete lab instructions: "Lab 2: Managing Devices Within the VxVM Architecture."
• Appendix B provides complete lab instructions and solutions: "Lab 2 Solutions: Managing Devices Within the VxVM Architecture."
Lesson 3: Encapsulation and Rootability
Lesson Introduction

• Lesson 1: Maintaining Data Consistency
• Lesson 2: Managing Devices Within the VxVM Architecture
• Lesson 3: Encapsulation and Rootability
• Lesson 4: Troubleshooting the Boot Process
• Lesson 5: Volume Maintenance
• Lesson 6: Performance Monitoring
• Lesson 7: Point-in-Time Copies
• Lesson 8: Other Enterprise Features Overview

Lesson Topics and Objectives

Topic                                    After completing this lesson, you will be able to:
Topic 1: Placing the Boot Disk           Place the boot disk under VxVM control.
Under VxVM Control
Topic 2: Creating an Alternate           Create an alternate boot disk by mirroring the
Boot Disk                                boot disk that is under VxVM control.
Topic 3: Removing the Boot Disk          Remove the boot disk from VxVM control.
from VxVM Control
Placing the Boot Disk Under VxVM Control

Placing Disks with Data Under VxVM Control
• On Solaris, encapsulation is the process of converting partitions into volumes to bring those partitions under VxVM control.
• On HP-UX, conversion is the process of enabling LVM physical volumes to be used by VxVM.

What Is Encapsulation or Conversion?
On Solaris, encapsulation is the process of converting partitions into volumes to bring those partitions under VxVM control. On HP-UX, conversion is the process of enabling LVM physical volumes to be used by VxVM. After a disk has been encapsulated or converted, the disk is handled like an initialized disk.

Placing Disks with Data Under VxVM Control: Solaris
Encapsulation is the process of converting partitions into volumes to bring those partitions under VxVM control. For example, if a system has three partitions on the disk drive and you encapsulate the disk to bring it under VxVM control, there will be three volumes in the disk group.

Solaris Encapsulation Requirements
Disk encapsulation cannot occur unless these requirements are met:
• Partition table entries must be available on the disk for the public and private regions. During encapsulation, you are prompted to select the disk layout. If you choose a CDS disk layout, then only one partition is needed. However, if encapsulation as a CDS disk fails, you can specify that a sliced layout be used instead, in which case you will need two free partitions.
• The disk must contain an s2 slice that represents the full disk. (The s2 slice cannot contain a file system.)
• 2048 sectors of unpartitioned free space, rounded up to the nearest cylinder boundary, must be available, either at the beginning or at the end of the disk.
Solaris: Encapsulating a Data Disk

(Figure: an encapsulated data disk. The existing partitions, such as home, eng, acct, and dist, are mapped to volumes such as homevol and engvol, and a private region is added.)

vxdiskadm: "Encapsulate one or more disks"
Follow the prompts by specifying:
• Name of the device to add
• Name of the disk group to which the disk will be added

vxencap:
/etc/vx/bin/vxencap -g diskgroup access_name
Run the script: /etc/init.d/vxvm-reconfig access_name

Solaris: Reversing the Encapsulation Process
Limitations: There are no tools to help with unencapsulation. (The system disk is an exception to this.) Unencapsulation should not be attempted if:
• Volume layouts have been altered in any way (for example, by hot relocation).
• Volumes have mirrors.
• The disk has been used for parts of other volumes.

The partition table as it existed before encapsulation is stored in:
/etc/vx/reconfig.d/disk.d/device/vtoc

Follow this procedure to unencapsulate:
a  Stop applications.
b  Remove volumes on the disk and take the disk out of VxVM control.
c  Re-create the partition table as provided in the stored vtoc file.
d  Manually modify /etc/vfstab (if necessary).
e  Reboot, or manually mount the partitions and start applications.
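A minimal sketch of steps b and c for a single volume homevol built from a partition on device c1t1d0 in disk group datadg (all names are illustrative; the saved vtoc path follows the convention above):

# umount /home
# vxedit -g datadg -rf rm homevol
# vxdg -g datadg rmdisk datadg01
# /etc/vx/bin/vxdiskunsetup c1t1d0
# fmthard -s /etc/vx/reconfig.d/disk.d/c1t1d0/vtoc /dev/rdsk/c1t1d0s2

After restoring the partition table, edit /etc/vfstab to reference the partitions instead of the volumes, and then remount the file systems or reboot (steps d and e).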
Placing Disks with Data Under VxVM Control: HP-UX

Conversion is the process of enabling LVM physical volumes to be used by VxVM. You can convert:
• Unused physical volumes
• Physical volumes in volume groups

(Figure: an unused LVM physical volume becomes a VxVM disk; an LVM volume group of physical volumes becomes a VxVM disk group of VxVM disks.)

HP-UX: Limitations of LVM Conversion
LVM configurations that you cannot convert to VxVM include:
• A volume group with insufficient space for metadata
• A volume group containing the root volume
• A volume group containing the /usr file system
• A volume group with any dump or primary swap volumes
• A volume group disk used in ServiceGuard clusters
• A volume group with any disks that have bad blocks

HP-UX: Converting Unused Physical Volumes
1 View group membership information to ensure that there is no data on the LVM disk:
  pvdisplay disk_name
  pvdisplay /dev/dsk/c4t1d0
2 Remove LVM disk information:
  pvremove disk_name
  pvremove /dev/dsk/c4t1d0
3 Initialize the disk for VxVM use.
HP-UX: Converting Volume Groups

vxvmconvert

Volume Manager Support Operations
Menu: VolumeManager/LVM_Conversion

 analyze    Analyze LVM Volume Groups for Conversion
 convert    Convert LVM Volume Groups to VxVM
 rollback   Roll back from VxVM to LVM
 list       List disk information
 listvg     List LVM Volume Group information
 ?          Display help about menu
 ??         Display help about the menuing system
 q          Exit from menus

Select an operation to perform:

HP-UX: Conversion Process (LVM to VxVM)
1  Identify volume groups.
2  Analyze volume groups.
3  Back up LVM data.
4  Plan for new names.
5  Stop applications.
6  Unmount the file systems.
7  Convert volume groups.
8  Make name changes.
9  Restart applications.
10 Customize the configuration.

HP-UX: Restoring LVM Volume Group Configuration
You can restore the LVM configuration in two ways:
• Roll back using vxvmconvert
• Full LVM restoration for the volume group

(Figure: a VxVM disk group of VxVM disks is restored to an LVM volume group of LVM physical volumes.)
HP-UX: Roll Back Using vxvmconvert

vxvmconvert
Option 3, roll back from VxVM to LVM

Select Volume Group(s) to rollback:
[<pattern-list>,all,list,listvg,q,?] vg08

Roll back this Volume Group? [y,n,q,?] (default: y)
Rolling back LVM configuration records for Volume Group vg08
Selected Volume Groups have been restored.
Hit RETURN to continue.
Rollback other LVM Volume Groups? [y,n,q,?] (default: n)

HP-UX: Full LVM Restoration for the Volume Group
To restore LVM internal data:

mkdir /dev/vol_group_name
mknod /dev/vol_group_name/group c 64 0x0e0000
vxdg destroy diskgroup

For each disk in the volume group:

vxdiskunsetup disk_name
vgcfgrestore -F -f pathname/filename raw_device_name
vgimport -s -m pathname/mapfilename vol_group_name raw_device_name
vgchange -a y vol_group_name

To restore user or application data:

mount -F fstype /dev/vol_group_name/lvname /mount_point
frecover -r -f /dev/rmt/c0t0d0BEST
What Is Rootability?

Rootability is the process of placing the root file system, swap device, and other file systems on the boot disk under VxVM control. On Solaris, you encapsulate the system disk.

(Figure: an encapsulated boot disk. The /, swap, and /usr partitions are mapped to subdisks that are used to create the volumes that overlay the original partitions, and a private region is added. Note: only the sliced layout is allowed.)

Solaris
On Solaris, VxVM converts existing partitions of the boot disk into VxVM volumes. The system can then mount the standard boot disk file systems (that is, /, /usr, and so on) from volumes instead of disk partitions. Boot disk encapsulation has the same requirements as data disk encapsulation, but requires two free partitions (for the public and private regions). When encapsulating the boot disk, you can create the private region from the swap area, which reduces the swap area by the size of the private region. The private region is created at the beginning of the swap area, and the swap partition begins one cylinder from its original location. When creating new boot disks, you should start the partitions on the new boot disks on the next cylinder beyond the 2048-sector default used for the private region.

HP-UX
On HP-UX, rootability is carried out by creating a copy of the system disk on another VxVM disk.
Why Put the Boot Disk Under VxVM Control?

On Solaris:
• You should encapsulate the boot disk only if you plan to mirror the boot disk.
• Benefits of mirroring the boot disk:
  - Enables high availability
  - Fixes bad blocks automatically (for reads)
  - Improves performance
• There is no benefit to boot disk encapsulation for its own sake. You should not encapsulate the boot disk if you do not plan to mirror the boot disk.

On HP-UX:
• VxVM provides a single storage virtualization tool for the system disk and data disks. It is highly recommended that you encapsulate and mirror the boot disk.

Some of the benefits of encapsulating and mirroring root include:
• High availability: Encapsulating and mirroring root sets up a high availability environment for the boot disk. If the boot disk is lost, the system continues to operate on the mirror disk.
• Bad block revectoring: If the boot disk has bad blocks, then VxVM reads the block from the other disk and copies it back to the bad block to fix it. SCSI drives automatically fix bad blocks on writes, which is called bad block revectoring.
• Improved performance: By adding additional mirrors with different volume layouts, you can achieve better performance. Mirroring alone can also improve performance if the root volumes are performing more reads than writes, which is the case on many systems.

When Not to Encapsulate Root
If you do not plan to mirror root, then you should not encapsulate it. Encapsulation adds a level of complexity to system administration, which increases the complexity of upgrading the operating system.
Limitations of the VxVM Boot Disk

• Placing the boot disk under VxVM control adds steps to OS upgrades on the Solaris platform.
• A system cannot boot from a boot disk that spans multiple devices.
• Never grow or change the layout of boot disk volumes. These volumes map to physical underlying partitions on disk and must be contiguous.

HP-UX Note: HP-UX supports OS installations on a VxVM disk. However, the version of VxVM that is installed from the HP-UX installation media for 11i v2 is 3.5.

A system cannot boot from a boot disk that spans multiple devices. You should never expand or change the layout of boot volumes. No volume associated with an encapsulated boot disk (rootvol, usr, var, opt, swapvol, and so on) should be expanded or shrunk, because these volumes map to physical underlying partitions on the disk and must be contiguous. If you attempt to expand these volumes, the system can become unbootable if it becomes necessary to revert back to slices in order to boot the system. Expanding these volumes can also prevent a successful OS upgrade, and a fresh install can be required.

Solaris
Additionally, the upgrade_start script (used in upgrading VxVM to a new version) may fail.

Note: You can add a mirror of a different layout, but the mirror is not bootable.
File System Requirements: Solaris Only

For root, usr, var, and opt volumes:
• Use UFS file systems. (VxFS is not available until later in the boot process.)
• Use contiguous disk space. (Volumes cannot use striped, RAID-5, concatenated mirrored, or striped mirrored layouts.)
• Do not use dirty region logging on the system volumes. (You can use DRL on the opt and var volumes.)

For swap volumes:
• The first swap volume must be contiguous and, therefore, cannot use striped or layered layouts.
• Other swap volumes can be noncontiguous and can use any layout. However, there is an implied 2-GB limit of usable swap space per device for 32-bit operating systems.

To boot from volumes, follow these requirements and recommendations for the file systems on root volumes.

For the root, usr, var, and opt volumes:
• Use UFS file systems: You must use UFS file systems for these volumes, because the VERITAS File System (VxFS) package is not available until later in the boot process, when the scripts run in /etc/rc2.d (multiuser mode).
• Use contiguous disk space: These volumes must be located in a contiguous area on disk, as required by the OS. For this reason, these volumes cannot use striped, RAID-5, concatenated mirrored, or striped mirrored layouts.
• Do not use dirty region logging for root or usr: You cannot use dirty region logging (DRL) on the root and usr volumes. If you attempt to add a dirty region log to the root and usr volumes, you receive an error.
  Note: The opt and var volumes can use dirty region logging.

Swap Space Considerations
If you have swap defined, then it needs to be contiguous disk space. The first swap volume (as listed in the /etc/vfstab file) must be contiguous and, therefore, cannot use striped or layered layouts. Additional swap volumes can be noncontiguous and can use any layout.

Note: You can add noncontiguous swap space through VxVM. However, Solaris automatically uses swap devices in a round-robin method, which may reduce the expected performance benefits of adding striped swap volumes. For 32-bit operating systems, usable space per swap device is limited to 2 GB. For 64-bit operating systems, this limit is much higher (up to 2^63 - 1 bytes).
Volume Requirements: HP-UX

• All volumes on the root disk must be in the disk group that you choose to be the bootdg disk group.
• The names of the volumes with entries in the LIF LABEL record must be standvol, rootvol, swapvol, and dumpvol (if present). The names of the volumes for other file systems on the root disk are generated by appending vol to the name of their mount point under /.
• Any volume with an entry in the LIF LABEL record must be contiguous. The volume can have only one subdisk, and it cannot span to another disk.
• The rootvol and swapvol volumes must have the special volume usage types root and swap, respectively.
• Only the disk access type auto with the hpdisk and simple formats is suitable for use as a VxVM root disk, a root disk mirror, or a hot-relocation spare for such disks.
• The volumes on the root disk cannot use dirty region logging (DRL).
Before Placing the Boot Disk Under VxVM Control

Plan your rootability configuration.
bootdg is a system-wide reserved disk group name that is an alias for the disk group that contains the volumes used to boot the system. When you place the boot disk under VxVM control, VxVM sets bootdg to the appropriate disk group. You should never attempt to change the assigned value of bootdg; doing so may render your system unbootable.

An example configuration is to place the boot disk into a disk group named sysdg and add at least two more disks to the disk group: one for a boot disk mirror and one as a spare disk. VxVM then sets bootdg to sysdg.

On Solaris: Enable boot disk aliases.
Before encapsulating your boot disk, set the EEPROM variable use-nvramrc? to true. This enables VxVM to take advantage of boot disk aliases to identify the mirror of the boot disk if a replacement is needed. If this variable is set to false, you must determine which disks are bootable yourself. On Solaris, set this variable to true as follows:

eeprom "use-nvramrc?=true"

Record the layout of the partitions on the unencapsulated boot disk to save for future use. For example, on Solaris, you can use the prtvtoc command to record the layout of the partitions on the unencapsulated boot disk (/dev/rdsk/c0t0d0s2 in this example):

prtvtoc /dev/rdsk/c0t0d0s2

Record the output from this command for future reference.

On HP-UX: Set the primary or alternate boot path from the boot menu or from the command line:

Main Menu: Enter command or menu: co pa alt_path
setboot -p primary_path -a alternate_path

Save the LVM volume group configuration for vg00.
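A simple way to keep the pre-encapsulation layout is to redirect the prtvtoc output to a file; the destination path below is illustrative:

# prtvtoc /dev/rdsk/c0t0d0s2 > /var/tmp/c0t0d0s2.vtoc.before-encap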
Placing the Boot Disk Under VxVM Control: Solaris

vxdiskadm: "Encapsulate one or more disks"
To encapsulate one or more disks, follow the prompts by specifying:
• Name of the device to add
• Name of the disk group to which the disk will be added
• Sliced disk format (The boot disk cannot be a CDS disk.)

vxencap:
/etc/vx/bin/vxencap -g diskgroup -c diskgroup#=access_name
Run the script: /etc/init.d/vxvm-reconfig access_name

Encapsulating the Boot Disk: Solaris (vxdiskadm)
You can use vxdiskadm for encapsulating data disks as well as the boot disk. To encapsulate the boot disk:
1 From the vxdiskadm main menu, select the "Encapsulate one or more disks" option.
2 When prompted, specify the disk device name of the boot disk. If you do not know the device name of the disk to be encapsulated, type list at the prompt for a complete listing of available disks.
3 When prompted, specify the name of the disk group to which the boot disk will be added. The disk group does not need to already exist.
4 When prompted, accept the default disk name and confirm that you want to encapsulate the disk.
5 If you are prompted to choose whether the disk is to be formatted as a CDS disk that is portable between different operating systems, or as a nonportable sliced disk, you must select sliced. Only the sliced format is suitable for use with root, boot, or swap disks.
6 When prompted, select the default private region size. vxdiskadm then proceeds to encapsulate the disk.
7 A message confirms that the disk is encapsulated and states that you should reboot your system at the earliest possible opportunity.
vxencap on Solaris
The vxencap script identifies any partitions on the specified disk that could be used for file systems or special areas such as swap devices, and then generates volumes to cover those areas on the disk. If the file system is unmounted, no reboot is necessary. If the file system is mounted, the system reboots immediately. For more specific information on using the vxencap command, see the manual pages.

Creating a VxVM Boot Disk from an LVM Boot Disk: HP-UX
The vxcp_lvmroot command sets up a VxVM root disk. The command should be executed at init level 1 (single-user mode). A bootable mirror can also be created at the same time. A user-specified disk is initialized as a VxVM root disk with the rootdisk## disk name. The following example shows how to set up a VxVM root disk on c0t1d0 and its mirror on c1t4d0:

/etc/vx/bin/vxcp_lvmroot -g dg -m c1t4d0 -v -b c0t1d0

This process can be accomplished in two steps, as follows:

/etc/vx/bin/vxcp_lvmroot -g dg -v -b c0t1d0
/etc/vx/bin/vxrootmir -g dg -v -b c1t4d0

-v: verbose
-b: set primary and alternate boot paths to the given devices
After Placing the Boot Disk Under VxVM Control

After boot disk encapsulation, you can view operating system-specific files to better understand the encapsulation process.
Solaris:
• VTOC
• /etc/system
• /etc/vfstab
Linux:
• /etc/fstab
HP-UX: (Boot up on the VxVM boot disk to see the differences.)
• /etc/fstab
• /stand/bootconf

Viewing Encapsulated Disks
To better understand encapsulation of the boot disk, you can examine operating system files for the changes made by the VxVM root encapsulation process. After encapsulating the boot disk, if you view the VTOC, you notice that Tag 14 is used for the public region, and Tag 15 is used for the private region. The partitions for the root, swap, usr, and var file systems are still on the disk, unlike on data disks, where all partitions are removed. The boot disk is a special case, so the partitions are kept.

# prtvtoc /dev/rdsk/c0t0d0s2
*                         First      Sector      Last
* Partition  Tag  Flags   Sector     Count       Sector
  0           2    00            0    4916016     4916015
  1           3    01      4916016    2048256     6964271
  2           5    00            0   11801280    11801279
  3          14    01            0   11801280    11801279
  4          15    01     11798256       3024    11801279
  6           4    00      6964272    4301136    11265407
  7           7    00     11265408     532848    11798255

As part of the root encapsulation process, the /etc/system file is updated to include information that tells VxVM to boot up on the encapsulated volumes:

rootdev:/pseudo/vxio@0:0
set vxio:vol_rootdev_is_volume=1

VxVM also updates the /etc/vfstab file to mount volumes instead of partitions:

#device                      device                        mount  FS    fsck  mount
#to mount                    to fsck                       point  type  pass  at boot
/dev/vx/dsk/bootdg/swapvol   -                             -      swap  -     no
/dev/vx/dsk/bootdg/rootvol   /dev/vx/rdsk/bootdg/rootvol   /      ufs   1     no
/dev/vx/dsk/bootdg/usr       /dev/vx/rdsk/bootdg/usr       /usr   ufs   1     no

Linux
After you encapsulate the boot disk, you can view the changes in the /etc/fstab file.
Creating an Alternate Boot Disk

Requirements:
• An alternate boot disk is a mirror of the entire boot disk. An alternate boot disk preserves the boot block in case the initial boot disk fails.
• The requirements for creating an alternate boot disk are that:
  - The boot disk is under VxVM control.
  - Another disk is available with enough space to contain all of the boot disk partitions.
  - All disks are in the boot disk group.
• The root mirror places the private region at the beginning of the disk. The remaining partitions are placed after the private region.

Creating an Alternate Boot Disk: VEA
1 Select a disk that is at least as large as the boot disk, and add the disk to the boot disk group.
2 In the main window, highlight the boot disk, and then select Actions->Mirror Disk.
3 In the Mirror Disk dialog box, verify the name of the boot disk, and specify the target disk to use as the alternate boot disk.
4 Click Yes in the Mirror Disk dialog box to complete the mirroring process.
5 After the root mirror is created, verify that the root mirror is bootable.

Creating an Alternate Boot Disk: vxdiskadm
1 Select a disk that is at least as large as the boot disk, and add the disk to the boot disk group.
2 In the vxdiskadm main menu, select the "Mirror volumes on a disk" option.
3 When prompted, specify the name of the disk containing the volumes to be mirrored (that is, the name of the boot disk).
4 When prompted, specify the name of the disk to which the boot disk will be mirrored.
5 A summary of the action is displayed, and you are prompted to confirm the operation.
6 After the root mirror is created, verify that the root mirror is bootable.
Creating an Alternate Boot Disk: CLI for Solaris
1 Select a disk that is at least as large as the boot disk, and add the disk to the boot disk group.
2 To create a mirror for the root volume only, use the vxrootmir command:

  vxrootmir alternate_disk

  where alternate_disk is the disk name assigned to the other disk. vxrootmir invokes vxbootsetup (which invokes installboot), so that the disk is partitioned and made bootable. (The process is similar to using vxmirror and vxdiskadm.)
3 To mirror all other concatenated, nonmirrored volumes on the primary boot disk to your alternate boot disk, you can use the command:

  vxmirror boot_disk alternate_disk

4 Other volumes on the boot disk can be mirrored separately using vxassist. For example, if you have a /home file system on a volume homevol, you can mirror it to alternate_disk using the command:

  vxassist mirror homevol alternate_disk

  If you do not have space for a copy of some of these file systems on your alternate boot disk, you can mirror them to other disks. You can also span or stripe these other volumes across other disks attached to your system.
5 After the root mirror is created, verify that the root mirror is bootable.

You can also use the vxbootsetup command. The vxbootsetup utility configures physical disks so that they can be used to boot the system.
Note: Before vxbootsetup is called to configure a disk, mirrors of the root, swap, /usr, and /var volumes (if they exist) should be created on the disk. To set up system boot information on a VxVM disk, type:

/etc/vx/bin/vxbootsetup

Creating an Alternate Boot Disk: CLI for HP-UX
To mirror the system disk:

vxrootmir -v -b alternate_disk

Alternatively, to set up the system boot information and to mirror individual volumes manually:

vxdisksetup -iB alternate_disk format=hpdisk
vxdg -g bootdg adddisk rootdisk##=alternate_disk
vxassist -g bootdg mirror standvol dm:rootdisk##
vxvmboot -v /dev/rdsk/alternate_disk
Boot Disk Error Messages

Stale root volume:
vxvm:vxconfigd: Warning: Plex rootvol-01 for root volume is stale or unusable

Failed startup:
vxvm:vxconfigd: Error: System startup failed

Root plex not valid:
vxvm:vxconfigd: Error: System boot disk does not have a valid root plex
Please boot from one of the following disks:
Disk: disk01  Device: c0t1d0s0

In the third message, alternate boot disks containing valid root mirrors are listed as part of the error message. Try to boot from one of the disks named in the error message. You may be able to boot using a device alias for one of the named disks. For example, use this command:

ok> boot vx-disk_name
Booting from an Alternate Mirror: Solaris
If the boot disk is encapsulated and mirrored, you can use one of its mirrors to boot the system if the primary boot disk fails.

To boot the system using an alternate boot disk after failure of the primary boot disk:
1 Set the EEPROM variable use-nvramrc? to true:

  ok> setenv use-nvramrc? true
  ok> reset

  This variable must be set to true to enable the use of alternate boot disks.
2 Check for available boot disk aliases:

  ok> devalias

  The output displays the vx-disk_name alias of the boot disk and of the available mirrors.
3 Boot from an available boot disk alias:

  ok> boot vx-disk_name

Booting from an Alternate Mirror: HP-UX
To boot the system using an alternate boot disk after failure of the primary boot disk:
1 Interrupt the automatic boot process:

  To discontinue, press any key within 10 seconds.

2 Check the alternate boot disk path:

  Main Menu: Enter command or menu> co alt pa

  Verify that the alternate disk path is the one you want to boot from. If not, set it using the co alt pa path command.
3 Boot using the alternate disk path:

  Main Menu: Enter command or menu> bo alt
  Interact with IPL (Y, N, or Cancel)? n
Removing the Boot Disk from VxVM Control: Solaris

• To unencapsulate a boot disk, use vxunroot.
• Requirements: Remove all but one plex of rootvol, swapvol, usr, var, opt, and home.
• Use vxunroot when you need to:
  - Boot from physical system partitions.
  - Change the size or location of the private region on the boot disk.

The vxunroot Command: Solaris
To convert the root file systems back to being accessible directly through disk partitions instead of through volume devices, you use the vxunroot utility. Other changes that were made to ensure the booting of the system from the root volume are also removed, so that the system boots with no dependency on VxVM.

For vxunroot to work properly, all but one plex of rootvol, swapvol, usr, var, opt, and home must be removed (using vxedit or vxplex). If this condition is not met, the vxunroot operation fails, and volumes are not converted back to disk partitions.

When to Use vxunroot
Use vxunroot when you need to:
• Boot from physical system partitions.
• Change the size or location of the private region on the boot disk.
The vxunroot Command: Solaris
1. Ensure that the boot disk volumes only have one plex each:

   vxprint -ht rootvol swapvol usr var

2. If boot disk volumes have more than one plex each, remove the unnecessary plexes:

   vxassist -g bootdg remove mirror volume !rootdisk

3. Run the vxunroot utility:

   vxunroot

Unencapsulating the Boot Disk: Solaris
The vxunroot command changes the volume entries in /etc/vfstab to the underlying disk partitions for the rootvol, swapvol, usr, and var volumes. The command also modifies /etc/system and prompts for a reboot so that disk partitions are mounted instead of volumes for the root, swap, usr, and var file systems.

Creating an LVM Boot Disk from a VxVM Boot Disk: HP-UX
After booting to single-user mode using a VxVM boot disk, you can completely remove the LVM boot disk, if desired:

/etc/vx/bin/vxdestroy_lvmroot -v c0t1d0

If you choose to keep the LVM disk, ensure that you update it with any changes to the system disk. The following example shows how to create an LVM root disk on the c0t1d0 physical disk after removing the existing LVM root disk configuration from that disk:

/etc/vx/bin/vxdestroy_lvmroot -v c0t1d0
/etc/vx/bin/vxres_lvmroot -v c0t1d0

If you want to take the boot disk completely out of VxVM control, you then need to boot from the LVM root disk and use Volume Manager commands to remove the old boot disk from VxVM control.
Lesson Summary

• Key Points
  This lesson described the disk encapsulation process and how to encapsulate the boot disk on your system. Methods for creating an alternate boot disk and unencapsulating a boot disk were covered.
• Reference Materials
  - VERITAS Volume Manager Administrator's Guide
  - VERITAS Storage Foundation Installation Guide

Lab 3: Encapsulation and Rootability
In this practice, you create a boot disk mirror, disable the boot disk, and boot up from the mirror. Then you boot up again from the boot disk, break the mirror, and remove the boot disk from the boot disk group. Finally, you reencapsulate the boot disk and re-create the mirror.

For Lab Exercises, see Appendix A. For Lab Solutions, see Appendix B.

Labs and solutions for this lesson are located on the following pages:
• Appendix A provides complete lab instructions: "Lab 3: Encapsulation and Rootability."
• Appendix B provides complete lab instructions and solutions: "Lab 3 Solutions: Encapsulation and Rootability."
Lesson 4: Troubleshooting the Boot Process
Lesson Introduction
• Lesson 1: Maintaining Data Consistency
• Lesson 2: Managing Devices Within the VxVM Architecture
• Lesson 3: Encapsulation and Rootability
• Lesson 4: Troubleshooting the Boot Process
• Lesson 5: Volume Maintenance
• Lesson 6: Performance Monitoring
• Lesson 7: Point-in-Time Copies
• Lesson 8: Other Enterprise Features Overview

Lesson Topics and Objectives
Topic                                   After completing this lesson, you will be able to:
Topic 1: Operating System Boot          Describe how VxVM integrates into operating
Processes                               system boot processes.
Topic 2: Troubleshooting the Boot       Troubleshoot the boot process.
Process
Topic 3: Recovering the Boot Disk       Recover the boot disk group for different boot
Group                                   disk failure scenarios.
OS Boot Processes
• Boot processes and VxVM startup scripts vary by platform.
• Select the button at the bottom of the screen to review the particulars for each platform:
  Solaris | HP-UX | AIX | Linux

Operating System Boot Processes
The VxVM startup scripts and the way in which they integrate with the operating system vary by platform. These processes are outlined in the "Boot Processes and VxVM Startup Scripts" appendix. For this portion of the training, turn to the appropriate portion of this appendix to follow along with the instructor.
Files Used in the Boot Process
• /etc/system (Solaris), /stand/system (HP-UX), /etc/sysctl.conf (Linux)
  Contains VxVM entries
• /etc/vfstab (Solaris), /etc/fstab (HP-UX and Linux)
  Maps mount points to devices
• /etc/vx/volboot
  Contains disk ownership data
• /etc/vx/licenses/lic, /etc/vx/elm
  Contains license files
• /var/vxvm/tempdb (Solaris), /etc/vx/tempdb (HP-UX)
  Stores data about disk groups
• /etc/vx/reconfig.d/state.d/install-db
  Indicates that VxVM is not initialized
• /VXVM#.#.#-UPGRADE/.start_runed (Solaris)
  Indicates that the VxVM upgrade is not complete

Troubleshooting the Boot Process
Files Used in the Boot Process
During the boot process, the VxVM startup scripts use information contained in specific files. If any of these files are missing, misplaced, or misconfigured, then problems can occur. Troubleshooting the boot process depends on these files:
• /etc/system (Solaris only), /stand/system (HP-UX), /etc/sysctl.conf (Linux)
  On Solaris, contains VxVM entries indicating if the root disk has been encapsulated. On all platforms, may contain Storage Foundation information, such as kernel module entries and tunable parameters.
• /etc/vfstab (Solaris), /etc/fstab (HP-UX and Linux)
  Maps file system mount points to actual device names
• /etc/vx/volboot
  Contains the host ID that was on the system when you ran vxinstall
• /etc/vx/licenses/lic, /etc/vx/elm
  Contains the files that represent installed VERITAS license keys
• /var/vxvm/tempdb (Solaris), /etc/vx/tempdb (HP-UX)
  Stores temporary information about currently imported disk groups
• /etc/vx/reconfig.d/state.d/install-db
  Indicates that VxVM packages have been added, but vxinstall has not run
• /VXVM#.#.#-UPGRADE/.start_runed
  Indicates that a VxVM upgrade has been started but not completed
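A quick way to check the state of these files on a Solaris system is a short shell sequence such as the following sketch (paths as listed above; adjust them for your platform):

# Check the boot-critical VxVM files on Solaris (a sketch; adjust paths per platform)
grep -n vxvm /etc/system                # encapsulation and forceload entries
ls -l /etc/vx/volboot /var/vxvm/tempdb  # ownership data and temp database
ls /etc/vx/reconfig.d/state.d/install-db 2>/dev/null && \
    echo "install-db present: VxVM has not been initialized"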
Troubleshooting: The Boot Device Cannot Be Opened
Problem: Boot device cannot be opened.
Possible causes:
• Boot disk is not powered on.
• Boot disk has failed.
• SCSI bus is not terminated.
• Controller failure has occurred.
• Disk is failing and locking the bus.
To resolve:
• Check SCSI bus connections.
  - On Solaris, use probe-scsi-all.
  - On Linux, use non-fast or verbose boot in the BIOS.
  - On HP-UX, use sea from the main menu.
• Boot from an alternate boot disk.

Troubleshooting: The Boot Device Cannot Be Opened
If the boot device cannot be opened, the system is unable to read the boot program from the boot disk. Common causes for this problem include:
• The boot disk is not powered on or has failed.
• The SCSI bus is not terminated.
• There is a controller failure.
• A disk is failing and locking the bus, preventing any disks from identifying themselves to the controller, and making the controller assume that there are no disks attached.
To troubleshoot the problem:
• Check carefully that everything on the SCSI bus is properly connected. If disks are powered off or the SCSI bus is unterminated, correct the problem and reboot the system. If one of the disks has failed, remove the disk from the SCSI bus and replace it.
• If no hardware problems are found, the error is probably due to data errors on the boot disk. Attempt to boot from an alternate boot disk that contains a mirror of the root volume.
• If you are unable to boot from an alternate boot disk, then you may still have some type of hardware problem. Similarly, if switching the failed boot disk with an alternate boot disk does not allow the system to boot, this condition also indicates hardware problems.
Troubleshooting: Startup Scripts Exit
Problem: VxVM startup scripts exit without initialization.
Possible causes: Either of the following files is present:
• /etc/vx/reconfig.d/state.d/install-db
  This file indicates that VxVM software packages have been added, but VxVM has not been initialized with vxinstall. Therefore, vxconfigd is not started.
• /VXVM#.#.#-UPGRADE/.start_runed (Solaris)
  This file indicates that a VxVM upgrade has been started but not completed. Therefore, vxconfigd is not started.

Troubleshooting: VxVM Startup Scripts Exit Without Initialization
In the boot process, the VxVM startup scripts exit without initializing VxVM if either of the following flag files is present:
• /etc/vx/reconfig.d/state.d/install-db
  The presence of this file indicates that VxVM software packages have been added, but VxVM has not been initialized with vxinstall. This file is installed when you add the VxVM software packages and is removed by the vxvm-reconfig script after the configuration specified by vxinstall has been performed. The existence of this file communicates to the VxVM device drivers that VxVM has not yet been initialized (vxinstall), and vxconfigd will not be started. Therefore, if this file is present on the system, the VxVM startup scripts exit without performing any initialization.
• /VXVM#.#.#-UPGRADE/.start_runed (Solaris)
  The presence of this file indicates that a VxVM upgrade has been started but not completed. This file is created by the upgrade_start script specific to a particular VxVM version (for example, /VXVM3.5-UPGRADE/.start_runed) and is removed by the upgrade_finish script when an upgrade is completed. If a file with this path is present, then the VxVM startup scripts exit without performing any initialization, and vxconfigd is not started.
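If initialization (or the upgrade) did in fact complete and only a stale flag file is blocking startup, a common recovery is to remove the flag and start VxVM manually. This is a sketch only; confirm that vxinstall really completed before removing the file:

# Remove the stale flag file, then start the configuration daemon (Solaris paths)
rm /etc/vx/reconfig.d/state.d/install-db
vxconfigd -k -m enable
vxrecover -s        # start volumes as needed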
Troubleshooting: Conflicting Host ID in volboot
Problem: A conflicting host ID exists in the /etc/vx/volboot file.
The volboot file contains the host ID that was on the system when you installed VxVM. If you manually edit this file, VxVM does not function.
• To change the host name in the volboot file:
  vxdctl hostid newhostname
• To re-create the volboot file:
  vxdctl init [hostname]

Troubleshooting: Conflicting Host ID in the volboot File
The /etc/vx/volboot file contains the host ID that was on the system when you first ran vxinstall. The host ID in the volboot file is matched against the host ID contained in the disk group header stored on every disk to identify the disks belonging to this host.
Caution: Never attempt to manually edit the volboot file. If you attempt to manually edit the file, VxVM cannot function. The volboot file must be a specific size (512 bytes on Solaris, 1024 bytes on HP-UX). If the file is edited, for example, by using the vi editor, and is not the correct size, the system will not boot.
To modify the volboot file, you use the vxdctl command.
• To change the host name in the volboot file:
  vxdctl hostid newhostname
  This command places the new host name (VxVM host ID) in volboot. The new host name is then flushed to the private region of the disks.
• To re-create the volboot file:
  vxdctl init [hostname]
  If you must re-create this file, use the same host name (VxVM host ID) that the previous volboot file contained.
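For example, after a system's host name has been changed outside of VxVM, the mismatch can be repaired with the commands above. The host name train07 here is purely illustrative:

# Rewrite the VxVM host ID after a hostname change (train07 is a placeholder)
vxdctl hostid train07
# If volboot itself was damaged, re-create it with the original host ID instead:
vxdctl init train07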
Troubleshooting: License Problems
Problem: License keys are corrupted, missing, or expired.
Save /etc/vx/licenses/lic/* to a backup device. If the license files are removed or corrupted, you can copy the files back.
License problems can occur if:
• The /etc/vx/licenses/lic files become corrupted.
• An evaluation license was installed and not updated to a full license.
To resolve license issues:
• vxlicinst     Installs a new license
• vxiod set 16  Starts the I/O daemons
• vxconfigd     Starts the configuration daemon

Troubleshooting: Corrupted, Missing, or Expired License Keys
The /etc/vx/licenses/lic and /etc/vx/elm directories contain the files representing the installed VERITAS license keys. You can encounter license problems if these files become corrupted or if an evaluation license was installed and not updated to a full license. During the boot process, if the system encounters a missing or invalid license key, you receive error messages.
Replacing an Expired License
To replace an expired license, you can enter a new license by using the command:
vxlicinst
You must inform the configuration daemon (or reboot):
vxdctl enable
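A minimal recovery sequence might look like the following sketch. The vxlicrep report command is part of the standard VERITAS licensing package, but verify its availability on your installation:

# Install the new key, re-read licenses, and verify (a sketch)
vxlicinst        # prompts for the new license key
vxdctl enable    # tell vxconfigd to re-read the license files
vxlicrep         # report the installed license keys to confirm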
Troubleshooting: Missing /tempdb
Problem: The /var/vxvm/tempdb (Solaris), /etc/vx/tempdb (HP-UX) directory is missing, misnamed, or corrupted.
This directory stores configuration information about imported disk groups. The contents are re-created after a reboot. If this directory is missing, misnamed, or corrupted, vxconfigd does not start.
To remove and re-create this directory:
vxconfigd -k -x cleartempdir

Troubleshooting: Missing or Misnamed /var/vxvm/tempdb
The /var/vxvm/tempdb directory is used to store configuration information about currently imported disk groups. The contents of this directory are re-created after a reboot. If this directory is missing, misnamed, or corrupted (due to disk I/O failure), then vxconfigd does not start, and you receive an error message that states:
Cannot recover temp database
To remove and re-create the /var/vxvm/tempdb directory, you can use the command:
vxconfigd -k -x cleartempdir
Caution: You should kill any running operational commands (vxvol, vxsd, or vxmend) before using the -x cleartempdir option. You can use this option while running VEA, or while VxVM background daemons are running (vxsparecheck, vxnotify, or vxrelocd).
Note: If the /var/vxvm (Solaris) or /etc/vx (HP-UX) directory does not exist, this command does not correct the problem.
For more information, see the vxconfigd(1m) manual page.
Troubleshooting: Debugging with vxconfigd
Running vxconfigd in debug mode:
vxconfigd -k -m enable -x debug_level
• debug_level = 0  No debugging (default)
• debug_level = 9  Highest debug level
Some debugging options:
• -x log             Log all console output to the /var/vxvm/vxconfigd.log file.
• -x logfile=name    Use the specified log file instead.
• -x syslog          Direct all console output through the syslog() interface.
• -x timestamp       Attach a date and time-of-day timestamp to all messages.
• -x tracefile=name  Log all possible tracing information in the given file.

Troubleshooting: Debugging with vxconfigd
The vxconfigd Daemon
The VxVM vxconfigd configuration daemon maintains disk configurations and disk groups and is also responsible for initializing VxVM when the system is booted. VxVM does not start anything if vxconfigd cannot be started during boot-up. Under normal circumstances, this daemon is automatically started by the VxVM startup scripts. However, if there is a problem, it may not be possible to start vxconfigd, or the daemon may be running in disabled mode.
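Combining these options, a typical debugging invocation might look like the following sketch (highest debug level, with the logging options listed above):

# Restart vxconfigd in enabled mode at debug level 9,
# logging timestamped output to /var/vxvm/vxconfigd.log
vxconfigd -k -m enable -x 9 -x log -x timestamp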
Troubleshooting: Invalid or Missing /etc/system File (Solaris Only)
Problem: The /etc/system file is invalid or missing.
The /etc/system file is used in the kernel initialization and /sbin/init phases of the boot process. This file is a standard Solaris system file to which VxVM adds entries to:
• Specify drivers to be loaded.
• Specify root encapsulation.
If the file or these entries are missing, you encounter problems in the boot process. Always maintain backup copies of this file.

Troubleshooting: Invalid or Missing /etc/system File (Solaris Only)
The /etc/system file is used in the kernel initialization phase as well as in the /sbin/init phase of the boot process. If this file is missing, or if its entries are missing, then you encounter problems at boot time.
The /etc/system file is a standard Solaris system file. VxVM adds entries to this file that are placed between the tags:
*vxvm START (do not remove)
*vxvm END (do not remove)
It is strongly recommended that you maintain a backup copy of the /etc/system file so that you can recover your root volume if the system becomes unbootable. If the system cannot read the /etc/system file, then you can use the boot -a command to specify a different copy of the /etc/system file to use on booting.
VxVM entries in /etc/system that begin with forceload: specify drivers to be loaded by VxVM. For example:
forceload: drv/pci
forceload: drv/dad
forceload: drv/vxdmp
forceload: drv/vxio
forceload: drv/vxspec
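In practice, this means keeping a pre-encapsulation copy of the file, which can later be supplied to boot -a. A simple sketch (the .preencap name matches the example on the next page):

# Keep a backup of /etc/system before encapsulating the boot disk (Solaris)
cp /etc/system /etc/system.preencap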
Troubleshooting: Invalid or Missing /etc/system File (Solaris Only)
ok> boot -a
When booting from an alternate system file, do not go past the maintenance mode. Boot up on the alternate system file, fix the VxVM problem, and then reboot with the original system file.

Resetting...
Rebooting with command: boot -a
Boot device: /pci@1f,0/pci@1,1/ide@3/disk@0,0  File and args: -a
Enter filename [kernel/unix]: (Press Return.)
Enter default directory for modules [/platform/SUNW,Ultra-5_10/kernel /platform/sun4u/kernel /kernel /usr/kernel]: (Press Return.)
SunOS Release 5.6 Version Generic_105181-03 [UNIX(R) System V Release 4.0]
Copyright 1983-1997, Sun Microsystems, Inc.
Name of system file [etc/system]: etc/system.preencap
root filesystem type [ufs]: (Press Return.)
Enter physical name of root device [/pci@1f,0/pci@1,1/ide@3/disk@0,0:a]: (Press Return.)
VxVM starting in boot mode...
Type Ctrl-d to proceed with normal startup,
(or give root password for system maintenance):
Entering System Maintenance mode

Using an Alternate System File (Solaris Only)
When using an alternate system file, you will probably not be able to boot into multiuser mode and will end in maintenance mode.
Note: Do not go past the maintenance mode while booting on this system file. Boot up on the alternate system file, fix the VxVM problem, and then reboot with the original system file.
The system boots on the partition, not on the volume. When you enter into the maintenance mode, you will notice that the volume rootvol is not started. After you are finished, unmount /mnt and reboot the system under the normal system file.
Using Interactive Mode (Linux Only)
• On Linux, you can boot into interactive mode by pressing i after the boot loader window displays.
• You are then prompted for additional information.

Using Interactive Mode (Linux Only)
On Linux, you can boot into interactive mode by pressing i after the boot loader window displays. You are then prompted for information similar to the Solaris boot -a command.
Recovering the VxVM Boot Disk (HP-UX Only)
1. Stop the failed boot process using Ctrl-B.
2. Reset the system <rs>.
3. Press any key within 10 seconds when you receive the message: "To discontinue, press any key within 10 seconds."
4. Boot from a specified path <bo pri>.
5. Enter y to interact with IPL.
6. Enter hpux -vm to access the Maintenance Mode Boot (MMB).
7. Start VxVM manually using the vx_emerg_start command.
8. Use CBR or various VxVM commands to complete the recovery.

Recovering the VxVM Boot Disk (HP-UX Only)
On HP-UX, you can recover the VxVM boot disk by following the steps displayed on the slide.
Temporarily Importing the Boot Disk Group
Through a temporary import, you can bring the boot disk group to a working system and repair it there:
1. Obtain the disk group ID of the boot disk group:
   vxdisk -s list
   diskid:  954254545.2009.train06
   dgname:  sysdg
   dgid:    952435045.1025.train06
   hostid:  train06
2. On the importing host, import and temporarily rename the disk group:
   vxdg -tC -n tmpdg import 952435045.1025.train06
3. Fix and replace the files and volumes as necessary.
4. Deport the disk group back to the original host:
   vxdg -h train06 deport tmpdg

Recovering the Boot Disk Group
Temporarily Importing the Boot Disk Group
By temporarily importing the boot disk group, you can bring the boot disk group from a failed system to a working system and repair it there. Use this method when you have an encapsulated boot disk and do not have a backup system file and emergency boot disk.
To temporarily import the boot disk group:
1. Find the disk group ID of the boot disk group.
2. On the importing host, import and temporarily rename the disk group.
3. Repair files and volumes as needed.
4. Deport the disk group back to the original host.
Note: This is useful if you can boot off an array.
Repairing the Failed Root
By temporarily importing the boot disk group on another host, you can repair the failed root. Mount the volume and replace files as needed:
vxrecover -g tmpdg -s rootvol
mount /dev/vx/dsk/tmpdg/rootvol /mnt
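When the repairs are complete, the disk group must be released cleanly before the original host can boot from it again. A closing sketch using the same example names (the explicit vxvol stop is an assumption; an imported disk group cannot be deported while its volumes are open):

# Release the repaired boot disk group back to the failed host
umount /mnt
vxvol -g tmpdg stop rootvol
vxdg -h train06 deport tmpdg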
Boot Disk Failure Scenarios
• In the next slides, several disk failure scenarios are presented that involve the boot disk or disks in the boot disk group.
• For each of the following failure scenarios, determine the impact of the failure and a recovery strategy.

Boot Disk Group Failure and Recovery Scenarios
The recommended practice for managing your boot disk is to place the boot disk under VxVM control and mirror the boot disk. However, if you place the boot disk under VxVM control and do not mirror the boot disk, then you must develop an understanding of the impact of disk failure involving the boot disk or other disks in the boot disk group, and the associated recovery strategies.
In this section, several disk failure scenarios are presented that involve the boot disk or disks in the boot disk group. For each of the following failure scenarios, determine the impact of the failure and a recovery strategy. Base your answers on your understanding of recovery procedures, the boot process, and the files associated with booting the operating system and VxVM. Solutions for each recovery scenario are presented at the end of this section.
Scenario 1
The boot disk is under VxVM control, plus mirrored, and fails.
• What is the impact of the failure?
• What system information or software is lost?
• What is your recovery strategy?
B = Boot disk  D = Data disk  V = Volume

Scenario 1: Boot Disk Failure (Encapsulated)
In this scenario:
• The boot disk is under VxVM control and mirrored.
• The boot disk fails.
Because the boot disk is encapsulated and mirrored, there is no negative impact and nothing is lost. The only recovery necessary is to replace the failed disk, as with any other failed disk.
Scenario 2
The boot disk is not under VxVM control and fails.
• What is the impact of the failure?
• What system information or software is lost?
• What is your recovery strategy?
B = Boot disk  D = Data disk  V = Volume

Scenario 2: Boot Disk Failure (Not Encapsulated)
In this scenario:
• The boot disk is not under VxVM control.
• The boot disk fails.
Answer the following questions:
• What is the immediate impact of the failure? On the system? On disk groups? On vxconfigd?
• What software or configuration data has been lost or is inaccessible? Disk group configurations? Data?
• What is your recovery strategy?
Scenario 3
The boot disk is under VxVM control, but not mirrored, and fails. No other disks exist in the boot disk group.
• What is the impact of the failure?
• What system information or software is lost?
• What is your recovery strategy?
B = Boot disk  D = Data disk  V = Volume

Scenario 3: Boot Disk Failure with Only One Disk in the Boot Disk Group
In this scenario:
• The boot disk is under VxVM control, but not mirrored.
• The boot disk fails.
• The boot disk is the only disk in the boot disk group.
Answer the following questions:
• What is the immediate impact of the failure? On the system? On disk groups? On vxconfigd?
• What software or configuration data has been lost or is inaccessible? Disk group configurations? Data?
• What is your recovery strategy?
Scenario 4
The boot disk is under VxVM control, but not mirrored, and fails. Other disks exist in the boot disk group.
• What is the impact of the failure?
• What system information or software is lost?
• What is your recovery strategy?
B = Boot disk  D = Data disk  V = Volume

Scenario 4: Boot Disk Failure with Other Disks in the Boot Disk Group
In this scenario:
• The boot disk is under VxVM control, but not mirrored.
• The boot disk fails.
• Other disks exist in the boot disk group.
Answer the following questions:
• What is the immediate impact of the failure? On the system? On disk groups? On vxconfigd?
• What software or configuration data has been lost or is inaccessible? Disk group configurations? Data?
• What is your recovery strategy?
Scenario 2: Impact of the Failure
The boot disk is not under VxVM control and fails.
What is the immediate impact of the failure?
• On the system? The system disk has failed.
• On the disk groups? Disk groups are not available.
• On vxconfigd? vxconfigd is not accessible.
What software and configuration data has been lost?
• Disk group configurations? Still stored in private regions on disks within each disk group.
• Data? Data in all disk groups is still present, but it must be checked for integrity before use.

Scenario 3: Impact of the Failure
The boot disk is under VxVM control, but not mirrored, and fails. No other disks exist in the boot disk group.
What is the immediate impact of the failure?
• On the system? On the disk groups? On vxconfigd? The system fails when I/O to a boot disk volume is attempted.
What software and configuration data has been lost?
• Disk group configurations? Boot disk group configuration is lost. Other disk group configurations are still stored in private regions on disks within each disk group.
• Data? Data stored on the boot disk is lost. Data in other disk groups remains on the disk and is accessible after volumes can be started. The data requires integrity checking.
Scenario 4: Impact of the Failure
The boot disk is under VxVM control, but not mirrored, and fails. Other disks exist in the boot disk group.
What is the immediate impact of the failure?
• On the system? On the disk groups? On vxconfigd? The system fails when I/O to a boot disk volume is attempted.
What software and configuration data has been lost?
• Disk group configurations? Boot disk group configuration is still stored in private regions of other disks in the boot disk group. Other disk group configurations are still stored in private regions on disks within each disk group.
• Data? Data stored on the boot disk is lost. Data on other volumes within the boot disk group, or any other imported disk groups, remains intact. The data requires integrity checking.

Scenario Recovery Strategy
(This recovery strategy applies to scenarios 2, 3, and 4.)
With a boot disk that is under VxVM control and mirrored, you can replace the failed disk using standard disk replacement procedures. When the boot disk is not under VxVM control, or not mirrored, and fails, your recovery strategy involves the following:
1. Physically replace the failed disk.
2. Reinstall the OS using the same host name as before the failure.
3. Reinstall Storage Foundation using the installation scripts.
4. Run vxdisk list to see if your disks are up and running. The disk groups are imported automatically after vxconfigd is started. If you did not reboot your system after the reinstallation, you must execute vxrecover -s to start the volumes in any data disk groups.
5. Clean up the configuration:
   • Mount file systems that should be mounted. (To mount VxFS file systems, you must have rebooted the system after reinstalling Storage Foundation.)
   • Restore application binaries and config files that reside on the boot disk.
   • Reboot the system if necessary.
6. If the boot disk was under VxVM control, reconfigure rootability (for scenarios 3 and 4), as sketched below:
   • Remove old volume records for root volumes.
   • Remove the old disk media record for the boot disk.
   • Put the boot disk under VxVM control and mirror the boot disk.
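The rootability cleanup in step 6 might look like the following Solaris sketch. The object names are examples only, and vxrootmir is the root-mirroring helper on Solaris; verify the exact names, the encapsulation method, and the helper script on your platform before using them:

# Hypothetical step 6 cleanup (Solaris; object and disk names are examples)
vxedit -g bootdg -rf rm rootvol swapvol   # remove stale root volume records
vxdg -g bootdg rmdisk rootdisk            # remove the old disk media record
vxdiskadm                                 # menu option: encapsulate the new boot disk
/etc/vx/bin/vxrootmir rootmirror          # mirror root to the disk named rootmirror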
Creating an Emergency Boot Disk (Solaris Only): Self Study
Why Create an Emergency Boot Disk?
Encapsulating and mirroring the boot disk ensures that if your boot disk is lost, the system continues to operate on the mirror disk. You can provide further protection for your system by creating an emergency boot disk that contains the operating system and VxVM software. You can use an emergency boot disk:
• To repair encapsulated boot failure
• When there is no backup system file
• When UNIX does not boot
An emergency boot disk boots up on a Volume Manager-knowledgeable disk.
Note: You cannot boot from SAN disks unless you have a special boot PROM from Sun supporting the SAN device.
Solaris: Emergency Boot Disk Creation
To create an emergency boot disk:
1. Format a disk, place a root partition and a swap partition on the disk, and label the disk. Make the root partition large enough to hold usr, var, and opt.
2. Create a file system:
   newfs /dev/rdsk/c0t1d0s0
3. Mount and copy files to the new boot disk:
   mount -F ufs /dev/dsk/c0t1d0s0 /mnt
   find / /usr /var /opt -local -mount -print | cpio -pmudv /mnt
   The find utility recursively searches the given directory paths and prints (to the standard output) the path names of all the files that are local to that file system. The cpio -p command reads the standard input to obtain a list of path names of files that are then created and copied into the destination directory tree, which is the /mnt mount point.
4. Place a boot block on the disk:
   /usr/sbin/installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t1d0s0
   The installboot command installs the specified platform-dependent boot blocks to the given disk partition.
5. Edit the /mnt/etc/system file to comment out the non-forceload lines related to VxVM.
6. Edit the /mnt/etc/vfstab file to remove references to the root volumes (rootvol, /usr, /var, /opt, and so on), and place an entry for the emergency boot device as the root device.
7. Create the /mnt/tmp, /mnt/proc, and /mnt/mnt directories:
   mkdir /mnt/tmp /mnt/proc /mnt/mnt
8. Unmount /mnt:
   umount /mnt
9. Write down the Solaris device name for the emergency boot disk. For example:
   ls -l /dev/dsk/c0t1d0s0
   /devices/pci@1f,0/pci@1/scsi@3/sd@e,0:a
   For booting, you need the device name:
   /pci@1f,0/pci@1/scsi@3/disk@e,0:a
10. Run the following command:
   init 0
11. Boot from the emergency boot disk. For example:
   boot /pci@1f,0/pci@1/scsi@3/disk@e,0:a

Solaris: Booting from an Emergency Boot Disk
After you have an emergency boot disk, you can boot your system on the disk by using the full Solaris device name. Then, you can mount the volume onto a directory:
vxrecover -s rootvol
mount -F ufs /dev/vx/dsk/bootdg/rootvol /mnt
You can now replace any missing files. If you have to run vxlicinst, copy the created files in the /etc/vx/licenses/lic directory to the /mnt/etc/vx/licenses/lic directory. When you are finished, unmount the volumes, and reboot the system on the regular boot disk.
If vxconfigd has problems starting up, try starting VxVM manually by running the following commands:
vxiod set 10
vxconfigd
vxrecover -s
You can also specify debugging options to the vxconfigd command to identify the problem.

Linux: Creating Bootable CDs or Floppy Disks
On Linux, the emergency boot disks are bootable CD-ROMs or floppy disks. For example, if you have CD 1 of Red Hat, you can boot and select Rescue mode, and then you can mount root. You can create a boot floppy disk, but it is not a rescue floppy disk. It is simply another way to load the initrd and kernel. The boot floppy disk loads and then mounts root.
Lesson Summary
• Key Points
  This lesson described how VERITAS Volume Manager integrates into the operating system boot processes, the key scripts and files used in the boot process, and tips on troubleshooting the boot process. This lesson also provided procedures for recovering from various boot disk failures.
• Reference Materials
  - VERITAS Volume Manager Administrator's Guide
  - VERITAS Volume Manager Troubleshooting Guide

Lab 4: Troubleshooting the Boot Process
In this lab, you practice recovering from encapsulated boot disk failure scenarios. On the Solaris platform, to investigate and practice recovery techniques, you will use a set of interactive lab scripts. Each script simulates a failure in the encapsulated boot disk (and its mirror, if required) and reboots the system.
For Lab Exercises, see Appendix A. For Lab Solutions, see Appendix B.
Labs and solutions for this lesson are located on the following pages:
Appendix A provides complete lab instructions: "Lab 4: Troubleshooting the Boot Process."
Appendix B provides complete lab instructions and solutions: "Lab 4 Solutions: Troubleshooting the Boot Process."
Lesson 5: Volume Maintenance
Lesson Introduction
• Lesson 1: Maintaining Data Consistency
• Lesson 2: Managing Devices Within the VxVM Architecture
• Lesson 3: Encapsulation and Rootability
• Lesson 4: Troubleshooting the Boot Process
• Lesson 5: Volume Maintenance
• Lesson 6: Performance Monitoring
• Lesson 7: Point-in-Time Copies
• Lesson 8: Other Enterprise Features Overview

Lesson Topics and Objectives
Topic                                   After completing this lesson, you will be able to:
Topic 1: Changing the Volume Layout     Change the volume layout while the volume
                                        remains online.
Topic 2: Managing Volume Tasks          Manage volume maintenance tasks with VEA
                                        and from the command line.
Topic 3: Analyzing Volume               Analyze volume configurations by using the
Configurations with Storage Expert      Storage Expert utility.
Changing the Volume Layout
Online relayout: Change the volume layout or layout characteristics while the volume is online.
Examples:
• Change concatenated to mirror-concat to achieve redundancy.
• Relayout RAID-5 to mirrored for better write performance.
• Relayout mirrored to RAID-5 to save space.
• Change stripe unit size or add columns to achieve desired performance.
• Convert a mirror-concat to a stripe-mirror to increase redundancy and performance while decreasing the recovery time.

Changing the Volume Layout
What Is Online Relayout?
You may need to change the volume layout in order to change the redundancy or performance characteristics of an existing volume. The online relayout feature of VxVM enables you to change from one volume layout to another by invoking a single command. You can also modify the performance characteristics of a particular layout to reflect changes in your application environment. While relayout is in progress, data on the volume can be accessed without interruption.
Online relayout eliminates the need for creating a new volume in order to obtain a different volume layout. Relayout allows you to modify an existing volume into all those layouts you can select when creating a volume.
Supported Transformations
By using online relayout, you can change the layout of an entire volume or a specific plex. Use online relayout to change the volume or plex layout to or from:
• Concatenated
• Striped
• RAID-5
• Striped mirrored
• Concatenated mirrored
    • The transformation of data from one layout to another involves rearranging the data in the existing layout into the new layout. Data is removed from the source subvolumc in portions and copied into a temporary subvolumc, or scratch pcu]. The temporary storage space is taken from the free space in the disk group. Data redundancy is maintained by mirroring any temporary space used. The area in the source subvolumc is then transformed to thc new layout, and data saved 111 the temporary subvolumc is written back to the new layout. This operation is repeated until all the storage and data in thc source subvolumc arc transformed to the new layout. Read/write access to data is not interrupted during the transformation. How Does Relayout Work? Data is copied one chunk at a time to a temporary area. Temporary Subvolume (scratch pad) l lull of the plexes in the volume have identical layouts. VxVM changes all plcxes to the new layout. If the volume contains plcxcs with different layouts. you must specify a target plcx. VxVM changes the layout of the target plcx and docs not change the other plcxcs in the volume, File systems mounted on the volumes do not need to be unmounted to perform online rclayout, as long as online rcsizing operations can be performed on the lile system. If the system i~lils during a transformation, data is not corrupted. The transtormauon continues alter the system is restored and read/write access is maintained. I Data is returned from the temporary -----'....:.J...---" area to a new tayout area. By default: • tf votume size is tess than 50 MB, the temp area = votume size. • If the votume size if greater than 50MB, temp area is 10% of the votume size with a minimum value of 50MB and a maximum value of 1GB. Note: Additional temporary or permanent space may be required for certain relayout changes, for example if the number of columns of a striped volume is modilied. How Does Online Relayout Work? 5-4 VERITAS Storage Foundation 5.0 for UNIX: Mall1tenance
Temporary Storage Space
VxVM determines the size of the temporary storage area, or you can specify a size through VEA or vxassist. Default sizes are as follows:
• If volume size is less than 50 MB, the size of the temporary area is equal to the size of the volume.
• If the volume size is larger than 50 MB, the temporary area is 10 percent of the volume size, with a minimum value of 50 MB and a maximum value of 1 GB.
Specifying a larger temporary space size speeds up the layout change process, because larger pieces of data are copied at one time. If the specified temporary space size is too small, VxVM uses a larger size.
Note: There may be other temporary space requirements depending on the change, for example, while increasing the column length of a striped volume.
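As a sketch of specifying the temporary area from the command line, a tmpsize attribute can be passed to a relayout operation. The attribute name and values here are assumptions based on the vxassist relayout options; verify them against your vxassist(1m) manual page:

# Request a 100 MB scratch pad for the relayout (tmpsize assumed; names are examples)
vxassist -g datadg relayout datavol layout=stripe ncol=3 tmpsize=100m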
Online Relayout Notes
• You can reverse online relayout at any time.
• Some layout transformations can cause a slight increase or decrease in the volume length due to subdisk alignment policies. If volume length increases during relayout, VxVM resizes the file system using vxresize.
• Relayout does not change log plexes.
• You cannot:
  - Create a snapshot during relayout.
  - Change the number of mirrors during relayout.
  - Perform multiple relayouts at the same time.
  - Perform relayout on a volume with a sparse plex.

Notes on Online Relayout
• Reversing online relayout: You can reverse the online relayout process at any time, but the data may not be returned to the exact previous storage location. Stop any existing transformation in the volume before performing a reversal.
• Volume length: Some layout transformations can cause a slight increase or decrease in the volume length due to subdisk alignment policies. If the volume length changes during online relayout, VxVM uses vxresize to shrink or grow a file system mounted on the volume.
• Log plexes: When you change the layout of a volume, the log plexes are not changed. Before you change the layout of a mirrored volume with a log, the log plexes should be removed and then re-created after the relayout.
• Volume snapshots: You cannot create a snapshot of a volume when there is an online relayout operation running on the volume.
• Number of mirrors: During a transformation, you cannot change the number of mirrors in a volume.
• Multiple relayouts: A volume cannot undergo multiple relayouts at the same time.
• Sparse plexes: Online relayout cannot be used to change the layout of a volume with a sparse plex.
• Subdisks: After a relayout, subdisks may need to be moved and joined.
    • syrnantcc. «!ii, Changing the Layout: VEA volume natl~:Id",t,3.oIOI layoul: r. Ccoceceneted r Striped r RAI~5 r 5tripedMirrorl!'d I r Ret"", .'oIumesizelit completion l Temp space SIZI!: I ~ I D1sks:1 Temp<I>k(s):Ii------------ I rercet plex:,;--------------er=-o"-'-.,-,-j Set relayout options, Browse." Btowse .. Changing the Volume Layout: VEA Select: The volume 10 be changed to a di tferent layout Navigation path: Actions->Change Layout Layout: Select the new volume layout and specify layout details as necessary. Options: To retain the original volume size when the volume layout changes. mark the "Retain volume size at completion' check box, To specify the size of the pieces of data that arc copied to temporary space during the volume relayout. type a size in the "Temp space size" field. To specify additional disk space to be used for the new volume layout (if needed). specify a disk in the Diskis) field or browse to select a disk. To specify the temporary disk space to be used during the volume layout change. specify a disk ill the "Temp diskts)" field or browse to select a disk, If the volume contains plcxcs with different layouts. specify the plex to be changed to the new layout in the "Target plex" field, Input: Lesson 5 Volume Maintenance 5-7
    • svmamec. Changing the Layout: VEA Relayout Status Monitor Window st.tus Status Volume nerne: d4t.volOl Information Ir.itloll.yout: CONCAT Desaed levout: STRIPED-MIRROR, columns=2, stwidth=128 stetus: Executin9 (Rel.youtl % Complete: .•... - .. __ ._ - . 5% C:p;;;Jiej! Abort I . Relayout controls When you launch a rclayout operation. the Rclayout SWills Monitor window is displayed. This window provides intormatiou and options regarding the progress or the rclayout operation. Volume Name: The name of the volume that is undergoing rclayout I nitial Layout: The original layout or the volume Desired Layout: The new layout (or the volume Status: The status or the rclayout task 0;', Complete: The progress or the rclayout task The Rclayout Status Monitor window also contains options that enable you to control the rclayout process: Pause: To temporarily stop the rclayout operation, click Pause. Abort: To cancel the rclayout operation, click Abort. Continue: To resume a paused or aborted operation. click Continue. Reverse: To undo the layout changes and return (he volume to its original layout. click Reverse. 5-8 VERITAS Storage Foundation 5.0 for UNIX. Maintenance Copyflqn!" 20()6 Svrnaruec Corporauon 4.11nqtus fflservtlU
Changing the Layout: CLI
vxassist relayout
• Is used for nonlayered relayout operations
• Is used for changing layout characteristics, such as stripe width and number of columns
vxassist convert
• Changes nonlayered volumes to layered volumes, and changes layered volumes to nonlayered volumes
• Does not require data movement
Note: vxassist relayout cannot create a nonlayered mirrored volume in a single step. The command always creates a layered mirrored volume even if you specify a nonlayered mirrored layout. Use vxassist convert to convert the resulting layered volume into a nonlayered volume.

Changing the Volume Layout: CLI
From the command line, online relayout is initiated using the vxassist command.
• The vxassist relayout command is used for all nonlayered transformations, including changing the layout of a plex, stripe size, and/or number of columns.
• The vxassist convert command is used to change the resilience level of a volume; that is, to convert a volume from nonlayered to layered, or from layered to nonlayered. Use this option only when layered volumes are involved in the transformation.
The vxassist relayout operation involves the copying of data at the disk level in order to change the structure of the volume. The vxassist convert operation does not copy data; it only changes the way the data is referred to.
Note: vxassist relayout cannot create a nonlayered mirrored volume in a single step. The command always creates a layered mirrored volume even if you specify a nonlayered mirrored layout, such as mirror-stripe or mirror-concat. Use the vxassist convert command to convert the resulting layered mirrored volume into a nonlayered mirrored volume.
vxassist relayout
vxassist -g diskgroup relayout volume|plex layout=layout ncol=[+|-]ncol stripeunit=size
To change to a striped layout:
vxassist -g datadg relayout datavol layout=stripe ncol=2
To add a column to striped volume datavol:
vxassist -g datadg relayout datavol ncol=+1
To remove a column from datavol:
vxassist -g datadg relayout datavol ncol=-1
To change stripe unit size and number of columns:
vxassist -g datadg relayout datavol stripeunit=32k ncol=5
To change mirrored layouts to RAID-5, specify the plex to be converted (instead of the volume):
vxassist -g datadg relayout datavol01-01 layout=raid5 stripeunit=32k ncol=3

The vxassist relayout Command
When changing to a striped layout, you should always specify the number of columns, or the operation may fail with the following error:
vxvm:vxassist: ERROR: Cannot allocate space for 51200 block volume
vxvm:vxassist: ERROR: Relayout operation aborted.
Any layout can be changed to RAID-5 if sufficient disk space and disks exist in the disk group. If the ncol and stripeunit options are not specified, the default characteristics are used. When using vxassist to change the layout of a volume to RAID-5, VxVM may place the RAID-5 log on the same disk as a column, for example, when there is no other free space available. To place the log on a different disk, you can remove the log and then add the log to the location of your choice.
If you convert a mirrored volume to RAID-5, you must specify which plex is to be converted. All other plexes are removed when the conversion has finished, releasing their space for other purposes. If you convert a mirrored volume to a layout other than RAID-5, the unconverted plexes are not removed.
vxassist convert
Use vxassist convert to convert:
• mirror-stripe to stripe-mirror
• stripe-mirror to mirror-stripe
• mirror-concat to concat-mirror
• concat-mirror to mirror-concat
To convert the striped mirrored volume datavol to a layered stripe-mirror layout:
vxassist -g datadg convert datavol layout=stripe-mirror

The vxassist convert Command
To change the resilience level of a volume, that is, to convert a nonlayered volume to a layered volume, or a layered volume to a nonlayered volume, you use the vxassist convert option. Available conversion operations include:
• mirror-stripe to stripe-mirror
• stripe-mirror to mirror-stripe
• mirror-concat to concat-mirror
• concat-mirror to mirror-concat
The syntax for vxassist convert is:
vxassist -g diskgroup convert volume_name layout=layout
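Putting relayout and convert together addresses the note on the previous page: because relayout always produces a layered mirrored volume, reaching a nonlayered mirror-stripe is a two-step sketch (the volume and disk group names are examples):

# Step 1: relayout to a striped, mirrored layout (the result is a layered stripe-mirror)
vxassist -g datadg relayout datavol layout=stripe-mirror ncol=2
# Step 2: convert the layered result to the nonlayered mirror-stripe layout
vxassist -g datadg convert datavol layout=mirror-stripe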
Managing Volume Tasks: VEA
• Relayout Status Monitor Window
  - Displays automatically when you start relayout
  - Enables you to view progress, pause, abort, continue, or reverse the relayout task
  - Is also accessible from the Volume Properties window
• Task History Window
  - Displays information about the current-session tasks
  - Can be accessed by clicking the Tasks tab at the bottom of the main window
  - Enables you to right-click a task to abort, pause, resume, or throttle a task in progress
• Task Log
  - Contains a list of tasks performed in the current session
  - Can be accessed by clicking the Logs tab to the left of the main window
• Command Log File
  - Contains history of current- and previous-session tasks
  - Is located in /var/adm/vx/veacmdlog

Managing Volume Tasks
Managing Volume Tasks: VEA
Relayout Status Monitor Window
Through the Relayout Status Monitor window, you can view the progress of the relayout task and also pause, abort, continue, or reverse the relayout task. You can also access the Relayout Status Monitor through the Volume Properties window.
Task History Window
The Task History window displays a list of tasks performed in the current session and includes the name of the operation performed, target object, host machine, start time, status, and progress. To display the Task History window, click the Tasks tab at the bottom of the main window. When you right-click a task in the list and select Properties, the Task Properties window is displayed. In this window, you can view the underlying commands executed to perform the task.
Command Log File
The command log file, located in /var/adm/vx/veacmdlog, contains a history of VEA tasks performed in the current session and in previous sessions. The file contains task descriptions and properties, such as date, command, output, and exit code. All sessions since the initial VEA session are recorded. The log file is not self-limiting and should therefore be initialized periodically to prevent excessive use of disk space.
Managing Volume Tasks: CLI
What is a task?
• A task is an operation, such as online relayout, that is in progress on the system.
• Task ID is a unique number assigned to a single task.
• Task tag is a string assigned to a task or tasks by the administrator to simplify task management. For most utilities, you specify a task tag using: -t task_tag
Use the vxtask command to:
• Display task information.
• Pause, continue, and abort tasks.
• Modify the progress rate of a task.

Managing Volume Tasks: CLI
To monitor and control volume maintenance operations from the command line, you use the vxtask and vxrelayout commands.
A task tag is a string assigned to a task or tasks by the administrator to simplify task management. For example:
vxassist -g datadg -t CreateMirrorVolumeTask make datavol 10g layout=mirror
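The tag can then be used in place of a numeric task ID to control everything launched under it, as in this short sketch (tag from the example above):

# Control the tagged task without looking up its numeric task ID
vxtask list CreateMirrorVolumeTask     # show only tasks carrying this tag
vxtask pause CreateMirrorVolumeTask    # temporarily stop the operation
vxtask resume CreateMirrorVolumeTask   # continue it later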
vxtask list
To display information about tasks:
vxtask [-ahlpr] list [task_id|task_tag]

Displaying Task Information with vxtask
To display information about tasks, such as relayout or resynchronization processes, you use the vxtask list command. Without any options, vxtask list prints a one-line summary for each task running on the system. Information in the output includes:
• TASKID: The task identifier assigned to the task by VxVM
• PTID: The ID of the parent task, if any. If the task must be completed before a higher-level task is completed, the higher-level task is called the parent task.
• TYPE/STATE: The task type and state. The type is a description of the work being performed, such as RELAYOUT. The state is a single letter representing one of three states:
  - R: Running
  - P: Paused
  - A: Aborting
• PCT: The percentage of the operation that has been completed to this point
• PROGRESS: The starting, ending, and current offsets for the operation, separated by slashes; a description of the task; and the names of the affected VxVM objects
vxtask list Options
To display task information in long format:
vxtask -l list
To display a hierarchical listing of parent/child tasks:
vxtask -h list
To limit output to paused tasks:
vxtask -p list
To limit output to running tasks:
vxtask -r list
To limit output to aborted tasks:
vxtask -a list
To limit output to tasks with a specific task ID or task tag:
vxtask list convertop1

Options for vxtask list
Several options for vxtask list are illustrated in the slide.
vxtask monitor
To provide a continuously updated list of tasks running on the system, use vxtask monitor:
vxtask [-c count] [-ln] [-t time] [-w interval] monitor [task_id|task_tag]
• -l: Displays task information in long format
• -n: Displays information for tasks that are newly registered while the program is running
• -c count: Prints count sets of task information and then exits
• -t time: Exits the program after time seconds
• -w interval: Prints "waiting..." after interval seconds with no activity
When a task is completed, the STATE is displayed as EXITED.

Monitoring a Task with vxtask
To provide a continuously updated listing of tasks running on the system, you use the vxtask monitor command. (The vxtask list output represents a point in time and is not continuously updated.) With vxtask monitor, you can track the progress of a task on an ongoing basis. By default, vxtask monitor prints a one-line summary for each task running on the system.
vxtask monitor
TASKID  PTID  TYPE/STATE  PCT     PROGRESS
   198        RELAYOUT/R  58.48%  0/20480/11976 RELAYOUT datavol
The output is the same as for vxtask list, but changes as information about the task changes. When a task is completed, the STATE is displayed as EXITED.
vxtask
To abort, pause, or resume a task:
vxtask abort|pause|resume task_id|task_tag
To pause the task with the task ID 198:
vxtask pause 198
To resume the task with the task ID 198:
vxtask resume 198
To abort the task with the task tag convertop1:
vxtask abort convertop1

Controlling Tasks with vxtask
You can abort, pause, or resume a task by using the vxtask command. You specify the task ID or task tag to identify the task.
Using pause, abort, and resume
For example, you can pause a task when the system is under heavy contention between the sequential I/O of the synchronization process and the applications trying to access the volume. The pause option allows an indefinite amount of time for an application to complete before using the resume option to continue the process.
The abort option is often used when reversing a process. For example, if you start a process and then decide that you do not want to continue, you reverse the process. When the process returns to 0 percent, you use abort to stop the task.
Note: After you abort or pause a relayout task, you must at some point either resume or reverse it.
vxrelayout
The vxrelayout command can also be used to display the status of, reverse, or start a relayout operation:
vxrelayout -g diskgroup status|reverse|start volume_name
Note: You cannot stop a relayout with vxrelayout. Only the vxtask command can stop a relayout operation.
vxrelayout -g datadg status datavol
STRIPED, columns=5, stwidth=128 --> STRIPED, columns=6, stwidth=128
Relayout running, 58.48% completed.
The output shows the source layout, the destination layout, and the percentage of the task completed.

Controlling Relayout Tasks with vxrelayout
The vxrelayout command can also be used to display the status of relayout operations and to control relayout tasks.
• The status option displays the status of an ongoing or discontinued layout conversion.
• The reverse option reverses a discontinued layout conversion. Before you use this option, the relayout operation must be stopped using vxtask abort.
• The start option continues a discontinued layout conversion. Before you use this option, the relayout operation must have been stopped using vxtask abort.
For example, to display information about the relayout operation being performed on the datavol volume, which exists in the datadg disk group:
vxrelayout -g datadg status datavol
STRIPED, columns=5, stwidth=128 --> STRIPED, columns=6, stwidth=128
Relayout running, 58.48% completed.
The output displays the characteristics of both the source and destination layouts (including the layout type, number of columns, and stripe width), the status of the operation, and the percentage completed. In the example, the output indicates that an increase from five to six columns for a striped volume is more than halfway completed.
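Tying this to vxtask, undoing an unwanted relayout is a two-step sketch using the task ID and volume from the examples above:

# Stop the running relayout, then roll the volume back to its original layout
vxtask abort 198
vxrelayout -g datadg reverse datavol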
Controlling Task Progress
To control the I/O rate for mirror copy operations from the command line, use vxrelayout options:
• -o slow=iodelay
  - Use this option to reduce the system performance impact of copy operations by setting a number of milliseconds to delay copy operations.
  - The process runs faster without this option.
• -o iosize=size
  - Use this option to perform copy operations in regions with the length specified by size.
  - Specifying a larger number typically causes the operation to complete sooner, but with greater impact on other processes using the volume.

Controlling the Task Progress Rate
VxVM provides additional options that you can use with the vxrelayout command to pass usage-type-specific options to an operation. These options can be used to control the I/O rate for mirror copy operations by speeding up or slowing down resynchronization times.
• The iodelay option reduces the system performance impact of copy operations. Copy operations are usually a set of short copy operations on small regions of the volume (normally from 10K to 118K). This option inserts a delay between the recovery of each such region. A specific delay can be specified with iodelay as a number of milliseconds. The process runs faster when you do not set this option. The default value of the delay is 250 milliseconds.
• The iosize option performs copy operations in regions with the length specified by size, which is a standard VxVM length number. Specifying a larger number typically causes the operation to complete sooner, but with greater impact on other processes using the volume. The default I/O size is 1 MB.
Caution: Be careful when using these options to speed up operations, because other system processes may slow down. It is always acceptable to increase the slow option to enable more system resources to be used for other operations.
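For example, a discontinued relayout could be restarted with throttling as in the following sketch; the exact way the -o options are combined may vary by release, so check the vxrelayout(1m) manual page:

# Restart a relayout with a 500 ms delay between copy regions (values illustrative)
vxrelayout -g datadg -o slow=500 start datavol
# Or trade more impact for speed by enlarging the copy region to 2 MB
vxrelayout -g datadg -o iosize=2m start datavol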
Controlling Task Progress: VEA

Right-click a task in the Task History window, and select Throttle Task. Set the throttling value in the Throttle Task dialog box.

You can also set the slow attribute in the vxtask command by using the syntax:

vxtask [-i task_id] set slow=value

Throttling a Task with VEA

You can reduce the priority of any task that is time-consuming. Right-click the task in the Task History window, and select Throttle Task. In the Throttle Task dialog box, use the slider to set a throttling value. The larger the throttling value, the more slowly the task is performed.
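A brief hedged example of throttling a running task from the command line; the task ID is hypothetical and comes from vxtask list, and the slow value is illustrative:

# Identify the task to throttle.
vxtask list
# Slow down task 163 by raising its slow attribute.
vxtask -i 163 set slow=300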
What Is Storage Expert?

VERITAS Storage Expert (VxSE) is a command-line utility that provides volume configuration analysis. Storage Expert:
• Analyzes configurations based on a set of "rules" or VxVM "best practices"
• Produces a report of results in ASCII format
• Provides recommendations, but does not launch any administrative operations

Analyzing Volume Configurations with Storage Expert

As your environment grows, your volume configurations become increasingly complex. You should monitor your configurations to ensure appropriate fault tolerance, layout, recovery time, and utilization of storage. Checking each volume manually to verify that you have appropriate storage layouts can be a time-consuming task. The VERITAS Storage Expert (VxSE) utility is designed to help you locate poor volume configurations, monitor your volumes, and provide advice on how to improve volume configurations.

Storage Expert is a command-line utility that is included as part of VxVM. Storage Expert provides volume configuration analysis based on a set of configuration rules that compare your volumes and disk groups to VxVM "best practice" management policies. Storage Expert reports the status of your volumes compared to the rules and makes recommendations, but does not launch any VxVM administrative operations.

Storage Expert consists of a set of scripts (called rules), an engine that runs the scripts (the rules engine), and a report generator. When you run a Storage Expert rule, the utility:
1 Gathers information about your VxVM objects and configuration
2 Analyzes the data by comparing it to predefined VxVM best practices
3 Produces a report in ASCII format containing the results (INFO, VIOLATION, and PASS entries) and recommendations for your configuration

This answers administrator questions such as: Are all of my logs mirrored? Are all of my volumes redundant? Should my mirror-stripe be a stripe-mirror?
What Are the Rules?

Storage Expert contains 23 rules. Rules provide answers to questions about:
• Resilience
  - Do my mirrored volumes have DRLs? (vxse_drl1)
  - Is my RAID-5 log appropriately sized? (vxse_raid5log2)
• Disk groups and associated objects
  - Are all of my disk groups of the current version? (vxse_dg4)
  - Are all of my volumes redundant? (vxse_redundancy)
  - Is my disk group configuration database too full? (vxse_dg1)
• Striping
  - Are my stripes an appropriate size? (vxse_stripes1)
  - Do my striped volumes have too few or too many columns? (vxse_stripes2)
• Spare disks
  - Do I have enough spare disks? (vxse_spares)
  - Do I have too many spare disks? (vxse_spares)

What Are the Storage Expert Rules?

Storage Expert currently contains 23 rules. Each rule performs a different check on your storage configuration. A complete list of Storage Expert rules, their customizable attributes, and default values is included at the end of this section.
Running Storage Expert Rules

• VxVM and VEA must be installed.
• Rules are located in /opt/VRTS/vxse/vxvm. Add this path to your PATH variable.
• Syntax:
  rule_name [options] {info|list|check|run}
  In the syntax:
  - info: Displays the rule description
  - list: Displays the attributes of the rule
  - check: Displays default values
  - run: Runs the rule
• In the output:
  - INFO: Information is displayed.
  - PASS: The object met the rule conditions.
  - VIOLATION: The object did not meet the conditions.

Running a Storage Expert Rule

Storage Expert rules are located in the /opt/VRTS/vxse/vxvm directory. Add this path to your PATH environment variable before running a rule.

Notes: By default, output is displayed on the screen, but you can redirect the output to a file using standard UNIX redirection. You can also set Storage Expert to run as a cron job to notify administrators and automatically archive reports.
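The following hedged sketch shows one way to set up the path, archive a report with redirection, and schedule a nightly run; the rule choice, file names, and schedule are illustrative:

# Make the rules available on the command line.
PATH=$PATH:/opt/VRTS/vxse/vxvm
export PATH
# Run a rule and archive its ASCII report with standard redirection.
vxse_redundancy -g datadg run > /var/tmp/vxse_redundancy.`date +%Y%m%d` 2>&1
# Example crontab entry: run the rule every night at 2:00 a.m.
# 0 2 * * * /opt/VRTS/vxse/vxvm/vxse_redundancy -g datadg run >> /var/log/vxse.log 2>&1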
Running Storage Expert: Examples

• To display a description of the vxse_raid5log1 rule:
  vxse_raid5log1 info
  VxVM vxse:vxse_raid5log1: INFO: vxse_raid5log1 - DESCRIPTION
  This rule checks for RAID-5 volumes which do not have an associated log

• To run vxse_raid5log1 on the datadg disk group:
  vxse_raid5log1 -g datadg run
  VxVM vxse:vxse_raid5log1: INFO: vxse_raid5log1 - RESULTS
  vxse_raid5log1 VIOLATION: Disk group (datadg) RAID-5 volume (raid5vol) does not have a log

Examples of Rules

Displaying Tunable Attributes of a Rule: Example

Some rules compare VxVM object characteristics against a set of defined attribute values. For example, the vxse_spares rule checks that the number of spare disks in a disk group is within the VxVM best practices threshold. To determine what that threshold is, you can display information about the attributes of the rule by using the list keyword. For example:

vxse_spares list

Displaying Default Attribute Values of a Rule: Example

To display the default value of the attributes for the vxse_spares rule, use the check keyword:

vxse_spares check

The output indicates that when you run the vxse_spares rule, you receive a warning if the number of spare disks in the disk group is less than 10 percent or greater than 20 percent.
Customizing Rule Defaults

You can run a rule against different attribute values by:
• Specifying an attribute value in the run command:
  vxse_drl1 run mirror_threshold=4g
• Running Storage Expert against a user-created defaults file:
  vxse_drl1 -d /etc/vxse.myfile run

Modifying the Storage Expert defaults file:
1. Open the /etc/default/vxse defaults file.
2. Delete the comment symbol (#) from the line that contains the attribute you want to modify.
3. Type a new default value and save the file.

Customizing Rule Default Values

You can customize the default attribute values used by Storage Expert rules to meet the needs of your environment by using one of several methods.

To run a rule with an attribute value other than the default, you can specify the rule attribute and its new value on the command line when you run the rule. For example, in the vxse_drl1 rule, the mirror_threshold attribute is 1 GB by default. This rule issues a warning if a mirror is larger than 1 GB and does not have an attached dirty region log. To run the vxse_drl1 rule with a different mirror threshold value of 4 GB:

vxse_drl1 run mirror_threshold=4g

To run Storage Expert rules against a user-created defaults file, you create a new defaults file with customized attribute values, and then specify the file on the command line using the -d option. For example, to run the vxse_drl1 rule against the user-created /etc/vxse.myfile defaults file:

vxse_drl1 -d /etc/vxse.myfile run

To change the default value of an attribute in the Storage Expert defaults file:
a Open the /etc/default/vxse file.
b Delete the comment symbol (#) from the beginning of the line that contains the attribute that you want to modify. (You can also specify values that are to be ignored by inserting a # character at the start of a line.)
c Type a new value for the attribute and save the file.

When you run the rule again, the new value is used for that rule by default.
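As an illustration of the user-created defaults file approach, a minimal hedged sketch of what /etc/vxse.myfile might contain; the attribute name matches the vxse_drl1 attribute described above, but the file itself and its value are examples only:

# /etc/vxse.myfile -- user-created Storage Expert defaults (illustrative)
# Warn only when a mirror larger than 4 GB has no dirty region log.
mirror_threshold=4g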
Lesson Summary

• Key Points
  This lesson described how to perform online administration tasks, such as changing the layout of a volume, and how to analyze volume configurations with the Storage Expert utility.
• Reference Materials
  - VERITAS Volume Manager Administrator's Guide
  - VERITAS Storage Foundation Release Notes

Lab 5: Volume Maintenance

In this lab, you practice volume maintenance activities, such as changing volume layouts and using the Storage Expert utility. Optional exercises provide additional practice on managing VxVM tasks.

For Lab Exercises, see Appendix A. For Lab Solutions, see Appendix B.

Labs and solutions for this lesson are located in the appendixes:
Appendix A provides complete lab instructions ("Lab 5: Volume Maintenance").
Appendix B provides complete lab instructions and solutions ("Lab 5 Solutions: Volume Maintenance").
Lesson 6: Performance Monitoring
• Lesson 1: Maintaining Data Consistency
• Lesson 2: Managing Devices Within the VxVM Architecture
• Lesson 3: Encapsulation and Rootability
• Lesson 4: Troubleshooting the Boot Process
• Lesson 5: Volume Maintenance
• Lesson 6: Performance Monitoring (this lesson)
• Lesson 7: Point-in-Time Copies
• Lesson 8: Other Enterprise Features Overview

Lesson Topics and Objectives

Topic 1: Storage Performance Analysis Process. After completing this lesson, you will be able to identify the main steps in a storage performance analysis process.
Topic 2: VxVM Performance Monitoring Tools and Techniques. Monitor disk and application I/O by using the vxstat and vxtrace utilities.
Storage Performance Analysis Process

1 Understand your application workload and performance objectives.
2 Identify the complete data I/O path from application to disk.
3 Determine theoretical performance characteristics for each component.
4 Measure performance of each component using available tools.

These steps are detailed on the following slides.

Performance Analysis Process

Storage performance analysis involves these main steps:
1 Understand your application workload and your performance objectives for each application workload.
2 Identify all components of the data transfer model of your storage architecture, that is, the complete I/O path of your data from application to disk.
3 Determine the theoretical performance characteristics of each of the hardware components in your architecture.
4 Use performance monitoring and workload generation tools to measure performance for each of the components in your configuration.
Step 1: Understanding Your Application Workload

Workload type / I/O size / Random or sequential I/O / Read-write mix:
• Email server: small I/O; random; mixed reads and writes
• Online Transaction Processing (OLTP): small I/O (typically a single database block for reads; a small set of database blocks for writes); random; mixed reads and writes
• Decision Support System (DSS): large I/O; sequential; reads
• Backup and Restore: small I/O; sequential; reads for backup operations, writes for restore operations
• Image/scientific processing: large I/O; sequential; mixed
• Web server: small I/O (although some graphics may be large); random; reads

Your best guidance in performance tuning comes from understanding how your applications work. Which applications are you running, and what are the characteristics of the application workloads? Example application types and workload characteristics are listed above.

Before you begin to analyze storage performance, you should know the answers to the following questions about your applications:
• What is the I/O size?
• Is the I/O operation-intensive or data content-intensive?
• Is the I/O pattern random or sequential?
• What is the mix of reads vs. writes?
• Does the application work to file system interfaces, raw partitions, or both?
• Are shared data access implementations involved?
• What are the performance objectives for each application workload?
Step 2: Understanding the I/O Data Transfer Path

(Slide diagram: four data transfer models, direct-attached disk, direct-attached array, shared access array, and storage area network, each tracing the path from application through data bus, array controller, and disks.)

Step 3: Determine the theoretical performance characteristics of each component in your configuration.

Step 2: Understanding Your I/O Data Transfer Path

In addition to understanding your applications, you must also have a comprehensive understanding of your storage architecture and the complete I/O path from the application to the underlying disks. Example data transfer models are shown in the slide, including direct-attached disk, direct-attached array, shared access array, and SAN models. Each hardware or software layer in the data flow presents a possibility for performance analysis and tuning. You determine overall performance by examining all components within the path of an I/O.

Step 3: Identifying Theoretical Performance Limits

For each of the physical components in the data flow, you can identify the theoretical performance characteristics. These performance characteristics are usually included in the hardware product data sheets. These theoretical limits can be used as a baseline for further performance analysis.
Step 4: Using Performance Analysis Tools

• Use performance analysis tools to generate sample loads and identify performance limits for each component in your configuration.
• Two types of tools are available:
  - Workload generators: Used to simulate a typical workload of an application
    Examples: vxbench, PostMark, dd, tar, custom C program, bpbkar (NetBackup)
  - Performance statistics tools: Used to monitor performance and measure how fast I/O moves through a configuration
    Examples: vxstat, vxtrace, vmstat, iostat, sar, sag, switch port stats, array tools

The ideal performance analysis involves running the actual application that uses the hardware to generate an I/O load that you can analyze. However, due to the complexity of I/O patterns, analyzing performance by running the actual application can be difficult. By using tools to generate I/O loads, you can investigate and combine particular aspects of the I/O in a controlled manner. Examples of load generators include:
• dd: Although not ideal for a complete analysis, dd provides a quick way to generate sequential I/O operations using specific I/O sizes.
• vxbench: vxbench is a free utility created by VERITAS that you can use in performance analysis to generate a wide variety of workloads (random, sequential, mixed, multithreaded, and so on). This utility has many options that enable you to control the generated I/O. vxbench is available from: ftp://ftp.veritas.com/pub/support/vxbench.tar.Z
• C program: You can create your own C program to generate I/O. Other benchmarking tools, such as PostMark, are available as shareware.

VxVM utilities (covered later in this lesson) that you can use to analyze performance include:
• vxstat: You can display statistics for volume, plex, subdisk, and disk activity for specific intervals of time by using the vxstat utility.
• vxtrace: You can display information about individual I/O operations performed on a volume, disk group, or other named object or device by using the vxtrace utility.
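For a quick sequential-read load, a hedged dd example against a hypothetical VxVM raw volume device; the device path, block size, and count are illustrative (and note that using a volume as the dd output would destroy its data):

# Generate 64K sequential reads from the raw device of volume datavol.
dd if=/dev/vx/rdsk/datadg/datavol of=/dev/null bs=64k count=10000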
The vxstat Utility

Analyze performance for the disk used by the test01 volume:

vxstat -i 1 -d test01

                       OPERATIONS        BLOCKS      AVG TIME(ms)
TYP NAME             READ   WRITE    READ   WRITE    READ  WRITE
(After the first invocation of the I/O program:)
Tue 30 Jul 2004 09:35:25 PM EST
dm  datadg01           64       0    4096       0    15.8    0.0
Tue 30 Jul 2004 09:35:26 PM EST
dm  datadg01           62       0    3968       0    16.1    0.0
(After four invocations:)
Tue 30 Jul 2004 09:35:38 PM EST
dm  datadg01           78       0    4992       0    51.2    0.0
Tue 30 Jul 2004 09:35:39 PM EST
dm  datadg01           78       0    4992       0    51.5    0.0
(After six invocations:)
Tue 30 Jul 2004 09:35:46 PM EST
dm  datadg01           78       0    4992       0    76.5    0.0
Tue 30 Jul 2004 09:35:47 PM EST
dm  datadg01           76       0    4864       0    78.9    0.0

VxVM Performance Monitoring Tools and Techniques

The vxstat Utility

To display statistical information on volumes, plexes, subdisks, or disks, you can use the vxstat command. The vxstat utility analyzes completed Volume Manager-initiated I/Os per sample time increment. The first output from vxstat displays information since the last reboot or the last reset operation.

When performing hardware benchmarking, you should use vxstat to display statistics for disk devices. Displaying the statistics for plexes and volumes is not valuable because the number of I/O operations for plexes and volumes is constrained by the underlying disks. You can use the output of vxstat in calculating drive or controller throughput, comparing drive performance, and analyzing the balance of the I/O load across the drives.

The syntax for the vxstat command is:

vxstat [-dpsv] [-f fields] [-g diskgroup] [-i interval [-c count]] [-r] [object ...]

• Use -dpsv in any combination to display statistics for disks (d), plexes (p), subdisks (s), or volumes (v) associated with the object.
• Use -i interval to display statistics after every interval seconds. Add the -c count option to stop printing interval statistics after count times.
• Use the -r option to reset statistics instead of printing statistics.
• Use the -f fields option to select specific statistics to display.
• object can be the name of a volume, plex, subdisk, or disk.
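A small hedged sketch of a typical measurement cycle built from the options above; the disk group, interval, and count are illustrative:

# Clear accumulated statistics so the next sample starts fresh.
vxstat -g datadg -r
# Print per-disk statistics every second, ten times.
vxstat -g datadg -d -i 1 -c 10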
vxstat: Measuring Drive Performance

Before analyzing performance, you should be familiar with the volume configuration. The vxprint command displays configuration information for a volume. When analyzing the volume configuration, consider the volume layout, layout characteristics, and size of the volume.

By generating an I/O load and running the vxstat utility, you can analyze drive performance. For example, assume that multiple invocations of a C program are used to perform random reads of size 32K on the test01 volume. While the load generation program is running, you can use vxstat to display statistics. The example displays statistics every second. In the vxstat output, the maximum throughput of the drive is reached when the number of I/O operations per interval stops increasing.

vxstat: Measuring Controller and Host Adapter Performance

Measuring controller and host adapter performance is similar to measuring drive performance:
• Analyze each drive on the controller by adding a load to each drive until the numbers in the vxstat output for each drive stop increasing.
• Calculate the sum of the throughput of the individual drives. The total is the approximate maximum throughput of the controller and host adapter.

Calculating the Throughput

To calculate the throughput of the drive (in bytes/second), use the formula:

Throughput = [No. of blocks x 512 bytes/block] / [No. of I/O operations x (Average I/O time (ms) / 1000)] x No. of parallel I/Os on the disk

In the example, the throughput for the datadg01 disk is:

(4992 blocks x 512 bytes/block) / (78 I/Os x 51.2 ms / 1000) x 4 parallel I/Os
= 2,560,000 bytes/sec / 1,048,576 bytes/MB
= approximately 2.4 MB/sec

Note: On HP-UX, use 1024 bytes/block.
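To avoid doing this arithmetic by hand, a hedged awk one-liner that applies the same formula; the input values are taken from the example above, and the parallel I/O count is an assumption you must supply from your own workload:

# blocks=4992, ios=78, avg_ms=51.2, parallel=4 (from the example above)
awk 'BEGIN { blocks=4992; ios=78; avg_ms=51.2; parallel=4
             bps = (blocks * 512) / (ios * avg_ms / 1000) * parallel
             printf "%.1f MB/sec\n", bps / 1048576 }'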
vxstat: Load Balancing and Volume Layout

                       OPERATIONS        BLOCKS      AVG TIME(ms)
                     READ   WRITE    READ   WRITE    READ  WRITE
Two-column stripe:
Tue 30 Jul 2004 06:05:56 PM BST
dm  datadg01            0     129       0   16512     0.0   45.5
dm  datadg02            0     153       0   19584     0.0  114.6
dm  datadg03            0       0       0       0     0.0    0.0
dm  datadg04            0       0       0       0     0.0    0.0
Total I/Os: 282    Total blocks: 36096

Three-column stripe:
Tue 30 Jul 2004 06:08:13 PM BST
dm  datadg01            0     130       0   16640     0.0   53.3
dm  datadg02            0     133       0   17024     0.0   52.1
dm  datadg03            0     145       0   18560     0.0   68.4
dm  datadg04            0       0       0       0     0.0    0.0
Total I/Os: 408    Total blocks: 52224

Four-column stripe:
Tue 30 Jul 2004 06:10:34 PM BST
dm  datadg01            0     108       0   13824     0.0   32.1
dm  datadg02            0     140       0   17920     0.0   65.0
dm  datadg03            0     135       0   17280     0.0   53.2
dm  datadg04            0     114       0   14592     0.0   38.2
Total I/Os: 497    Total blocks: 63616

vxstat: Load Balancing and Volume Layout

Load balancing is the process of assigning data to physical drives in order to evenly balance the I/O load among the disk drives. Load balancing does not guarantee optimal performance and is not a substitute for complete analysis. In the example, vxstat is used to analyze three different volume layouts:
• The volume striped across two columns
• The volume striped across three columns (One more column is added to increase the bandwidth and decrease the load on datadg01 and datadg02.)
• The volume striped across four columns (One more column is added, resulting in a four-way striped volume.)

By comparing the vxstat output for the three volume layouts, you observe:
• As more columns are added to the volume, the total number of I/O operations and blocks transferred per analysis interval increases.
• The load on an individual drive decreases when the number of columns is increased from two to three.
• As you add more columns, the rate per drive begins to decrease, and the total number of I/Os stabilizes for the volume, indicating that adding more columns will not improve performance.

You can use the -v option of the vxstat command to display the total number of I/Os sent to the volume.
(Slide output: vxstat shows that datadg01 carries a disproportionate share of the I/O and a much higher average I/O time than datadg02 and datadg03, while datadg04 is idle.)

A disk group contains two volumes: test and test2.

v  test          ENABLED  ACTIVE   204800  SELECT   test-01  fsgen
pl test-01       test     ENABLED  ACTIVE  360928   STRIPE   3/32768  RW
sd datadg02-01   test-01  datadg02 1007    98784    0/0      c1t0d0   ENA
sd datadg01-01   test-01  datadg01 0       98784    1/0      c1t2d0   ENA
sd datadg03-01   test-01  datadg03 0       98784    2/0      c1t10d0  ENA

v  test2         ENABLED  ACTIVE   204800  SELECT   -        fsgen
pl test2-01      test2    ENABLED  ACTIVE  205632   CONCAT   -        RW
sd datadg01-02   test2-01 datadg01 98784   205632   0        c1t2d0   ENA

To improve the balance of I/O, off-load some data to the unused datadg04 drive by using vxassist move:

vxassist move test !datadg01 datadg04

vxstat: Load Balancing for an Overused Disk

In this example, the volume named test is striped across three disks. When you run the vxstat command, notice that datadg01 has a disproportionate amount of I/O compared to the other disks. In the output of vxstat, notice that the average I/O time per operation is much higher on datadg01 compared to the other drives:
• datadg01 has an average I/O time of roughly 133 milliseconds.
• datadg02 has an average I/O time of 45.4 milliseconds.
• datadg03 has an average I/O time of 55.0 milliseconds.

One disk with a disproportionate amount of I/O does not necessarily indicate a performance problem. However, when you note that datadg01 is also operating near its maximum transfer rate, you should suspect that datadg01 is overused. In a local context, striping across three disks is better than striping across two disks. However, in this case, striping across three disks decreases performance, because the datadg01 disk is overloaded. Because there is an available drive, you should examine the volumes using datadg01 and determine whether you can off-load some of the data to the unused datadg04 drive.
The vxtrace Utility

The vxtrace utility enables you to:
• View multiple processes.
• Measure how many columns are needed.
• Verify if the stripe unit is appropriate.

By using the vxtrace command, you can view multiple processes, measure how many columns are needed, and verify if the stripe unit is appropriate:

vxtrace [-g diskgroup] [-aeE] [-b buffersize] [-c eventcount] [-d outputfile] [-f inputfile] [-o objtype[,objtype]...] [-t timeout] [-w waitinterval] [name|device]...

In the syntax, you use the -o option to specify the type of trace data to collect. For example, -o dev traces virtual disk I/Os, and -o disk traces physical disk I/Os. To collect the trace data that is associated with a specific VxVM object, such as a volume, you type the name of the object (name) at the end of the command.

The vxtrace utility uses a circular buffer. Always use the -d option to dump records to a file. Then, use the -f option to read the records.

1 Dump trace data on virtual and physical disk I/Os for the volume named datavol to the file named /tmp/file1.out:
  vxtrace -d /tmp/file1.out -o dev,disk datavol
2 Stop filling the file: Press Ctrl+C.
3 Read the trace records from the file named /tmp/file1.out:
  vxtrace -f /tmp/file1.out -o dev,disk datavol | more

You can use the -t timeout option to stop collecting trace data after timeout seconds, or press Ctrl+C to stop the filling of the file. (Pressing Ctrl+C can cause the last few records to be discarded.)
Selecting a Stripe Unit Size

vxassist -g diskgroup [-b] relayout volume stripeunit=new_stripe_unit_size

Random I/O. Goal: Perform a complete I/O on one column.
• If the stripe unit size is much greater than the I/O size, each I/O is more likely to be contained within one column, and most I/O operations are satisfied by a single disk I/O. This increases the number of concurrent I/O operations that can be performed.
• If the stripe unit size is too small, each I/O goes to two columns, so two disk I/O operations are required instead of one. This reduces the number of concurrent I/O operations that can be performed.

Sequential I/O. Goal: Use all drives for a single I/O.
• To use all drives for a single I/O, the full stripe width should be less than the I/O size. As a guideline: I/O size = n x full stripe width, where n is a positive integer.

Stripe Unit Size: Random I/O

For a multithreaded random I/O application, such as a multiuser file system, the goal is to perform a complete I/O on one column. To achieve this goal, the stripe unit size should be much greater than the I/O size. The reason to use striping for multithreaded random I/O is to allow multiple I/Os to complete simultaneously. A good stripe unit size depends on how often you are willing to allow the I/O to be broken up. Determine the percentage of time that you are willing to allow the I/O to be broken up and multiply the I/O size by 100 divided by the percentage. Round the result up to the nearest power of two to determine the stripe unit size; a worked example follows below.

Stripe Unit Size = I/O size x (100 / percentage), rounded up to a power of two

Stripe Unit Size: Sequential I/O

For single-threaded sequential I/O, such as database batch-type applications, the goal is to use the maximum available bandwidth for a single I/O so that the I/O completes much faster. For example, instead of sending a single I/O of size 256K to a single drive, you benefit from the additional bandwidth by sending four I/Os of size 64K to four different drives simultaneously. However, the increase in performance is not directly proportional to the number of columns added, because each of the 64K I/Os must complete for the 256K I/O to be complete. Therefore, the overall performance is determined by the slowest of the four drives.
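As the promised worked example of the random-I/O guideline, a hedged awk sketch using the same numbers applied later in this lesson (a 16K I/O size and a willingness to split 8 percent of I/Os):

# 16K x (100/8) = 200K, rounded up to the next power of two = 256K
awk 'BEGIN { io=16; pct=8; su = io * 100 / pct
             p=1; while (p < su) p *= 2
             printf "stripe unit = %dK\n", p }'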
Interpreting vxtrace Output

In the vxtrace output, you can trace I/O request 1001 from the user, to the volume, to the disk, and back again:

vxtrace -g testdg -f /tmp/file.out -o dev,disk | pg
1001 START write vdev datavol block 129152 len 64 ...
1002 START write disk device1 op 1001 block 66880 len 64
1002 END write disk device1 op 1001 block 66880 len 64
1001 END write vdev datavol op 1001 block 129152 len 64 ...

The vxtrace utility writes I/O trace event records to a file in binary format, and formats the trace data when you read the file using the -f option. For example:

vxtrace -g testdg -f /tmp/file.out -o dev,disk | pg

In the output, each I/O is assigned a request ID, represented by the number at the beginning of each line of output. You can follow the request ID through the output to trace the path of the I/O from the user to VxVM, from VxVM to the disk, and back again.
• START write vdev indicates the start of the write I/O 1001 to the virtual device datavol, starting at block 129152 for a length of 64 sectors (32K on Solaris).
• concurrency indicates the number of concurrent processes that are detected. This number increases as additional processes are detected. In the slide, concurrency is not shown due to space limitations, but the full line of output reads:
  1001 START write vdev datavol block 129152 len 64 concurrency 0
• START write disk indicates the start of the write I/O 1002 to the physical disk device1 (for example, c1t1d0s2 on a Solaris platform).
• END write disk indicates the completion of the write I/O 1002 to the physical disk.
• END write vdev indicates the completion of the write I/O 1001 from the volume.
vxtrace: Analyzing the I/O Profile

The test volume has a concatenated layout on the c1t1d0s2 drive. Start vxtrace, generate sample I/O, stop the trace, and analyze:

vxtrace -g testdg -d /tmp/output -o dev,disk test
(Press Ctrl+C after the test is complete.)
vxtrace -g testdg -f /tmp/output -o dev,disk | more

30147 START read vdev test block 115648 len 32 concurrency 0 pid 54
30148 START read disk c1t1d0s2 op 30147 block 115648 len 32
30149 START read vdev test block 86976 len 32 concurrency 0 pid 57
30150 START read disk c1t1d0s2 op 30149 block 86976 len 32
30148 END read disk c1t1d0s2 op 30147 block 115648 len 32 time 1
30147 END read vdev test op 30147 block 115648 len 32 time 1
30151 START read vdev test block 151232 len 32 concurrency 0 pid 60
30152 START read disk c1t1d0s2 op 30151 block 151232 len 32
30153 START read vdev test block 29472 len 32 concurrency 0 pid 54
30154 START read disk c1t1d0s2 op 30153 block 29472 len 32
30150 END read disk c1t1d0s2 op 30149 block 86976 len 32 time 2
30149 END read vdev test op 30149 block 86976 len 32 time 2

vxtrace: Process for Analyzing the I/O Profile

To analyze the application I/O profile by using vxtrace, you can follow this process:
1 Analyze the existing volume layout.
2 Start a trace on the volume by using vxtrace.
3 Generate test I/O that simulates application performance.
4 Stop the trace on the volume and analyze the vxtrace output to determine the application I/O profile.
5 Based on the application I/O profile, make changes to the volume layout to improve performance.
6 Rerun the test I/O and vxtrace to determine the impact of the changes.

The following examples illustrate this process.

vxtrace: Analyzing the I/O Profile

The test volume has a concatenated layout on the c1t1d0s2 drive. In the output, you can determine the number of concurrent processes running against the volume by counting the number of START vdev entries that do not have END vdev entries. In this example, there are at least three concurrent processes on the volume named test. The I/O is occurring in lengths of 32 sectors (16K on Solaris).
vxtrace: Analyzing the I/O Profile (continued)

To improve performance, change the volume layout to striped:

vxassist -g testdg relayout test layout=stripe ncol=3 stripeunit=256k

Output of vxtrace after changing the volume layout:

vxtrace -g testdg -d /tmp/output -o dev,disk
(Press Ctrl+C after the test is complete.)
vxtrace -g testdg -f /tmp/output -o dev,disk | more

16955 START read vdev test block 31648 len 32 concurrency 0 pid 2119
16956 START read disk c1t4d0s2 op 16955 block 10656 len 32
16954 END read disk c1t2d0s2 op 16953 block 153392 len 32 time 1
16953 END read vdev test op 16953 block 154976 len 32 time 1
16957 START read vdev test block 128 len 32 concurrency 0 pid 2122
16958 START read disk c1t1d0s2 op 16957 block 128 len 32
16950 END read disk c1t2d0s2 op 16949 block 121584 len 32 time 3
16949 END read vdev test op 16949 block 59680 len 32 time 3
16959 START read vdev test block 131872 len 32 concurrency 0 pid 2125
16960 START read disk c1t2d0s2 op 16959 block 145648 len 32

In this example, you can improve performance by changing the volume layout from concatenated to striped, with three columns and a stripe unit size of 256K. The stripe unit size is 256K to maintain an 8 percent break-up value:

16K x (100/8) = 200K, rounded up to the nearest power of two = 256K

Change the volume layout by using vxassist relayout, and use vxprint to verify the changes:

vxassist -g testdg relayout test layout=stripe ncol=3 stripeunit=256k
vxprint -ht test

After changing the volume layout, start the trace again, rerun the test I/O, and then stop the trace when the test is complete:

vxtrace -g testdg -d /tmp/output -o dev,disk
(Press Ctrl+C after the test is complete.)

When you display the vxtrace output, notice that there are three concurrent processes running on the volume, and each process now uses a different drive to satisfy the I/O request. The result is improved performance.

vxtrace -f /tmp/output -o dev,disk | more
Possible Layout Changes

What changes can you make to a volume's structure to improve performance?
• Change the RAID level of the volume.
• Increase or decrease the stripe unit size for a striped volume.
• Increase or decrease the number of columns.
• Move (evacuate) a volume so that it does not share its disks with other volumes.
• Reduce or make equivalent the number of volumes sharing disks.
• Identify any heavily used subdisks (hot spots) and relocate the containing subdisk to a lower sector offset on the disk.
• If I/O is mostly read, then mirror the volume.
• Combine many small subdisks into one larger subdisk.
• Defragment the file system space.

The slide lists many of the possible layout changes that you can make after analyzing your data for performance tuning.
VxVM Tunable Parameters

Caution: Perform tuning changes with care.
• Tuning changes can adversely affect overall system performance and can make Volume Manager unusable.
• In general, default values are adequate for most systems.
• Before modifying tunables, back up tunable parameter files, check memory, and read documentation.
(Platforms: Solaris, HP-UX, Linux)

VxVM has a set of tunable parameters that control the system resources used by VxVM. VxVM is optimally tuned for most configurations, but in some configurations, some adjustment to tuning parameters may be required to optimize performance. On the Solaris platform, the tuning parameters are configured in two files:
• /kernel/drv/vxio.conf
• /kernel/drv/vxdmp.conf
Different versions of VxVM can use different tunable parameters.

Displaying Current Tunable Values

Solaris

You can display the default value of an individual tunable parameter by using the command:

echo 'parameter/D' | mdb -k
echo 'parameter/E' | mdb -k   (use for a 64-bit kernel)

For example:

echo 'vol_maxio/D' | mdb -k

You can view the internal default values of all tunable parameters by using:

prtconf -vP

In the output, unchanged tunables are listed with default values under the heading Driver properties. Tunables whose value has been changed replace the default values in the vxio.conf file. For example:
prtconf -vP
vxio, instance #0
Driver properties:
    name='vol_rvio_maxpool_sz' type=int items=1 dev=none value=<0x00000c00>.
    name='vol_vvr_use_nat' type=int items=1 dev=none value=<0x00000003>.
    name='voldrl_max_seq_dirty' type=int items=1 dev=none value=<0x000007d0>.

In this example, the vol_maxio parameter has been modified. Values are displayed in hexadecimal format.

HP-UX

You can display the values of tunable parameters by selecting Kernel Configuration -> Configuration Parameters in the System Administration Manager (SAM). From the command line, you can display current tunables and their values by using the kctune command. To display all Volume Manager tunables:

kctune | grep vol

To display details about a specific tunable, use the -v option with the name of the tunable parameter to print a detailed report:

kctune -v tunable

For example:

kctune -v vol_max_vol

Modifying Tunable Parameters

Solaris

Caution: It is recommended that you do not attempt to change the default values of tunable parameters without consulting VERITAS Support.

Before modifying tunable parameters:
• Back up the /kernel/drv/vxio.conf and /kernel/drv/vxdmp.conf files.
• Ensure that memory is available before setting tunables. Expanding the numerical values of most tunables demands more system memory.
• Read the tunables documentation in the appendix of this course and read about the tunables in the VERITAS Volume Manager Administrator's Guide.

To modify a VxVM tunable, you add a line for the tunable in the /kernel/drv/vxio.conf or /kernel/drv/vxdmp.conf file and then reboot the system. Changed tunables are then in effect. For example, to change the tunable parameter vol_max_vol, add the parameter and the new value to the /kernel/drv/vxio.conf file:
1 Open the /kernel/drv/vxio.conf file in a text editor:
  vi /kernel/drv/vxio.conf
2 Add the parameter and its new value after the line name="vxio" parent="pseudo";:
  vol_max_vol=5000;
3 Save the file and quit.
4 Reboot the system:
  /usr/sbin/shutdown -g0 -y -i6

HP-UX

Caution: It is recommended that you do not attempt to change the default values of tunable parameters without consulting VERITAS Support.

Before modifying tunable parameters:
• Ensure that memory is available before setting tunables. Expanding the numerical values of most tunables demands more system memory.
• Read the tunables documentation in the appendix of this course and read about the tunables in the VERITAS Volume Manager Administrator's Guide.

Tunable parameters are stored in the /stand/system file; however, you should not manually edit this file. You can change the value of a tunable parameter by using the kctune command with the tunable=value syntax:

kctune tunable=value

For example:

kctune vol_max_vol=5000

By default, changes to the currently running kernel configuration are applied immediately. Some changes cannot be applied without a reboot; if any such changes are requested, or if the -h flag is given, all changes on the kctune command line are held until the next boot.

Linux

The sysctl command is used to view, set, and automate kernel settings in the /proc/sys/ directory. To get an overview of all settings in the /proc/sys/ directory, as root, type:

sysctl -a
net.ipv4.route.min_delay = 2
kernel.sysrq = 0
kernel.sem = 250 32000 32 128

The sysctl command can be used in place of echo to assign values to writable files in the /proc/sys/ directory. For example, instead of using echo 1 > /proc/sys/kernel/sysrq, you can use this sysctl command:

sysctl -w kernel.sysrq="1"
kernel.sysrq = 1

All /proc/sys/ settings are lost when the machine is rebooted. To make persistent changes on Linux, add them to the /etc/sysctl.conf file. Every time the system boots, the init program runs the /etc/rc.d/rc.sysinit script, which executes sysctl using /etc/sysctl.conf to dictate the values passed to the kernel. Therefore, any values added to /etc/sysctl.conf take effect each time the system boots.

To modify tunables (other than the VxVM memory tunable parameters), you either add a tunable to the /etc/sysctl.conf file or edit an existing tunable in that file, as follows:

vxvm.vxio.tunable_name = value

For example:

vxvm.vxio.vol_nm_hb_timeout = 20

Using vxvoltune

Tunable parameters may also be adjusted by using the vxvoltune command. To display the value of a VxVM tunable, use:

vxvoltune vxvm_tunable

To change the value of a tunable, specify the new value as an argument:

vxvoltune vxvm_tunable value

You must then shut down and reboot the system for the change to take effect. The new value persists across system reboots until it is changed again. For example, the following command sets the value of vol_maxkiocount to 8192:

vxvoltune vol_maxkiocount 8192

Caution: The vxvoltune utility modifies the tunable values stored in the /etc/vx/vxvm_tunables file. Use the vxvoltune command to change the values stored in this file. Do not edit the file directly.
Lesson Summary

• Key Points
  This lesson described the storage performance analysis process for performance tuning. This lesson also introduced VERITAS Volume Manager tools that you can use to analyze the performance of hardware configurations and to tune volume layouts.
• Reference Materials
  - VERITAS Volume Manager Administrator's Guide
  - VERITAS Volume Manager Hardware Notes
  - VERITAS Volume Manager Troubleshooting Guide

Lab 6: Performance Monitoring

In this lab, you analyze Volume Manager I/O operations using the vxstat and the vxtrace utilities.

For Lab Exercises, see Appendix A. For Lab Solutions, see Appendix B.

Labs and solutions for this lesson are located in the appendixes:
Appendix A provides complete lab instructions ("Lab 6: Performance Monitoring").
Appendix B provides complete lab instructions and solutions ("Lab 6 Solutions: Performance Monitoring").
Lesson 7: Point-in-Time Copies
• Lesson 1: Maintaining Data Consistency
• Lesson 2: Managing Devices Within the VxVM Architecture
• Lesson 3: Encapsulation and Rootability
• Lesson 4: Troubleshooting the Boot Process
• Lesson 5: Volume Maintenance
• Lesson 6: Performance Monitoring
• Lesson 7: Point-in-Time Copies (this lesson)
• Lesson 8: Other Enterprise Features Overview

Lesson Topics and Objectives

Topic 1: What Is a Point-in-Time Copy? After completing this lesson, you will be able to define a point-in-time copy.
Topic 2: Types of PITC Solutions in Storage Foundation. Compare the five types of point-in-time copy solutions provided by VERITAS Storage Foundation.
Topic 3: Creating and Managing Volume Snapshots. Create and manage traditional, full-sized instant, and space-optimized instant volume snapshots.
Topic 4: Using Volume Snapshots for Off-Host Processing. Use volume snapshots and deporting and importing of disk groups to perform off-host processing (OHP).
Topic 5: Creating and Managing Storage Checkpoints. Create and manage storage checkpoints.
What Is a Point-in-Time Copy?

(Slide diagram: a live application process writes to the primary data over time, while point-in-time copies taken at 7 a.m., noon, and 5 p.m. are used for backup, reporting, testing, and support.)

A point-in-time copy (PITC) enables you to capture an image of data at a selected instant for use in applications such as backups, decision support, reporting, and development testing. Point-in-time copy solutions can additionally be configured for off-host processing to remove much of the performance overhead on a production system.

Common Uses for PITCs

Examples of common offline and off-host processing tasks that use point-in-time copies include:
• Data backup: By taking a PITC of your data and backing up from this PITC, you can ensure that your business-critical applications continue to run without extended downtime and without impacting performance.
• Report generation: By creating a PITC, importing it to a secondary server, and then generating billing statements, you can ensure that the performance of the production server is unaffected.
• Decision support analysis: Because information is constantly being updated, you can perform this decision support procedure on a PITC of the data to avoid using the original data.
• Testing: PITC data provides developers, system test engineers, and QA groups with a realistic basis for testing the robustness, integrity, and performance of new applications.
• Training: A PITC of the production files is made, and the training exercises use the copy, rather than the original.
Physical vs. Logical PITCs

(Slide diagram: a physical copy duplicates every block of the primary data, while a logical copy stores only changed blocks and references the primary for the rest.)

Physical PITC:
• Complete copy of the primary data
• Fully synchronized
• Requires double the space

Logical PITC:
• Contains only primary data that has changed
• References the primary data
• Requires less space

Physical or Logical PITC

All types of PITCs achieve a consistent point-in-time image of the data. A PITC can be either a complete physical copy or a logical copy of the data.

Physical PITCs: The physical PITC is a physically distinct copy of data, usually produced by breaking off a mirror of the storage container.
• Advantage of physical copies: The PITC is an actual physical copy of the data.
• Disadvantages of physical copies:
  - The PITC requires the same amount of storage space as the original.
  - The PITC requires time for synchronization of data.

Logical PITCs: The logical PITC is composed by logically reconstructing data from multiple sources (the original data plus a record of subsequent changes). The PITC identifies and maintains modified blocks and, in addition, refers to the original data. The PITC is dependent on the primary copy of data.
• Advantages of logical copies:
  - The PITC is available for use instantaneously.
  - Potentially less storage space is required.
• Disadvantage of logical copies: The PITC is dependent on the original.
Physical PITCs: Reads and Writes

(Slide diagram: over time, reads and writes flow independently to the primary data and to its physical copy; reads and writes to the primary have no performance impact on the copy, and reads and writes to the copy have no performance impact on the primary.)

Reads and Writes on PITCs

There are certain performance implications that you should consider when performing I/O on files that have point-in-time copies attached.

Performance Issues with Physical PITCs

The primary performance impact to consider for physical PITCs is the initial synchronization. This is especially important when large amounts of data need to be copied. For example, a terabyte of data can take several days to synchronize. After this full synchronization is complete, there is very little, if any, performance impact on the original volume or the PITC, because they are separate objects.
Logical PITCs: Reads and Writes

(Slide diagram: writes to the primary trigger copy-on-write into the logical copy, unchanged blocks are read from the primary via redirected reads, and reads of changed blocks are satisfied from the copy with no impact on the primary.)

Performance Issues with Logical PITCs

The logical PITC is connected to the primary data. Therefore, the I/O of a logical PITC is subject to the rate of change of the original data. The overall impact of the PITC is dependent on the read-to-write ratio of an application and the mixing of the I/O operations.

Copy-on-Write

Logical PITC solutions use copy-on-write (COW). With copy-on-write, if a write is sent to the primary data, the software first copies the original block of data to the point-in-time copy before the write can be completed. Using copy-on-write, the primary data incurs double the performance impact on writes.

Redirected Reads

Initially, the logical PITC satisfies read requests by checking the reference pointers (which indicate that the data can be found on the primary) and returning the data from the primary to the requesting process. If a change is made on block n of the primary data, a subsequent read request for block n on the PITC is satisfied by checking the reference for block n and reading the data from the indicated block on the PITC, rather than from block n on the primary data.
Life Cycle of Point-in-Time Copies

(Slide diagram: over time, a PITC is made from the primary data (assigning resources), used for testing and backup, updated from the primary, and finally destroyed (releasing resources).)

Life Cycle of PITCs

There are four general stages in the life cycle of a point-in-time copy:
1 Make: You can create physical PITCs by copying the entire contents of the primary data and then breaking it off, or by creating an entirely new volume and filling it with the primary data. You can create logical PITCs by allocating space either in or out of the storage used for the primary data.
2 Use: You can use PITCs for many operations that require offline or off-host processing, including data backup, report generation, decision support analysis, database rollback, testing, and training.
3 Update: You can repopulate PITCs with new data from the primary, or you can repopulate the primary with the original data from the PITC.
4 Destroy: You can remove PITCs after you are finished with them. Removing PITCs frees the storage space so that it can be used for other operations.
PITCs in Storage Foundation

VERITAS Storage Foundation has the following types of PITCs:
• Traditional snapshots: volume-level, physical
• Full-sized instant snapshots*: volume-level, physical or logical
• Third-mirror break-off snapshots: volume-level, physical or logical
• Linked break-off snapshots: volume-level, physical or logical
• Space-optimized instant snapshots: volume-level, logical
• Storage checkpoints*: file system-level, logical
• File system snapshots: file system-level, logical (shown struck through on the slide; not covered in this course)

* Requires FlashSnap: FastResync, Disk Group Split/Join, Storage Checkpoints

Types of PITC Solutions in Storage Foundation

This lesson introduces the point-in-time copy solutions that you can implement by using the VERITAS FlashSnap technology. FlashSnap is included in the Enterprise version of Storage Foundation. VERITAS Storage Foundation with FlashSnap provides the following types of point-in-time copy solutions:
• Volume-level PITC solutions:
  - Full-sized instant volume snapshots
  - Third-mirror break-off volume snapshots
  - Linked break-off volume snapshots
  - Space-optimized instant volume snapshots
• File system-level solution: Storage checkpoints

Note: Linked break-off snapshot volumes are introduced with SF 5.0. They are a variant of third-mirror break-off snapshots, and they link a specially prepared empty volume to the data volume. The volume that is used for the snapshot is prepared in the same way as for full-sized instant snapshots. However, unlike full-sized instant snapshots, this volume can be set up in a different disk group from the data volume. Note that linked break-off mirrors are not covered in this course.
Traditional Volume Snapshots

(Slide diagram: the snapshot life cycle for a volume datavol and its snapshot newvol.)
1 Make PITC (assign resources): Create and synchronize a new mirror; then detach the mirror as a new snapshot volume.
2 Use PITC: Use the snapshot for testing or backup, or clear the association to create an independent volume.
3 Update PITC: Reattach and resynchronize: refresh the snapshot, or restore the original to the data in the snapshot.
4 Destroy PITC (release resources): Remove the snapshot.

The traditional type of volume snapshot that was originally provided in VERITAS Volume Manager (VxVM) is the third-mirror break-off type. This name comes from its original implementation by adding an additional plex to a mirrored volume. When you create a traditional volume snapshot, you create a temporary mirror of an existing volume. After the contents of the third mirror (or snapshot plex) are synchronized from the original plexes of the volume, the snapshot plex can be detached as a snapshot volume for use in backup or decision support applications.

Updating Traditional Volume Snapshots
• Reattach: When you reattach, you reassociate a traditional snapshot copy of a volume with the original volume. When you reattach, the snapshot plex is detached from the snapshot volume and attached to the original volume. Data is resynchronized so that the plexes are consistent.
• Resynchronization using the original volume: Resynchronization using the original volume uses the data in the original plex to resynchronize the merged volume.
• Resynchronization using the snapshot: Resynchronization using the snapshot uses the data from the snapshot volume to replace the data in the original volume.
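A hedged command sketch of this life cycle using the vxassist snapshot interface; the disk group, volume, and snapshot names are illustrative:

# Add a snapshot mirror and synchronize it (this can take a while).
vxassist -g datadg snapstart datavol
# Break off the synchronized mirror as the snapshot volume newvol.
vxassist -g datadg snapshot datavol newvol
# ...use newvol for backup or testing...
# Reattach the snapshot plex and resynchronize from the original volume.
vxassist -g datadg snapback newvol
# Or permanently break the association, keeping newvol independent:
# vxassist -g datadg snapclear newvol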
Improving Volume Snapshots with FastResync*

When FastResync is enabled, data change objects (DCOs) are associated with the volume. During reattach or restore operations, FastResync is used to quickly reassociate a snapshot plex with the original volume. Updates to the original volume are recorded in DCO logs and stored on disk. Resynchronization involves applying only changed data, rather than performing an entire atomic resynchronization.

* Requires FlashSnap

FastResync

FastResync performs quick and efficient resynchronization of volume snapshots. When you enable FastResync, change maps are stored on disk to survive reboots. Resynchronization occurs with minimal performance impact by writing only changed blocks. VERITAS Volume Manager uses three objects to manage FastResync maps:
• Data change object (DCO): Manages FastResync maps
• DCO log volume: Stores FastResync maps in logs, which are stored on disk
• Snap objects: Track the relationship between volumes and their snapshots

How Does FastResync Work with Snapshots?

FastResync speeds up the resynchronization process:
1 When you enable FastResync for a volume, a data change object (DCO) and a DCO log volume must first be associated with the volume. In the example, a mirrored volume has two plexes, an associated DCO, and a DCO log volume with two log plexes.
2 A snapshot plex is created in the original volume, and a DCO log plex is associated with the snapshot plex.
3 A new DCO object and DCO log volume are created for the snapshot volume. Snap objects are created in the original volume and the snapshot volume.
4 If you later decide to resynchronize the snapshot plex to the original volume, the FastResync maps are used.
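A hedged sketch of associating a DCO with an existing volume and confirming that FastResync is on; vxsnap prepare is the 4.x/5.0-era interface for this, and the names and mirror count are illustrative:

# Associate a DCO and DCO log volume with datavol (two DCO plexes).
vxsnap -g datadg prepare datavol ndcomirs=2
# Verify that FastResync is now enabled on the volume.
vxprint -g datadg -F%fastresync datavol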
Traditional vs. Instant Volume Snapshots

Slide:
Traditional volume snapshots:
• Require the same amount of storage space as the original volume
• Require time for synchronization of data
Instant volume snapshots:
• Can be the same size as the volume or just a fraction of the size
• Can wait for synchronization or have an immediate PITC

Instant Volume vs. Traditional Volume Snapshots
Unlike a traditional snapshot created using the vxassist command, you can make a backup of a full-sized instant snapshot, instantly refresh its contents from the original volume, or attach its plexes to the original volume, without needing to completely synchronize the snapshot plexes from the original volume. If desired, you can additionally choose to perform a synchronization of the snapshot volume. This is useful if you intend to move the snapshot volume into a separate disk group for off-host processing, or if you want to turn the snapshot volume into an independent volume.
Note: With third-mirror break-off and linked break-off snapshots, you have to wait for the synchronization to complete before you can initially create the snapshot. However, after the snapshot is created, you can use the instant feature to refresh the snapshot without waiting for synchronization.
Full-Sized Instant Volume Snapshots

Slide: The PITC life cycle for full-sized instant snapshots:
1. Make PITC (assign resources): A logical PITC is initialized that employs the copy-on-write technique. Full synchronization is not necessary.
2. Use PITC (testing, backup): You can instantly use the snapshot as is (logical PITC); fully synchronize with the original to move the snapshot off-host (physical PITC); or fully synchronize and then clear the association to make an independent volume.
3. Update PITC: You can quickly refresh the snapshot, or restore the original to the data in the snapshot.
4. Destroy PITC (release resources): Remove the snapshot.

Full-Sized Instant Volume Snapshots
VERITAS Volume Manager 4.1 introduced instant snapshots that offer advantages over traditional volume snapshots. The benefits of instant snapshots include:
• Immediate availability for use
• Quick refreshment
• Easier configuration and administration
Full-sized instant snapshots are the same length as the original volume. The primary benefit of full-sized instant snapshots is that the snapshot volume is available for access as soon as the snapshot plexes have been created. Instant snapshots use copy-on-write to ensure that the snapshot volume preserves the contents of the original volume at the time that the snapshot is taken. Full-sized instant snapshots can use FastResync to resynchronize with the primary volume.

Updating Full-Sized Instant Volume Snapshots
Refresh: When you refresh an instant snapshot, you overwrite it with another point-in-time copy of a parent volume.
Reattach: When you reattach a snapshot, the snapshot plex is detached from the snapshot volume and attached to the original volume.
Restore: When you restore a snapshot, the snapshot itself remains unchanged.
Dissociate: When you dissociate a snapshot, you permanently break the association between a snapshot and its original volume and maintain the snapshot as an independent volume.
Space-Optimized Instant Volume Snapshots

Slide: The PITC life cycle for space-optimized instant snapshots:
1. Make PITC (assign resources): A logical PITC is initialized that employs the copy-on-write technique to write original data to a storage cache before the write is committed. The storage cache can be shared among multiple volumes in a disk group.
2. Use PITC (backup): You can instantly use the snapshot as is (logical PITC).
3. Update PITC: You can quickly refresh the snapshot.
4. Destroy PITC (release resources): Remove the snapshot.

Space-Optimized Instant Volume Snapshots
Space-optimized instant volume snapshots use a storage cache rather than requiring a complete copy of the original volume's storage space. VxVM uses copy-on-write to preserve the original data contents in the cache before a write is committed. Because the storage cache can be configured to require much less storage than the original volume, it is referred to as being space-optimized. If the cache becomes too full, you can configure VxVM to grow the size of the cache automatically using any available free space in the disk group. Multiple space-optimized snapshot volumes can share a cache object.

Updating Space-Optimized Instant Snapshots
Refresh: You can immediately retake an instant snapshot at any time by using the refresh procedure. Refreshing a space-optimized instant snapshot overwrites it with another point-in-time copy of the parent volume.
Restore: You can use an instant snapshot to reinstate the contents of a volume from a snapshot volume. The snapshot itself remains unchanged by the operation.
Space-optimized instant snapshots use FastResync to resynchronize with the primary volume.
Storage Checkpoints

Slide: The PITC life cycle for storage checkpoints:
1. Make PITC: Create the storage checkpoint. A map/copy-on-write technique is used to track changed blocks.
2. Use PITC: Mount the checkpoint as read-only or read-write to access it. Use checkpoints with database files, backup applications, or file server environments to back up or restore an individual file or a file system.
3. Update PITC: Take frequent storage checkpoints to enable rollback to more up-to-date backup images. Coordinate checkpoints with database states.

Storage Checkpoints
Storage checkpoints are a feature of VxFS that you can use to quickly create a logical copy of a file system at an exact point in time. Storage checkpoints are created within the same space as the primary file system and are actual data objects. Therefore, storage checkpoints are persistent across reboots.
Storage checkpoints serve as an enabling technology for other VERITAS products: NetBackup Advanced Client, including Block-Level Incremental Backups; and Database Edition Storage Checkpoint and Storage Rollback.
Note: Storage checkpoints require the FlashSnap license.

Why Use Storage Checkpoints?
Common uses for storage checkpoints include:
• Database environments: Storage checkpoints are ideal for file systems containing database files.
• Backup and replication solutions: Various backup and replication solutions can take advantage of storage checkpoints. Storage checkpoints track changes since the last storage checkpoint, which facilitates applications that only need to retrieve the changed data.
• High availability environments: Storage checkpoints significantly minimize data movement and may promote higher availability and data integrity by increasing the frequency of backup and replication solutions.
• File server environments: Storage checkpoints can be used in a file server environment so that end users can retrieve a file that is accidentally deleted.
Storage Checkpoints vs. File System Snapshots

Slide:
Storage checkpoints:
• Are persistent
• Use free space from the primary block device
• Make incremental backups possible on the block level
• Have more administrative flexibility
• Can be mounted as read-write
• Provide good performance
File system snapshots:
• Are transient
• Require a separate block device
• Make incremental backups possible only on the file level
• Are administered like file systems
• Are mounted as read-only
• Can slow performance

Storage Checkpoints vs. File System Snapshots
The following comparison summarizes storage checkpoints and file system snapshots by characteristic:
Data persistency: Storage checkpoints are persistent (available after a reboot or system crash); file system snapshots are transient (lost if the file system is unmounted or if the system is rebooted).
Data storage: Storage checkpoints use the free space from the primary block device and maintain a relationship with other checkpoints; file system snapshots require a separate volume for saving before-images.
Backup: Storage checkpoints make incremental backups possible on the block level; file system snapshots allow incremental backups only on the file level.
Administration: Storage checkpoints have more administrative flexibility (they coordinate with Oracle states, can be administered in multiples, and can be removed or rolled back); a file system snapshot is a read-only file system that is administered like a file system.
Accessibility: Storage checkpoints can be mounted as read-write; file system snapshots are mounted as read-only.
Performance: Storage checkpoints provide good file system performance; file system snapshots give slower file system performance.
Types of Storage Checkpoints
You can create the following types of storage checkpoints:

Data Storage Checkpoints
A data storage checkpoint is a logical image of the file system at the time the storage checkpoint is created. This type of storage checkpoint contains the file system metadata and file data blocks. You can mount, access, and write to a data storage checkpoint just as you would to a file system.

Nodata Storage Checkpoints
A nodata storage checkpoint contains only file system metadata; this type of storage checkpoint does not contain any file data blocks. As the original file system changes, the nodata storage checkpoint records the location of every changed block.

Removable Storage Checkpoints
A removable storage checkpoint can "self-destruct" under certain conditions when the file system runs out of space. After encountering certain conditions, the kernel removes storage checkpoints to free up space for the application to continue running on the file system.

Nonmountable Storage Checkpoints
A nonmountable storage checkpoint cannot be mounted. You can use this type of storage checkpoint as a security feature that prevents other applications from accessing the storage checkpoint and modifying it.
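The fsckptadm flags that create each type are described later in this lesson; the following sketch summarizes them, assuming a VxFS file system mounted at /mnt0 and hypothetical checkpoint names:

    fsckptadm create data_ckpt /mnt0           # data checkpoint (the default)
    fsckptadm -n create nodata_ckpt /mnt0      # nodata: metadata only
    fsckptadm -r create removable_ckpt /mnt0   # removable when space runs out
    fsckptadm -u create nomount_ckpt /mnt0     # nonmountable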
Creating and Managing Volume Snapshots

Slide: Creating and Managing Traditional Volume Snapshots
Create:
vxassist -g diskgroup [-b] snapstart orig_vol
vxassist -g datadg -b snapstart datavol
vxassist -g diskgroup snapshot [orig_vol] [snap_vol]
vxassist -g datadg snapshot datavol snapvol
Reassociate:
vxassist -g datadg snapback snapvol
or
vxassist -g datadg -o resyncfromreplica snapback snapvol
Dissociate:
vxassist -g datadg snapclear snapvol
Destroy:
vxassist -g datadg remove volume snapvol

Creating and Managing Traditional Volume Snapshots: CLI
To create and manage traditional volume snapshots:
1 Create a snapshot mirror. The vxassist snapstart task creates a write-only backup mirror that is attached to and synchronized with the volume to be backed up. The process runs until the mirror is created and has been synchronized. The mirror continues to be updated until it is detached during the snapshot phase. The -b option runs the snapstart process in the background.
2 Create the snapshot volume. This task detaches the snapshot mirror from the original volume, creates a new volume, and attaches the snapshot mirror to the snapshot volume. The state of the snapshot is set to ACTIVE. If the snapshot procedure is interrupted, the snapshot mirror is automatically removed when the volume is started.
resyncfromreplica is an offline operation that resynchronizes the original volume with the content of the snapshot. In effect, this is a disk-based restore operation that makes the lost volume immediately available, thereby contributing to high availability.
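A typical backup cycle built from these commands might look like the following sketch. It assumes that datavol contains a VxFS file system and that /backup is an existing mount point; both names are examples only:

    # Create and synchronize the snapshot mirror in the background:
    vxassist -g datadg -b snapstart datavol
    # ...wait until the snapshot plex reaches the SNAPDONE state...

    # Detach the mirror as a snapshot volume:
    vxassist -g datadg snapshot datavol snapvol

    # Check and mount the frozen image read-only, then back it up:
    fsck -F vxfs /dev/vx/rdsk/datadg/snapvol
    mount -F vxfs -o ro /dev/vx/dsk/datadg/snapvol /backup
    # ...run your backup tool against /backup...
    umount /backup

    # Reattach and resynchronize the snapshot mirror:
    vxassist -g datadg snapback snapvol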
Displaying Traditional Volume Snapshot Information
You can use vxprint to display information about volumes and their associated traditional volume snapshots:
vxprint -g datadg -ht

Before the snapshot is detached, the snapshot plex appears in the original volume in the SNAPDONE state:
v  datavol      -           ENABLED  ACTIVE    40960  SELECT  -  fsgen
pl datavol-01   datavol     ENABLED  ACTIVE    40960  CONCAT  -  RW
sd datadg01-01  datavol-01  datadg01 0         40960  0       c1t1d0 ENA
pl datavol-02   datavol     ENABLED  SNAPDONE  40960  CONCAT  -  RW
sd datadg02-01  datavol-02  datadg02 0         40960  0       c1t2d1 ENA

After the snapshot is detached, the snapshot plex belongs to a separate snapshot volume:
dg datadg       default     default  20000     1110487068.43.train5
dm datadg01     c1t1d0s2    auto     2048      10404096  -
dm datadg02     c1t2d1s2    auto     2048      10404096  -
dm datadg03     c1t2d2s2    auto     2048      10404096  -
dm datadg04     c1t2d3s2    auto     2048      10404096  -
v  datavol      -           ENABLED  ACTIVE    40960  SELECT  -  fsgen    (original volume)
pl datavol-01   datavol     ENABLED  ACTIVE    40960  CONCAT  -  RW
sd datadg01-01  datavol-01  datadg01 0         40960  0       c1t1d0 ENA
v  snapvol      -           ENABLED  ACTIVE    40960  ROUND   -  fsgen    (snapshot volume)
pl datavol-02   snapvol     ENABLED  ACTIVE    40960  CONCAT  -  RW
sd datadg02-01  datavol-02  datadg02 0         40960  0       c1t2d1 ENA
Preparing to Create a Full-Sized or Third-Mirror Break-off Instant Volume Snapshot: CLI

Slide:
Enable FastResync:
vxsnap -g diskgroup [-b] prepare orig_vol
vxsnap -g datadg prepare datavol
Allocate the storage using one of these methods:
Third-mirror break-off:
• Add a mirror to use for a third-mirror break-off snapshot: vxsnap -g diskgroup addmir volume_name
• Use an existing ACTIVE plex in the volume.
Full-sized instant:
• Create an empty volume to use as a full-sized instant volume snapshot.

Preparing to Create a Full-Sized or Third-Mirror Break-off Instant Volume Snapshot: CLI
The vxsnap prepare command enables FastResync, creates a DCO and redundant DCO volume, and associates the DCO with the volume. After you run vxsnap prepare, you create the storage container for the snapshot as follows:
• You can use vxsnap addmir to add a new snapshot mirror to the volume.
• If there is a sufficient number of suitable plexes available in the volume, you can break off and use an existing ACTIVE plex from the volume.
• You can create a new, empty volume to be used as the snapshot volume. This volume must be the same size as the volume for which the snapshot is being created, and it must also have the same region size. A consolidated sketch of these steps follows this list.
  a Find the required size for the snapshot volume:
    LEN=`vxprint -g diskgroup -F%len volume_name`
  b Find the name of the DCO:
    DCONAME=`vxprint -g diskgroup -F%dco_name volume_name`
  c Use vxprint on the DCO to discover its region size (in blocks):
    RSZ=`vxprint -g diskgroup -F%regionsz $DCONAME`
  d Create a volume of the required size and redundancy:
    vxassist -g diskgroup make snap_vol $LEN
  e To prepare the volume for instant snapshot operations, enable FastResync:
    vxsnap -g diskgroup prepare snap_vol regionsize=$RSZ
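Collected into a single sequence, steps a through e might look like the following sketch; the disk group, volume, and snapshot names are examples, and the mirrored layout is just one possible choice for redundancy:

    DG=datadg VOL=datavol SNAP=snapvol

    LEN=`vxprint -g $DG -F%len $VOL`            # a: size of the original
    DCONAME=`vxprint -g $DG -F%dco_name $VOL`   # b: name of its DCO
    RSZ=`vxprint -g $DG -F%regionsz $DCONAME`   # c: DCO region size

    vxassist -g $DG make $SNAP $LEN layout=mirror   # d: empty snapshot volume
    vxsnap -g $DG prepare $SNAP regionsize=$RSZ     # e: enable FastResync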
Creating and Managing Full-Sized or Third-Mirror Break-off Instant Volume Snapshots: CLI

Slide:
Create the snapshot volume using one of these methods:
• Break off an existing plex to create the new snapshot:
  vxsnap -g diskgroup make source=orig_vol/newvol=snap_vol/plex=plex_name
• Break off the mirror added by the vxsnap addmir command:
  vxsnap -g diskgroup make source=orig_vol/newvol=snap_vol/nmirror=number
• Specify an existing empty volume to be used as the snapshot:
  vxsnap -g diskgroup make source=orig_vol/snapvol=snap_vol
Update:
vxsnap -g diskgroup refresh snap_vol source=orig_vol
vxsnap -g diskgroup reattach snap_vol source=orig_vol
vxsnap -g diskgroup restore orig_vol source=snap_vol
vxsnap -g diskgroup dis snap_vol
Remove:
vxedit -g diskgroup -r rm snap_vol

Creating a Full-Sized or Third-Mirror Break-off Instant Snapshot: CLI
To create a new snapshot volume by breaking off an existing plex in the original volume, you use the vxsnap make command, specify the name of the new snapshot volume, and include the name of the plex to be used. This attribute can only be used with plexes that are in the ACTIVE state.
To create a new snapshot volume using the mirrors added by the vxsnap addmir command, you use the vxsnap make command, specify the name of the new snapshot volume, and include the number of plexes that have been added by the vxsnap addmir command and that are in the SNAPDONE state.
To use an existing, empty volume as the snapshot volume, you use the vxsnap make command and specify the names of the source volume and the snapshot volume. Prior to running this command, you must create an empty volume with the required degree of redundancy, and with the same size and same region size as the original volume.

Updating a Full-Sized or Third-Mirror Break-off Instant Snapshot: CLI
You can manage full-sized instant snapshots using the following commands:
• To refresh a snapshot volume: vxsnap refresh
• To reattach a snapshot volume: vxsnap reattach
• To restore a volume from a snapshot: vxsnap restore
• To disassociate a snapshot volume: vxsnap dis
• To split a snapshot volume: vxsnap split
See the vxsnap(1m) manual page for more information.
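For example, the whole cycle for a third-mirror break-off instant snapshot might look like the following sketch, reusing the datadg/datavol names from earlier examples:

    # Break off the mirror that was added with vxsnap addmir:
    vxsnap -g datadg make source=datavol/newvol=snapvol/nmirror=1

    # Later, retake the point-in-time image:
    vxsnap -g datadg refresh snapvol source=datavol

    # Return the snapshot plexes to the original volume:
    vxsnap -g datadg reattach snapvol source=datavol

    # Or roll the original volume back to the snapshot contents:
    vxsnap -g datadg restore datavol source=snapvol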
Removing a Full-Sized or Third-Mirror Break-off Instant Snapshot: CLI
A dissociated instant snapshot can be deleted altogether using vxedit:
vxedit -g diskgroup -r rm snap_vol

Sample of DCO and DCL
The following is an example of a DCO and DCO log volume (DCL):
vxprint -htg datadg1
DG NAME  NCONFIG     NLOG     MINORS   GROUP-ID
ST NAME  STATE       DM_CNT   SPARE_CNT  APPVOL_CNT
DM NAME  DEVICE      TYPE     PRIVLEN  PUBLEN   STATE
RV NAME  RLINK_CNT   KSTATE   STATE    PRIMARY  DATAVOLS  SRL
RL NAME  RVG         KSTATE   STATE    REM_HOST REM_DG    REM_RLNK
CO NAME  CACHEVOL    KSTATE   STATE
VT NAME  NVOLUME     KSTATE   STATE
V  NAME  RVG/VSET/CO KSTATE   STATE    LENGTH   READPOL   PREFPLEX  UTYPE
PL NAME  VOLUME      KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID  MODE
SD NAME  PLEX        DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE    MODE
SV NAME  PLEX        VOLNAME  NVOLLAYR LENGTH   [COL/]OFF AM/NM     MODE
SC NAME  PLEX        CACHE    DISKOFFS LENGTH   [COL/]OFF DEVICE    MODE
DC NAME  PARENTVOL   LOGVOL
SP NAME  SNAPVOL     DCO

dg datadg1    default     default  31000    1117567000.33.train1
dm datadg100  c1t2d0s2    auto     2048     10404096  -
dm datadg101  c1t2d1s2    auto     2048     10404096  -
dm datadg102  c1t2d2s2    auto     2048     10404096  -
dm datadg103  c1t2d3s2    auto     2048     10404096  -
dm datadg104  c1t2d4s2    auto     2048     10404096  -
dm datadg105  c1t2d5s2    auto     2048     10404096  -

v  datavol1         -              ENABLED  ACTIVE   204800  SELECT  -  fsgen
pl datavol1-01      datavol1       ENABLED  ACTIVE   204800  CONCAT  -  RW
sd datadg100-01     datavol1-01    datadg100  0      204800  0       c1t2d0 ENA
dc datavol1_dco     datavol1       datavol1_dcl
v  datavol1_dcl     -              ENABLED  ACTIVE   544     SELECT  -  gen
pl datavol1_dcl-01  datavol1_dcl   ENABLED  ACTIVE   544     CONCAT  -  RW
sd datadg101-01     datavol1_dcl-01 datadg101 0      544     0       c1t2d1 ENA
Creating and Managing Full-Sized or Third-Mirror Break-off Instant Volume Snapshots: VEA

Slide:
Create: Select Actions->Instant Snapshot->Create. (The Create Instant Snapshot dialog lets you break off ready mirror objects or specify an existing volume for the snapshot.)
Update: Select Actions->Instant Snapshot->Refresh, Reattach, Restore, or Dissociate.
Remove: Select Actions->Delete Volume.

Creating a Full-Sized or Third-Mirror Break-off Instant Volume Snapshot: VEA
Select: The volume for which you want a snapshot
Navigation path: Actions->Instant Snapshot->Create
Input:
• Snapshot type: Select Full sized or Break off.
• Options for the snapshot: Specify the volume to use. You can create a new volume or select an existing volume. Select Syncing if you want the contents of the instant snapshot to be fully synchronized with the contents of the original volume at the point when the snapshot is taken.
• Disks for this snapshot: Allow VxVM to determine (default), or manually select disks to use for the snapshot.
• Volume attributes: Specify a volume name, the size of the volume, the type of volume layout, and other layout characteristics. Assign a meaningful name to the volume that describes the data stored in the volume.
• File system: Create a file system on the volume and set file system options.
Creating and Managing Space-Optimized Instant Snapshots: CLI

Slide:
Create a snapshot and cache object at the same time (remember to prepare the volume in advance):
vxsnap -g diskgroup make source=orig_vol/newvol=snap_vol/cachesize=size
Example:
vxsnap -g datadg make source=datavol/newvol=snap-datavol/cachesize=1g
Update:
vxsnap -g diskgroup refresh snap_vol source=orig_vol
or
vxsnap -g diskgroup restore orig_vol source=snap_vol
Remove the snapshot, and then stop and remove the cache object:
vxedit -g diskgroup -rf rm snap_vol
vxcache -g diskgroup stop cache_object
vxedit -g diskgroup -r rm cache_object

Creating a Space-Optimized Instant Snapshot: CLI
To create a space-optimized instant snapshot and a new cache object at the same time, you specify the size of the cache object in the command. The cache object is created with a default name that you can view with vxprint.

Updating a Space-Optimized Instant Snapshot: CLI
You can refresh a space-optimized instant snapshot or restore a volume from a snapshot.

Removing a Space-Optimized Instant Snapshot: CLI
To permanently delete the snapshot and release the storage resources, you use the vxedit command to remove both the snapshot volume and the cache object.
1 Find the names of the top-level snapshot volumes that use the cache object:
  vxcache listvol cache_object
2 Remove the top-level snapshots and their dependent snapshots:
  vxedit -g diskgroup -rf rm snap_vol ...
3 Stop the cache object:
  vxcache -g diskgroup stop cache_object
4 Remove the cache object and its cache volume:
  vxedit -g diskgroup -r rm cache_object
Creating a Shared Cache Object

Slide: If you want to set up a cache object to be used by multiple space-optimized instant snapshots in a disk group, you can create the shared cache object before creating any snapshots:
1. Create the volume to be used for the cache volume:
   vxassist -g datadg make cachevol 1g layout=mirror datadg16 datadg17
2. Create a cache object on top of the cache volume:
   vxmake -g datadg cache cobjdatadg cachevolname=cachevol autogrow=on
3. Enable the cache object:
   vxcache -g datadg start cobjdatadg
4. Create a space-optimized snapshot associated with the cache object:
   vxsnap -g datadg make source=datavol/newvol=snapvol/cache=cobjdatadg

Creating a Shared Cache Object Before Creating a Snapshot
If you need to create several space-optimized instant snapshots for the volumes in a disk group, you may find it more convenient to create a single shared cache object in the disk group rather than a separate cache object for each snapshot.
Decide on the following characteristics that you want to allocate to the cache volume that underlies the cache object:
• The size of the cache volume should be sufficient to record changes to the parent volumes during the interval between snapshot refreshes. A recommended value is 10 percent of the total size of the parent volumes for a refresh interval of 24 hours.
• If redundancy is a desired characteristic of the cache volume, the cache volume should be mirrored. This increases the space that is required for the cache volume in proportion to the number of mirrors that the cache volume has.
• If the cache volume is mirrored, space is required on at least as many disks as it has mirrors. These disks should not be shared with the disks used for the parent volumes. The disks should also be chosen to avoid impacting I/O performance for critical volumes, or hindering disk group split and join operations.
Creating and Managing Space-Optimized Volume Snapshots: VEA

Slide:
Create: Select Actions->Instant Snapshot->Create. (In the dialog, select Space optimized and either create a new cache object or choose an existing one.)
Update: Select Actions->Instant Snapshot->Refresh. Select Actions->Instant Snapshot->Restore.
Remove: Select Actions->Delete Volume.

Creating a Space-Optimized Instant Snapshot: VEA
Navigation path: Actions->Instant Snapshot->Create
Select: The volume for which you want a snapshot
Input:
• Disks for FastResync: Allow VxVM to determine (default), or manually select disks to use.
• Enable FastResync: Specify the number of DCO mirrors and the region size.
• Snapshot type: Select Space optimized.
• Create a new cache object: Select if you want to create a new cache.
• Choose an existing cache object: Select if you have already created a cache.
• Disks for this snapshot: Allow VxVM to determine (default), or manually select disks to use.

Updating a Space-Optimized Instant Snapshot: VEA
Refresh: Refreshing an instant snapshot replaces it with another point-in-time copy of a parent volume.
Restore: For an instant space-optimized snapshot, the cached data is used to restore the contents of the specified volume. The snapshot itself remains unchanged by the operation.
Displaying Instant Volume Snapshot Information: CLI

Slide:
To display information about instant snapshots:
vxsnap -g diskgroup print [orig_vol]
For example:
vxsnap -g datadg print
NAME      SNAPOBJECT      TYPE    PARENT    SNAPSHOT   %DIRTY  %VALID
datavol   --              volume  --        --         --      100.00
          full_1a_snp     volume  --        full_1a    0.00    --
          sp-op_1a_snp    volume  --        sp-op_1a   0.00    --
datavol2  --              volume  --        --         --      100.00
          sp-op_2a_snp    volume  --        sp-op_2a   0.00    --
          sp-op_2b_snp    volume  --        sp-op_2b   0.00    --
          full_2a_snp     volume  --        full_2a    0.00    --
full_1a   datavol_snp     volume  datavol   --         0.00    100.00
sp-op_1a  datavol_snp1    volume  datavol   --         0.00    0.00
sp-op_2a  datavol2_snp    volume  datavol2  --         0.00    0.00
sp-op_2b  datavol2_snp1   volume  datavol2  --         0.00    0.00
full_2a   datavol2_snp2   volume  datavol2  --         0.00    0.00
(The last five rows are the snapshot volumes.)

Displaying Instant Volume Snapshot Information
Use vxsnap print to display information about volumes and their associated instant snapshots.
The %DIRTY value is based on what has been written to the volume. The value is the percentage of the snapshot plex or detached plex that is dirty with respect to the original volume.
If an instant snapshot volume has not been synchronized with the original volume, the %VALID value is the same as the %DIRTY value. If the snapshot were partly synchronized, the %VALID value would lie between the %DIRTY value and 100%. If the snapshot were fully synchronized, the %VALID value would be 100%. The snapshot could then be made independent or moved into another disk group.
Note: You can also use the vxsnap -g diskgroup -v list command to display snapshot information for a disk group.
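For example, before moving a snapshot into another disk group or making it independent, you can confirm that it is fully synchronized; a brief sketch using the names above:

    # %VALID must reach 100.00 before the snapshot can be split off:
    vxsnap -g datadg print datavol

    # A verbose view of all snapshot relationships in the disk group:
    vxsnap -g datadg -v list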
Using Volume Snapshots for Off-Host Processing

Slide:
1. Split snapshots into a separate disk group (for example, ohpdg).
2. Import the disk group on another host for processing.
3. Reimport, rejoin, and resynchronize.

Using Volume Snapshots for Off-Host Processing
This section provides an outline of how to apply off-host processing by combining the full-sized or third-mirror break-off instant snapshot, disk group split and join, and FastResync features of VxVM. You can use this outline to set up a regular backup cycle or to set up a replica of a production database for decision support purposes.

Off-Host Processing Phases
Implementing off-host processing has three general phases:
Phase 1: Create, Split, and Deport
On the primary host, create a snapshot volume, split it into a separate disk group, and deport the disk group.
Phase 2: Import, Process, and Deport
On another host, import the disk group containing the snapshot volume and perform off-host processing. When you have completed your off-host processing tasks, deport the disk group.
Phase 3: Import, Join, and Resynchronize
On the primary host, bring the snapshot volume back by rejoining the disk groups and resynchronizing the volumes.
OHP Example: Phase 1
1. Enable FastResync on the volume:
   vxsnap -g datadg prepare datavol
2. Add an additional mirror for use as the snapshot volume:
   vxsnap -g datadg addmir datavol
3. Suspend updates to the volume and unmount the file system:
   umount /mnt1
4. Create a full-sized instant snapshot for the volume:
   vxsnap -g datadg make source=datavol/newvol=snap-datavol/nmirror=1
5. If you temporarily suspended updates to the volume by a database, release the tables from hot-backup mode.
6. Split the snapshot volume into a separate disk group:
   vxdg split datadg offhostdg snap-datavol
7. Deport the disk group:
   vxdg deport offhostdg

Phase 1: Create, Split, and Deport
In this procedure, the primary host contains the datadg disk group and the datavol volume. The snapshot volume of datavol is called snap-datavol. The disk group that is moved to another host is called offhostdg.
1 On the primary host, enable FastResync for the datavol volume.
2 On the primary host, add an additional mirror or create an empty volume for use as the snapshot volume. The example on the slide shows adding an additional mirror to be used for the snapshot operation. Note that in this case you have to wait until the synchronization is complete and the additional plex is in the SNAPDONE state before you can create the snapshot.
3 On the primary host, if the original volume contains database tables in a file system, suspend updates to the volume. The database may have a hot-backup mode that enables you to do this by temporarily suspending writes to its tables. If you are setting up a replica of a production database, prepare the off-host processing host to receive the snapshot volume. This preparation may involve setting up private volumes to contain redo logs and configuring files that are used to initialize the database.
4 On the primary host, create a full-sized or third-mirror break-off instant snapshot for the volume.
5 If you temporarily suspended updates to the volume by a database, you can now release all of the tables from hot-backup mode.
6 Split the snapshot volume into a separate disk group by using vxdg split. A full-sized instant snapshot must be fully synchronized before splitting.
7 Deport the disk group that contains the snapshot volume.
OHP Example: Phase 2
8. On the off-host processing host, import the disk group:
   vxdg import offhostdg
   vxvol -g offhostdg start snap-datavol
9. To perform off-host processing, mount the snapshot volume:
   mount -F vxfs /dev/vx/dsk/offhostdg/snap-datavol /mnt1
10. When processing is complete, unmount the snapshot:
    umount /mnt1
11. Deport the disk group:
    vxdg deport offhostdg

Phase 2: Import, Process, and Deport
8 On the off-host processing (OHP) host where the backup or processing is to be performed, import the disk group that contains the snapshot volume.
9 To perform off-host processing, you must first mount the snapshot volume. For example:
  mount -F vxfs /dev/vx/dsk/offhostdg/snap-datavol /mnt1
  Note: On Linux, use mount -t. In the syntax, /mnt1 is the mount point for the file system.
  Then, you can perform the off-host processing activities:
  • If you are performing online backup, back up the file system using your backup utilities and methods.
  • If you are performing decision support, issue the appropriate database commands to recover and start the replica database for its decision-support role.
10 When you are ready to reattach the snapshot plex to the original volume, unmount the snapshot volume.
11 On the OHP host, deport the disk group that contains the snapshot volume.
OHP Example: Phase 3
12. On the primary host, reimport the disk group:
    vxdg import offhostdg
13. Rejoin the disk group with the original disk group:
    vxdg join offhostdg datadg
14. Restart the snapshot volume:
    vxrecover -g datadg -m snap-datavol
15. Refresh the snapshot volume with the original volume:
    vxsnap -g datadg refresh snap-datavol source=datavol

Phase 3: Import, Join, and Resynchronize
12 On the primary host, reimport the disk group that contains the snapshot volume.
13 Rejoin the disk group that contains the snapshot volume with the disk group that contains the original volume by using vxdg join.
14 The snapshot volume is initially disabled following the join. On the primary host, restart the snapshot volume.
15 On the primary host, refresh the plexes of the snapshot volume with the original volume.
Creating and Managing Storage Checkpoints

Slide: Creating and Managing Storage Checkpoints: CLI
Create:
fsckptadm [-nruv] create ckpt_name mount_point
fsckptadm -v create thu_7pm /checkpt1
Mount:
mount -F vxfs -o ckpt=ckpt_name /dev/vx/dsk/diskgroup/volume_name:ckpt_name /mount_point
mount -F vxfs -o ckpt=thu_7pm /dev/vx/dsk/datadg/vol1:thu_7pm /checkpt2
Unmount:
By mount point: umount /checkpt2
By pseudo device name: umount /dev/vx/dsk/datadg/vol1:thu_7pm

Creating a Storage Checkpoint: CLI
To create a storage checkpoint, you use the fsckptadm create command:
• -n sets the nodata attribute, creating a checkpoint that contains no file data.
• -r sets the remove attribute on a checkpoint at creation time. This ensures that the checkpoint is automatically deleted under certain conditions.
• -u sets the nomount attribute of a checkpoint when it is created, making the checkpoint not mountable.
• -v specifies verbose mode, which displays extensive statistical information.

Mounting a Storage Checkpoint: CLI
To access a storage checkpoint, you mount the checkpoint using the mount option:
-o ckpt=ckpt_name
Storage checkpoints are mounted as read-only storage checkpoints by default. If you need to write to a storage checkpoint, mount it using the -o rw option. If a storage checkpoint is originally mounted as a read-only storage checkpoint, you can remount it as writable using the -o remount option. A sketch of both options follows.
To mount a checkpoint of a file system, first mount the file system itself. To unmount a file system, first unmount all of its storage checkpoints.
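The following sketch shows the -o rw and -o remount options described above, reusing the names from the slide; the comma-separated option lists follow standard mount syntax:

    # Mount a checkpoint writable from the start:
    mount -F vxfs -o ckpt=thu_7pm,rw \
        /dev/vx/dsk/datadg/vol1:thu_7pm /checkpt2

    # Remount an already-mounted read-only checkpoint as writable:
    mount -F vxfs -o ckpt=thu_7pm,remount,rw \
        /dev/vx/dsk/datadg/vol1:thu_7pm /checkpt2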
Creating and Managing Storage Checkpoints: VEA

Slide:
Create: Select Actions->Storage Checkpoint->Create.
Mount: Select Actions->Storage Checkpoint->Mount.
Enable Quotas: Select Actions->Storage Checkpoint->Enable Checkpoint Quotas.
Remove: Select Actions->Storage Checkpoint->Remove.
(The Create and Mount dialogs prompt for the file system, a checkpoint name, and a mount point.)

Creating a Storage Checkpoint: VEA
Select: The file system for which you want to create a checkpoint
Navigation path: Actions->Storage Checkpoint->Create
Input:
• Checkpoint name: Specify a checkpoint name.
• Removable: The default is a removable checkpoint.
• Mount: Mark the check box if you want the checkpoint mounted, and supply a mount point.

Mounting a Storage Checkpoint: VEA
Select: The file system for which you want to mount a checkpoint
Navigation path: Actions->Storage Checkpoint->Mount
Input:
• Checkpoint name: Specify a checkpoint name, or use the default provided.
• Mount point: Verify the mount point that is displayed.
• Mount details: Set mount options, such as read-only.
Displaying Storage Checkpoint Information

Slide:
To list all checkpoints in a file system, you use the command:
fsckptadm [-clv] list mount_point ...
• -c resets statistic counters, such as the number of reads, writes, and pushes.
• -l displays statistical information on the file system, in addition to its checkpoints.
• -v specifies verbose mode, which displays extensive statistical information.
To display the name of a storage checkpoint containing the file at a specified path, you use the pathinfo keyword:
fsckptadm [-cv] pathinfo path_name

Displaying Storage Checkpoint Information
To display information about storage checkpoints:
fsckptadm [-clv] list mount_point
To list all checkpoints on the /mnt0 file system:
fsckptadm list /mnt0
/mnt0
fri_7pm:
  ctime   Fri 04 Feb 2005 07:14:39 PM PST
  mtime   Fri 04 Feb 2005 07:14:39 PM PST
  flags   nodata
fri_6pm:
  ctime   Fri 04 Feb 2005 07:02:17 PM PST
  mtime   Fri 04 Feb 2005 07:02:17 PM PST
  flags   largefiles, removable, mounted
Using Storage Checkpoints: Restoring a File System

Slide:
Unmount the file system, and then use:
fsckpt_restore [-l] device_name
This command runs an interactive utility where you select the checkpoint to which to roll back.
(The slide diagram shows a file system whose filesets, before restoration, are an UNNAMED root fileset and five storage checkpoints, CKPT5 through CKPT1, taken at 11am, 10am, 9am, 8am, and 7am. This example restored from CKPT3. After running the command and selecting CKPT3, the former UNNAMED root fileset and CKPT5 and CKPT4 were removed; CKPT3 is now the primary fileset, with CKPT2 and CKPT1 remaining.)

Restoring a File System from a Checkpoint
Mountable data storage checkpoints on a consistent and undamaged file system can be used by backup and restore applications to restore either individual files or an entire file system. Restoration from storage checkpoints can also help recover incorrectly modified files, but storage checkpoint restoration typically cannot recover from hardware damage or other file system integrity problems.
Restore files by copying the entire file from a mounted storage checkpoint back to the primary fileset. To restore an entire file system, you can designate a mountable data storage checkpoint as the primary fileset using the fsckpt_restore command. When a file system is restored from a storage checkpoint using the fsckpt_restore command, all changes made to that file system after that storage checkpoint's creation date are permanently lost. The only storage checkpoints and data preserved are those that were created at the same time as, or before, the selected storage checkpoint's creation.
After unmounting the file system, use the fsckpt_restore command to restore a file system from a storage checkpoint:
fsckpt_restore [-l] device_name [ckpt_name]
In the syntax:
device_name   Specifies the block device on which the file system resides
ckpt_name     Specifies the name of the storage checkpoint from which to restore the file system
-l            Lists information on the file system root and all of its storage checkpoints
Example File System Restoration from a Storage Checkpoint
The following example restores a file system from the CKPT3 storage checkpoint. The filesets listed before the restoration show an unnamed root fileset and six storage checkpoints.
1 Run the fsckpt_restore command:
fsckpt_restore -l /dev/vx/dsk/datadg/vol2
UNNAMED:
  ctime   Fri 04 Feb 2005 06:28:24 PM PST
  mtime   Fri 04 Feb 2005 06:28:26 PM PST
  flags   file system root
CKPT6:
  ctime   Fri 04 Feb 2005 06:28:35 PM PST
  mtime   Fri 04 Feb 2005 06:28:35 PM PST
  flags   removable
CKPT5:
  ctime   Fri 04 Feb 2005 06:28:34 PM PST
  mtime   Fri 04 Feb 2005 06:28:34 PM PST
  flags   nomount
CKPT4:
  ctime   Fri 04 Feb 2005 06:28:33 PM PST
  mtime   Fri 04 Feb 2005 06:28:33 PM PST
  flags   removable
CKPT3:
  ctime   Fri 04 Feb 2005 06:28:31 PM PST
  mtime   Fri 04 Feb 2005 06:28:36 PM PST
  flags   removable
CKPT2:
  ctime   Fri 04 Feb 2005 06:28:30 PM PST
  mtime   Fri 04 Feb 2005 06:28:30 PM PST
  flags   none
CKPT1:
  ctime   Fri 04 Feb 2005 06:28:29 PM PST
  mtime   Fri 04 Feb 2005 06:28:29 PM PST
  flags   nodata
2 In this example, select the CKPT3 storage checkpoint as the new root fileset:
Select checkpoint for restore operation or enter <Return> to list checkpoints: CKPT3
CKPT3:
  ctime   Fri 04 Feb 2005 06:28:31 PM PST
  mtime   Fri 04 Feb 2005 06:28:36 PM PST
  flags   removable
WARNING!! Checkpoint CKPT3 has been modified.
WARNING!! Any file system changes or checkpoints made after Fri 04 Feb 2005 06:28:31 PM PST will be lost.
3 Enter y to restore the file system from CKPT3:
Restore the file system from checkpoint CKPT3? (ynq) y
(Yes)
File system restored from CKPT3
If the filesets are listed at this point, the list shows that the former UNNAMED root fileset and CKPT6, CKPT5, and CKPT4 were removed, and that CKPT3 is now the primary fileset. CKPT3 is now the fileset that will be mounted by default.
4 Run the fsckpt_restore command:
fsckpt_restore -l /dev/vx/dsk/datadg/vol2
CKPT3:
  ctime   Fri 04 Feb 2005 06:28:31 PM PST
  mtime   Fri 04 Feb 2005 06:28:36 PM PST
  flags   file system root
CKPT2:
  ctime   Fri 04 Feb 2005 06:28:30 PM PST
  mtime   Fri 04 Feb 2005 06:28:30 PM PST
  flags   none
CKPT1:
  ctime   Fri 04 Feb 2005 06:28:29 PM PST
  mtime   Fri 04 Feb 2005 06:28:29 PM PST
  flags   nodata
Lesson Summary
• Key Points
  In this lesson, you learned how to create and manage traditional and full-sized instant volume snapshots, space-optimized instant volume snapshots, and storage checkpoints. This lesson also covered off-host processing and backup and recovery of file systems.
• Reference Materials
  - VERITAS Volume Manager Administrator's Guide
  - VERITAS File System Administrator's Guide
  - VERITAS FlashSnap Point-In-Time Copy Solutions Administrator's Guide

Lab 7: Point-in-Time Copies
In this lab, you perform off-host processing using full-sized instant volume snapshots, create space-optimized instant volume snapshots, and restore a file system using storage checkpoints.
For lab exercises, see Appendix A, which provides complete lab instructions. For lab solutions, see Appendix B, which provides complete lab instructions and solutions.
Lesson 8: Other Enterprise Features Overview
Lesson Introduction
• Lesson 1: Maintaining Data Consistency
• Lesson 2: Managing Devices Within the VxVM Architecture
• Lesson 3: Encapsulation and Rootability
• Lesson 4: Troubleshooting the Boot Process
• Lesson 5: Volume Maintenance
• Lesson 6: Performance Monitoring
• Lesson 7: Point-in-Time Copies
• Lesson 8: Other Enterprise Features Overview

Lesson Topics and Objectives
Topic: What Is Dynamic Storage Tiering?
  After completing this lesson, you will be able to: Describe the purpose and benefits of dynamic storage tiering (DST).
Topic: What Is Intelligent Storage Provisioning?
  Describe the purpose and benefits of intelligent storage provisioning (ISP).
Topic: What Is the Storage Foundation Management Server?
  Describe the purpose and benefits of the Storage Foundation Management Server (SFMS).
What Is Dynamic Storage Tiering?

Slide: The High Cost of Data Life-Cycle Management
Companies are implementing alternate solutions to reduce the cost of storage, including:
• Centralizing data storage into large data centers
• Establishing service-level agreements (SLAs) with the departments that use the storage, or storage accounts
SLAs weigh factors such as:
• Survivability of data: How current does it need to be?
• Time to capacity: When my application is reaching its storage capacity, how quickly do I need fresh capacity online?
• Time to recover: What level of downtime can this application or storage account tolerate?
• Application performance: What level of storage performance does my application need?
Dynamic Storage Tiering is the technology that helps keep storage costs under control while effectively meeting business needs.

The High Cost of Data Life-Cycle Management
Data life-cycle management is the process of:
• Managing data from its creation to its deletion or archiving
• Moving data through an organization based on business rules and policies
In the past, organizations often solved their data life-cycle management storage problems by buying more disks, which was an expedient method of addressing storage management. However, this method created numerous problems, not the least of which was underutilization.
Storage virtualization combined with storage networking architectures has led to new approaches in storage management that are highly centralized. This centralized approach to managing storage enables data centers to better handle increased amounts of storage and increasingly complex environments.
Today, data centers can match the underlying storage technology to the application requirements of the storage accounts through a service-level agreement (SLA) using dynamic storage tiering. Data centers set up policies for serving departments within the organization to meet their storage provisioning needs. Each department's distinct applications are treated as storage accounts. Each storage account is serviced with storage that meets its business needs. Business-critical systems that need the greatest levels of availability and performance receive the highest levels of storage service, while less-critical systems receive predefined and agreed-upon levels of storage service.
What Is Dynamic Storage Tiering (DST)?

Slide:
DST enables administrators to manage the placement of files by defining placement policies that control both initial file location and the circumstances under which existing files are relocated.
• Policy-based storage tiering (placement policies)
• Dynamic movement of files without interruption to applications
• Application transparency
• Database support
Examples:
• Move to tier 2 all files that have not been accessed in 90 days.
• Place paying customers on tier 1 and non-paying on tier 2; if either changes, they are dynamically migrated to the new tier of storage.
• Roll sales information from previous quarters onto tier 2 storage.

Defining Dynamic Storage Tiering (DST)
The DST allocation and relocation policies can be used to manage the data of the storage accounts throughout the data life cycle. The policy-based rules can, with minimal or no intervention, ensure that business-critical applications have sufficient storage space to achieve uninterrupted service 24 hours per day, seven days per week, and ensure that business requirements are met.
Dynamic Storage Tiering is the next level of tiered storage, mapping the information within an application to storage tiers. Dynamic Storage Tiering is real-time mapping of data that is policy driven. You define the policies based on:
• File type
• Directory type
• End-user
• Frequency of file access
DST enables you to move files from one volume to another with no impact on application or database architecture. DST file allocation and relocation help you to:
• More efficiently use your storage resources without disrupting users or applications.
• Place files on different classes of devices (for example, inexpensive or expensive devices, or fast or slow devices).
• Better manage your ever-changing environment.
Examples of DST Levels
Note: The numbers used in the tables are for example purposes only, to show the relative cost savings of different storage layouts.

Example A
Factors                  Storage Account A    Storage Account B
Redundancy               Two full copies      Three full copies
Time to Capacity         Two hours            Two minutes
Time to Recovery         Two hours            Two minutes
Performance              Medium (Tier 2)      Fast (Tier 1)
Cost per GB of storage   $50/GB               $100/GB

Example B
A storage account requests two full copies of all their data. To reduce the cost, they separate the less important data onto RAID-5.

Method            Cost per GB   Data Size (GB)   Total GB of Storage   Total Cost
Mirror            $100          1,000            2,000                 $200,000
Mirror (Tier 1)   $100          200              400                   $90,000 (tiered
RAID-5 (Tier 2)   $50           600              1,000                 layout combined)

DST Cost Savings
Data centers can use a charge-back model with DST. For example, an e-commerce application that generates a significant portion of the company's revenue may need to have very fast application time to recovery. The finance department may want to ensure that availability of its data can survive catastrophic site failures, requiring the copies of data to be stored at a remote disaster recovery site.
As higher levels of DST are deployed for storage accounts, the costs associated with providing those services also increase. Important elements in the management of these storage accounts are clearly defined accountability and methodologies to manage security of stored data and measurement of total cost of ownership for storage in each account.
The data center works with each storage account to determine the most appropriate level of service based on redundancy, performance, and cost needs. As the first table on the slide shows, if the storage account needs higher levels of service, they pay more for it. In the second table on the slide, the storage account pays $200,000 to fully mirror all of their data. However, if they use RAID-5 for part of their data instead, they pay only $90,000.
DST Components

Slide:
The data center can implement DST policies to easily and appropriately move the data throughout the organization as the data continues on its life cycle.
• Allocation policies initially allocate various types of incoming data to specific locations. After time, relocation policies move files based on policies you set up.
• Frequently accessed files go on the fastest volume (striped).
• Infrequently accessed files go on a slower volume (concatenated).
• Metadata goes on the most redundant volume (triple mirror).

What Makes DST Possible?
VERITAS Storage Foundation combines several technologies to enable allocation and relocation.

VxVM Volume Sets
A volume set enables multiple volumes to be combined together under one file system. The volume set feature works in conjunction with multivolume file systems. Volume sets allow file systems to make the best use of the different performance and availability characteristics of the underlying volumes. For example, file system metadata can be stored on volumes with higher redundancy, and user data can be stored on volumes with better performance.

VxFS Multivolume File Systems
VERITAS File System provides support for multivolume file systems (MVFS) when used in conjunction with VxVM. Through multidevice support, a single file system can be created over multiple volumes, while each volume has its own properties. The incoming data can be isolated and sent to any volume within the volume set. For example, you can place metadata on mirrored storage and place file data on a better-performing volume type, such as striped. File systems can also reside on different classes of devices, so that a file system can be supported from both inexpensive disks and expensive arrays. All I/O to and from an underlying volume is directed through the volume set. A file's location in the directory tree does not need to determine the volume on which it is stored.
Potential uses of multivolume support include:
• Controlling where files are stored so that specific files or file hierarchies can be assigned to different volumes
• Placing the VxFS intent log on its own volume to minimize disk head movement and increase performance
• Separating storage checkpoints so that data allocated to a storage checkpoint is isolated from the rest of the file system
• Separating metadata from file data
• Encapsulating volumes so that a volume is displayed in the file system as a file. This is particularly useful for databases that are running on raw volumes.

Allocation Policies
VERITAS File System provides a method to define and assign allocation policies. These policies send data that is entering the file system to specific devices. Allocation policies specify a list of volumes and the order in which to attempt allocations.

Relocation Policies
VERITAS File System provides a method to define relocation policies that move data based on criteria that you set up. A short sketch of how these components fit together follows.
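A minimal sketch of the pieces working together: create a volume set from two existing volumes, build one file system across them, and then assign and enforce a placement policy. The volume, mount point, and policy file names are examples, and the XML placement policy file itself must be written separately:

    # Combine a fast tier-1 volume and a cheaper tier-2 volume
    # into one volume set:
    vxvset -g datadg make datavset tier1vol
    vxvset -g datadg addvol datavset tier2vol

    # Create a single (multivolume) file system over the volume set:
    mkfs -F vxfs /dev/vx/rdsk/datadg/datavset
    mount -F vxfs /dev/vx/dsk/datadg/datavset /data

    # Assign an XML placement policy and enforce it to relocate
    # existing files that match the policy rules:
    fsppadm assign /data /etc/vx/placement_policy.xml
    fsppadm enforce /data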
Why Use DST: Separate by Type of File

Slide:
Problem: Storage checkpoints are stored in the same space that is available to the primary file system.
Solution: With DST, storage checkpoints can be isolated and sent to a volume separate from the file system.

Why Use DST?
Example 1
The system administrators in the QA department want to do some testing using the data from storage checkpoints. However, they would prefer to conduct the testing on a separate host. They are looking for a way to separate storage checkpoints so that data allocated to a storage checkpoint is isolated from the rest of the file system.
Challenge: The system administrator wants an easy way to place the storage checkpoint data on a device that is separate from the device that stores the file system data.
Solution: Using DST, you can send all the storage checkpoints to a volume separate from the primary file system. Storage checkpoints can reside on one volume in a volume set, and the primary file system can reside on another volume in the volume set.
Why Use DST: Improve File Access Times

Slide:
Problem: Too many infrequently accessed files can cause an environment to run slowly. Relocating the files can take careful planning and can be difficult.
Solution: With DST, you can set up automatic relocation based on the age of the file. Relocating your infrequently accessed files improves file access times.

Example 2
Jenkins and Roue, one of the nation's leading professional services firms, provides accounting, tax management, and financial consulting services. Their data center is experiencing performance slowdowns. The billing application is running too slowly because there are too many files on the devices. Over time, their storage has become overloaded because of all the history information. They have many files, but most of the files are not being accessed. They need an easy way to move the files to other devices.
Challenge: The system administrator has too much unaccessed data on the storage devices, and it is causing slow access.
Solution: Using DST, you can set up automatic relocation of the data based on age. The relocation policies of DST make identifying and moving the data easier. You can set up policies to move data to other cheaper, slower hardware. This increases performance for the most frequently accessed data.
What Is Intelligent Storage Provisioning (ISP)?

Slide:
Intelligent storage provisioning:
• Provides a new way to create, organize, and manage volumes
• Uses policy-based templates that you configure to meet the performance and reliability requirements of your environment
• Enables you to standardize how volumes are created, modified, and managed
To help you understand the benefits and purpose of ISP, the next slides present an analogy.

What Is Intelligent Storage Provisioning?
The intelligent storage provisioning (ISP) feature of VxVM provides a new way to create, organize, and manage volumes using policy-based templates that you configure to meet the performance and reliability requirements of your environment. With ISP, volumes operate in the same way, have the same benefits, and have the same online administration features as traditionally created volumes. However, ISP introduces new volume functionality that enables you to standardize how volumes are created, modified, and managed.
Consciously or not, every organization implements storage provisioning policies. With ISP, you can formally set the rules that govern storage provisioning.
To help you understand the concept of ISP and the associated terminology, the next few slides present an analogy that compares storage provisioning to land development.
    • Analogy: "Traditional" Land Development (Without ISP) Land for Housing Community (Disk Group) ~ (Volume) • "~i. . , &' , I (Volume) (Volume) (Volume) . symantec (Volume) (Volume) • (Volume) ,111,/'" i!!.. v, " I !Iil.(Volume) You sell the land and allow buyers to build whatever they want. Traditional Land Development (Creating Volumes Without ISP) Imagine that you are a land developer (slorage architcct/adnunistrutori who wants to develop a residential housing community (slomgc (,11I'ir(lIllI1(,I1I), First. you establish how much land you want to allocate to the housing community (1/7(' disk group). Next. you start allocating land to interested buyers, selling ofT the land and building whatever type of house ('(Jlllllle) they want on that piece of land. Some buyers may want only a small piece of land to build a modest house (small COI1COICII(//cd volumeOil (//1 inexpensiveJilUD); other buyers may want a large piece of land to build a luxurious mansion (large mirrored-striped I'OlulII<' Oil on expensivearravv. After all of the land is developed. the result is an uneven residential community in which no two houses are similar. no two lots arc of equal size or value, and there is an eclectic mix of neighbors with widely di ftcrcnt needs, The residents in the mansion want security gates at all entrances (high reliabilitvi and paved roads (high pcrtormancc; throughout the community. Other residents do not care about security and are content with gravel roads. As you attempt to sort out the competing needs of the new residents. you wish that you had set some ground rules. established some standards for homcbuilders, and planned the infrastructure for the community before you started selling land. lesson 8 Other Enterprise Features Overview Copvnqht" 2006 SymllntH Corrotanon All ncnts reserved. 8-11
Analogy: Planned Community Development (With ISP)
Land for Planned Housing Community (Disk Group): Open Space, plus Land for Houses (Pool).
3-BR Floor plan: Living Room, Kitchen, Bedroom
4-BR Floor plan: Living Room, Kitchen, Bedroom

Planned Community Development (Creating Volumes With ISP)
Imagine that you are a land developer (storage architect/administrator) who wants to develop a residential housing community (storage environment). You are experienced in planned community development (intelligent storage provisioning). First, you establish how much land you want to allocate to the housing community (the disk group). You also determine that you want to maintain some environmental barriers and open spaces within the community, so the actual area that you plan to divide up for housing (storage pool) is not the full size of the community (the disk group). Your goal is to maximize profit by fitting as many houses and lots into the remaining available space as possible, but you also want to establish a consistent standard of living for the community. You decide that your target market is middle-income families, so you decide to develop three- and four-bedroom single-family dwellings. You hire an architect to develop two basic house designs. All residents of the community will live in one of these predesigned houses (application volumes). The house floor plans for these two house types are based on common room designs that include a living room, kitchen, bedrooms, and bathrooms (templates). With all of the details of the planned community in place, you are now ready to start selling homes to interested buyers. And you already have a plan to create a second community on adjacent land (clone pool) that is based on the first planned community (now, the data pool). The second community will have 2-bedroom and 3-bedroom homes (different templates) and a target market of lower-middle income families (different storage pools).
ISP Structures
Disk Group, containing a Data Pool and a Clone Pool (the storage pools).
Application Volumes: vol1, vol2 (data pool); snapvol (clone pool).
Disks in the Disk Group.
Each storage pool consists of templates. Templates are collections of rules that specify performance and reliability capabilities for the application volumes.

ISP Structures
Storage pool: A policy-based container that resides inside a disk group and holds application volumes
There are two types of storage pools:
Data pool: A storage pool that contains your primary data
Clone pool: A storage pool that contains snapshots of your primary data
Application volume: A volume created in a storage pool
An application volume is created with characteristics that are defined by the templates associated with the storage pool. If an application volume is reconfigured, resized, or relocated, the "intent" of the volume (that is, the original characteristics of that volume that were set by the templates) is preserved.
Template: A collection of rules that specify performance and reliability characteristics for volumes
Storage Foundation includes over 20 predefined templates.
Why Use ISP: Snapshots (Not Using ISP)
It takes careful planning to ensure that the disks to be used for OHP are reserved solely for snapshots. If snapshots and data volumes are on the same disk, you may not be able to split the disk group for OHP without first reorganizing storage.

Why Use ISP?
Example 1: Snapshots
Tripolyz Inc. uses snapshot volumes for off-host processing operations, including backup, testing, and decision support. The system administrators are finding it challenging to manage the amount of data in their systems. Frequently, the same storage devices are accidentally allocated for both data volumes and snapshot volumes. When the time comes to split off a snapshot to move to another host, the administrator must reconfigure the storage to place all the snapshot data on independent disks.
Challenge: The system administrator needs to ensure that snapshots are not using the same disks as data volumes.
Why Use ISP: Snapshots (Using ISP)
With ISP, the planning is taken care of up front when you configure your ISP environment. The storage for snapshot volumes for OHP is separated from the storage for data volumes.

Example 1 Solution
ISP enables the administrator to automatically place snapshots on separate storage from data volumes. Regulations are set up that affect volume configuration; these regulations cause the disks to be separated. When you use ISP, there is an additional container in the disk group, called a storage pool, which can hold all the disks that you want to use for off-host processing, therefore keeping them separate from the disks used for data volumes.
Why Use ISP: Control of Volumes (Not Using ISP)
If a department requests additional storage space for their data, administrators can fulfill that request however they want.
Original: Mirrored Concatenated, 1 GB
More Space: Concatenated, 2 GB
The administrator sacrificed redundancy for space by removing the mirror to add more space to the volume.

Example 2: Control of Volumes
A $9 billion company whose data centers handle a mission-critical sales management system has just experienced a loss of data. The sales processing department had asked for more space for a new application. However, the system administrator who performed the space allocation was new. He made a mistake when he assigned the space and broke their service-level agreement. He increased their volume size by removing mirroring and adding the additional space to the volume. The company needs to prevent this from happening again. No matter who the administrator is, there must be consistency in administration, and service-level agreements must be maintained.
Challenge: The system administrator needs to ensure that any new administrator has the appropriate information and does not make an error.
Why Use ISP: Control of Volumes (Using ISP)
With ISP, the administrator would have been prevented from reducing the redundancy because of a regulation that was in place.
Original: Mirrored Concatenated, 1 GB
More Space: Mirrored Concatenated, 2 GB
The administrator could not reduce redundancy and therefore had to add more storage to provide more space in the volume.

Example 2 Solution
When you use ISP, there is congruity in administration regardless of who the administrator is. In this case, the administrator would not have been offered the option of an unmirrored volume. System administrators must understand the big picture and configure everything appropriately. Because the big picture can also change frequently, ISP provides the protection needed to preserve the original intent of the volumes. There is no possibility that operations, such as grow, evacuate, add mirror, or add column, can accidentally degrade the reliability or performance capabilities of a volume. Storage is automatically allocated based on stated requirements, such as the desired capabilities of a volume. Volumes can be created or grown safe in the knowledge that ISP will balance the requirements of all volumes.
What Is the Storage Foundation Management Server (SFMS)?
The Storage Foundation Management Server provides:
• Centralized management of diverse applications, servers, and storage (from application to spindle)
• Access across different operating systems, servers, and storage arrays
• A central interface
• Comprehensive visibility and improved operational efficiencies

What Is the Storage Foundation Management Server?
The Storage Foundation Management Server (SFMS) is a new GUI for Storage Foundation that provides centralized administration, management, monitoring, and reporting on Storage Foundation 5.x and 4.x. (The figure shows the Managing Summary window of the SFMS.)
Lesson Summary
• Key Points
  This lesson introduced dynamic storage tiering, intelligent storage provisioning, and the Storage Foundation Management Server.
• Reference Materials
  - VERITAS Storage Foundation Intelligent Storage Provisioning Administrator's Guide
  - VERITAS Volume Manager Administrator's Guide
  - VERITAS File System Administrator's Guide
Appendix A: Lab Exercises
Lab 1: Maintaining Data Consistency

In this lab, you practice recovering from a variety of plex problem scenarios, and optionally, observe the benefits of a dirty region log during a system crash. To investigate and practice recovery techniques, you will use a set of interactive lab scripts.
For Lab Exercises, see Appendix A.
For Lab Solutions, see Appendix B.

Each script:
• Sets up the required volumes
• Simulates and describes a failure scenario
• Prompts you to fix the problem

Prerequisite Setup
To perform this lab, you need a lab system with Storage Foundation pre-installed, configured, and licensed. In addition to this, you also need a minimum of three external disks to be used during the labs.
Classroom Lab Values
In preparation for this lab, you will need the following information about your lab environment. For your reference, you may record the information here, or refer back to the first lab in the SF Fundamentals section where you initially documented this information.

Object                      Sample Value                  Your Value
My Data Disks               Solaris: c1t#d0 - c1t#d5
                            HP-UX: c4t0d0 - c4t0d5
                            AIX: hdisk21 - hdisk26
                            Linux: sda - sdf
Location of Lab Scripts     /student/labs/sf/sf50
Preparation for Plex Recovery Labs

Overview
Your goal is to recover from the problem as described in each scenario. Use your knowledge of VxVM administration, as well as the VxVM recovery tools and concepts described in the lesson, to determine what steps to take to ensure recovery. After you recover the test volumes, the script verifies your solution and provides you with the result. You succeed when you recover the volumes without corrupting the data. For most of the recovery problems, you can use any of the VxVM interfaces: the command line interface, the VERITAS Enterprise Administrator (VEA) graphical user interface, or the vxdiskadm menu interface. Lab solutions are provided for only one method. If you have questions about recovery using interfaces not covered in the solutions, see your instructor.

Setup
Due to the way in which the lab scripts work, it is important to set up your environment as described in this setup section:
1  If your system is set to use enclosure-based naming, then you must turn off enclosure-based naming before running the lab scripts.
2  If you have a namedg disk group left from previous labs, ensure that the disk group has no mounted file systems or volumes. If necessary, unmount any mounted file systems that are on volumes in the namedg disk group and remove the volumes.
3  If you have not already done so, create a disk group named testdg and add three disks (preferably of the same size) to the disk group. Assign the following disk media names to the disks: testdg01, testdg02, and testdg03.
   Note: If you do not have enough disks, you can destroy disk groups created in other labs (for example, namedg) in order to create the testdg disk group.
4  Before running the automated lab scripts, set the DG environment variable in your root profile to the name of the test disk group that you are using (see the example after this list). Rerun your profile by logging out and logging back on, or manually running it.
5  Ask your instructor for the location of the lab scripts.
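For example, steps 3 and 4 can be carried out as follows; the device names are placeholders for your assigned data disks, and the profile syntax assumes a Bourne-style shell:

    # Initialize three disks and create the test disk group:
    /etc/vx/bin/vxdisksetup -i c1t1d0
    /etc/vx/bin/vxdisksetup -i c1t2d0
    /etc/vx/bin/vxdisksetup -i c1t3d0
    vxdg init testdg testdg01=c1t1d0 testdg02=c1t2d0 testdg03=c1t3d0

    # In root's profile, set and export the variable used by the lab scripts:
    DG=testdg
    export DG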
Resolving Plex Problems: Temporary Failure

In this lab exercise, a temporary disk failure is simulated. By using the vxmend command, you must select the plex that has the correct data and recover the volume by using the clean plex. If you select the wrong plex as the clean plex, then you have corrupted the data. The lab script run_states sets up the test volume configuration, simulates a disk failure, and validates your solution for recovering the volume. Ask your instructor for the location of the run_states script.
Before You Begin: Check to ensure that the environment variable DG is set to the name of the testdg disk group.
1  From the directory that contains the lab scripts, run the script run_states, and select option 1, "Turned off drive (temporary failure)". This script sets up a mirrored volume named test.
   Note: If you receive an error message about the /image file system becoming full during volume setup, ignore the error message. This error will not have any impact on further lab steps or lab results.
2  Read the instructions in the lab script window. The script simulates a disk power-off by saving and overwriting the private region on the drive that is used by the volume. Then, when you are ready to power the disk back on, the script restores the private region as it was before the failure.
3  Assume that the failure was temporary. In a second terminal window, attempt to recover the volume (one possible sequence is sketched after this exercise). Note that the second plex is already in the STALE state before the drive fails.
4  After you recover the volume, type e in the lab script window. The script verifies whether your solution is correct.
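A typical recovery sequence for the temporary-failure case is sketched below. The volume name test comes from the script; the plex names test-01 and test-02 follow VxVM's usual naming convention and are assumptions, so confirm the actual names and states with vxprint before acting.

    # After "powering the disk back on," rescan and reattach the disk:
    vxdctl enable
    /etc/vx/bin/vxreattach

    # Mark the plex that holds the good data CLEAN, then start the volume;
    # the remaining STALE plex is resynchronized from the clean one:
    vxmend -g testdg fix clean test-01
    vxvol -g testdg start test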
Resolving Plex Problems: Permanent Failure

In this lab exercise, a permanent disk failure is simulated. By using the vxmend command, you must select the plex that has the correct data and recover the volume by using the clean plex. If you select the wrong plex as the clean plex, then you have corrupted the data. The lab script run_states sets up the test volume configuration, simulates a disk failure, and validates your solution for recovering the volume. Ask your instructor for the location of the run_states script.
Before You Begin: Check to ensure that the environment variable DG is set to the name of the testdg disk group.
1  From the directory that contains the lab scripts, run the script run_states, and select option 2, "Power failed drive (permanent failure)". This script sets up a mirrored volume named test.
   Note: If you receive an error message about the /image file system becoming full during volume setup, ignore the error message. This error will not have any impact on further lab steps or lab results.
2  Read the instructions in the lab script window. The script simulates a disk power-off by saving and overwriting the private region on the drive that is used by the volume. I/O is started so that VxVM detects the failure, and VxVM detaches the disk.
3  In a second terminal window, replace the permanently failed drive with either a new disk at the same SCSI location or another disk at another SCSI location (a sketch follows this exercise). Note that the new disk does not have any data on it. The other plex of the volume became STALE ten minutes before the drive failed. However, it still has your data, but data from the last ten minutes is missing.
4  After you recover the volume, type e in the lab script window. The script verifies whether your solution is correct.
5  When you have completed this exercise, if the disk device that was originally used during disk failure simulation is in the online invalid state, reinitialize the disk to prepare for later labs.
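One command-line form of the replacement is sketched below; vxdiskadm's "Replace a failed or removed disk" menu option drives the same sequence through menus. The device name c1t4d0 is a placeholder for whichever new disk you use, and the plex name test-02 is an assumption to verify with vxprint.

    # Initialize the replacement disk and attach it under the old disk media name:
    /etc/vx/bin/vxdisksetup -i c1t4d0
    vxdg -g testdg -k adddisk testdg01=c1t4d0

    # If the surviving plex is STALE, mark it CLEAN first, then recover:
    vxmend -g testdg fix clean test-02
    vxrecover -g testdg -s test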
Resolving Plex Problems: Unknown Failure

In this lab exercise, an unknown failure is simulated. By using the vxmend command, you must select the plex that has the correct data and recover the volume by using the clean plex. If you select the wrong plex as the clean plex, then you have corrupted the data. The lab script run_states sets up the test volume configuration and validates your solution for recovering the volume. Ask your instructor for the location of the run_states script.
Before You Begin: Check to ensure that the environment variable DG is set to the name of the testdg disk group.
1  From the directory that contains the lab scripts, run the script run_states, and select option 3, "Unknown failure". This script sets up a mirrored volume named test that has three plexes.
   Note: If you receive an error message about the /image file system becoming full during volume setup, ignore the error message. This error will not have any impact on further lab steps or lab results.
2  Read the instructions in the lab script window. The script simulates an unknown failure that causes all plexes to be set to the STALE state. You are not provided with information about the cause of the problem with the plexes.
3  In a second terminal window, check each plex individually to determine if it has the correct data. To test if the plex has correct data, start the volume using that plex, and then, in the lab script window, press Return. The script output displays a message stating whether or not the plex has the correct data. Continue this process for each plex, until you determine which plex has the correct data (the test cycle is sketched after this exercise).
4  After you determine which plex has the correct data, recover the volume.
5  After you recover the volume, type e in the lab script window. The script verifies whether your solution is correct.
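The per-plex test cycle in step 3 can be driven with vxmend and vxvol, roughly as follows. The plex names test-01 through test-03 are illustrative; confirm them with vxprint.

    # Try one plex at a time: mark it CLEAN, start the volume, let the script
    # check the data, then stop the volume and return the plex to STALE
    # before trying the next one:
    vxmend -g testdg fix clean test-01
    vxvol -g testdg start test
    # ...press Return in the script window to check the data...
    vxvol -g testdg stop test
    vxmend -g testdg fix stale test-01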
Optional Lab Exercises

The next set of lab exercises is optional and may be performed if you have time. These exercises provide additional recovery scenarios for resolving plex problems with layered volumes. A final activity explores logging behavior following a system crash.

Optional Lab: Resolving Plex Problems: Temporary Failure with a Layered Volume

In this lab exercise, a temporary disk failure is simulated. By using the vxmend command, you must select the plex that has the correct data and recover the volume by using the clean plex. If you select the wrong plex as the clean plex, then you have corrupted the data. The lab script run_states sets up the test volume configuration, simulates a disk failure, and validates your solution for recovering the volume. Ask your instructor for the location of the run_states script.
Before You Begin: Check to ensure that the environment variable DG is set to the name of the testdg disk group.
1  From the directory that contains the lab scripts, run the script run_states, and select option 4, "Turned off drive with layered volume". This script sets up a concat-mirror volume named test.
   Note: If you receive an error message about the /image file system becoming full during volume setup, ignore the error message. This error will not have any impact on further lab steps or lab results.
2  Read the instructions in the lab script window. The script simulates a disk power-off by saving and overwriting the private region on the drive that is used by the volume, and I/O is started so that VxVM detects the failure. Then, when you are ready to power the disk back on, the script restores the private region as it was before the failure.
3  Assume that the failure was temporary. In a second terminal window, attempt to recover the volume. Note that the second plex is already in the STALE state before the drive fails.
4  After you recover the volume, type e in the lab script window. The script verifies whether your solution is correct.
Optional Lab: Resolving Plex Problems: Permanent Failure with a Layered Volume

In this lab exercise, a permanent disk failure is simulated. By using the vxmend command, you must select the plex that has the correct data and recover the volume by using the clean plex. If you select the wrong plex as the clean plex, then you have corrupted the data. The lab script run_states sets up the test volume configuration, simulates a disk failure, and validates your solution for recovering the volume. Ask your instructor for the location of the run_states script.
Before You Begin: Check to ensure that the environment variable DG is set to the name of the testdg disk group.
1  From the directory that contains the lab scripts, run the script run_states, and select option 5, "Power failed drive with layered volume". This script sets up a concat-mirror volume named test.
   Note: If you receive an error message about the /image file system becoming full during volume setup, ignore the error message. This error will not have any impact on further lab steps or lab results.
2  Read the instructions in the lab script window. The script simulates a disk power-off by saving and overwriting the private region on the drive that is used by the volume. I/O is started so that VxVM detects the failure, and VxVM detaches the disk.
3  In a second terminal window, replace the permanently failed drive with either a new disk at the same SCSI location or another disk at another SCSI location. Note that the new disk does not have any data on it. The other plex of the volume became STALE ten minutes before the drive failed. However, this plex still has your data, but data from the last ten minutes is missing.
4  After you recover the volume, type e in the lab script window. The script verifies whether your solution is correct.
5  When you have completed this exercise, destroy testdg.
Optional Lab: Exploring Logging Behavior During a System Crash

Note: This section requires console access to the lab system. If you are working in a Virtual Academy lab environment, skip this section.
1  List the imported disk groups on your system and destroy the testdg disk group if it still exists.
2  If you do not already have a disk group called namedg, create it using one of the already initialized disks.
3  Ensure that the namedg disk group has at least three disks in it. If necessary, add disks to the namedg disk group.
4  Create two mirrored, concatenated volumes, 500 MB in size, called vollog and volnolog in the namedg disk group.
5  Add a log to the volume vollog.
6  Create a file system on both volumes.
7  Create mount points for the volumes, /vollog and /volnolog.
8  Copy /etc/vfstab (on Solaris) or /etc/fstab (on Linux and HP-UX) to a file called origvfstab or origfstab.
9  Edit /etc/vfstab or /etc/fstab so that vollog and volnolog are mounted automatically on reboot. (In the file, each entry should be separated by a tab.)
   Note: On the Solaris platform, ensure that you set the mount at boot option to yes for both file systems.
10 Type mountall (on Solaris and HP-UX) or mount -a (on Linux) to mount the vollog and volnolog volumes.
11 As root, start an I/O process on each volume (see the sketch at the end of this exercise).
12 Simulate a system crash on your system by stopping it unexpectedly.
13 After the system is running again, check the state of the volumes to ensure that neither of the volumes is in the sync/need sync mode.
   Note: If you are not using file system logging for boot disk file systems, you may need to carry out file system checks for boot disk file systems on the console before the system becomes operational again.
14 Run the vxstat command to display statistical information about volumes and other VxVM objects (see the sketch below). For more information on this command, see the vxstat(1M) manual page. The output shows how many I/Os it took to resynchronize the mirrors. Compare the number of I/Os for each volume. What do you notice?
15 Unmount both file systems and remove the volumes vollog and volnolog.
16 Restore your original vfstab or fstab file.
17 Destroy the namedg disk group.
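The steps above that leave the exact commands open can be filled in along these lines; the file names, block sizes, and counts are arbitrary stand-ins, and vxassist is only one of several ways to build the volumes:

    # Steps 4-5: create the mirrored volumes and add a DRL log to one of them:
    vxassist -g namedg make vollog 500m layout=mirror
    vxassist -g namedg make volnolog 500m layout=mirror
    vxassist -g namedg addlog vollog logtype=drl

    # Step 11: background writes on each mounted volume:
    dd if=/dev/zero of=/vollog/testfile bs=65536 count=4000 &
    dd if=/dev/zero of=/volnolog/testfile bs=65536 count=4000 &

    # Step 14: after the crash and reboot, compare resynchronization I/O:
    vxstat -g namedg vollog volnolog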
Lab 2: Managing Devices Within the VxVM Architecture

In this lab, you explore the VxVM tools used to manage the device discovery layer (DDL) and dynamic multipathing (DMP). The objective of this exercise is to make you familiar with the commands used to administer multipathed disks.
For Lab Exercises, see Appendix A.
For Lab Solutions, see Appendix B.

In the VERITAS classroom (not Virtual Academy), you also explore dynamic multipathing through the use of two ports on the HDS disk array. In the classroom configuration, each LUN maps to two ports on the HDS, so that a system detects a LUN twice through a single HBA. Your instructor will change the classroom configuration at a certain point in the lab to enable access to the HDS ports, effectively switching from one path to two paths to each LUN.

Prerequisite Setup
To perform this lab, you need a lab system with Storage Foundation pre-installed, configured, and licensed. In addition to this, you also need a minimum of three external disks to be used during the labs.
Note: Part 1 of this lab can only be performed in the standard VERITAS classrooms that include an HDS disk array.
Before you begin this lab, destroy any data disk groups that are left from previous labs:
vxdg destroy diskgroup
Classroom Lab Values
In preparation for this lab, you will need the following information about your lab environment. For your reference, you may record the information here, or refer back to the first lab in the SF Fundamentals section where you initially documented this information.

Object                       Sample Value                  Your Value
root password                veritas
Host name                    train1
My Data Disks                Solaris: c1t#d0 - c1t#d5
                             HP-UX: c4t0d0 - c4t0d5
                             AIX: hdisk21 - hdisk26
                             Linux: sda - sdf
Location of Lab Scripts      /student/labs/sf/sf50
Location of the fp program   /student/labs/sf/sf50/bin
Introduction to DMP Labs

To explore the behavior of DMP in the classroom, this lab is organized into two sections:
Part 1: In this activity, you practice using DDL and DMP administrative commands while only one path is visible to each LUN.
Part 2: In this activity, the instructor changes the classroom configuration so that each LUN maps to both ports on the HDS, and each system detects two paths to a LUN. Explore additional exercises to experience when two paths are in use.
Note: Part 2 can only be performed in a standard Symantec classroom (not in a Virtual Academy or Mobile Academy lab environment).

Instructor Classroom Setup
Instructor: If you did not initialize the classroom zoning configuration prior to the start of class on day one, perform the following steps to initialize classroom zoning configurations. This must be completed prior to performing this lab.
1  Use the course_setup script: Select Classroom. (Setup scripts are all included in Classroom SAN configuration Version 2.)
   Select Function To Perform:
   1 - Select Zoning by Zone Name
   2 - Select Zoning and Hostgroup Configuration by Course Name
   3 - Select/Check Hostgroup Configuration
2  Select option 3 - Select/Check Hostgroup Configuration.
   Select HostGroup Configuration to be Configured:
   1 - Standard Mode: 2 or 4 node sharing, No DMP
   2 - DMP Mode: 2 node sharing, switchable between 1 path and 2 path access
   3 - Check active HDS Hostgroup Configuration
3  Select option 2 - DMP Mode. Wait and do not respond to prompts.
4  Exit to first level menu.
5  Select option 1 - Select Zoning by Zone Name.
   Select Zoning Configuration Required:
   1 - Mode 1: 6 sets of 2 Systems sharing 12 LUNs, no Tape Library available (HDS DMP Mode - 6 x 2 Systems - Single Path to 12 LUNs)
   2 - Mode 2: 3 sets of 4 Systems sharing 24 LUNs, no Tape Library available (HDS DMP Mode - 6 x 2 Systems - Dual Paths to 12 LUNs)
6  Select option 1 - Mode 1 (single path to 12 LUNs).
7  Select option 4 - Solaris as the OS.
8  Exit out of the course_setup script.
9  Reboot each system using reboot -- -r.
Part 1: Exploring DMP (Single Path Visible)

Administering the Device Discovery Layer
1  Display the JBODs currently supported on your system by using VxVM's device discovery layer utility, vxddladm. Use the manual pages to identify the option you need to use with the vxddladm command.
2  List all currently supported disk arrays.
   Note: If your lab environment is using a Hitachi 9500 array, note that this array is included in the libvxhdsalua library that is already included in VxVM 5.0 by default.
3  List all the enclosures connected to your system using the vxdmpadm listenclosure all command. Does Volume Manager recognize the disk array you are using in your lab environment? What is the name of the enclosure?
4  Set your system to use enclosure-based naming.
5  Display the disks attached to your system and note the changes.
6  Rename the enclosure to yourname using the vxdmpadm setattr command. To find the exact command syntax, check the manual pages for the vxdmpadm command.
   Note: The original name of the enclosure is displayed by the vxdmpadm listenclosure all command that you used in step 3.
7  Launch VEA, connect to your local system, and notice any differences in how disks are represented.
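In command form, the device discovery steps look roughly like the following; the enclosure name enc0 and the new name are placeholders, and step 4 is typically driven through the vxdiskadm menu item for changing the disk naming scheme. See the vxddladm(1M) and vxdmpadm(1M) manual pages for details.

    vxddladm listjbod                 # step 1: JBODs supported by DDL
    vxddladm listsupport              # step 2: supported disk array libraries
    vxdmpadm listenclosure all        # step 3: enclosures and their status
    vxdmpadm setattr enclosure enc0 name=yourname   # step 6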
Displaying DMP Information
1  List all controllers on your system using the vxdmpadm listctlr all command. How many controllers are listed for the disk array your system is connected to?
2  Display all paths connected to the controller listed for the disk array on your system using the vxdmpadm getsubpaths ctlr=controller command. Compare the NAME and the DMPNODENAME columns in the output.
3  In the displayed list of paths, use the DMP node name of one of the paths to display information about paths that lead to the particular LUN. How many paths can you see?

In the next three sections, you will investigate preventing multipathing to a specific device, changing DMP I/O policies, and displaying DMP statistics. If you are working in an environment where the SAN zoning can be changed to provide dual paths to the disk devices, for example in a standard Symantec classroom, skip these sections and start with Part 2: Exploring DMP (Dual Paths Visible). The same sections will be repeated when dual paths to disk devices are available.
Note: Dual paths to disk devices are not available in Virtual Academy or Mobile Academy lab environments.
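These steps use the vxdmpadm inquiry commands, for example (the controller and DMP node names below are placeholders to be taken from your own output):

    vxdmpadm listctlr all                        # step 1: all controllers
    vxdmpadm getsubpaths ctlr=c2                 # step 2: paths through controller c2
    vxdmpadm getsubpaths dmpnodename=c2t0d0s2    # step 3: all paths to one LUN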
Preventing/Allowing Multipathing for a Device
Note: Perform this section only if you cannot change your environment to use dual paths to disks. Otherwise, skip to Part 2: Exploring DMP (Dual Paths Visible).
1  List the paths for each DMP node name displayed in the enclosure-based naming scheme to identify two of the disks that were assigned to you.
   Note: Alternatively, you can use the output of the vxdmpadm getsubpaths ctlr=controller command to find the path corresponding to each DMP node name. On the Solaris platform, the vxdisk -e list command also provides information about the native OS name that corresponds to each DMP node name.
2  Create a disk group named namedg that contains the two disks you identified in step 1.
3  Display multipathing information for one of the disks in the namedg disk group.
4  Select a device in the namedg disk group on your system, and prevent multipathing for that device. Note that you have to exit the vxdiskadm menu completely before the change takes effect.
   Note: When you are prompted to enter the disk name, you have to enter the actual device name, not the DMP node name. On the Solaris platform, you can use the list option to identify the actual device names that correspond to the DMP node names.
5  Verify that multipathing has been prevented for the device.
6  Run the vxdisk -o alldgs list command and notice the name and location of the disk in the list.
7  Re-enable multipathing for this device and verify your action. Note that you have to exit the vxdiskadm menu completely before the change takes effect.
   Note: Depending on your platform, the vxdiskadm menu may prompt you to reboot your system. Reboot your system if prompted by the vxdiskadm menu.
8  Run the vxdisk -o alldgs list command and notice the name and location of the disk in the list.
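A compact way to set up and verify this section from the command line is sketched below; the device names are placeholders, and the prevent/allow operations themselves are performed through the vxdiskadm menu options named in the steps.

    # Steps 2-3: create the two-disk group, then inspect one member:
    vxdg init namedg namedg01=c1t1d0 namedg02=c1t2d0
    vxdisk list c1t1d0        # the Multipathing information section lists the paths

    # Steps 6 and 8: check how the device is presented after each change:
    vxdisk -o alldgs list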
Displaying DMP Statistics
Note: Perform this section only if you cannot change your environment to use dual paths to disks. Otherwise, skip to Part 2: Exploring DMP (Dual Paths Visible).
1  Create a 1-GB volume named namevol1 in the namedg disk group.
2  Enable the gathering of I/O statistics for DMP.
3  Reset the DMP I/O statistics counters to zero.
4  Next, you will use a simple performance utility, called fp, to generate I/O on the disk used by the namedg disk group. Ask your instructor for the location of the program. In a different terminal window, start several invocations of the fp program by using the following command:
   /script_location/fp_platform /dev/vx/rdsk/namedg/namevol1 1048576 32 99999 rw &
   To create enough I/O resistance, use the vi editor and copy about 10 of these lines into a file called /tmp/testscript, and then run the script.
   Note: Make sure that you are using the correct version of the fp program for your platform, for example, fp_sun.
5  In the original terminal window, display I/O statistics for all controllers.
6  Display I/O statistics for the DMP node that corresponds to the device used by namevol1. Display statistics every two seconds, four times.
   Note: You can use the vxprint -g namedg -htr namevol1 command to identify the DMP node name of the device used by namevol1.
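The statistics steps map onto the vxdmpadm iostat subcommands, roughly as follows (the DMP node name is a placeholder; identify yours with vxprint as noted in step 6):

    vxdmpadm iostat start                         # step 2: enable statistics gathering
    vxdmpadm iostat reset                         # step 3: zero the counters
    vxdmpadm iostat show all                      # step 5: all controllers and paths
    vxdmpadm iostat show dmpnodename=c2t0d0s2 interval=2 count=4   # step 6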
Managing Array Policies
Note: Perform this section only if you cannot change your environment to use dual paths to disks. Otherwise, skip to Part 2: Exploring DMP (Dual Paths Visible).
1  Display the current I/O policy for the enclosure you are using.
2  Change the current I/O policy for the enclosure to stop load-balancing and only use multipathing for high availability.
3  Display the new I/O policy attribute.
4  Kill all fp processes.
5  Destroy the namedg disk group.
6  Set your system back to traditional naming.
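The policy steps correspond to vxdmpadm attribute operations like the following; the enclosure name enc0 is a placeholder, so confirm yours with vxdmpadm listenclosure all first.

    vxdmpadm getattr enclosure enc0 iopolicy                 # step 1: current policy
    vxdmpadm setattr enclosure enc0 iopolicy=singleactive    # step 2: HA only, no load balancing
    vxdmpadm getattr enclosure enc0 iopolicy                 # step 3: verify the change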
Part 2: Exploring DMP (Dual Paths Visible)
Note: The following labs are intended for a multipath environment and do not make sense with a single path. These activities may be performed only in the Symantec classroom environment, not in the Virtual Academy or Mobile environments.

Instructor Classroom Setup
Instructor: Perform the following steps to switch dual paths on. Switch to zone configuration 2 to enable a second path to the LUNs:
1  Use the course_setup script: Select Classroom. (Setup scripts are all included in Classroom SAN configuration Version 2.)
   Select Function To Perform:
   1 - Select Zoning by Zone Name
   2 - Select Zoning and Hostgroup Configuration by Course Name
   3 - Select/Check Hostgroup Configuration
2  Select option 1 - Select Zoning by Zone Name.
   Select Zoning Configuration Required:
   1 - Mode 1: 6 sets of 2 Systems sharing 12 LUNs, no Tape Library available (HDS DMP Mode - 6 x 2 Systems - Single Path to 12 LUNs)
   2 - Mode 2: 3 sets of 4 Systems sharing 24 LUNs, no Tape Library available (HDS DMP Mode - 6 x 2 Systems - Dual Paths to 12 LUNs)
3  Select option 2 to switch to dual paths.
4  Select option 4 - Solaris as the OS.
5  Exit out of the course_setup script.
6  Reboot each system using reboot -- -r.

Displaying DMP Information
1  List all controllers on your system using the vxdmpadm listctlr all command. How many controllers are listed for the disk array your system is connected to?
2  Display all paths connected to the controller listed for the disk array on your system using the vxdmpadm getsubpaths ctlr=controller command. Compare the NAME and the DMPNODENAME columns in the output.
3  In the displayed list of paths, use the DMP node name of one of the paths to display information about paths that lead to the particular LUN. How many paths can you see?

Preventing/Allowing Multipathing for a Device
1  List the paths for each DMP node name displayed in the enclosure-based naming scheme to identify two of the disks that were assigned to you.
2  Create a disk group named namedg that contains the two disks you identified in step 1.
3  Display multipathing information for one of the disks in the namedg disk group.
4  Select a device in the namedg disk group on your system, and prevent multipathing for that device. Note that you have to exit the vxdiskadm menu completely before the change takes effect.
   Note: When you are prompted to enter the disk name, you have to enter the actual device name, not the DMP node name. On the Solaris platform, you can use the list option to identify the actual device names that correspond to the DMP node names.
5  Verify that multipathing has been prevented for the device.
6  Run the vxdisk -o alldgs list command and notice the names and location of the disk in the list.
7  Re-enable multipathing for this device and verify your action. You need to enable multipathing for both paths of the device. Note that you have to exit the vxdiskadm menu completely before the change takes effect.
   Note: Depending on your platform, the vxdiskadm menu may prompt you to reboot your system. Reboot your system if prompted by the vxdiskadm menu.
8  Run the vxdisk -o alldgs list command and notice the names and location of the disk in the list.
Displaying DMP Statistics
1  Create a 1-GB volume named namevol1 in the namedg disk group.
2  Enable the gathering of I/O statistics for DMP.
3  Reset the DMP I/O statistics counters to zero.
4  Next, you will use a simple performance utility, called fp, to generate I/O on the disk used by the namedg disk group. Ask your instructor for the location of the program. In a different terminal window, start several invocations of the fp program by using the following command:
   /script_location/fp_platform /dev/vx/rdsk/namedg/namevol1 1048576 32 99999 rw &
   To create enough I/O resistance, use the vi editor and copy about 10 of these lines into a file called /tmp/testscript, and then run the script.
   Note: Make sure that you are using the correct version of the fp program for your platform, for example, fp_sun.
5  In the original terminal window, display I/O statistics for all controllers.
6  Display I/O statistics for the DMP node that corresponds to the device used by namevol1. Display statistics every two seconds, four times.
   Note: You can use the vxprint -g namedg -htr namevol1 command to identify the DMP node name of the device used by namevol1.
7  Kill all fp processes.

Managing Array Policies
1  Display the current I/O policy for the enclosure you are using.
2  Change the current I/O policy for the enclosure to stop load-balancing and only use multipathing for high availability.
3  Display the new I/O policy attribute.
4  Reset the DMP I/O statistics counters to zero.
5  Next, you will use the fp program again to generate I/O on the disk used by the namedg disk group. Ask your instructor for the location of the program.
   In a different terminal window, start several invocations of the fp program by using the following command:
   /script_location/fp_platform /dev/vx/rdsk/namedg/namevol1 1048576 32 99999 rw &
   To create enough I/O resistance, use the vi editor and copy about 10 of these lines into a file called /tmp/testscript, and then run the script.
   Note: Make sure that you are using the correct version of the fp program for your platform, for example, fp_sun.
6  In the original terminal window, display I/O statistics for all controllers.
7  Display I/O statistics for the DMP node that corresponds to the device used by namevol1. Display statistics every two seconds, four times. Compare the output to the output you observed before changing the DMP policy to singleactive.
   Note: You can use the vxprint -g namedg -htr namevol1 command to identify the DMP node name of the device used by namevol1.
8  Kill all fp processes.
9  Change the DMP I/O policy back to its default value (round-robin).
10 Destroy the namedg disk group.
11 Set your system back to traditional naming.

Managing the DMP Restore Daemon
1  Check the status of the DMP restore daemon. Note the values of the daemon interval and policy.
2  Change the restore daemon interval to 400 seconds and change the policy to analyze all paths in the system.
3  Verify the changes that you made.
4  Change the daemon interval and policy back to the original values.
5  Verify the changes that you made.
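The restore daemon steps above correspond to vxdmpadm operations along these lines; the interval is in seconds, check_all is one of the supported policies, and the original values on your system may differ:

    vxdmpadm stat restored                                 # steps 1 and 3: show interval and policy
    vxdmpadm stop restore                                  # the daemon must be stopped before a change
    vxdmpadm start restore interval=400 policy=check_all   # step 2: restart with new values
    vxdmpadm stat restored                                 # verify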
Lab 3: Encapsulation and Rootability

In this practice, you create a boot disk mirror, disable the boot disk, and boot up from the mirror. Then you boot up again from the boot disk, break the mirror, and remove the boot disk from the boot disk group. Finally, you reencapsulate the boot disk and re-create the mirror. These tasks are performed using a combination of the VEA interface, the vxdiskadm utility, and CLI commands.
For Lab Exercises, see Appendix A.
For Lab Solutions, see Appendix B.

Prerequisite Setup
To perform this lab, you need a lab system with Storage Foundation pre-installed, configured, and licensed. In addition to this, you also need a second internal disk to be able to mirror the system disk. On the HP-UX platform, you also need three external disks to carry out the labs on LVM to VxVM conversion. Some of the lab steps may require console access. If you are working in a Virtual Academy lab environment where you do not have console access to the lab system, you will be asked to skip these steps.
Classroom Lab Values
In preparation for this lab, you will need the following information about your lab environment. For your reference, you may record the information here, or refer back to the first lab in the SF Fundamentals section where you initially documented this information.

Object              Sample Value                  Your Value
root password       veritas
Host name           train1
My Boot Disk        Solaris: c0t0d0
                    HP-UX: c1t15d0
                    AIX: hdisk0
                    Linux: hda
2nd Internal Disk   Solaris: c0t2d0
                    HP-UX: c3t15d0
                    AIX: hdisk1
                    Linux: hdb
My Data Disks       Solaris: c1t#d0 - c1t#d5
                    HP-UX: c4t0d0 - c4t0d5
                    AIX: hdisk21 - hdisk26
                    Linux: sda - sdf
Solaris and Linux Only: Encapsulation and Boot Disk Mirroring
Note: The encapsulation and boot disk mirroring labs vary by platform due to the way in which the boot disk is handled by the operating system. This lab section applies to Solaris and Linux only. Labs for HP-UX are presented in the next section.

Encapsulation and Boot Disk Mirroring - Solaris and Linux
1  Use vxdiskadm to encapsulate the boot disk. Use systemdg as the name of your boot disk group and use rootdisk as the name of your boot disk.
   Note: If you are accessing your lab system remotely, as in a Virtual Academy lab environment, you will lose your connection to your lab system when you reboot your system after boot disk encapsulation. Wait for the system to come back up to reconnect. If you cannot log back in within a reasonable amount of time, contact your instructor.
2  After the reboot, use vxdiskadm to add a disk that will be used for the mirror of rootdisk. If your system has two internal disks, use the second internal disk on your system for the mirror. (This is required due to the nature of the classroom configuration.) When setting up the disk, ensure that the disk layout is sliced. Use altboot as the name of your disk.
3  Next, use vxdiskadm to mirror your system disk, rootdisk, to the disk that you added, altboot.
4  After the mirroring operation is complete, verify that you now have two disks in systemdg, rootdisk and altboot, and that all volumes are mirrored. Also, check to determine if rootvol is enabled and active. (A sketch appears at the end of this section.)
   Hint: Use vxprint and examine the STATE fields.
5  Place the names of your alternate boot disks in persistent storage.
   Note: The following steps (steps 6-11) in this lab section require console access. If you are working in a Virtual Academy lab environment with no console access, skip to the last step (step 12) of this section.
6  Test that the mirror of the system disk is bootable.
7  Now that you are running off the original boot disk, fail the disk. The system will continue to run because you have a mirror of the disk.
   a  Fail the system disk.
      To disable the boot disk and make rootvol-01 disabled and offline, use the vxmend command. This command is used to make changes to configuration database records. Here, you are using the command to place the plex in an offline state. For more information about this command, see the vxmend(1M) manual page.
   b  Verify that rootvol-01 is now disabled and offline.
   c  To change the plex to a STALE state, run the vxmend on command on rootvol-01. Verify that rootvol-01 is now in the DISABLED and STALE state.
8  Now that you have simulated the failure of the original boot disk, reboot the system and boot up on the mirror.
   a  Reboot the system using init 6.
   b  The system stops during the reboot at the ok prompt and indicates that you can boot from the alternate boot disk using the vx-altboot device alias.
9  After the system comes back up, check the status of the root volume. What is the state of the volume? Use the vxtask list command to see the progress of the resynchronization of the root volume.
10 After the synchronization is complete, verify the status of rootvol. Verify that rootvol-01 is now in the ENABLED and ACTIVE state.
   Note: You may need to wait a few minutes for the state to change from STALE to ACTIVE. You have successfully booted up from the mirror, and the volumes have been resynchronized.
11 Your system is currently booted up from the boot disk mirror. To boot up from the original boot disk, reboot again using init 6. You have now booted up from the original boot disk.
   Note: If you are working in a Virtual Academy lab environment with no console access, you can continue with step 12.
12 Remove the mirror of the boot disk. Using VEA, remove all but one plex of rootvol, swapvol, usr, var, opt, and home (that is, remove the newer plex from each volume in systemdg). In preparation for the next lab, leave the boot disk encapsulated, but not mirrored.
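Several of the steps above have straightforward command-line forms; a sketch follows. The plex names assume VxVM's usual volume-01/volume-02 naming, so confirm yours with vxprint before acting.

    # Steps 4-5: verify the mirrored layout, and (on Solaris) let the OBP use
    # the stored device aliases:
    vxprint -g systemdg -ht
    eeprom "use-nvramrc?=true"

    # Step 7: take the original boot plex offline, then mark it STALE:
    vxmend -g systemdg off rootvol-01
    vxmend -g systemdg on rootvol-01      # plex is now DISABLED/STALE

    # Step 12 (CLI alternative to VEA): dissociate and remove the newer plex:
    vxplex -g systemdg -o rm dis rootvol-02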
Optional Lab: Unencapsulating - Solaris and Linux
If you want to test the vxunroot process for unencapsulating the boot disk, you can do the following steps. However, you need the boot disk encapsulated for the next lab, so after performing this optional exercise, you will need to encapsulate, but not mirror, your boot disk.
1  Run the command to convert the root volumes back to disk partitions (see the sketch below).
2  Shut down and restart the system when prompted.
   Note: If you are accessing your lab system remotely, as in a Virtual Academy lab environment, you will lose your connection to your lab system when you reboot your system. Wait for the system to come back up to reconnect. If you cannot log back in within a reasonable amount of time, contact your instructor.
3  Verify that the mount points are now slices rather than volumes.
4  In preparation for the next lab, leave the boot disk encapsulated, but not mirrored.
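For reference, the conversion command and a quick check afterward (the vxunroot path is as shipped with VxVM):

    /etc/vx/bin/vxunroot
    # After the reboot, confirm that /, /usr, /var, and so on are on slices:
    df -k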
HP-UX Only: Putting the Boot Disk Under VxVM Control and Boot Disk Mirroring
Note: The encapsulation and boot disk mirroring labs vary by platform due to the way in which the boot disk is handled by the operating system. The following lab sections apply to HP-UX only. Labs for Solaris and Linux are presented in the previous sections.

Putting the Boot Disk Under VxVM Control and Boot Disk Mirroring - HP-UX
Note: This lab section requires console access. If you are working in a Virtual Academy lab environment with no console access, skip this lab section and go to the LVM to VxVM Conversion lab section.
1  Take the system into single user mode (init level 1).
2  Check the status of the second internal disk using the vxdisk list command. If the disk is displayed as an LVM disk, ensure that it is not used by any active LVM volume groups and take it out of LVM control using the pvremove command.
   Note: If the pvremove command fails due to exported volume group information left on the disk, re-create an LVM header using the force option (pvcreate -f /dev/rdsk/device_name) before using the pvremove command to remove it.
3  Check the values of the primary and alternate boot devices using the setboot command.
4  Create a bootable VxVM system disk on the second internal disk using the vxcp_lvmroot command and make this disk the primary boot disk (a sketch of the command form appears at the end of this lab). Use systemdg as the disk group to put the boot disk in.
5  When the vxcp_lvmroot command completes, check the output of the setboot command and verify that the primary path is the VxVM disk.
6  Reboot the system. After it boots up, verify that it is booted on VxVM volumes by checking the output of the bdf command.
7  Destroy the internal disk that was used as the LVM system disk.
8  Mirror the system disk to the other internal disk that used to be the LVM disk.
   Note: This operation can take some time depending on the sizes of the volumes on your system disk.
9  Verify the primary and alternate boot paths and check the volume layouts of the volumes in the bootdg.
10 Reboot your system using shutdown -ry now and interrupt the automatic boot process.
11 Boot the system using the alternate boot disk.
12 When the system is up, disable the root volume plex that is on the primary boot device using the vxmend off command followed by vxmend on.
13 Reboot your system using shutdown -ry now and follow the boot-up messages. What did you observe?
14 Reset the system using the Service Processor login.
15 Interrupt the automatic boot process and boot the system using the alternate boot device.
16 When the system is up and running, display the state of the root volume and plexes. Check if there are any synchronization tasks being carried out by Volume Manager using the vxtask list command. Wait for the synchronization to complete.
17 When the state of the first rootvol plex changes back to ACTIVE, remove the second mirrors on the alternate disk from each volume. Take the alternate disk out of the disk group and uninitialize it.
18 Take the system to single user mode by executing init 1.
19 Create a copy of the system disk on an LVM disk using the vxres_lvmroot command. Do not make the LVM disk the primary boot disk.
20 Reboot your system using shutdown -ry now and interrupt the automatic boot process.
21 Boot the system using the alternate boot disk.
22 When the system is up, verify that the system is booted on the LVM disk by executing the bdf command, and display the file system table and the /stand/bootconf file.
23 Reboot your system using shutdown -ry now, and when the system is back up, verify that the system is booted on VxVM volumes, and display the file system table and the /stand/bootconf file.

LVM to VxVM Conversion - HP-UX
1  Create two LVM physical volumes using the pvcreate command on two uninitialized external disks as follows:
   pvcreate /dev/rdsk/device_tag1
   pvcreate /dev/rdsk/device_tag2
   Note: If you do not have enough uninitialized external disks, you may need to uninitialize the empty disks that are under VxVM control before creating the LVM physical volumes.
2  Create a volume group called vg01 using the physical volumes as follows:
   mkdir /dev/vg01
   mknod /dev/vg01/group c 64 0x020000
   vgcreate /dev/vg01 /dev/dsk/device_tag1 /dev/dsk/device_tag2
3  Using the lvcreate command, create two logical volumes, one concatenated and one striped, of size 100 MB in the vg01 volume group as follows (use 32K as the stripe unit):
   lvcreate -L 100 -n concatvol1 /dev/vg01
   lvcreate -i 2 -I 32 -L 100 -n stripevol1 /dev/vg01
4  Make VxFS file systems on both volumes and mount them to two new directories called /concat and /stripe.
5  Edit the /etc/fstab file and create the corresponding entries for the file systems by adding the following lines:
   /dev/vg01/concatvol1 /concat vxfs log 0 2
   /dev/vg01/stripevol1 /stripe vxfs log 0 2
6  Using the vgcfgbackup command, back up the LVM configuration for the volume group that you created in the first part of the lab.
7  Unmount the file systems and run the conversion tool: vxvmconvert.
8  Examine the file /etc/fstab, and the directories /dev/vg01 and /dev/vx/[r]dsk. What changes have been made?
9  Change the volume name of concatvol1 to vol01 after the conversion. Change the corresponding /etc/fstab entry to:
/dev/vx/dsk/dg01/vol01 /concat vxfs log 0 2
10 Remount the file systems.
11 After the conversion to VxVM completes successfully, create a VxVM disk group called testdg on another external disk.
12 Unmount the file systems and roll back to the LVM configuration using vxvmconvert option 3. What did you observe?
13 Remove the entries for /concat and /stripe from /etc/fstab.
14 Using the lvremove command, remove the volumes concatvol1 and stripevol1.
15 Destroy the volume group vg01.
16 Convert the empty LVM physical volumes to VxVM by removing them from LVM control and then initializing them using VxVM.
17 Destroy the testdg disk group.
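A sketch of the backup, conversion, and cleanup commands used in this section. The vxvmconvert tool is menu-driven, so only its invocation is shown, and option numbers can vary by version; the vgreduce/vgremove pair reflects standard HP-UX LVM cleanup:

    vgcfgbackup /dev/vg01                  # step 6: back up the LVM configuration
    umount /concat
    umount /stripe
    vxvmconvert                            # step 7 (and option 3 for the rollback in step 12)
    lvremove /dev/vg01/concatvol1 /dev/vg01/stripevol1   # step 14
    vgreduce /dev/vg01 /dev/dsk/device_tag2              # step 15: reduce to one PV,
    vgremove /dev/vg01                                   # then remove the volume group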
Lab 4
Lab 4: Troubleshooting the Boot Process
In this lab, you practice recovering from encapsulated boot disk failure scenarios. On the Solaris platform, to investigate and practice recovery techniques, you will use a set of interactive lab scripts. Each script simulates a failure in the encapsulated boot disk (and its mirror, if required) and reboots the system.
For Lab Exercises, see Appendix A.
For Lab Solutions, see Appendix B.
The Lab Solutions for this lab are located on the following page: "Lab 4 Solutions: Troubleshooting the Boot Process," page B-51.
Prerequisite Setup
To perform this lab, you need a lab system with Storage Foundation pre-installed, configured, and licensed. In addition to this, you also need a second internal disk to be used as an alternative boot disk. If you have completed the previous labs, you should have the following setup:
On the Solaris platform, you should have your system disk under VxVM control (encapsulated) but not mirrored.
On the HP-UX platform, you should have the system disk under VxVM control and you should have the second internal disk configured as an alternative LVM boot disk.
Note: This lab requires console access. If you are working in a Virtual Academy lab environment with no console access, you cannot perform this lab.
Classroom Lab Values
In preparation for this lab, you will need the following information about your lab environment. For your reference, you may record the information here, or refer back to the first lab in the SF Fundamentals section where you initially documented this information.

Object                            Sample Value              Your Value
root password                     veritas
Host name                         train1
Host name of the system
sharing disks with my system      train2
My Boot Disk                      Solaris: c0t0d0
                                  HP-UX: c1t15d0
                                  AIX: hdisk0
                                  Linux: hda
2nd Internal Disk                 Solaris: c0t2d0
                                  HP-UX: c3t15d0
                                  AIX: hdisk1
                                  Linux: hdb
Location of Lab Scripts           /student/labs/sf/sf50
Solaris Only: Troubleshooting the Boot Process
Note: The boot process troubleshooting labs vary by platform due to the way in which the boot disk is handled by the operating system. This lab section applies to Solaris only. Labs for HP-UX are presented in the next section.
Note: These labs require console access. If you are working in a Virtual Academy lab environment with no console access, you cannot perform these labs.
Troubleshooting the Boot Process - Solaris
Overview
Your goal is to recover from the problem as described in each scenario. Use your knowledge of VxVM administration, as well as the VxVM recovery tools and concepts described in the lesson, to determine what steps to take to ensure recovery. You succeed when you solve the problem with the boot disk and boot to multiuser mode.
For most of the recovery problems, you can use any of the VxVM interfaces: the command line interface, the VERITAS Enterprise Administrator (VEA) graphical user interface, or the vxdiskadm menu interface. Lab solutions are provided for only one method. If you have questions about recovery using interfaces not covered in the solutions, see your instructor.
Setup
1 In this lab, the automated lab scripts prompt you to reboot the system. If the reboot fails, ask your instructor how to bring the system down. If your system is set to use enclosure-based naming, then you must turn off enclosure-based naming before running the lab scripts.
2 These labs require the system disk to be encapsulated. If your system disk is not encapsulated, you must encapsulate it before proceeding with this lab. Use vxdiskadm to encapsulate the boot disk. Use systemdg as the name of your boot disk group and use rootdisk as the name of your boot disk.
3 You must have at least one additional disk that is the same size (or larger) as your boot disk. You are instructed to create a mirror of the boot disk in the second exercise.
4 Ask your instructor for the location of the lab scripts.
Recovering from Encapsulated, Unmirrored Boot Disk Failure
In this lab exercise, you attempt to recover from encapsulated, unmirrored boot disk failure. You succeed when you recover the system disk and boot to multiuser mode. The lab script run_root simulates a boot disk failure. Ask your instructor for the location of the run_root script.
1 This lab requires that the system disk is encapsulated, but not mirrored. If your system disk is mirrored, then remove the mirror.
2 Save a copy of the /etc/system file to /etc/system.preencap. In the new file (/etc/system.preencap), comment out the non-forceload lines related to VxVM (the lines that define the disk to be an encapsulated device).
3 From the directory that contains the lab scripts, run the script run_root, and select option 1, "Encapsulated, unmirrored boot disk failure."
Before You Begin: Ensure that the environment variable DG is set to the name of the bootdg disk group.
4 Follow the instructions in the lab script window. This script causes the only plex in rootvol to change to the STALE state. When you are ready, the system will be rebooted twice. Wait until the system reboot fails because of the STALE plex and you are presented with the OK prompt.
5 Recover the volume rootvol by using the /etc/system.preencap file that you created before the failure. You succeed when the system boots up to multiuser mode.
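A minimal sketch of step 2 and the step 5 recovery, assuming the standard Solaris encapsulation entries in /etc/system and a plex named rootvol-01 (verify both on your system before relying on them):

    cp /etc/system /etc/system.preencap
    # in /etc/system.preencap, comment out the two non-forceload VxVM lines:
    #   rootdev:/pseudo/vxio@0:0
    #   set vxio:vol_rootdev_is_volume=1

Then, at the OK prompt after the failure:

    ok boot -a
    # accept the defaults, but enter /etc/system.preencap when prompted
    # for the name of the system file

Once booted on the underlying disk slice, mark the stale plex clean and reboot:

    vxmend -g systemdg fix clean rootvol-01
    reboot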
Recovering from Encapsulated, Mirrored Boot Disk Failure (1)
In this lab exercise, you attempt to recover from encapsulated, mirrored boot disk failure. You succeed when you recover the system disk and boot to multiuser mode. The lab script run_root simulates a boot disk failure. Ask your instructor for the location of the run_root script.
1 Important: Mirror the boot disk. This lab requires that the system disk is encapsulated and mirrored. If your system disk is not currently mirrored, then mirror the system disk before continuing.
Note: Make sure that the use-nvramrc? eeprom parameter is set to true when you mirror the system disk, so that the device alias created by VxVM for the mirror disk can be used.
2 If you have not already done so, save a copy of the /etc/system file to /etc/system.preencap. In the new file (/etc/system.preencap), comment out the non-forceload lines related to VxVM (the lines that define the disk to be an encapsulated device).
3 From the directory that contains the lab scripts, run the script run_root, and select option 2, "Encapsulated, mirrored boot disk failure - 1."
Before You Begin: Ensure that the environment variable DG is set to the name of the bootdg disk group.
4 Follow the instructions in the lab script window. This script causes both plexes in rootvol to change to the STALE state. When you are ready, the system is rebooted. The system does not come up due to the STALE plex.
5 Recover the volume rootvol by using the /etc/system.preencap file that you created before the failure. You succeed when the system boots up to multiuser mode.
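A quick way to check and set the eeprom parameter mentioned in the note above (standard Solaris commands; the quotes keep the shell from expanding the question mark):

    eeprom | grep nvramrc        # view use-nvramrc? and the nvramrc contents
    eeprom "use-nvramrc?=true"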
Optional Lab Exercises: Solaris Only
The next set of lab exercises is optional and may be performed if you have time. These exercises provide additional recovery scenarios for troubleshooting the boot process on Solaris.
Optional Lab: Recovering from Encapsulated, Mirrored Boot Disk Failure (2)
In this lab exercise, you attempt to recover from encapsulated, mirrored boot disk failure. You succeed when you recover the system disk and boot to multiuser mode. The lab script run_root simulates a boot disk failure. Ask your instructor for the location of the run_root script.
1 Important: Mirror the boot disk. This lab requires that the system disk is encapsulated and mirrored. If your system disk is not currently mirrored, then mirror the system disk before continuing. If your system disk is already mirrored, ensure that all the plexes of the system disk volumes, and the volumes themselves, are in the ENABLED/ACTIVE state; that is, there are no synchronization processes running on the volumes on the system disk.
Note: Make sure that the use-nvramrc? eeprom parameter is set to true when you mirror the system disk, so that the device alias created by VxVM for the mirror disk can be used. Run the eeprom command and view the settings for devalias, such as vx-altboot and vx-rootdisk.
2 If you have not already done so, save a copy of the /etc/system file to /etc/system.preencap. In the new file (/etc/system.preencap), comment out the non-forceload lines related to VxVM (the lines that define the disk to be an encapsulated device).
3 From the directory that contains the lab scripts, run the script run_root, and select option 3, "Encapsulated, mirrored boot disk failure - 2."
Before You Begin: Ensure that the environment variable DG is set to the name of the bootdg disk group.
4 Follow the instructions in the lab script window. This script causes one of the plexes in rootvol to change to the STALE state. The clean plex is missing the /kernel directory, so you cannot boot up the system without recovery. When you are ready, the script reboots the system.
5 Recover the volume rootvol by using the /etc/system.preencap file that you created before the failure. You succeed when the system boots up to multiuser mode.
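To verify the states called for in step 1, assuming the boot disk group is named systemdg:

    vxprint -g systemdg -ht      # KSTATE/STATE should read ENABLED/ACTIVE
                                 # for the system volumes and their plexes
    vxtask list                  # should show no running synchronization tasks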
HP-UX Only: Troubleshooting the Boot Process
Note: The boot process troubleshooting labs vary by platform due to the way in which the boot disk is handled by the operating system. This lab section applies to HP-UX only. Labs for Solaris are presented in the previous section.
Note: These labs require console access. If you are working in a Virtual Academy lab environment with no console access, you cannot perform these labs.
Troubleshooting the Boot Process - HP-UX
Note: Before starting this lab, ensure that the system disk is under VxVM control and that there is an alternate LVM boot disk.
Part I
1 Reboot your system using shutdown -ry now and interrupt the automatic boot process.
2 Boot the system using the alternative boot disk.
3 When the system is up, verify that the system is booted on the LVM disk by executing the bdf command. Ensure that the systemdg disk group that is used for the VxVM boot disk is imported on the system.
4 Stop the rootvol volume and change the state of the only plex in the rootvol volume to STALE using the vxmend fix stale command.
5 Reboot your system using shutdown -ry now. Do not interrupt the boot process. Observe what happens.
6 Recover the VxVM boot disk using the maintenance mode boot without booting off the LVM system disk.
Part II
1 Ensure that you are booted off the VxVM system disk using the bdf command. Edit the /etc/vx/volboot file and modify the hostid entry to dummy.
2 Reboot your system using shutdown -ry now. Do not interrupt the boot process. Observe what happens.
3 Recover the VxVM boot disk using the maintenance mode boot without booting off the LVM system disk.
4 When the system comes back up, verify that you are running on the VxVM boot disk. Destroy the LVM boot disk on the other internal disk to free up the internal disk for later labs.
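A sketch of Part I step 4 and the LVM cleanup in step 4 above, assuming the rootvol plex is named rootvol-01 and the alternate LVM boot disk is c#t#d# (both are placeholders; check vxprint output and your lab values table):

    vxvol -g systemdg stop rootvol           # Part I step 4: only possible while
                                             # booted from the LVM disk
    vxmend -g systemdg fix stale rootvol-01
    ...
    vxdestroy_lvmroot -v c#t#d#              # Part II step 4: destroy the alternate
                                             # LVM boot disk (also used in the Lab 6 setup)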
    • symantec. Lab 5 Lab 5: Volume Maintenance In this lab, you practice volume maintenance activities, such as changing volume layouts and using the Storage Expert utility. Optional exercises provide additional practice on managing VxVM tasks. [ For Lab Exercises, see Appendix A. --"or Lab Solutions, see Appendix B. Lab 5: Volume Maintenance In this lab. you practice volume maintenance activities. such as changing volume layouts and using the Storage Expert utility. Optional exercises provide additional practice on managing Vx VM tasks. TIl: 1;lh Slllllil;js 1,,[ Ihi, I:rb alc,' :uc'a:r:ti on the t(lll(min!, page" "1.;iI) < 'inIUli"IlS: 'UIUllH' L.il1l,·n'IIKl'," pi!g' 1..6' Prerequisite Setup To perform this lab. you need a lab system with Storage Foundation pre-installed. configured and licensed. In addition to this, you also need at least lour disks to be used in a disk group. Copynqtu c' 2006 Symantec Corpora lion Ali nqrns reserved A-43Lab 5: Volume Maintenance
Classroom Lab Values
In preparation for this lab, you will need the following information about your lab environment. For your reference, you may record the information here, or refer back to the first lab in the SF Fundamentals section where you initially documented this information.

Object                            Sample Value              Your Value
root password                     veritas
Host name                         train1
My Data Disks                     Solaris: c1t#d0 - c1t#d5
                                  HP-UX: c4t0d0 - c4t0d5
                                  AIX: hdisk21 - hdisk26
                                  Linux: sda - sdf
Changing the Volume Layout
You can use either the VEA interface or the command line interface, whichever you prefer. The solutions for both methods are covered where appropriate. If you use object names other than the ones provided, substitute the names accordingly in the commands.
Note: If you are using VEA, view the properties of the related task after each step to view the underlying command that was issued.
1 Create a disk group called namedg with four disks.
2 Create a 20-MB concatenated mirrored volume called namevol1. Create a Veritas file system on the volume and mount it to /name1. If you use VEA to create and mount the file system, ensure that the file system is not added to the file system table.
3 Add data to the volume and verify that the file has been added.
4 Change the volume layout from its current layout (mirrored) to a nonlayered mirror-stripe with two columns and a stripe unit size of 128 sectors (64K). Monitor the progress of the relayout operation, and display the volume layout after each command that you run.
5 Verify that the file is still accessible.
6 Unmount the file system on the volume and remove the volume.
Using the Storage Expert Utility
1 Add the directory containing the Storage Expert rules to your PATH environment variable in your .profile file.
2 Display a description of Storage Expert rule vxse_drl1. What does this rule do?
3 Does Storage Expert rule vxse_drl1 have any user-settable parameters?
4 From the command line, create a 100-MB mirrored volume with no log called namevol1 in the namedg disk group. Create a file system on the volume and mount it to /name1.
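A sketch of steps 1-3 of the Storage Expert exercise, assuming the rules are installed in the usual /opt/VRTS/vxse/vxvm directory (adjust the path for your installation):

    PATH=$PATH:/opt/VRTS/vxse/vxvm
    export PATH
    vxse_drl1 info          # step 2: one-line description of the rule
    vxse_drl1 list          # step 3: user-settable attributes and their defaults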
5 Run Storage Expert rule vxse_drl1 on the disk group containing the volume. What does Storage Expert report?
6 Expand the volume to a size of 1 GB.
7 Run Storage Expert rule vxse_drl1 again on the disk group containing the volume. What does Storage Expert report?
8 Add a log to the volume.
9 Run Storage Expert rule vxse_drl1 again on the disk group containing the volume. What does Storage Expert report?
10 What are the attributes and parameters that Storage Expert uses in running the vxse_drl1 rule?
11 Shrink the volume to 100 MB and remove the log.
12 Run Storage Expert rule vxse_drl1 again. When running the rule, specify that you want Storage Expert to test the mirrored volume against a mirror threshold of 100 MB. What does Storage Expert report?
13 Unmount the file system and remove the volume used in this exercise.
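A sketch of steps 5 and 12. The size attribute shown, large_mirror_size, is the threshold attribute this rule is documented to use; treat the exact attribute name as something to confirm with vxse_drl1 list on your system:

    vxse_drl1 -g namedg run                          # step 5: run with default attributes
    vxse_drl1 -g namedg run large_mirror_size=100m   # step 12: test against a 100-MB threshold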
Optional Lab Exercises
The next set of lab exercises is optional and may be performed if you have time. These exercises provide additional practice on monitoring tasks.
Optional Lab: Monitoring Tasks
In this optional lab, you track volume relayout processes using the vxtask command and recover from a vxrelayout crash by using VEA or from the command line. To begin, you should have at least four disks in the disk group that you are using.
1 Create a mirror-stripe volume called namevol1 in the namedg disk group, with a size of 1 GB, using the vxassist command. Assign a task tag to the task and run the vxassist command in the background.
2 View the progress of the task.
3 Slow down the task progress rate to insert an I/O delay of 100 milliseconds. View the layout of the volume in the VEA interface.
4 After the volume has been created, use vxassist to relayout the volume to stripe-mirror. Use a stripe unit size of 256K, use two columns, and assign the process to the above task tag.
5 In another terminal window, abort the task to simulate a crash during relayout. View the layout of the volume in the VEA interface.
6 Reverse the relayout operation. View the layout of the volume after the reversal of the relayout operation completes. Notice that the stripe unit size is back to the original value but the layout is layered. Change the layout to nonlayered.
7 Destroy the namedg disk group.
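A sketch of the task-tag workflow in steps 1-6; nametask is an example tag, and any tag name works:

    vxassist -g namedg -t nametask -b make namevol1 1g layout=mirror-stripe
    vxtask monitor nametask                  # step 2: watch the task progress
    vxtask set slow=100 nametask             # step 3: insert a 100-millisecond I/O delay
    vxassist -g namedg -t nametask relayout namevol1 layout=stripe-mirror ncol=2 stripeunit=256k
    vxtask abort nametask                    # step 5: simulate the crash
    vxrelayout -g namedg reverse namevol1    # step 6: undo the interrupted relayout
    vxassist -g namedg convert namevol1 layout=mirror-stripe   # back to a nonlayered layout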
Lab 6
Lab 6: Performance Monitoring
In this lab, you analyze Volume Manager I/O operations using the vxstat and the vxtrace utilities.
For Lab Exercises, see Appendix A.
For Lab Solutions, see Appendix B.
The Lab Solutions for this lab are located on the following page: "Lab 6 Solutions: Performance Monitoring," page B-75.
Prerequisite Setup
To perform this lab, you need a lab system with Storage Foundation pre-installed, configured, and licensed. In addition to this, you also need at least six disks to be used in a disk group.
Classroom Lab Values
In preparation for this lab, you will need the following information about your lab environment. For your reference, you may record the information here, or refer back to the first lab in the SF Fundamentals section where you initially documented this information.
Note: On the HP-UX platform, if you have moved the boot disk from an LVM disk to a VxVM disk during the Encapsulation and Rootability lab (Lab 3) or the Troubleshooting the Boot Process lab (Lab 4), your boot disk and second internal disk values will have changed from previous labs. If you have skipped these labs, the values will still be the same. Ensure that you enter the correct values in the following table.

Object                            Sample Value              Your Value
root password                     veritas
Host name                         train1
My Boot Disk                      Solaris: c0t0d0
                                  HP-UX: c1t15d0
                                  AIX: hdisk0
                                  Linux: hda
2nd Internal Disk                 Solaris: c0t2d0
                                  HP-UX: c3t15d0
                                  AIX: hdisk1
                                  Linux: hdb
My Data Disks                     Solaris: c1t#d0 - c1t#d5
                                  HP-UX: c4t0d0 - c4t0d5
                                  AIX: hdisk21 - hdisk26
                                  Linux: sda - sdf
Location of Lab Scripts (if any)  /student/labs/sf/sf50
Location of the fp program        /student/labs/sf/sf50/bin
Preparing the Environment for the Performance Labs
If the second internal disk on your system is used in the systemdg disk group, which is the disk group used for the system disk encapsulation, use the following steps to free it up for performance testing. If you do not have a second internal disk or if you cannot use the second internal disk, skip this section.
1 If the system disk is encapsulated and mirrored to the second internal disk, remove the mirrors on the second internal disk for all system disk volumes.
2 Remove the second internal disk from the systemdg disk group.
Note: If you are working on the HP-UX platform and the second internal disk is configured as an alternative LVM boot disk, ensure that you are booted off the VxVM boot disk and destroy the LVM boot disk using the following command:
vxdestroy_lvmroot -v c#t#d#
where c#t#d# is the second internal disk used as an alternative LVM boot disk.
Exploring the vxstat Utility
In this exercise, you analyze the performance of a disk in the testdg disk group for 32K random reads. You use the fp program to generate an I/O load. Ask your instructor for the location of the fp program.
1 Create a non-CDS disk group named testdg that contains one disk. If your system has two internal disks and the second internal disk is available for you to use, use the second internal disk; otherwise use any disk except for your boot disk. Name the disk testdg01.
Note: In a North American Mobile Academy lab environment, you cannot use the second internal disk during the labs even if the system has a second internal disk.
2 Determine the maximum volume size that can be created using the single drive. Create a volume named test in the testdg disk group that is the maximum size on the single drive.
3 Invoke the vxstat command to begin drive analysis on the test volume. Set the vxstat interval to display statistics every 1 second. Statistics will begin printing every second, and all statistics are displayed as 0 until you begin sending I/O to the volume.
Note: To be able to analyze the output later, you can direct it to a file, for example /tmp/vxstat.out.
4 In a different terminal window, change to the directory that contains the fp program.
Note: Make sure that you are using the correct version of the fp program for your platform, for example, fp_sun for Solaris, fp_hp for HP-UX, or fp_linux for Linux. Display a description of the fp program by running the fp command without any parameters.
5 From the directory that contains the fp I/O program, start several invocations of the fp program by using the command:
Note: Alternatively, you can use the vi editor to create a simple script that contains ten or more invocations of this fp command. This method can more effectively flood the volume with I/O:
a Invoke the vi editor and create a file named /tmp/testscript.
b Copy the fp command shown above into the file ten or more times.
c Save the testscript file.
d Change the permissions on the file to be readable and writable by the root user.
e Run the test script.
6 When you execute the fp command or your test script, the vxstat output in the other terminal window begins to display data. Wait for all the fp commands or the script to finish executing, then stop the vxstat output by typing CTRL-C on the terminal where you are running vxstat, and analyze the vxstat output to determine the peak performance of the drive.
7 Destroy the testdg disk group.
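A sketch of steps 1-3 of this exercise from the command line. The device name c0t2d0 is an example, and format=sliced is one way to initialize a disk for a non-CDS group:

    vxdisksetup -i c0t2d0 format=sliced
    vxdg init testdg testdg01=c0t2d0 cds=off
    vxassist -g testdg maxsize                       # largest volume that fits on the disk
    vxassist -g testdg make test <maxsize>
    vxstat -g testdg -i 1 -d | tee /tmp/vxstat.out   # 1-second interval, saved for later analysis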
Optional Lab Exercises
The next set of lab exercises is optional and may be performed if you have time. These exercises provide additional performance scenarios for exploring performance utilities.
Optional Lab: Analyzing Drive Performance: Scenarios
In this exercise, you analyze drive performance based on sample vxstat output and identify possible improvements to volume layouts. This exercise is theoretical, but designed to help you understand how to interpret vxstat output.
Note: The samples provided are from a Solaris platform. Therefore, 1 block is equivalent to 512 bytes.
Scenario 1
Suppose that you have a disk group named testdg that contains four disks. You have a 100-MB volume, called test, striped across two disks with a stripe unit size of 4 MB. There are three processes performing random reads that are 512K in size on the volume. There are no other volumes in the disk group. When you run a performance test and run vxstat on the disk group, the following output is displayed:
1 Analyze the vxstat output. What do you notice? What changes might you make to the volume layout to improve performance?
Scenario 2
Suppose that you have a disk group named testdg that contains four disks. You have a 100-MB volume, called test, that is concatenated using two disks. There are three processes performing random reads of size 512K on the volume. There are no other volumes in the disk group. When you run a performance test and run vxstat on the disk group, the following output is displayed:
1 Analyze the vxstat output. What do you notice?
2 What changes might you make to the volume layout to improve performance?
Scenario 3
Suppose that you have a disk group named testdg that contains four disks. You have two volumes:
A 100-MB volume called test striped across three disks with a stripe unit size of 4 MB
Another 100-MB volume called test2 with a concatenated layout on the disk testdg01, which is also one of the disks used by the volume test
There are three processes performing random reads of size 128K on the test volume and one process performing random reads of size 512K on the test2 volume. When you run a performance test and run vxstat on the disk group, the following output is displayed:

vxstat -g testdg -d
                  OPERATIONS          BLOCKS        AVG TIME(ms)
TYP NAME        READ   WRITE      READ    WRITE     READ   WRITE
dm  testdg01     241       0    138496        0     30.8     0.0
dm  testdg02     126       0     32256        0     17.6     0.0
dm  testdg03     132       0     33792        0     18.2     0.0
dm  testdg04       0       0         0        0      0.0     0.0

1 Analyze the vxstat output. What do you notice?
2 What changes might you make to the volume layout to improve performance?

vxstat -g testdg -d
                  OPERATIONS          BLOCKS        AVG TIME(ms)
TYP NAME        READ   WRITE      READ    WRITE     READ   WRITE
dm  testdg01     128       0     32768        0     23.0     0.0
dm  testdg02     124       0     31744        0     24.3     0.0
dm  testdg03     147       0     37632        0     24.7     0.0
dm  testdg04     100       0    102400        0     25.5     0.0
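As a reminder of the arithmetic used to interpret these tables (Solaris samples, 512-byte blocks): average I/O size = blocks / operations. In the first table above, testdg02 served 32256 / 126 = 256 blocks = 128 KB per read, while testdg01 averaged 138496 / 241, roughly 575 blocks, indicating a mix of the 128K stripe reads and the 512K reads to test2. The same conversion applies to the len fields in the vxtrace listings that follow: len 64 = 64 x 512 bytes = 32 KB.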
Optional Lab: Analyzing the Application I/O Profile: Scenarios
In this exercise, you analyze the application I/O profile based on sample vxtrace output and identify possible improvements to volume layouts (for example, changing the layout from concatenated to striped, increasing the number of columns, changing the stripe unit size, and so on). This exercise is theoretical, but designed to help you understand how to interpret vxtrace output.
Note: The samples provided are from a Solaris platform. Therefore, 1 block is equivalent to 512 bytes.
Scenario 1
Suppose that you have a disk group named testdg that contains four disks. You have a 100-MB volume, named test, striped across two disks with a stripe unit size of 4K. When you start a trace on the volume, run a performance test on the volume, and then stop the trace on the volume, the following vxtrace output is displayed:

vxtrace -g testdg -d /tmp/appiolab1.out -o dev,disk test
[Ctrl+C]
vxtrace -g testdg -f /tmp/appiolab1.out -o dev,disk | pg

3601 START write vdev test block 115392 len 64 concurrency 1 pid 6948
3602 START write disk c1t8d0s2 op 3601 block 57696 len 8
3603 START write disk c1t9d0s2 op 3601 block 57696 len 8
3604 START write disk c1t8d0s2 op 3601 block 57704 len 8
3605 START write disk c1t9d0s2 op 3601 block 57704 len 8
3606 START write disk c1t8d0s2 op 3601 block 57712 len 8
3607 START write disk c1t9d0s2 op 3601 block 57712 len 8
3608 START write disk c1t8d0s2 op 3601 block 57720 len 8
3609 START write disk c1t9d0s2 op 3601 block 57720 len 8
3602 END write disk c1t8d0s2 op 3601 block 57696 len 8 time 0
3603 END write disk c1t9d0s2 op 3601 block 57696 len 8 time 0
3604 END write disk c1t8d0s2 op 3601 block 57704 len 8 time 1
3606 END write disk c1t8d0s2 op 3601 block 57712 len 8 time 1
3608 END write disk c1t8d0s2 op 3601 block 57720 len 8 time 1
3605 END write disk c1t9d0s2 op 3601 block 57704 len 8 time 1
3607 END write disk c1t9d0s2 op 3601 block 57712 len 8 time 1
3609 END write disk c1t9d0s2 op 3601 block 57720 len 8 time 1
3601 END write vdev test op 3601 block 115392 len 64 time 1

1 Analyze the application I/O profile based on the vxtrace output. Analyze the number of concurrent processes, the application I/O size for each process, and whether the process is performing random or sequential I/O. What do you notice?
2 What changes might you make to the volume layout to improve performance?
Scenario 2
Suppose that you have a disk group named testdg that contains four disks. You have a 100-MB concatenated volume, named test, on one of the disks. When you start a trace on the volume, run a performance test on the volume, and then stop the trace on the volume, the following vxtrace output is displayed:

vxtrace -g testdg -d /tmp/appiolab2.out -o dev,disk test
[Ctrl+C]
vxtrace -g testdg -f /tmp/appiolab2.out -o dev,disk | pg

6005 START read vdev test block 108256 len 32 concurrency 2 pid 7211
6006 START read disk c1t8d0s2 op 6005 block 108256 len 32
6007 START read vdev test block 59552 len 32 concurrency 3 pid 7217
6008 START read disk c1t8d0s2 op 6007 block 59552 len 32
6004 END read disk c1t8d0s2 op 6003 block 172512 len 32 time 1
6003 END read vdev test op 6003 block 172512 len 32 time 1
6009 START read vdev test block 196352 len 32 concurrency 3 pid 7214
6010 START read disk c1t8d0s2 op 6009 block 196352 len 32
6008 END read disk c1t8d0s2 op 6007 block 59552 len 32 time 1
6007 END read vdev test op 6007 block 59552 len 32 time 1
6011 START read vdev test block 78688 len 32 concurrency 3 pid 7217
6012 START read disk c1t8d0s2 op 6011 block 78688 len 32
6010 END read disk c1t8d0s2 op 6009 block 196352 len 32 time 0
6009 END read vdev test op 6009 block 196352 len 32 time 0
6013 START read vdev test block 151712 len 32 concurrency 3 pid 7214
6014 START read disk c1t8d0s2 op 6013 block 151712 len 32
6006 END read disk c1t8d0s2 op 6005 block 108256 len 32 time 2
6005 END read vdev test op 6005 block 108256 len 32 time 2

1 Analyze the application I/O profile based on the vxtrace output. Analyze the number of concurrent processes, the application I/O size for each process, and whether the process is performing random or sequential I/O. What do you notice?
2 What changes might you make to the volume layout to improve performance?
Scenario 3
Suppose that you have a disk group named testdg that contains four disks. You have a 100-MB volume, named test, striped across three disks with a stripe unit size of 4K. When you start a trace on the volume, run a performance test on the volume, and then stop the trace on the volume, the following vxtrace output is displayed:

vxtrace -g testdg -d /tmp/appiolab3.out -o dev,disk test
[Ctrl+C]
vxtrace -g testdg -f /tmp/appiolab3.out -o dev,disk | pg
6802 START read vdev test block 194304 len 64 concurrency 2 pid 7487
6803 START read disk c1t8d0s2 op 6802 block 64768 len 8
6804 START read disk c1t9d0s2 op 6802 block 64768 len 8
6805 START read disk c1t10d0s2 op 6802 block 64768 len 8
6806 START read disk c1t8d0s2 op 6802 block 64776 len 8
6807 START read disk c1t9d0s2 op 6802 block 64776 len 8
6808 START read disk c1t10d0s2 op 6802 block 64776 len 8
6809 START read disk c1t8d0s2 op 6802 block 64784 len 8
6810 START read disk c1t9d0s2 op 6802 block 64784 len 8
6795 END read disk c1t9d0s2 op 6793 block 67712 len 8 time 1
6798 END read disk c1t9d0s2 op 6793 block 67720 len 8 time 1
6801 END read disk c1t9d0s2 op 6793 block 67728 len 8 time 1
6794 END read disk c1t8d0s2 op 6793 block 67712 len 8 time 1
6797 END read disk c1t8d0s2 op 6793 block 67720 len 8 time 1
6800 END read disk c1t8d0s2 op 6793 block 67728 len 8 time 1
6796 END read disk c1t10d0s2 op 6793 block 67712 len 8 time 1
6799 END read disk c1t10d0s2 op 6793 block 67720 len 8 time 1
6793 END read vdev test op 6793 block 203136 len 64 time 2
6811 START read vdev test block 169984 len 64 concurrency 2 pid 7484
6812 START read disk c1t10d0s2 op 6811 block 56656 len 8
6813 START read disk c1t8d0s2 op 6811 block 56664 len 8
6814 START read disk c1t9d0s2 op 6811 block 56664 len 8
6815 START read disk c1t10d0s2 op 6811 block 56664 len 8
6816 START read disk c1t8d0s2 op 6811 block 56672 len 8
6817 START read disk c1t9d0s2 op 6811 block 56672 len 8
6818 START read disk c1t10d0s2 op 6811 block 56672 len 8
6819 START read disk c1t8d0s2 op 6811 block 56680 len 8
6820 START read vdev test block 32320 len 64 concurrency 3 pid 7490

1 Analyze the application I/O profile based on the vxtrace output. Analyze the number of concurrent processes, the application I/O size for each process, and whether the process is performing random or sequential I/O. What do you notice?
2 What changes might you make to the volume layout to improve performance?
Scenario 4
Suppose that you have a disk group named testdg that contains four disks. You have a 100-MB volume, named test, striped across three disks with a stripe unit size of 256K. When you start a trace on the volume, run a performance test on the volume, and then stop the trace on the volume, the following vxtrace output is displayed:

vxtrace -g testdg -d /tmp/appiolab4.out -o dev,disk test
[Ctrl+C]
vxtrace -g testdg -f /tmp/appiolab4.out -o dev,disk | pg
8972 START read vdev test block 0 len 64 concurrency 1 pid 7751
8973 START read disk c1t8d0s2 op 8972 block 0 len 64
8973 END read disk c1t8d0s2 op 8972 block 0 len 64 time 2
8972 END read vdev test op 8972 block 0 len 64 time 2
8974 START read vdev test block 64 len 64 concurrency 1 pid 7751
8975 START read disk c1t8d0s2 op 8974 block 64 len 64
8975 END read disk c1t8d0s2 op 8974 block 64 len 64 time 0
8974 END read vdev test op 8974 block 64 len 64 time 0
8976 START read vdev test block 128 len 64 concurrency 1 pid 7751
8977 START read disk c1t8d0s2 op 8976 block 128 len 64
8977 END read disk c1t8d0s2 op 8976 block 128 len 64 time 0
8976 END read vdev test op 8976 block 128 len 64 time 0

1 Analyze the application I/O profile based on the vxtrace output. Analyze the number of concurrent processes, the application I/O size for each process, and whether the process is performing random or sequential I/O. What do you notice?
2 What changes might you make to the volume layout to improve performance?
Optional Labs: Measuring Volume I/O Operations
In the following exercises, you determine whether reads or writes occur when VxVM performs various actions.
Note: The solutions provided in this section show sample vxstat outputs from a Solaris platform. You may observe different sizes if you are working on an HP-UX platform. This is because on HP-UX one sector is 1024 bytes, whereas on Solaris, one sector is 512 bytes.
Optional Lab: When Creating a Disk Group
Does VxVM write into the public region of a disk when it creates a disk group?
1 Create a disk group named datadg using six disks.
2 Determine whether VxVM writes into the public region of a disk when it creates a disk group.
Optional Lab: When Creating a Volume
Does VxVM write into the volume, plexes, subdisks, or disks when it creates a volume?
1 Reset the read/write counters for datadg. Create a 50-MB concatenated (RAID-0) volume named datavol1 in datadg. Did reads or writes to the volume, plex, subdisk, or disk occur?
2 Reset the read/write counters for datadg. Create a 30-MB, 3-column, striped (RAID-0) volume named datavol2 in datadg. Did reads or writes occur?
3 Reset the read/write counters for datadg. Create a 30-MB, 2-way, mirrored (RAID-1) volume named datavol3 in datadg. Did reads or writes occur? Did any synchronization occur?
4 What type of synchronization occurred between the mirrors?
5 Reset the read/write counters for datadg. Create a 100-MB, 3-column, striped, mirrored, and logged volume (RAID-0+1) named datavol4 in datadg.
Did reads or writes occur?
6 Why do writes occur when creating mirrors but not when creating concatenated or striped volumes?
7 Reset the read/write counters for datadg. Create a 30-MB, 2-way mirrored, 3-column striped, and logged (RAID-1+0) volume named datavol6 in datadg. Did reads or writes occur?
Note: When a layered (stripe-mirror or RAID-1+0) volume is involved, omit the name of the high-level volume to get the statistics.
8 Does VxVM write into the volume, plexes, subdisks, or disks when it creates a volume?
9 Reset the read/write counters for datadg. Create a 5-MB mirrored volume named datavol7 and initialize it to zero. Is the volume's address space written when it is initialized to zero?
Optional Lab: When Mirroring a Volume or Resynchronizing Mirrors
Does VxVM write into the volume, plexes, subdisks, or disks when it mirrors a volume or resynchronizes mirrors?
1 Create a 50-MB, concatenated (RAID-0) volume named datavol8 in datadg.
2 Reset the read/write counters for datadg. Add a mirror to the datavol8 volume. Did reads and/or writes occur? Does adding a mirror perform atomic-copy or read-writeback?
3 What is the meaning of atomic copy?
Optional Lab: When an Application Performs I/O to a Volume
1 Create a 500-MB, concatenated (RAID-0) volume named datavol9 in datadg.
vxassist -g datadg make datavol9 500m
2 Reset the read/write counters for datadg. Start I/O to the datavol9 volume using dd in the background. While the I/O is ongoing, examine reads or writes. Kill the dd process when you are finished.
3 Assuming that there is only one process doing I/O to the disk (no parallel I/O) and that there are 512 bytes per block, how would you calculate the I/O throughput to the disk?
No. of Blocks x 512 / (No. of I/O Operations x Average I/O Time / 1000) B/sec
Divide by 1024 for KB/sec
Divide by 1024 again for MB/sec
4 Illustrate that one I/O to the mirrored volume datavol8 generates two I/Os within the volume.
Optional Lab: When Removing a Volume
Does VxVM write into the volume, plexes, subdisks, or disks when it removes a volume?
1 Reset the read/write counters for datadg. List the volumes in datadg.
2 Remove the datavol1 volume.
3 Does VxVM write into the volume, plexes, subdisks, or disks when it removes a volume? What does this imply?
Optional Lab: When Removing a Plex
Does VxVM write into the volume, plexes, subdisks, or disks when it removes a plex?
1 Reset the read/write counters for datadg. List the volumes in datadg.
2 Does VxVM write into the volume, plexes, subdisks, or disks when it removes a plex? What does this imply?
Optional Lab: When Destroying a Disk Group
Does VxVM write when it destroys a disk group?
1 Reset the read/write counters for datadg. Destroy datadg. Did any writes occur?
2 Does VxVM write into the public area of the disk when any of the above operations are performed? What does this imply for administration of volumes?
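A sketch of the pattern used throughout these measurement exercises, plus a worked pass through the throughput formula. The device path follows the standard /dev/vx/rdsk/<diskgroup>/<volume> convention, and the dd sizes and the sample numbers are illustrative:

    vxstat -g datadg -r                     # reset the read/write counters
    vxassist -g datadg make datavol1 50m    # perform the operation under test
    vxstat -g datadg -vpsd                  # report volume, plex, subdisk, and disk counters
    dd if=/dev/zero of=/dev/vx/rdsk/datadg/datavol9 bs=64k count=2000 &
    vxstat -g datadg -i 1 datavol9          # watch the I/O while dd runs
    kill %1                                 # stop the dd when finished

For example, 102400 blocks in 100 operations at an average time of 25.5 ms works out to 102400 x 512 / (100 x 25.5 / 1000), roughly 20,560,000 B/sec, or about 19.6 MB/sec after dividing by 1024 twice.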
Lab 7
Lab 7: Point-in-Time Copies
In this lab, you perform off-host processing using third-mirror break-off volume snapshots, create space-optimized instant volume snapshots, and restore a file system using storage checkpoints. Optionally, you also create and investigate full-sized instant volume snapshots.
For Lab Exercises, see Appendix A.
For Lab Solutions, see Appendix B.
Prerequisite Setup
To perform this lab, you need a lab system with Storage Foundation pre-installed, configured, and licensed. In addition to this, you also need at least four disks to be used in a disk group. Before starting this lab, you should have all the external disks assigned to you already initialized but free to be used in a disk group.
Classroom Lab Values
In preparation for this lab, you will need the following information about your lab environment. For your reference, you may record the information here, or refer back to the first lab in the SF Fundamentals section where you initially documented this information.

Object                            Sample Value              Your Value
root password                     veritas
Host name                         train1
Host name of the system
sharing disks with my system      train2
My Data Disks                     Solaris: c1t#d0 - c1t#d5
                                  HP-UX: c4t0d0 - c4t0d5
                                  AIX: hdisk21 - hdisk26
                                  Linux: sda - sdf
Off-Host Processing Using Third-Mirror Break-off Volume Snapshots
Phase 1: Create, Split, and Deport
1 Identify the name of the system that is sharing access to the same disks as your system. If you are not sure, check with your instructor. Note the name of the partner system here.
Partner system hostname: _______________
2 On your local lab system, create a disk group called namedg with four disks.
3 Create a 500-MB concatenated volume, namevol1, using a single disk. Create a Veritas file system on the volume and mount the file system on the mount point /name1.
4 Add data to the file system using the following command:
echo "Pre-snapshot for name" > /name1/presnap_on_name
and verify that the data has been added.
5 Enable FastResync for the volume namevol1. Can you identify what has changed?
6 Add a mirror to the volume for use as the snapshot. Observe the volume layout. What is the state of the newly added mirror after synchronization completes?
7 Create a third-mirror break-off snapshot named namesnap1 using the new mirror you just added. Use the vxsnap -g namedg list command to observe the snapshots in the disk group. Can you find similar information in the VEA GUI?
8 Split the snapshot volume into a separate disk group from the original disk group, called nameOHPdg.
9 Verify that the nameOHPdg disk group exists and contains namesnap1. First, display the disk groups on the system. You should see the new nameOHPdg disk group displayed. Then, view the volume information for the nameOHPdg disk group.
10 Deport the disk group that contains the snapshot volume. If you have a partner system that shares access to the external disks with your system, you can set the new host information using the hostname of the partner system.
11 View the disk groups on the system.
Run the vxdisk command to view the status of the disks on the system. Alternatively, you can view the status of the disks in VEA.
12 Add additional data to the original volume using the following command:
echo "Post-snapshot for name" > /name1/postsnap_on_name
and verify that the data has been added.
Phase 2: Import, Process, and Deport
13 Remote login to the partner system, which will be used as the off-host processing (OHP) system.
Notes: If you are working on a standalone system, skip this step and use your own system as the partner system. If you want to continue using the graphical user interface (VEA) on the partner system, you need to connect to the partner system using the VEA client on your local system.
14 On the off-host processing (OHP) host (your partner system) where the backup or processing is to be performed, import the disk group that contains the snapshot volume.
Note: You may need to rescan the disks using the vxdctl enable command on the OHP host so that the host detects the changes.
View the status of the volume in the nameOHPdg disk group.
15 To perform off-host processing, you must first start the volume and mount the file system on the off-host processing host. Use the mount point /namesnap1.
16 View and compare the contents of both file systems.
17 Check if you can write to the snapshot file system during off-host processing by creating a new file in the snapshot file system as follows:
echo "Data in snapshot of name" > /namesnap1/data_on_namesnap1
18 After completing off-host processing, you are ready to reattach the snapshot volume with the original volume. Unmount the snapshot volume on the off-host processing host.
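A minimal command-line sketch of Phase 1 (steps 5-10) and the OHP-side steps 14-15. The plex name namevol1-02, the partner hostname train2, and the Solaris/HP-UX mount form (-F vxfs; use -t vxfs on Linux) are assumptions to adapt to your setup.

On the primary host:

    vxsnap -g namedg prepare namevol1          # step 5: enable FastResync (adds a DCO)
    vxsnap -g namedg addmir namevol1           # step 6: add the snapshot mirror
    vxsnap -g namedg make source=namevol1/newvol=namesnap1/plex=namevol1-02
    vxdg split namedg nameOHPdg namesnap1      # step 8
    vxdg -h train2 deport nameOHPdg            # step 10

On the OHP host:

    vxdctl enable                              # step 14: rescan the disks
    vxdg import nameOHPdg
    vxvol -g nameOHPdg start namesnap1         # step 15
    mkdir -p /namesnap1
    mount -F vxfs /dev/vx/dsk/nameOHPdg/namesnap1 /namesnap1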
Note: If you have been using your local lab system as the OHP host, you do not need to perform the next three steps (19-21). However, in an actual off-host processing situation, you would perform these steps.
19 On the OHP host, deport the disk group that contains the snapshot volume.
20 If you had been working on a partner system, exit from the partner system. Alternatively, if you had been using the VEA, disconnect from the partner system.
Phase 3: Import, Join, and Resynchronize
21 On the primary host (your local lab system), reimport the disk group that contains the snapshot volume.
22 Rejoin the disk group that contains the snapshot volume to the disk group that contains the original volume.
23 At this point you should have the original volume and its snapshot in the same disk group but as separate volumes. There is still no synchronization between the original volume and the snapshot volume. To observe this, the snapshot volume will be mounted again to observe its contents. You would not need to perform this step during a normal off-host processing procedure. Note that if you have been using the CLI, the snapshot volume is initially disabled following the join.
a Restart the snapshot volume if necessary.
b If necessary, run a file system check on the snapshot volume. Note that this step should not be necessary if you have cleanly unmounted the file system before the deport on the OHP host.
c Mount namesnap1 back on the /namesnap1 mount point. Create the mount point if necessary.
d View and compare the contents of both file systems.
e Unmount the /namesnap1 file system.
24 On the primary host (your local lab system), reattach the plexes of the snapshot volume to the original volume and resynchronize their contents.
25 Remove the snapshot mirror.
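A sketch of Phase 3 (steps 21-24), with the same naming assumptions as the earlier sketch:

    vxdg import nameOHPdg                       # step 21
    vxdg join nameOHPdg namedg                  # step 22
    vxvol -g namedg start namesnap1             # step 23a: the volume is disabled after the join
    fsck -F vxfs /dev/vx/rdsk/namedg/namesnap1  # step 23b, only if needed
    vxsnap -g namedg reattach namesnap1 source=namevol1   # step 24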
Using Space-Optimized Instant Volume Snapshots
1 Select a disk in the namedg disk group that is not used by the original volume namevol1, and create a 50-MB volume on this disk to be used as the cache volume. Name the cache volume namecachevol. Create a cache object called namecache on the cache volume. Ensure that the cache object is started.
Note: If you use the VEA to create the cache object, the autogrow option will be left at the default value of off. If you use the command line, you can change this setting to on while creating the cache object.
2 Observe how the cache object and the cache volume are displayed in the disk group.
3 Verify that the namevol1 volume is already prepared for instant snapshot operations by displaying information about the DCO log.
4 Add data to the /name1 file system using the following command:
echo "New data before sos1 for name" > /name1/presos1_on_name1
and verify that the data is written.
5 Create a space-optimized instant snapshot of the namevol1 volume, named namesos1, using the cache object namecache.
6 Display information about the snapshot volumes using the VEA, or the vxprint, vxsnap list, and vxsnap print commands from the command line.
7 Using the command line, verify which snapshots are associated with the cache object.
8 Mount the space-optimized snapshot volume namesos1 to the /namesos1 directory.
9 Observe the contents of the /namesos1 directory and compare it to the contents of the /name1 directory.
10 Add data to the /name1 file system using the following command:
echo "New data before sos2 for name" > /name1/presos2_on_name1
and verify that the data is written.
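A sketch of steps 1 and 5 from the command line. The disk name namedg04 is an example, and autogrow=on matches the note above about the command-line option:

    vxassist -g namedg make namecachevol 50m namedg04
    vxmake -g namedg cache namecache cachevolname=namecachevol autogrow=on
    vxcache -g namedg start namecache
    vxsnap -g namedg make source=namevol1/newvol=namesos1/cache=namecache   # step 5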
11 Create a second space-optimized instant snapshot of the namevol1 volume, named namesos2, using the same cache object namecache.
12 Using the command line, verify which snapshots are associated with the cache object.
13 Mount the space-optimized snapshot volume namesos2 to the /namesos2 directory.
14 Observe the contents of the original file system and the two space-optimized snapshots.
15 Make the following changes on the file systems:
a Remove the data you had on the original file system prior to starting the Using Space-Optimized Instant Volume Snapshots section. If you have followed the lab steps, you need to remove the presnap_on_name and postsnap_on_name files from the /name1 file system.
b Add new data to the space-optimized snapshot volumes using the following commands:
16 Observe the contents of the original file system and the two space-optimized snapshots.
17 Assume that you have decided to use the contents of the second space-optimized snapshot as the final version of the original file system. Restore the original file system using the second space-optimized snapshot. Note that you will have to unmount the original file system to make this change. Mount the original file system back to the /name1 directory when the restore operation completes.
18 Observe the contents of the original file system and the two space-optimized snapshots.
19 Refresh the first space-optimized snapshot, namesos1. Note that you will need to unmount the first space-optimized snapshot to make this change. Mount the namesos1 volume again after its contents are refreshed.
20 Observe the contents of the original file system and the two space-optimized snapshots.
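A sketch of the restore (step 17) and refresh (step 19) operations; the mount form is the Solaris/HP-UX one, as in the earlier sketches:

    umount /name1
    vxsnap -g namedg restore namevol1 source=namesos2     # step 17
    mount -F vxfs /dev/vx/dsk/namedg/namevol1 /name1
    umount /namesos1
    vxsnap -g namedg refresh namesos1 source=namevol1     # step 19
    mount -F vxfs /dev/vx/dsk/namedg/namesos1 /namesos1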
21 Unmount the two space-optimized snapshots and dissociate them from the original volume.
22 Remove the space-optimized snapshot volumes.
Note: If you want to use the vxassist remove volume command to delete the volume from the command line, you first need to delete the DCO log. Alternatively, you can use the vxedit -g diskgroup -rf rm volume_name command to remove the volume together with the associated DCO log.
23 Remove the cache object with its associated cache volume.
24 Unmount the /name1 file system and remove the original volume, namevol1.
Note: If you want to use the vxassist remove volume command to delete the volume from the command line, you first need to delete the DCO log. Alternatively, you can use the vxedit -g diskgroup -rf rm volume_name command to remove the volume together with the associated DCO log.
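A sketch of steps 21-23, using the vxedit form suggested in the notes above:

    umount /namesos1
    umount /namesos2
    vxsnap -g namedg dis namesos1              # dissociate from the original volume
    vxsnap -g namedg dis namesos2
    vxedit -g namedg -rf rm namesos1 namesos2
    vxedit -g namedg -rf rm namecache          # removes the cache object and its cache volume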
Restoring a File System Using Storage Checkpoints
In the beginning of this section, you should have a namedg disk group with four unused disks in it.
1 Create a simple 1500m volume called origvol in the namedg disk group.
2 Create a VxFS file system on the volume.
3 Make three new mount points: /orig, /checkpt1, and /checkpt2.
4 Mount the file system on /orig.
5 Write a file of size 1 MB named 4pm in the original file system.
6 Create a storage checkpoint named thu_5pm on /orig. Note the output.
7 Mount the thu_5pm storage checkpoint on the mount point /checkpt1.
8 Write some more files in the original file system on /orig, and synchronize the file system using the following commands:
dd if=/dev/zero of=/orig/5pm bs=1024k count=5
dd if=/dev/zero of=/orig/5pm_2 bs=1024k count=5
sync; sync
9 Create a second storage checkpoint, called thu_6pm, on /orig. Note the output.
10 Mount the second storage checkpoint on the mount point /checkpt2.
11 Write some more files in the original file system on /orig, and synchronize the file system using the following commands:
dd if=/dev/zero of=/orig/6pm bs=1024k count=6
dd if=/dev/zero of=/orig/6pm_2 bs=1024k count=6
sync; sync
12 View the checkpoints and the original file system.
13 To prepare to restore from a checkpoint, unmount the original file system and both storage checkpoints.
14 Restore the file system to the thu_6pm storage checkpoint.
15 Run the fsckpt_restore command again. Note the output.
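A sketch of steps 6, 7, and 14. Storage checkpoints are created with fsckptadm and mounted through the <device>:<checkpoint> pseudo device; the Solaris/HP-UX mount form is shown, and the device path assumes the volume lives in namedg:

    fsckptadm create thu_5pm /orig                                         # step 6
    mount -F vxfs -o ckpt=thu_5pm /dev/vx/dsk/namedg/origvol:thu_5pm /checkpt1   # step 7
    fsckpt_restore /dev/vx/dsk/namedg/origvol                              # step 14: interactive;
                                                                           # select thu_6pm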
16 Destroy the namedg disk group.
Optional Lab Exercises
The next set of lab exercises is optional and may be performed if you have time. These exercises provide additional practice in exploring storage checkpoints.
Optional Lab: Storage Checkpoint Behavior
In this exercise, you perform and analyze four types of file system operations:
A file to be deleted (1k.to_delete)
A file to be replaced by new content (1k.to_replace)
A file to be enlarged (1k5.to_append)
A file to be written by databases (10m.db_io: the file remains at the same position with the same size, but some blocks within it are replaced)
1 Create a disk group named xdg with four disks.
2 Create a 128-MB mirrored volume with a log. Name the volume xvol. Mount the volume at /xvol.
3 Add these four new files to the volume and view the files:
1K named /xvol/1k.to_delete
1K named /xvol/1k.to_replace
3 blocks (1536 bytes) named /xvol/1k5.to_append
10M named /xvol/10m.db_io
4 Remount /xvol and run ncheck.
5 Create a storage checkpoint for /xvol named CKPT.
6 Delete the file 1k.to_delete.
7 Create a new 1K file named 1k.to_replace.
8 Copy the 1k5.to_append file to /tmp.
9 Append the 1k5.to_append file in /tmp to the original 1k5.to_append file in /xvol.
10 Use the following Perl command to generate database-like I/O (modifying a block within a database file). The second line opens read/write access to the file without recreating it or simply appending new data.
The third line creates a variable containing 8K of "x" characters. The next line positions the file pointer at an 8K offset from the beginning of the file. The following line writes the new 8K block at this position.
perl -e '
open(FH, "+< /xvol/10m.db_io") || die;
$Block = "x" x 8192;
sysseek(FH, 8192, 0);
syswrite(FH, $Block, 8192, 0);
close(FH);'
11 Remount /xvol and run ncheck.
Examination of Storage Checkpoint Behavior
The following information is an analysis of the previous output from ncheck:
• 1k.to_delete
Same data blocks (2768-2769) mapped to CKPT.
• 1k.to_replace
Old data blocks (4144-4145) mapped, not copied, to CKPT. New data blocks (24336-24337) were written to a new location.
• 1k5.to_append
Before checkpointing and appending data:
UNNAMED  1 2784  1 2785  1 2786  1 2787
After checkpointing and appending data:
UNNAMED  1 2784  1 2785  1 2786  1 2787  1 24062  1 24063
CKPT     1 2784  1 2785  1 4146  1 4147
To get the UNNAMED file system, which is normally the active one, as contiguous as possible, a copy-before-write of the middle block is performed. Otherwise copy-before-write would be unnecessary in favor of simple address mapping.
Note: Blocks 2784-2785 are mapped to both UNNAMED and CKPT. This is not shown in the output of ncheck.
•  10m.db_io
   The data file for UNNAMED remains at the same position (2880-2881, 4352-24061, 2816-3583).
   Note: These files are fragmented because the required space was not preallocated in one extent.
   The new blocks are written to UNNAMED, and therefore the old data must be copied to 24352-24367 (8K) before the new blocks are written. Otherwise copy-before-write would be unnecessary in favor of simple address mapping.
   Note: In all cases, inode and directory information is copied before the write.

12  Unmount the checkpoint and the original file system. Destroy the xdg disk group.
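For reference, the four test files in step 3 of this exercise can be created with dd, in the same style as the earlier checkpoint lab; this is a sketch only, with sizes taken from the file descriptions above:

    dd if=/dev/zero of=/xvol/1k.to_delete bs=1024 count=1
    dd if=/dev/zero of=/xvol/1k.to_replace bs=1024 count=1
    dd if=/dev/zero of=/xvol/1k5.to_append bs=512 count=3    # 1536 bytes
    dd if=/dev/zero of=/xvol/10m.db_io bs=1024k count=10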
Appendix B: Lab Solutions
Lab 1: Maintaining Data Consistency

In this lab, you practice recovering from a variety of plex problem scenarios, and optionally, observe the benefits of a dirty region log during a system crash. To investigate and practice recovery techniques, you will use a set of interactive lab scripts.

For Lab Exercises, see Appendix A.
For Lab Solutions, see Appendix B.

Lab 1 Solutions: Maintaining Data Consistency

In this lab, you practice recovering from a variety of plex problem scenarios, and optionally, observe the benefits of a dirty region log during a system crash. To investigate and practice recovery techniques, you will use a set of interactive lab scripts. Each script:
•  Sets up the required volumes
•  Simulates and describes a failure scenario
•  Prompts you to fix the problem

The Lab Exercises for this lab are located in Appendix A.

Prerequisite Setup

To perform this lab, you need a lab system with Storage Foundation pre-installed, configured, and licensed. In addition to this, you also need a minimum of three external disks to be used during the labs.
Classroom Lab Values

In preparation for this lab, you will need the following information about your lab environment. For your reference, you may record the information here, or refer back to the first lab in the SF Fundamentals section where you initially documented this information.

Object                     Sample Value                    Your Value
My Data Disks:             Solaris: c1t#d0 - c1t#d5
                           HP-UX: c4t0d0 - c4t0d5
                           AIX: hdisk21 - hdisk26
                           Linux: sda - sdf
Location of Lab Scripts:   /student/labs/sf/sf50
Preparation for Plex Recovery Labs

Overview

Your goal is to recover from the problem as described in each scenario. Use your knowledge of VxVM administration, as well as the VxVM recovery tools and concepts described in the lesson, to determine what steps to take to ensure recovery. After you recover the test volumes, the script verifies your solution and provides you with the result. You succeed when you recover the volumes without corrupting the data.

For most of the recovery problems, you can use any of the VxVM interfaces: the command line interface, the VERITAS Enterprise Administrator (VEA) graphical user interface, or the vxdiskadm menu interface. Lab solutions are provided for only one method. If you have questions about recovery using interfaces not covered in the solutions, see your instructor.

Setup

Due to the way in which the lab scripts work, it is important to set up your environment as described in this setup section:

1   If your system is set to use enclosure-based naming, then you must turn off enclosure-based naming before running the lab scripts.
2   If you have a namedg disk group left from previous labs, ensure that the disk group has no mounted file systems or volumes. If necessary, unmount any mounted file systems that are on volumes in the namedg disk group and remove the volumes.
    If necessary:
    umount /mount_point
    vxassist -g namedg remove volume volume_name
3   If you have not already done so, create a disk group named testdg and add three disks (preferably of the same size) to the disk group. Assign the following disk media names to the disks: testdg01, testdg02, and testdg03.
    If necessary:
    vxdisksetup -i device_tag
    vxdg init testdg testdg01=device_tag1 testdg02=device_tag2 testdg03=device_tag3
    Note: If you do not have enough disks, you can destroy disk groups created in other labs (for example, namedg) in order to create the testdg disk group.
4   Before running the automated lab scripts, set the DG environment variable in your root profile to the name of the test disk group that you are using:
    Solaris, HP-UX:
      vi /.profile
      DG=testdg; export DG
    Linux:
      vi /root/.bashrc
      DG=testdg; export DG
    Rerun your profile by logging out and logging back on, or manually running it.
5   Ask your instructor for the location of the lab scripts.

Resolving Plex Problems: Temporary Failure

In this lab exercise, a temporary disk failure is simulated. By using the vxmend command, you must select the plex that has the correct data and recover the volume by using the clean plex. If you select the wrong plex as the clean plex, then you have corrupted the data. The lab script run_states sets up the test volume configuration, simulates a disk failure, and validates your solution for recovering the volume. Ask your instructor for the location of the run_states script.

Before You Begin: Check to ensure that the environment variable DG is set to the name of the testdg disk group:
    echo $DG
If it is not set, set it before you continue:
    DG="testdg"
    export DG

1   From the directory that contains the lab scripts, run the script run_states, and select option 1, "Turned off drive (temporary failure)":
    ./run_states
    1) Lab 1 - Turned off drive (temporary failure)
    2) Lab 2 - Power failed drive (permanent failure)
    3) Lab 3 - Unknown failure
    4) Optional Lab 4 - Turned off drive with layered volume
    5) Optional Lab 5 - Power failed drive with layered volume
    x) Exit
    Your Choice? 1
    This script sets up a mirrored volume named test.
    Note: If you receive an error message about the /image file system becoming full during volume setup, ignore the error message. This error will not have any impact on further lab steps or lab results.
2   Read the instructions in the lab script window. The script simulates a disk power-off by saving and overwriting the private region on the drive that is used by the volume. Then, when you are ready to power the disk back on, the script restores the private region as it was before the failure.
3   Assume that the failure was temporary. In a second terminal window, attempt to recover the volume. Note that the second plex is already in the STALE state before the drive fails.
    Assume that the drive that was turned off and then back on was c1t2d0 for a Solaris or HP-UX system or sdb for a Linux system (with the plex test-01) (actual device name will vary by system). The plex test-02 was STALE prior to the failure of the disk with the plex test-01. When the disk is powered back on and reattached, the plex test-01 continues to contain the most up-to-date data.
    Note: When performing recovery procedures, run vxprint and vxdisk list often to see what is changing after issuing recovery commands:
    vxprint -g testdg -htr
    vxdisk -o alldgs list
    To recover:
    a   Ensure that the operating system recognizes the device:
        Solaris: devfsadm
          Note: Prior to Solaris 7, you can use drvconfig and disks.
        HP-UX:
          ioscan -C disk
          insf -e
        Linux: partprobe /dev/sdb
    b   Verify that the operating system recognizes the device:
        Solaris: prtvtoc /dev/rdsk/c1t2d0s2
        HP-UX: ioscan -fnC disk   (Verify that the disk is in CLAIMED state.)
        Linux: fdisk -l /dev/sdb
    c   Force the VxVM configuration daemon to reread all of the drives in the system:
        vxdctl enable
    d   Reattach the device to the disk media record:
        vxreattach
    e   Change the state of plex test-01 to STALE:
        vxmend -g testdg fix stale test-01
    f   Change the state of plex test-01 to CLEAN:
        vxmend -g testdg fix clean test-01
    g   Recover and start the volume:
        vxrecover -s
4   After you recover the volume, type e in the lab script window. The script verifies whether your solution is correct.

Resolving Plex Problems: Permanent Failure

In this lab exercise, a permanent disk failure is simulated. By using the vxmend command, you must select the plex that has the correct data and recover the volume by using the clean plex. If you select the wrong plex as the clean plex, then you have corrupted the data. The lab script run_states sets up the test volume configuration, simulates a disk failure, and validates your solution for recovering the volume. Ask your instructor for the location of the run_states script.

Before You Begin: Check to ensure that the environment variable DG is set to the name of the testdg disk group:
    echo $DG
If it is not set, set it before you continue:
    DG="testdg"
    export DG

1   From the directory that contains the lab scripts, run the script run_states, and select option 2, "Power failed drive (permanent failure)":
    ./run_states
    1) Lab 1 - Turned off drive (temporary failure)
    2) Lab 2 - Power failed drive (permanent failure)
    3) Lab 3 - Unknown failure
    4) Optional Lab 4 - Turned off drive with layered volume
    5) Optional Lab 5 - Power failed drive with layered volume
    x) Exit
    Your Choice? 2
    This script sets up a mirrored volume named test.
    Note: If you receive an error message about the /image file system becoming full during volume setup, ignore the error message. This error will not have any impact on further lab steps or lab results.
2   Read the instructions in the lab script window. The script simulates a disk power-off by saving and overwriting the private region on the drive that is used by the volume. I/O is started so that VxVM detects the failure, and VxVM detaches the disk.
3   In a second terminal window, replace the permanently failed drive with either a new disk at the same SCSI location or by another disk at another SCSI location. Note that the new disk does not have any data on it. The other plex of the volume became STALE ten minutes before the drive failed. However, it still has your data, but data from the last ten minutes is missing.
    Assume that the failed disk is testdg02 (c1t2d0 for a Solaris or HP-UX system or sdb for a Linux system) with plex test-01, and the new disk used to replace it is c1t3d0 for a Solaris or HP-UX system or sdd for a Linux system, which is originally uninitialized (actual device names will vary by system). Because the newly replaced disk has no data on it, you can only use the stale plex test-02 to recover the volume.
    Note: When performing recovery procedures, run vxprint and vxdisk list often to see what is changing after issuing recovery commands:
    vxprint -g testdg -htr
    vxdisk -o alldgs list
    To recover from the permanent disk failure:
    a   Invoke vxdiskadm:
        vxdiskadm
    b   From the vxdiskadm main menu, select the option, "Replace a failed or removed disk." When prompted, select c1t3d0 for a Solaris or HP-UX system or sdd for a Linux system to initialize and replace testdg02.
        Note: If you receive an error while using vxdiskadm about a vxprint operation requiring a disk group, ignore the error.
    c   Change the state of plex test-02 to CLEAN:
        vxmend -g testdg fix clean test-02
    d   Recover and start the volume:
        vxrecover -s
4   After you recover the volume, type e in the lab script window. The script verifies whether your solution is correct.
5   When you have completed this exercise, if the disk device that was originally used during disk failure simulation is in online invalid state, reinitialize the disk to prepare for later labs. For example:
    vxdisksetup -i device_tag

Resolving Plex Problems: Unknown Failure

In this lab exercise, an unknown failure is simulated. By using the vxmend command, you must select the plex that has the correct data and recover the volume by using the clean plex. If you select the wrong plex as the clean plex, then you have corrupted the data. The lab script run_states sets up the test volume configuration and validates your solution for recovering the volume. Ask your instructor for the location of the run_states script.

Before You Begin: Check to ensure that the environment variable DG is set to the name of the testdg disk group:
    echo $DG
If it is not set, set it before you continue:
    DG="testdg"
    export DG

1   From the directory that contains the lab scripts, run the script run_states, and select option 3, "Unknown failure":
    ./run_states
    1) Lab 1 - Turned off drive (temporary failure)
    2) Lab 2 - Power failed drive (permanent failure)
    3) Lab 3 - Unknown failure
    4) Optional Lab 4 - Turned off drive with layered volume
    5) Optional Lab 5 - Power failed drive with layered volume
    x) Exit
    Your Choice? 3
    This script sets up a mirrored volume named test that has three plexes.
    Note: If you receive an error message about the /image file system becoming full during volume setup, ignore the error message. This error will not have any impact on further lab steps or lab results.
2   Read the instructions in the lab script window. The script simulates an unknown failure that causes all plexes to be set to the STALE state. You are not provided with information about the cause of the problem with the plexes.
3   In a second terminal window, check each plex individually to determine if it has the correct data. To test if the plex has correct data, start the volume using that plex, and then, in the lab script window, press Return. The script output displays a message stating whether or not the plex has the correct data. Continue this process for each plex, until you determine which plex has the correct data.
    Because all three plexes of the volume test are STALE, and you do not know which plex contains the good data, you must offline all but one plex and check to determine if that plex has the good data. If it is the correct plex, you can recover the volume. If it is not the correct plex, repeat the offlining of all but one plex to check the other plexes.
    a   Start by checking the data on test-01:
        vxmend -g testdg off test-02
        vxmend -g testdg off test-03
        vxmend -g testdg fix clean test-01
        vxvol -g testdg start test
    b   Press Return on the output of the script. The script tests the data. If the plex does not have the good data, continue by checking the data on test-02:
        vxvol -g testdg stop test
        vxmend -g testdg -o force off test-01
        vxmend -g testdg on test-02
        vxmend -g testdg fix clean test-02
        vxvol -g testdg start test
    c   Press Return on the output of the script. The script tests the data. If this plex has the good data, you do not need to search any further.
4   After you determine which plex has the correct data, recover the volume.
    To recover the volume:
    vxmend -g testdg on test-01
    vxmend -g testdg on test-03
    vxrecover
5   After you recover the volume, type e in the lab script window. The script verifies whether your solution is correct.
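All three scenarios above use the same underlying pattern when deciding which plex to trust. The following condensed sketch, using the volume and plex names from this lab, is only a summary of the commands already shown, not an additional procedure:

    vxvol -g testdg stop test               # stop the volume if it is started
    vxmend -g testdg off test-02            # offline every plex except the candidate
    vxmend -g testdg off test-03
    vxmend -g testdg fix clean test-01      # mark the candidate plex CLEAN
    vxvol -g testdg start test              # start the volume and verify the data
    vxmend -g testdg on test-02             # once the good plex is found, bring the
    vxmend -g testdg on test-03             # other plexes back online ...
    vxrecover                               # ... and resynchronize them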
Optional Lab Exercises

The next set of lab exercises is optional and may be performed if you have time. These exercises provide additional recovery scenarios for resolving plex problems with layered volumes. A final activity explores logging behavior following a system crash.

Optional Lab: Resolving Plex Problems: Temporary Failure with a Layered Volume

In this lab exercise, a temporary disk failure is simulated. By using the vxmend command, you must select the plex that has the correct data and recover the volume by using the clean plex. If you select the wrong plex as the clean plex, then you have corrupted the data. The lab script run_states sets up the test volume configuration, simulates a disk failure, and validates your solution for recovering the volume. Ask your instructor for the location of the run_states script.

Before You Begin: Check to ensure that the environment variable DG is set to the name of the testdg disk group:
    echo $DG
If it is not set, set it before you continue:
    DG="testdg"
    export DG

1   From the directory that contains the lab scripts, run the script run_states, and select option 4, "Turned off drive with layered volume":
    ./run_states
    1) Lab 1 - Turned off drive (temporary failure)
    2) Lab 2 - Power failed drive (permanent failure)
    3) Lab 3 - Unknown failure
    4) Optional Lab 4 - Turned off drive with layered volume
    5) Optional Lab 5 - Power failed drive with layered volume
    x) Exit
    Your Choice? 4
    This script sets up a concat-mirror volume named test.
    Note: If you receive an error message about the /image file system becoming full during volume setup, ignore the error message. This error will not have any impact on further lab steps or lab results.
2   Read the instructions in the lab script window. The script simulates a disk power-off by saving and overwriting the private region on the drive that is used by the volume, and I/O is started so that VxVM detects the failure. Then, when
you are ready to power the disk back on, the script restores the private region as it was before the failure.
3   Assume that the failure was temporary. In a second terminal window, attempt to recover the volume. Note that the second plex is already in the STALE state before the drive fails.
    Assume that the drive that was turned off and then back on was c1t2d0 for a Solaris or HP-UX system or sdb for a Linux system with the plex test-P01 (actual device name will vary by system). The plex test-P02 was STALE prior to the failure of the disk with the plex test-P01. When the disk is powered back on and reattached, the plex test-P01 continues to contain the most up-to-date data.
    Note: When performing recovery procedures, run vxprint and vxdisk list often to see what is changing after issuing recovery commands:
    vxprint -g testdg -htr
    vxdisk -o alldgs list
    To recover:
    a   Ensure that the operating system recognizes the device:
        Solaris: devfsadm
          Note: Prior to Solaris 7, you can use drvconfig and disks.
        HP-UX:
          ioscan -C disk
          insf -e
        Linux: partprobe /dev/sdb
    b   Verify that the operating system recognizes the device:
        Solaris: prtvtoc /dev/rdsk/c1t2d0s2
        HP-UX: ioscan -fnC disk   (Verify that the disk is in CLAIMED state.)
        Linux: fdisk -l /dev/sdb
    c   Force the VxVM configuration daemon to reread all of the drives in the system:
        vxdctl enable
    d   Reattach the device to the disk media record:
        vxreattach
    e   Change the state of plex test-P01 to STALE:
        vxmend -g testdg fix stale test-P01
    f   Change the state of plex test-P01 to CLEAN:
        vxmend -g testdg fix clean test-P01
    g   Recover and start the volume:
        vxrecover -s
4   After you recover the volume, type e in the lab script window. The script verifies whether your solution is correct.

Optional Lab: Resolving Plex Problems: Permanent Failure with a Layered Volume

In this lab exercise, a permanent disk failure is simulated. By using the vxmend command, you must select the plex that has the correct data and recover the volume by using the clean plex. If you select the wrong plex as the clean plex, then you have corrupted the data. The lab script run_states sets up the test volume configuration, simulates a disk failure, and validates your solution for recovering the volume. Ask your instructor for the location of the run_states script.

Before You Begin: Check to ensure that the environment variable DG is set to the name of the testdg disk group:
    echo $DG
If it is not set, set it before you continue:
    DG="testdg"
    export DG

1   From the directory that contains the lab scripts, run the script run_states, and select option 5, "Power failed drive with layered volume":
    ./run_states
    1) Lab 1 - Turned off drive (temporary failure)
    2) Lab 2 - Power failed drive (permanent failure)
    3) Lab 3 - Unknown failure
    4) Optional Lab 4 - Turned off drive with layered volume
    5) Optional Lab 5 - Power failed drive with layered volume
    x) Exit
    Your Choice? 5
    This script sets up a concat-mirror volume named test.
    Note: If you receive an error message about the /image file system becoming full during volume setup, ignore the error message. This error will not have any impact on further lab steps or lab results.
2   Read the instructions in the lab script window. The script simulates a disk power-off by saving and overwriting the private region on the drive that is used by the volume, I/O is started so that VxVM detects the failure, and VxVM detaches the disk.
3   In a second terminal window, replace the permanently failed drive with either a new disk at the same SCSI location or by another disk at another SCSI location. Note that the new disk does not have any data on it. The other plex of the volume became STALE ten minutes before the drive failed. However, this plex still has your data, but data from the last ten minutes is missing.
    Assume that the failed disk is testdg02 (c1t2d0 for a Solaris or HP-UX system or sdb for a Linux system) with plex test-P01, and the new disk that you use to replace it with is c1t3d0 for a Solaris or HP-UX system or sdd for a Linux system, which is originally uninitialized (actual device names will vary by system). Because the newly replaced disk has no data on it, you can only use the stale plex test-P02 to recover the volume.
    Note: When performing recovery procedures, run vxprint and vxdisk list often to see what is changing after issuing recovery commands:
    vxprint -g testdg -htr
    vxdisk -o alldgs list
    To recover from the permanent disk failure:
    a   Invoke vxdiskadm:
        vxdiskadm
    b   From the vxdiskadm main menu, select the option, "Replace a failed or removed disk." When prompted, select c1t3d0 for a Solaris or HP-UX system or sdd for a Linux system to initialize and replace testdg02.
        Note: If you receive an error while using vxdiskadm about a vxprint operation requiring a disk group, ignore the error.
    c   Change the state of plex test-P02 to CLEAN:
        vxmend -g testdg fix clean test-P02
    d   Recover and start the volume:
        vxrecover -s
4   After you recover the volume, type e in the lab script window. The script verifies whether your solution is correct.
5   When you have completed this exercise, destroy testdg.
    vxdg destroy testdg
Optional Lab: Exploring Logging Behavior During a System Crash

Note: This section requires console access to the lab system. If you are working in a Virtual Academy lab environment, skip this section.

1   List the imported disk groups on your system and destroy the testdg disk group if it still exists.
    vxdg list
    If necessary:
    vxdg destroy testdg
2   If you do not already have a disk group called namedg, create it using one of the already initialized disks.
    If necessary:
    vxdg init namedg namedg01=device_tag1
3   Ensure that the namedg disk group has at least three disks in it. If necessary, add disks to the namedg disk group.
    vxdisk -o alldgs list
    If necessary:
    vxdg -g namedg adddisk namedg02=device_tag2 namedg03=device_tag3
4   Create two mirrored, concatenated volumes, 500 MB in size, called vollog and volnolog in the namedg disk group.
    vxassist -g namedg make vollog 500m layout=mirror
    vxassist -g namedg make volnolog 500m layout=mirror
5   Add a log to the volume vollog.
    vxassist -g namedg addlog vollog
6   Create a file system on both volumes.
    Solaris, HP-UX:
      mkfs -F vxfs /dev/vx/rdsk/namedg/vollog
      mkfs -F vxfs /dev/vx/rdsk/namedg/volnolog
    Linux: Use -t instead of -F.
7   Create mount points for the volumes, /vollog and /volnolog.
    mkdir /vollog
    mkdir /volnolog
8   Copy /etc/vfstab (on Solaris) or /etc/fstab (on Linux and HP-UX) to a file called origvfstab or origfstab.
    Solaris:
      cp /etc/vfstab /origvfstab
    HP-UX, Linux:
      cp /etc/fstab /origfstab
9   Edit /etc/vfstab or /etc/fstab so that vollog and volnolog are mounted automatically on reboot. (In the file, each entry should be separated by a tab; example entries are shown at the end of this lab.)
    Note: On the Solaris platform, ensure that you set the mount at boot option to yes for both file systems.
10  Type mountall (on Solaris and HP-UX) or mount -a (on Linux) to mount the vollog and volnolog volumes.
    mountall
11  As root, start an I/O process on each volume. For example:
    find /usr -print | cpio -pmud /vollog &
    find /usr -print | cpio -pmud /volnolog &
12  Simulate a system crash on your system by stopping it unexpectedly as shown in the following.
    Solaris: Press Stop-A. At the OK prompt, type boot.
      OK> boot
    HP-UX: Press CTRL-B. Log into the GSP.
      GSP> rs   (Enter y to confirm.)
    Linux: Use halt -n, then power off and power on the system.
      halt -n
13  After the system is running again, check the state of the volumes to ensure that neither of the volumes is in the sync/needsync mode.
    Note: If you are not using file system logging for boot disk file systems, you may need to carry out file system checks for boot disk file systems on the console before the system becomes operational again.
    vxprint -g namedg -thf vollog volnolog
14  Run the vxstat command as shown in the following. This utility displays statistical information about volumes and other VxVM objects. For more information on this command, see the vxstat(1M) manual page.
    vxstat -g namedg -fab vollog volnolog
    The output shows how many I/Os it took to resynchronize the mirrors. Compare the number of I/Os for each volume. What do you notice?
    You should notice that fewer I/O operations were required to resynchronize vollog. The log keeps track of data that needs to be resynchronized.
15  Unmount both file systems and remove the volumes vollog and volnolog.
    umount /vollog
    umount /volnolog
    vxassist -g namedg remove volume vollog
    vxassist -g namedg remove volume volnolog
16  Restore your original vfstab or fstab file.
    Solaris:
      cp /origvfstab /etc/vfstab
    HP-UX, Linux:
      cp /origfstab /etc/fstab
17  Destroy the namedg disk group.
    vxdg destroy namedg
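For step 9 of this exercise, the new mount entries might look like the following. These lines are a sketch only; adapt the fields (in particular the fsck pass numbers and mount options) to your own environment.

Solaris /etc/vfstab (fields separated by tabs):
    /dev/vx/dsk/namedg/vollog    /dev/vx/rdsk/namedg/vollog    /vollog    vxfs  2  yes  -
    /dev/vx/dsk/namedg/volnolog  /dev/vx/rdsk/namedg/volnolog  /volnolog  vxfs  2  yes  -

Linux or HP-UX /etc/fstab:
    /dev/vx/dsk/namedg/vollog    /vollog    vxfs  defaults  0 2
    /dev/vx/dsk/namedg/volnolog  /volnolog  vxfs  defaults  0 2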
Lab 2: Managing Devices Within the VxVM Architecture

In this lab, you explore the VxVM tools used to manage the device discovery layer (DDL) and dynamic multipathing (DMP). The objective of this exercise is to make you familiar with the commands used to administer multipathed disks.

For Lab Exercises, see Appendix A.
For Lab Solutions, see Appendix B.

Lab 2 Solutions: Managing Devices Within the VxVM Architecture

In this lab, you explore the VxVM tools used to manage the device discovery layer (DDL) and dynamic multipathing (DMP). The objective of this exercise is to make you familiar with the commands used to administer multipathed disks. In the VERITAS classroom (not Virtual Academy), you also explore dynamic multipathing through the use of two ports on the HDS disk array. In the classroom configuration, each LUN maps to two ports on the HDS, so that a system detects a LUN twice through a single HBA. Your instructor will change the classroom configuration at a certain point in the lab to enable access to the HDS ports, effectively switching from one path to two paths to each LUN.

The Lab Exercises for this lab are located in Appendix A.

Prerequisite Setup

To perform this lab, you need a lab system with Storage Foundation pre-installed, configured, and licensed. In addition to this, you also need a minimum of three external disks to be used during the labs.

Note: Part 2 of this lab can only be performed in the standard VERITAS classrooms that include an HDS disk array.

Before you begin this lab, destroy any data disk groups that are left from previous labs:
    vxdg destroy diskgroup
Classroom Lab Values

In preparation for this lab, you will need the following information about your lab environment. For your reference, you may record the information here, or refer back to the first lab in the SF Fundamentals section where you initially documented this information.

Object                        Sample Value                    Your Value
root password                 veritas
Host name                     train1
My Data Disks:                Solaris: c1t#d0 - c1t#d5
                              HP-UX: c4t0d0 - c4t0d5
                              AIX: hdisk21 - hdisk26
                              Linux: sda - sdf
Location of Lab Scripts:      /student/labs/sf/sf50
Location of the fp program:   /student/labs/sf/sf50/bin
Introduction to DMP Labs

To explore the behavior of DMP in the classroom, this lab is organized into two sections:
•  Part 1: In this activity, you practice using DDL and DMP administrative commands while only one path is visible to each LUN.
•  Part 2: In this activity, the instructor changes the classroom configuration so that each LUN maps to both ports on the HDS, and each system detects two paths to a LUN. Explore additional exercises to experience DMP behavior when two paths are in use.

Note: Part 2 can only be performed in a standard Symantec classroom (not in a Virtual Academy or Mobile Academy lab environment).

Instructor Classroom Setup

Instructor: If you did not initialize the classroom zoning configuration prior to the start of class on day one, perform the following steps to initialize classroom zoning configurations. This must be completed prior to performing this lab.

1   Use the course_setup script: Select Classroom. (Setup scripts are all included in Classroom SAN configuration Version 2.)
    Select Function To Perform:
    1 - Select Zoning by Zone Name
    2 - Select Zoning and Hostgroup Configuration by Course Name
    3 - Select/Check Hostgroup Configuration
2   Select option 3 - Select/Check Hostgroup Configuration.
    Select HostGroup Configuration to be Configured:
    1 - Standard Mode: 2 or 4 node sharing, No DMP
    2 - DMP Mode: 2 node sharing, switchable between 1 path and 2 path access
    3 - Check active HDS Hostgroup Configuration
3   Select option 2 - DMP Mode. Wait and do not respond to prompts.
4   Exit to first level menu.
5   Select option 1 - Select Zoning by Zone Name.
    Select Zoning Configuration Required:
    1 - Mode 1: 6 sets of 2 Systems sharing 12 LUNs, no Tape Library available (HDS DMP Mode - 6 x 2 Systems - Single Path to 12 LUNs)
    2 - Mode 2: 3 sets of 4 Systems sharing 24 LUNs, no Tape Library available (HDS DMP Mode - 6 x 2 Systems - Dual Paths to 12 LUNs)
6   Select option 1 - Mode 1 (single path to 12 LUNs).
7   Select option 4 - Solaris as the OS.
8   Exit out of the course_setup script.
9   Reboot each system using reboot -- -r.
Part 1: Exploring DMP (Single Path Visible)

Administering the Device Discovery Layer

1   Display the JBODs currently supported on your system by using VxVM's device discovery layer utility, vxddladm. Use manual pages to identify the option you need to use with the vxddladm command.
    man vxddladm
    vxddladm listjbod
2   List all currently supported disk arrays.
    Note: If your lab environment is using a Hitachi 9500 array, note that this array is included in the libvxhdsalua library that is already included in VxVM 5.0 by default.
    vxddladm listsupport
3   List all the enclosures connected to your system using the vxdmpadm listenclosure all command. Does Volume Manager recognize the disk array you are using in your lab environment? What is the name of the enclosure?
    vxdmpadm listenclosure all
    Volume Manager recognizes the disk array if it is among the supported disk arrays you listed in step 2. If you are working in a standard Symantec classroom with a Hitachi 9500 array, you should see HDS9500-ALUA0 displayed as the enclosure name of the disk array connected to your system.
4   Set your system to use enclosure-based naming.
    vxdiskadm
    Select the option, "Change the disk-naming scheme" and complete the prompts to select enclosure-based naming.
5   Display the disks attached to your system and note the changes.
    vxdisk -o alldgs list
    For example, if your classroom is configured with a Hitachi array, you will see the Hitachi tags in the output. The disks will be named enclosure_name_#, for example HDS9500-ALUA0_0.
6   Rename the enclosure to yourname using the vxdmpadm setattr command. To find the exact command syntax, check the manual pages for the vxdmpadm command.
    Note: The original name of the enclosure is displayed by the vxdmpadm listenclosure all command that you used in step 3.
    vxdmpadm setattr enclosure orig_name name=yourname
7   Launch VEA, connect to your local system, and notice any differences in how disks are represented.
    vea &

Displaying DMP Information

1   List all controllers on your system using the vxdmpadm listctlr all command. How many controllers are listed for the disk array your system is connected to?
    vxdmpadm listctlr all
    If you are in a standard Symantec classroom, you should observe only one controller listed for the enclosure you renamed to yourname.
2   Display all paths connected to the controller listed for the disk array on your system using the vxdmpadm getsubpaths ctlr=controller command. Compare the NAME and the DMPNODENAME columns in the output.
    vxdmpadm getsubpaths ctlr=controller
    The NAME column lists all of the disk devices that the operating system sees, whereas the DMPNODENAME column provides the corresponding DMP node name used for that disk device. If you have not switched to enclosure-based naming, these names will be the same. Note that the DMP node names are the ones displayed by VEA or by the vxdisk -o alldgs list command.
3   In the displayed list of paths, use the DMP node name of one of the paths to display information about paths that lead to the particular LUN. How many paths can you see?
    vxdmpadm getsubpaths dmpnodename=dmpnodename
    If you are in a standard Symantec classroom, at this stage in the lab you are using a single path to the disk array. Therefore, you should observe only one path listed for each DMP node name.

In the next three sections you will investigate preventing multipathing to a specific device, changing DMP I/O policies, and displaying DMP statistics. If you are working in an environment where the SAN zoning can be changed to provide dual paths to the disk devices, for example in a standard Symantec classroom, skip these sections and start with Part 2: Exploring DMP (Dual Paths Visible). The same sections will be repeated when dual paths to disk devices are available.

Note: Dual paths to disk devices are not available in Virtual Academy or Mobile Academy lab environments.
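As an aside, if you want to survey the path count of every device at once rather than querying one DMP node at a time, a small loop over the vxdisk output can help. This is a sketch only; it assumes the default vxdisk output format, where the first line is a column header and the first column is the DMP node name:

    # List the subpaths of every DMP node visible to VxVM (sketch)
    for node in `vxdisk list | awk 'NR > 1 {print $1}'`; do
        echo "--- $node ---"
        vxdmpadm getsubpaths dmpnodename=$node
    done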
Preventing/Allowing Multipathing for a Device

Note: Perform this section only if you cannot change your environment to use dual paths to disks. Otherwise, skip to Part 2: Exploring DMP (Dual Paths Visible).

1   List the paths for each DMP node name displayed in the enclosure-based naming scheme to identify two of the disks that were assigned to you.
    Note: Alternatively, you can use the output of the vxdmpadm getsubpaths ctlr=controller command to find the path corresponding to each DMP node name. On the Solaris platform, the vxdisk -e list command also provides information about the native OS name that corresponds to each DMP node name.
    Until you identify two of the disks assigned to you:
    vxdisk list name_#
2   Create a disk group named namedg that contains the two disks you identified in step 1.
    vxdisksetup -i device_tag   (if necessary)
    vxdg init namedg namedg01=device_tag1 namedg02=device_tag2
    Note that you should use the enclosure-based names as the device tags.
3   Display multipathing information for one of the disks in the namedg disk group.
    vxdisk list device_tag
    When you list disk information, multipathing information is displayed at the end of the command output. The number of paths to the disk and the state of the paths (enabled or disabled) is displayed. If the hardware is set to single-pathing and is not set up to support multipathing, then the number of paths is 1.
4   Select a device in the namedg disk group on your system, and prevent multipathing for that device. Note that you have to exit the vxdiskadm menu completely before the change takes effect.
    Note: When you are prompted to enter the disk name, you have to enter the actual device name, not the DMP node name. On the Solaris platform, you can use the list option to identify the actual device names that correspond to the DMP node names.
    Invoke the vxdiskadm menu. Select the option "Prevent multipathing/Suppress devices from VxVM's view." Follow the instructions to select a disk for which you will prevent multipathing by using the option "Prevent multipathing of a disk by VxVM."
    Note: On some platforms, such as HP-UX, you are asked if you want the change to take effect after a reboot. You can implement an immediate change because the disks do not have any actual VxVM objects to be concerned about.
    Exit the vxdiskadm menu to complete the operation.
5   Verify that multipathing has been prevented for the device.
    Invoke the vxdiskadm menu. Select the option "List currently suppressed/non-multipathed devices." The device that you suppressed is displayed in the output of this menu option.
6   Run the vxdisk -o alldgs list command and notice the name and location of the disk in the list.
    vxdisk -o alldgs list
    The disk naming convention has changed back to standard for that disk.
7   Re-enable multipathing for this device and verify your action. Note that you have to exit the vxdiskadm menu completely before the change takes effect.
    Invoke the vxdiskadm menu. Select the option "Allow multipathing/Unsuppress devices from VxVM's view". Follow the instructions to select the disk for which you will re-enable multipathing by using the option "Allow multipathing of a disk by VxVM". Exit the vxdiskadm menu to complete the operation.
    Note: Depending on your platform, the vxdiskadm menu may prompt you to reboot your system. Reboot your system if prompted by the vxdiskadm menu.
    To verify your action, invoke the vxdiskadm menu, and select the option "List currently suppressed/non-multipathed devices". There should be no devices listed in the output of this menu option.
8   Run the vxdisk -o alldgs list command and notice the names and location of the disk in the list.
    vxdisk -o alldgs list
    The disk is back in enclosure-based naming mode.
Displaying DMP Statistics

Note: Perform this section only if you cannot change your environment to use dual paths to disks. Otherwise, skip to Part 2: Exploring DMP (Dual Paths Visible).

1   Create a 1-GB volume named namevol1 in the namedg disk group.
    vxassist -g namedg make namevol1 1g
2   Enable the gathering of I/O statistics for DMP.
    vxdmpadm iostat start
3   Reset the DMP I/O statistics counters to zero.
    vxdmpadm iostat reset
4   Next, you will use a simple performance utility, called fp, to generate I/O on the disk used by the namedg disk group. Ask your instructor for the location of the program. In a different terminal window, start several invocations of the fp program by using the following command:
    /script_location/fp_platform /dev/vx/rdsk/namedg/namevol1 1048576 32 99999 rw &
    To create enough I/O resistance, use the vi editor and copy about 10 of these lines into a file called /tmp/testscript, and then run the script. (A sketch for generating this file appears at the end of Part 1.)
    Note: Make sure that you are using the correct version of the fp program for your platform, for example, fp_sun.
5   In the original terminal window, display I/O statistics for all controllers.
    vxdmpadm iostat show all
6   Display I/O statistics for the DMP node that corresponds to the device used by namevol1. Display statistics every two seconds for four times.
    Note: You can use the vxprint -g namedg -htr namevol1 command to identify the DMP node name of the device used by namevol1.
    vxdmpadm iostat show dmpnodename=nodename interval=2 count=4

Managing Array Policies

Note: Perform this section only if you cannot change your environment to use dual paths to disks. Otherwise, skip to Part 2: Exploring DMP (Dual Paths Visible).

1   Display the current I/O policy for the enclosure you are using.
    vxdmpadm getattr enclosure yourname iopolicy
    The default I/O policy is round-robin for the array used in standard Symantec classrooms.
2   Change the current I/O policy for the enclosure to stop load-balancing and only use multipathing for high availability.
    vxdmpadm setattr enclosure yourname iopolicy=singleactive
3   Display the new I/O policy attribute.
    vxdmpadm getattr enclosure yourname iopolicy
4   Kill all fp processes.
    Solaris: ps -ef | grep fp_sun | awk '{print $2}' | xargs kill -9
    HP-UX:   ps -ef | grep fp_hp | awk '{print $2}' | xargs kill -9
5   Destroy the namedg disk group.
    vxdg destroy namedg
6   Set your system back to traditional naming.
    vxdiskadm
    Select the option, "Change the disk-naming scheme" and complete the prompts.
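The /tmp/testscript file used in the DMP statistics exercise does not have to be built line by line in vi; a small Bourne shell loop can generate it. This is a sketch only: the fp location is taken from the Classroom Lab Values table and fp_sun is the Solaris example binary, so substitute your own path and platform suffix.

    i=1
    while [ $i -le 10 ]; do
        echo "/student/labs/sf/sf50/bin/fp_sun /dev/vx/rdsk/namedg/namevol1 1048576 32 99999 rw &" >> /tmp/testscript
        i=`expr $i + 1`
    done
    sh /tmp/testscript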
Part 2: Exploring DMP (Dual Paths Visible)

Note: The following labs are intended for a multipath environment and do not make sense with a single path. These activities may be performed only in the Symantec classroom environment, not in the Virtual Academy or Mobile environments.

Instructor Classroom Setup

Instructor: Perform the following steps to switch dual paths on. Switch to zoning configuration 2 to enable a second path to the LUNs:

1   Use the course_setup script: Select Classroom. (Setup scripts are all included in Classroom SAN configuration Version 2.)
    Select Function To Perform:
    1 - Select Zoning by Zone Name
    2 - Select Zoning and Hostgroup Configuration by Course Name
    3 - Select/Check Hostgroup Configuration
2   Select option 1 - Select Zoning by Zone Name.
    Select Zoning Configuration Required:
    1 - Mode 1: 6 sets of 2 Systems sharing 12 LUNs, no Tape Library available (HDS DMP Mode - 6 x 2 Systems - Single Path to 12 LUNs)
    2 - Mode 2: 3 sets of 4 Systems sharing 24 LUNs, no Tape Library available (HDS DMP Mode - 6 x 2 Systems - Dual Paths to 12 LUNs)
3   Select option 2 to switch to dual paths.
4   Select option 4 - Solaris as the OS.
5   Exit out of the course_setup script.
6   Reboot each system using reboot -- -r.

Displaying DMP Information

1   List all controllers on your system using the vxdmpadm listctlr all command. How many controllers are listed for the disk array your system is connected to?
    vxdmpadm listctlr all
    If you are in a standard Symantec classroom, you should still observe only one controller listed for the enclosure. Note that the DMP capability is illustrated by using dual ports on the disk array but the system still uses a single controller to connect to the SAN.
2   Display all paths connected to the controller listed for the disk array on your system using the vxdmpadm getsubpaths ctlr=controller command. Compare the NAME and the DMPNODENAME columns in the output.
    vxdmpadm getsubpaths ctlr=controller
    The NAME column lists all of the disk devices that the operating system sees, whereas the DMPNODENAME column provides the corresponding DMP node name used for that disk device. After you switch to dual paths, you should notice that the same DMP node name is listed for two devices.
3   In the displayed list of paths, use the DMP node name of one of the paths to display information about paths that lead to the particular LUN. How many paths can you see?
    vxdmpadm getsubpaths dmpnodename=dmpnodename
    If you are in a standard Symantec classroom, at this stage in the lab you have switched to dual paths to the disk array. Therefore, you should observe two paths listed for each DMP node name.

Preventing/Allowing Multipathing for a Device

1   List the paths for each DMP node name displayed in the enclosure-based naming scheme to identify two of the disks that were assigned to you.
    To identify two of the disks assigned to you:
    vxdisk list name_#
2   Create a disk group named namedg that contains the two disks you identified in step 1.
    vxdisksetup -i device_tag   (if necessary)
    vxdg init namedg namedg01=device_tag1 namedg02=device_tag2
    Note that you should use the enclosure-based names as the device tags.
3   Display multipathing information for one of the disks in the namedg disk group.
    vxdisk list device_tag
    When you list disk information, multipathing information is displayed at the end of the command output. The number of paths to the disk and the state of the paths (enabled or disabled) is displayed. After switching to dual paths you should observe two paths at the end of the command output.
4   Select a device in the namedg disk group on your system, and prevent multipathing for that device. Note that you have to exit the vxdiskadm menu completely before the change takes effect.
    Note: When you are prompted to enter the disk name, you have to enter the actual device name, not the DMP node name. On the Solaris platform, you can use the list option to identify the actual device names that correspond to the DMP node names.
    Invoke the vxdiskadm menu. Select the option "Prevent multipathing/Suppress devices from VxVM's view." Follow the instructions to select a disk for which you will prevent multipathing by using the option "Prevent multipathing of a disk by VxVM."
    Note: On some platforms, such as HP-UX, you are asked if you want the change to take effect after a reboot. You can implement an immediate change because the disks do not have any actual VxVM objects to be concerned about.
    Exit the vxdiskadm menu to complete the operation.
5   Verify that multipathing has been prevented for the device.
    Invoke the vxdiskadm menu. Select the option "List currently suppressed/non-multipathed devices." The device that you suppressed is displayed in the output of this menu option.
6   Run the vxdisk -o alldgs list command and notice the names and location of the disk in the list.
    vxdisk -o alldgs list
    The disk naming convention has changed back to standard for both paths of that disk.
7   Re-enable multipathing for this device and verify your action. You need to enable multipathing for both paths of the device. Note that you have to exit the vxdiskadm menu completely before the change takes effect.
    Invoke the vxdiskadm menu. Select the option "Allow multipathing/Unsuppress devices from VxVM's view". Follow the instructions to select the disk for which you will re-enable multipathing by using the option "Allow multipathing of a disk by VxVM". When prompted if you would like to enable multipathing for another device, type y and enter the second path for the device. Exit the vxdiskadm menu to complete the operation.
    Note: Depending on your platform, the vxdiskadm menu may prompt you to reboot your system. Reboot your system if prompted by the vxdiskadm menu.
    To verify your action, invoke the vxdiskadm menu, and select the option "List currently suppressed/non-multipathed devices". There should be no devices listed in the output of this menu option.
8   Run the vxdisk -o alldgs list command and notice the names and location of the disk in the list.
    vxdisk -o alldgs list
    The disk is back in enclosure-based naming mode and only one device is listed for both paths of the disk.

Displaying DMP Statistics

1   Create a 1-GB volume named namevol1 in the namedg disk group.
    vxassist -g namedg make namevol1 1g
2   Enable the gathering of I/O statistics for DMP.
    vxdmpadm iostat start
3   Reset the DMP I/O statistics counters to zero.
    vxdmpadm iostat reset
4   Next, you will use a simple performance utility, called fp, to generate I/O on the disk used by the namedg disk group. Ask your instructor for the location of the program. In a different terminal window, start several invocations of the fp program by using the following command:
    /script_location/fp_platform /dev/vx/rdsk/namedg/namevol1 1048576 32 99999 rw &
    To create enough I/O resistance, use the vi editor and copy about 10 of these lines into a file called /tmp/testscript, and then run the script.
    Note: Make sure that you are using the correct version of the fp program for your platform, for example, fp_sun.
5   In the original terminal window, display I/O statistics for all controllers.
    vxdmpadm iostat show all
6   Display I/O statistics for the DMP node that corresponds to the device used by namevol1. Display statistics every two seconds for four times.
    Note: You can use the vxprint -g namedg -htr namevol1 command to identify the DMP node name of the device used by namevol1.
    vxdmpadm iostat show dmpnodename=nodename interval=2 count=4
7   Kill all fp processes.
    Solaris: ps -ef | grep fp_sun | awk '{print $2}' | xargs kill -9
    HP-UX:   ps -ef | grep fp_hp | awk '{print $2}' | xargs kill -9

Managing Array Policies

1   Display the current I/O policy for the enclosure you are using.
    vxdmpadm getattr enclosure yourname iopolicy
    The default I/O policy is round-robin for the array used in standard Symantec classrooms.
2   Change the current I/O policy for the enclosure to stop load-balancing and only use multipathing for high availability.
    vxdmpadm setattr enclosure yourname iopolicy=singleactive
3   Display the new I/O policy attribute.
    vxdmpadm getattr enclosure yourname iopolicy
4   Reset the DMP I/O statistics counters to zero.
    vxdmpadm iostat reset
5   Next, you will use the fp program again to generate I/O on the disk used by the namedg disk group. Ask your instructor for the location of the program. In a different terminal window, start several invocations of the fp program by using the following command:
    /script_location/fp_platform /dev/vx/rdsk/namedg/namevol1 1048576 32 99999 rw &
    To create enough I/O resistance, use the vi editor and copy about 10 of these lines into a file called /tmp/testscript, and then run the script.
    Note: Make sure that you are using the correct version of the fp program for your platform, for example, fp_sun.
6   In the original terminal window, display I/O statistics for all controllers.
    vxdmpadm iostat show all
7   Display I/O statistics for the DMP node that corresponds to the device used by namevol1. Display statistics every two seconds for four times. Compare the output to the output you observed before changing the DMP policy to singleactive.
    Note: You can use the vxprint -g namedg -htr namevol1 command to identify the DMP node name of the device used by namevol1.
    vxdmpadm iostat show dmpnodename=nodename interval=2 count=4
8   Kill all fp processes.
    Solaris: ps -ef | grep fp_sun | awk '{print $2}' | xargs kill -9
    HP-UX:   ps -ef | grep fp_hp | awk '{print $2}' | xargs kill -9
9   Change the DMP I/O policy back to its default value (round-robin).
    vxdmpadm setattr enclosure yourname iopolicy=round-robin
10  Destroy the namedg disk group.
    vxdg destroy namedg
11  Set your system back to traditional naming.
    vxdiskadm
    Select the option, "Change the disk-naming scheme" and complete the prompts.

Managing the DMP Restore Daemon

1   Check the status of the DMP restore daemon.
    vxdmpadm stat restored
    Note the values of the daemon interval and policy.
2   Change the restore daemon interval to 400 seconds and change the policy to analyze all paths in the system.
    vxdmpadm stop restore
    vxdmpadm start restore interval=400 policy=check_all
3   Verify the changes that you made.
    vxdmpadm stat restored
4   Change the daemon interval and policy back to the original values.
    vxdmpadm stop restore
    vxdmpadm start restore interval=300 policy=check_disabled
5   Verify the changes that you made.
    vxdmpadm stat restored
Lab 3: Encapsulation and Rootability

In this practice, you create a boot disk mirror, disable the boot disk, and boot up from the mirror. Then you boot up again from the boot disk, break the mirror, and remove the boot disk from the boot disk group. Finally, you reencapsulate the boot disk and re-create the mirror.

For Lab Exercises, see Appendix A.
For Lab Solutions, see Appendix B.

Lab 3 Solutions: Encapsulation and Rootability

In this practice, you create a boot disk mirror, disable the boot disk, and boot up from the mirror. Then you boot up again from the boot disk, break the mirror, and remove the boot disk from the boot disk group. Finally, you reencapsulate the boot disk and re-create the mirror. These tasks are performed using a combination of the VEA interface, the vxdiskadm utility, and CLI commands.

The Lab Exercises for this lab are located in Appendix A.

Prerequisite Setup

To perform this lab, you need a lab system with Storage Foundation pre-installed, configured, and licensed. In addition to this, you also need a second internal disk to be able to mirror the system disk. On the HP-UX platform, you also need three external disks to carry out the labs on LVM to VxVM conversion.

Some of the lab steps may require console access. If you are working in a Virtual Academy lab environment where you do not have console access to the lab system, you will be asked to skip these steps.
Classroom Lab Values

In preparation for this lab, you will need the following information about your lab environment. For your reference, you may record the information here, or refer back to the first lab in the SF Fundamentals section where you initially documented this information.

Object               Sample Value                    Your Value
root password        veritas
Host name            train1
My Boot Disk:        Solaris: c0t0d0
                     HP-UX: c1t15d0
                     AIX: hdisk0
                     Linux: hda
2nd Internal Disk:   Solaris: c0t2d0
                     HP-UX: c3t15d0
                     AIX: hdisk1
                     Linux: hdb
My Data Disks:       Solaris: c1t#d0 - c1t#d5
                     HP-UX: c4t0d0 - c4t0d5
                     AIX: hdisk21 - hdisk26
                     Linux: sda - sdf
Solaris and Linux Only: Encapsulation and Boot Disk Mirroring

Note: The encapsulation and boot disk mirroring labs vary by platform due to the way in which the boot disk is handled by the operating system. This lab section applies to Solaris and Linux only. Labs for HP-UX are presented in the next section.

Encapsulation and Boot Disk Mirroring - Solaris and Linux

1   Use vxdiskadm to encapsulate the boot disk. Use systemdg as the name of your boot disk group and use rootdisk as the name of your boot disk.
    Note: If you are accessing your lab system remotely as in a Virtual Academy lab environment, you will lose your connection to your lab system when you reboot your system after boot disk encapsulation. Wait for the system to come back up to reconnect. If you cannot log back in within a reasonable amount of time, contact your instructor.
    Select the vxdiskadm option, "Encapsulate one or more disks," and follow the steps to encapsulate your system disk. Select the system disk as the disk to encapsulate. Add the system disk to a disk group named systemdg. Specify the name of the disk as rootdisk. Shut down and reboot after exiting vxdiskadm.
2   After the reboot, use vxdiskadm to add a disk that will be used for the mirror of the root disk. If your system has two internal disks, use the second internal disk on your system for the mirror. (This is required due to the nature of the classroom configuration.) When setting up the disk, ensure that the disk layout is sliced. Use altboot as the name of your disk.
    Select the vxdiskadm option, "Add or initialize one or more disks," and follow the steps to add a disk to the systemdg disk group. Select the second internal disk as the device to add. Add the disk to the systemdg disk group. Specify a sliced format when prompted. Specify the name of the disk as altboot.
3   Next, use vxdiskadm to mirror your system disk, rootdisk, to the disk that you added, altboot.
    Select the vxdiskadm option, "Mirror volumes on a disk," and follow the steps to mirror the volumes. Specify the disk containing the volumes to be mirrored as rootdisk. Specify the destination disk as altboot. Open a separate window and monitor the mirroring progress using the vxtask monitor command for each of the volumes being mirrored on the boot disk. Use Ctrl+c to exit the command.
    vxtask monitor
   This command shows the percentage complete of the volume being mirrored.

4  After the mirroring operation is complete, verify that you now have two disks in systemdg, rootdisk and altboot, and that all volumes are mirrored. Also, check to determine if rootvol is enabled and active. Hint: Use vxprint and examine the STATE fields.
   vxprint -g systemdg -htr
   The rootvol should be in the ENABLED and ACTIVE state, and you should also see two plexes for each of the volumes in systemdg.

Solaris

5  Place the names of your alternate boot disks in persistent storage.
   a  Check the values in eeprom and verify the names of your alternate boot disks in persistent storage. Ensure that the use-nvramrc? variable is set to true. Use the eeprom command to see the current settings and to set the new values, if needed.
   b  Write down the values of the vx-rootdisk and vx-altboot variables.
      eeprom | grep vx-
      The vx-rootdisk value is the disk alias for the original boot disk and the vx-altboot value is the alias for the mirror of the boot disk.
   c  Use eeprom to see if the use-nvramrc? value is set to true.
      eeprom | grep nvramrc
   d  If the value is set to true, go to the next step where you test that the mirror of the system disk is bootable. If the value is set to false, perform the following additional eeprom commands:
      From the command line, set the eeprom variable to enable VxVM to create a device alias in the OpenBoot program.
      eeprom use-nvramrc?=true
      Verify that the new values are in place by typing eeprom again.
      eeprom

Linux

Note: You need console access to be able to perform this step on the Linux platform. If you are working in a Virtual Academy lab environment, skip this step.
To set the BIOS on the PC systems, you must enter system setup upon boot and add the mirror disk into the boot sequence. Press i after the boot loader window shows.
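For reference, on Solaris the eeprom checks in step 5 produce output along the following lines. This is an illustrative sketch only; the device paths and alias values on your system will differ.

   # eeprom | grep vx-        (illustrative output)
   nvramrc=devalias vx-rootdisk /pci@1f,4000/scsi@3/disk@0,0:a
   devalias vx-altboot /pci@1f,4000/scsi@3/disk@2,0:a
   # eeprom | grep nvramrc    (illustrative output)
   use-nvramrc?=true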
Note: The following steps (steps 6-11) in this lab section require console access. If you are working in a Virtual Academy lab environment with no console access, skip to the last step (step 12) of this section.

6  Test that the mirror of the system disk is bootable.

Solaris
   a  Sync the disks, and then bring the system down to the OK prompt.
      sync; sync
      init 0
   b  Print the environment settings at the OK prompt.
      OK> printenv
   c  Use the boot disk alias vx-altboot to boot up from the alternate boot disk.
      OK> boot vx-altboot
   d  After the system is booted back up from the alternate boot disk, verify which boot disk is used.
      eeprom | grep devalias
      Note the value of the alternate boot disk variable.
   e  Run the prtconf command to view which disk the system is currently booted from.
      prtconf -vp | grep bootpath
      Compare the values to validate that you are booted off the mirror.
   f  After you have verified that the mirror of the boot disk is bootable, reboot the system to be booted off the original boot disk.
      sync; sync
      init 6
   g  After the system comes back up, verify that the original boot disk is being used.
      eeprom | grep rootdisk
   h  Note the value of the disk.
      prtconf -vp | grep bootpath
   i  Compare the two values of the disk and confirm that you are booted off the original disk.

Linux
   In our training classrooms, the Dell OptiPlex GX270 PCs have a BIOS which supports failing over to and booting from an alternate mirror disk in the event of boot disk failure. So now, when the system boots, it should automatically recognize the secondary hard drive (altboot) and boot from it.
7  Now that you are running off the original boot disk, fail the disk. The system will continue to run because you have a mirror of the disk.
   a  Fail the system disk. To disable the boot disk and make rootvol-01 disabled and offline, use the vxmend command. This command is used to make changes to configuration database records. Here, you are using the command to place the plex in an offline state. For more information about this command, see the vxmend(1M) manual page.
      vxmend -g systemdg off rootvol-01
   b  Verify that rootvol-01 is now disabled and offline.
      vxprint -htr
   c  To change the plex to a STALE state, run the vxmend on command on rootvol-01. Verify that rootvol-01 is now in the DISABLED and STALE state.
      vxmend -g systemdg on rootvol-01
      vxprint -htr

8  Now that you have simulated the failure of the original boot disk, reboot the system and boot up on the mirror.
   a  Reboot the system using init 6.
      init 6
   b  The system stops during the reboot at the OK prompt and indicates for you to use vx-altboot. Boot from the alternate boot disk.
      OK> boot vx-altboot

9  After the system comes back up, check the status of the root volume. What is the state of the volume?
   vxprint -htr
   The volume with the plexes being synchronized is in the SYNC state. The other plexes are in the NEEDSYNC state.
   Use the vxtask list command to see the progress of the resynchronization of the root volume. (Run the following commands a few times to see the progress.)
   vxtask list
   vxprint -htr

10 After the synchronization is complete, verify the status of rootvol. Verify that rootvol-01 is now in the ENABLED and ACTIVE state.
   Note: You may need to wait a few minutes for the state to change from STALE to ACTIVE.
   vxprint -htr
   You have successfully booted up from the mirror, and the volumes have been resynchronized.

11 Your system is currently booted up from the boot disk mirror. To boot up from the original boot disk, reboot again using init 6.
   init 6
   You have now booted up from the original boot disk.
   Note: If you are working in a Virtual Academy lab environment with no console access, you can continue with step 12.

12 Remove the mirror of the boot disk. Using VEA, remove all but one plex of rootvol, swapvol, usr, var, opt, and home (that is, remove the newer plex from each volume in systemdg).
   For each volume in systemdg, remove all of the newly created mirrors. More specifically, for each volume, two plexes are displayed, and you should remove the newer (-02) plexes from each volume. To remove a mirror, highlight a volume and select Actions->Mirror->Remove.
   In preparation for the next lab, leave the boot disk encapsulated, but not mirrored.

Optional Lab: Unencapsulating - Solaris and Linux

If you want to test the vxunroot process for unencapsulating the boot disk, you can do the following steps. However, you need the boot disk encapsulated for the next lab, so after performing this optional exercise, you will need to encapsulate, but not mirror, your boot disk.

1  Run the command to convert the root volumes back to disk partitions.
   vxunroot

2  Shut down and restart the system when prompted.
   Note: If you are accessing your lab system remotely as in a Virtual Academy lab environment, you will lose your connection to your lab system when you reboot your system. Wait for the system to come back up to reconnect. If you cannot log back in within a reasonable amount of time, contact your instructor.

3  Verify that the mount points are now slices rather than volumes.
   df -k

4  In preparation for the next lab, leave the boot disk encapsulated, but not mirrored.
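The change in step 3 is easy to spot in the df -k output. The excerpt below is an illustrative sketch only (device names and sizes will differ on your system): while the boot disk is encapsulated, the root file system is mounted from a VxVM volume; after vxunroot, it is mounted from a disk partition again.

   # While encapsulated (illustrative):
   /dev/vx/dsk/systemdg/rootvol  8705501 4025043 4593403  47%  /
   # After vxunroot (illustrative):
   /dev/dsk/c0t0d0s0             8705501 4025043 4593403  47%  /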
HP-UX Only: Putting the Boot Disk Under VxVM Control and Boot Disk Mirroring

Note: The encapsulation and boot disk mirroring labs vary by platform due to the way in which the boot disk is handled by the operating system. The following lab sections apply to HP-UX only. Labs for Solaris and Linux are presented in the previous sections.

Putting the Boot Disk Under VxVM Control and Boot Disk Mirroring - HP-UX

Note: This lab section requires console access. If you are working in a Virtual Academy lab environment with no console access, skip this lab section and go to the LVM to VxVM Conversion lab section.

1  Take the system into single user mode (init level 1).
   init 1

2  Check the status of the second internal disk using the vxdisk list command. If the disk is displayed as an LVM disk, ensure that it is not used by any active LVM volume groups and take it out of LVM control using the pvremove command.
   Note: If the pvremove command fails due to exported volume group information left on the disk, re-create an LVM header using the force option (pvcreate -f /dev/rdsk/device_name) before using the pvremove command to remove it.
   vxdisk list
   If necessary:
   vgdisplay -v /dev/vg00
   pvcreate -f /dev/rdsk/c3t15d0
   pvremove /dev/rdsk/c3t15d0
   where c3t15d0 is the device name of the second internal disk.
   vxdctl enable
   vxdisk list

3  Check the values of the primary and alternate boot devices using the setboot command.
   setboot

4  Create a bootable VxVM system disk on the second internal disk using the vxcp_lvmroot command and make this disk the primary boot disk. Use systemdg as the disk group to put the boot disk in.
   vxcp_lvmroot -g systemdg -v -b c3t15d0
   where c3t15d0 is the second internal disk.

5  When the vxcp_lvmroot command completes, check the output of the setboot command and verify that the primary path is the VxVM disk.
   setboot

6  Reboot the system. After it boots up, verify that it is booted on VxVM volumes by checking the output of the bdf command.
   shutdown -ry now
   bdf

7  Destroy the internal disk that was used as the LVM system disk.
   vxdestroy_lvmroot -v c1t15d0
   where c1t15d0 is the first internal disk (the original LVM boot disk). Confirm when prompted.

8  Mirror the system disk to the other internal disk that used to be the LVM disk.
   Note: This operation can take some time depending on the sizes of the volumes on your system disk.
   vxrootmir -v -b c1t15d0

9  Verify the primary and alternate boot paths and check the volume layouts of the volumes in the bootdg.
   setboot
   vxprint -g bootdg -htr

10 Reboot your system using shutdown -ry now and interrupt the automatic boot process.
   shutdown -ry now
   Press any key to interrupt the boot process when the following messages appear:
   Processor is booting from first available device.
   To discontinue, press any key within 10 seconds.
   You should see the "Boot terminated." message followed by the Main menu.

11 Boot the system using the alternative boot disk.
   Main Menu: Enter command or menu> bo alt
   Interact with IPL (Y, N, or Cancel)? n

12 When the system is up, disable the root volume plex that is on the primary boot device using the vxmend off command followed by vxmend on.
   vxmend -g bootdg off rootvol-01
   vxprint -g bootdg -htr rootvol
   (rootvol-01 should be in the DISABLED/OFFLINE state)
   vxmend -g bootdg on rootvol-01
   vxprint -g bootdg -htr rootvol
   (rootvol-01 should be in the DISABLED/STALE state)

13 Reboot your system using shutdown -ry now and follow the boot-up messages. What did you observe?
   shutdown -ry now
   Messages are displayed indicating that the rootvol volume does not have a valid plex on the primary boot disk and that you should use rootdisk02 to boot the system up. The boot process is then aborted and the system is halted.

14 Reset the system using the Service Processor login as follows:
   CTRL-B
   Service Processor login: <CR> (if necessary)
   Service Processor password: <CR> (if necessary)
   GSP> rs
   Type Y to confirm your intention to restart the system:
   (Y/[N]) y
   GSP>

15 Interrupt the automatic boot process and boot the system using the alternate boot device.
   Press any key to interrupt the boot process when the following messages appear:
   Processor is booting from first available device.
   To discontinue, press any key within 10 seconds.
   You should see the "Boot terminated." message followed by the Main menu.
   Main Menu: Enter command or menu> bo alt
   Interact with IPL (Y, N, or Cancel)? n

16 When the system is up and running, display the state of the root volume and plexes. Check if there are any synchronization tasks being carried out by Volume Manager using the vxtask list command. Wait for the synchronization to complete.
   vxprint -g bootdg -htr
   (The rootvol-01 plex will be in the ENABLED/STALE state until the synchronization is complete. The state will change to ENABLED/ACTIVE after the synchronization completes.)
   vxtask list
   (You should observe the atomic copy synchronization operation initiated by the vxrecover command and the percentage that is completed.)

17 When the state of the first rootvol plex changes back to ACTIVE, remove the second mirrors on the alternate disk from each volume. Take the alternate disk out of the disk group and uninitialize it.
   Note: The volumes you have on your system disk in your lab environment can be different from the volumes shown in the following solution.
   vxprint -g bootdg -htr
   vxplex -g bootdg -o rm dis rootvol-02
   vxplex -g bootdg -o rm dis standvol-02
   vxplex -g bootdg -o rm dis varvol-02
   vxplex -g bootdg -o rm dis usrvol-02
   vxplex -g bootdg -o rm dis tmpvol-02
   vxplex -g bootdg -o rm dis optvol-02
   vxplex -g bootdg -o rm dis swapvol-02
   vxplex -g bootdg -o rm dis homevol-02
   vxprint -g bootdg -htr
   vxdg -g bootdg rmdisk rootdisk02
   vxdiskunsetup c1t15d0
   where c1t15d0 is the disk used as the alternate boot disk.

18 Take the system to single user mode by executing init 1.
   init 1

19 Create a copy of the system disk on an LVM disk using the vxres_lvmroot command. Do not make the LVM disk the primary boot disk.
   vxres_lvmroot -v c1t15d0
   setboot

20 Reboot your system using shutdown -ry now and interrupt the automatic boot process.
   shutdown -ry now
   Press any key to interrupt the boot process when the following messages appear:
   Processor is booting from first available device.
   To discontinue, press any key within 10 seconds.
   You should see the "Boot terminated." message followed by the Main menu.

21 Boot the system using the alternative boot disk.
   Main Menu: Enter command or menu> bo alt
   Interact with IPL (Y, N, or Cancel)? n

22 When the system is up, verify that the system is booted on the LVM disk by executing the bdf command, and display the file system table and the /stand/bootconf file.
   bdf
   cat /etc/fstab
   (includes LVM volumes for system disk file systems)
   cat /stand/bootconf
   (shows the LVM disk as the boot device)

23 Reboot your system using shutdown -ry now, and when the system is back up, verify that the system is booted on VxVM volumes, and display the file system table and the /stand/bootconf file.
   shutdown -ry now
   After the system is back up, log in as root.
   bdf
   cat /etc/fstab
   (includes VxVM volumes for system disk file systems)
   cat /stand/bootconf
   (shows the VxVM disk as the boot device)

LVM to VxVM Conversion - HP-UX

1  Create two LVM physical volumes using the pvcreate command on two uninitialized external disks as follows:
   pvcreate /dev/rdsk/device_tag1
   pvcreate /dev/rdsk/device_tag2
   Note: If you do not have enough uninitialized external disks, you may need to uninitialize the empty disks that are under VxVM control before creating the LVM physical volumes.

2  Create a volume group called vg01 using the physical volumes as follows:
   mkdir /dev/vg01
   mknod /dev/vg01/group c 64 0x020000
   vgcreate /dev/vg01 /dev/dsk/device_tag1 /dev/dsk/device_tag2

3  Using the lvcreate command, create two logical volumes, one concatenated and one striped, of size 100 MB in the vg01 volume group as follows: (Use 32K as the stripe unit.)
   lvcreate -L 100 -n concatvol1 /dev/vg01
   lvcreate -i 2 -I 32 -L 100 -n stripevol1 /dev/vg01

4  Make vxfs file systems on both volumes and mount them to two new directories called /concat and /stripe.
   mkfs -F vxfs /dev/vg01/rconcatvol1
   mkfs -F vxfs /dev/vg01/rstripevol1
   mkdir /concat
   mkdir /stripe
   mount -F vxfs /dev/vg01/concatvol1 /concat
   mount -F vxfs /dev/vg01/stripevol1 /stripe

5  Edit the /etc/fstab file and create the corresponding entries for the file systems by adding the following lines:
   /dev/vg01/concatvol1 /concat vxfs log 0 2
   /dev/vg01/stripevol1 /stripe vxfs log 0 2
   vi /etc/fstab

6  Using the vgcfgbackup command, back up the LVM configuration for the volume group that you created in the first part of the lab.
   vgcfgbackup -f /vg01conf vg01

7  Unmount the file systems and run the conversion tool: vxvmconvert.
   umount /concat
   umount /stripe
   vxvmconvert
   a  Analyze the LVM volume group (option 1) you created in the first part of the lab. What is the result of the analysis?
      Successful.
   b  Convert the LVM volume group (option 2) to a VxVM disk group named dg01. What changes did you notice in device names?
      The volume names will be the same as the LVM volume names. The disk group name is specified during the conversion, and the disk media names, subdisk names, and plex names follow the VxVM naming conventions.
   c  Quit from the utility.

8  Examine the file /etc/fstab and the directories /dev/vg01 and /dev/vx/[r]dsk. What changes have been made?
   The entries in /etc/fstab have been modified, /dev/vg01 is gone, and the volumes are now in /dev/vx/[r]dsk/dg01.

9  Change the volume name of concatvol1 to vol01 after the conversion.
   vxedit -g dg01 rename concatvol1 vol01
   Change the corresponding /etc/fstab entry to:
   /dev/vx/dsk/dg01/vol01 /concat vxfs log 0 2
   vi /etc/fstab

10 Remount the file systems.
   mount -F vxfs /dev/vx/dsk/dg01/vol01 /concat
   mount -F vxfs /dev/vx/dsk/dg01/stripevol1 /stripe

11 After the conversion to VxVM completes successfully, create a VxVM disk group called testdg on another external disk.
   vxdisksetup -i device_tag3
   vxdg init testdg testdg01=device_tag3

12 Unmount the file systems and roll back to the LVM configuration using vxvmconvert option 3. What did you observe?
   umount /concat
   umount /stripe
   vxvmconvert
   Select option 3. Only vg01 is eligible for rollback (not the original VxVM disk group testdg). You are warned that your changes will be lost. Everything, including the /etc/fstab record, has reverted to the original names.

13 Remove the entries for /concat and /stripe from /etc/fstab.
   vi /etc/fstab

14 Using the lvremove command, remove the volumes concatvol1 and stripevol1.
   lvremove /dev/vg01/concatvol1
   Confirm when prompted.
   lvremove /dev/vg01/stripevol1
   Confirm when prompted.

15 Destroy the volume group vg01.
   vgreduce /dev/vg01 /dev/dsk/device_tag2
   vgremove /dev/vg01

16 Convert the empty LVM physical volumes to VxVM by removing them from LVM control and then initializing them using VxVM.
   pvremove /dev/rdsk/device_tag1
   pvremove /dev/rdsk/device_tag2
   vxdisksetup -i device_tag1
   vxdisksetup -i device_tag2

17 Destroy the testdg disk group.
   vxdg destroy testdg
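As an optional sanity check after step 17 (not part of the scripted steps), you can confirm that the disks used in this exercise are back under VxVM control and free for reuse:

   vxdisk -o alldgs list

The converted disks should appear with a status of online and no disk group name, indicating that they are initialized and available.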
Lab 4 Solutions: Troubleshooting the Boot Process

In this lab, you practice recovering from encapsulated boot disk failure scenarios. On the Solaris platform, to investigate and practice recovery techniques, you use a set of interactive lab scripts. Each script simulates a failure in the encapsulated boot disk (and its mirror, if required) and reboots the system.

For the Lab Exercises, see Appendix A. For the Lab Solutions, see Appendix B.

Prerequisite Setup

To perform this lab, you need a lab system with Storage Foundation preinstalled, configured, and licensed. In addition to this, you also need a second internal disk to be used as an alternative boot disk. If you have completed the previous labs, you should have the following setup:
On the Solaris platform, you should have your system disk under VxVM control (encapsulated) but not mirrored.
On the HP-UX platform, you should have the system disk under VxVM control and you should have the second internal disk configured as an alternative LVM boot disk.

Note: This lab requires console access. If you are working in a Virtual Academy lab environment with no console access, you cannot perform this lab.
Classroom Lab Values

In preparation for this lab, you will need the following information about your lab environment. For your reference, you may record the information here, or refer back to the first lab in the SF Fundamentals section where you initially documented this information.

Object                          Sample Value            Your Value
root password                   veritas
Host name                       train1
Host name of the system
sharing disks with my system    train2
My Boot Disk                    Solaris: c0t0d0
                                HP-UX: c1t15d0
                                AIX: hdisk0
                                Linux: hda
2nd Internal Disk               Solaris: c0t2d0
                                HP-UX: c3t15d0
                                AIX: hdisk1
                                Linux: hdb
Location of Lab Scripts         /student/labs/sf/sf50
Solaris Only: Troubleshooting the Boot Process

Note: The boot process troubleshooting labs vary by platform due to the way in which the boot disk is handled by the operating system. This lab section applies to Solaris only. Labs for HP-UX are presented in the next section.
Note: These labs require console access. If you are working in a Virtual Academy lab environment with no console access, you cannot perform these labs.

Troubleshooting the Boot Process - Solaris

Overview

Your goal is to recover from the problem as described in each scenario. Use your knowledge of VxVM administration, as well as the VxVM recovery tools and concepts described in the lesson, to determine what steps to take to ensure recovery. You succeed when you solve the problem with the boot disk and boot to multiuser mode.
For most of the recovery problems, you can use any of the VxVM interfaces: the command line interface, the VERITAS Enterprise Administrator (VEA) graphical user interface, or the vxdiskadm menu interface. Lab solutions are provided for only one method. If you have questions about recovery using interfaces not covered in the solutions, see your instructor.

Setup

1  In this lab, the automated lab scripts prompt you to reboot the system. If the reboot fails, ask your instructor how to bring the system down. If your system is set to use enclosure-based naming, then you must turn off enclosure-based naming before running the lab scripts.

2  These labs require the system disk to be encapsulated. If your system disk is not encapsulated, you must encapsulate it before proceeding with this lab. Use vxdiskadm to encapsulate the boot disk. Use systemdg as the name of your boot disk group and use rootdisk as the name of your boot disk.

3  You must have at least one additional disk that is the same size (or larger) as your boot disk. You are instructed to create a mirror of the boot disk in the second exercise.

4  Ask your instructor for the location of the lab scripts.
Recovering from Encapsulated, Unmirrored Boot Disk Failure

In this lab exercise, you attempt to recover from encapsulated, unmirrored boot disk failure. You succeed when you recover the system disk and boot to multiuser mode. The lab script run_root simulates a boot disk failure. Ask your instructor for the location of the run_root script.

1  This lab requires that the system disk is encapsulated, but not mirrored. If your system disk is mirrored, then remove the mirror.

2  Save a copy of the /etc/system file to /etc/system.preencap. In the new file (/etc/system.preencap), comment out the non-forceload lines related to VxVM (the lines that define the disk to be an encapsulated device). To comment out a line, place an asterisk (*) in front of the line in the /etc/system.preencap file:
   * rootdev:/pseudo/vxio@0:0
   * set vxio:vol_rootdev_is_volume=1

3  From the directory that contains the lab scripts, run the script run_root, and select option 1, "Encapsulated, unmirrored boot disk failure":
   Before You Begin: Ensure that the environment variable DG is set to the name of the bootdg disk group.
   echo $DG
   If it is not set, set it before you continue:
   DG="bootdg"
   export DG
   run_root
   1) Lab 1 - Encapsulated, unmirrored boot disk failure
   2) Lab 2 - Encapsulated, mirrored boot disk failure - 1
   3) Optional Lab 3 - Encapsulated, mirrored boot disk failure - 2
   x) Exit
   Your Choice? 1

4  Follow the instructions in the lab script window. This script causes the only plex in rootvol to change to the STALE state. When you are ready, the system will be rebooted twice. Wait until the system reboot fails because of the STALE plex and you are presented with the OK prompt.
5  Recover the volume rootvol by using the /etc/system.preencap file that you created before the failure. You succeed when the system boots up to multiuser mode.
   To recover:
   a  When the system fails to boot up, use the command boot -a from the ok prompt:
      ok> boot -a
   b  Press Return when prompted for the UNIX and kernel information. When prompted for the name of the system file, enter the name of the file that you copied with non-forceload lines commented out:
      Name of system file [etc/system]: etc/system.preencap
   c  When you are in maintenance mode, check the state of rootvol by using the vxprint command:
      vxprint -g bootdg -ht
      You should notice that rootvol is not started (DISABLED mode), and the only plex it has (rootvol-01) is STALE.
      Note: bootdg is a reserved name and is just a link to whatever disk group you used when you encapsulated the system disk. It is not a variable to be replaced by the name of the disk group you are using.
   d  To recover:
      vxmend -g bootdg fix clean rootvol-01
      vxvol -g bootdg start rootvol
      vxprint -g bootdg -ht
      reboot
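For reference, the repair in step d looks roughly like the following in the vxprint -ht output. This is an illustrative excerpt only, with most columns omitted; the record names, lengths, and exact states on your system may differ.

   # Before the repair (illustrative):
   v  rootvol    ...      DISABLED ACTIVE ...
   pl rootvol-01 rootvol  DISABLED STALE  ...
   # After vxmend fix clean (illustrative):
   pl rootvol-01 rootvol  DISABLED CLEAN  ...
   # After vxvol start (illustrative):
   v  rootvol    ...      ENABLED  ACTIVE ...
   pl rootvol-01 rootvol  ENABLED  ACTIVE ...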
Recovering from Encapsulated, Mirrored Boot Disk Failure (1)

In this lab exercise, you attempt to recover from encapsulated, mirrored boot disk failure. You succeed when you recover the system disk and boot to multiuser mode. The lab script run_root simulates a boot disk failure. Ask your instructor for the location of the run_root script.

1  Important: Mirror the boot disk. This lab requires that the system disk is encapsulated and mirrored. If your system disk is not currently mirrored, then mirror the system disk before continuing.
   Note: Make sure that the use-nvramrc? eeprom parameter is set to true when you mirror the system disk, so that the device alias created by VxVM for the mirror disk can be used.

2  If you have not already done so, save a copy of the /etc/system file to /etc/system.preencap. In the new file (/etc/system.preencap), comment out the non-forceload lines related to VxVM (the lines that define the disk to be an encapsulated device). To comment out a line, place an asterisk (*) in front of the line in the /etc/system.preencap file:
   * rootdev:/pseudo/vxio@0:0
   * set vxio:vol_rootdev_is_volume=1

3  From the directory that contains the lab scripts, run the script run_root, and select option 2, "Encapsulated, mirrored boot disk failure - 1":
   Before You Begin: Ensure that the environment variable DG is set to the name of the bootdg disk group.
   echo $DG
   If it is not set, set it before you continue:
   DG="bootdg"
   export DG
   run_root
   1) Lab 1 - Encapsulated, unmirrored boot disk failure
   2) Lab 2 - Encapsulated, mirrored boot disk failure - 1
   3) Optional Lab 3 - Encapsulated, mirrored boot disk failure - 2
   x) Exit
   Your Choice? 2

4  Follow the instructions in the lab script window. This script causes both plexes in rootvol to change to the STALE state. When you are ready, the system is rebooted. The system does not come up due to the STALE plexes.
5  Recover the volume rootvol by using the /etc/system.preencap file that you created before the failure. You succeed when the system boots up to multiuser mode.
   To recover:
   a  When the system fails to boot up, use the command boot -a from the ok prompt:
      ok boot -a
   b  Press Return when prompted for the UNIX and kernel information. When prompted for the name of the system file, enter the name of the file that you copied with non-forceload lines commented out:
      Name of system file [etc/system]: etc/system.preencap
   c  When you are in maintenance mode, check the state of rootvol by using the vxprint command:
      vxprint -g bootdg -ht
      You should notice that rootvol is not started (DISABLED mode), and both plexes (rootvol-01 and rootvol-02) are STALE.
   d  To recover:
      vxmend -g bootdg fix clean rootvol-01
      If you do not want to wait for the mirrors to resynchronize before booting up to multiuser mode, you can offline the second plex and then continue:
      vxmend -g bootdg off rootvol-02
      Otherwise, the stale plex is resynchronized from the clean plex when you start the volume.
   e  Start the volume rootvol and reboot:
      vxvol -g bootdg start rootvol
      reboot
      Note: While you are rebooting the system, if you receive a message stating that the root file system could not be repaired and should be checked manually, enter the maintenance mode by providing the root password, and manually check the root file system by executing the fsck -F ufs /dev/vx/rdsk/bootdg/rootvol command. Once the file system check is complete, reboot the system.
   After you boot up to multiuser mode, online the second plex (if necessary) and recover:
      vxmend -g bootdg on rootvol-02
      vxrecover
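If you want to watch the plex resynchronization that vxrecover starts in the background, the same task-monitoring command used elsewhere in these labs applies here as well (press Ctrl+C to exit):

   vxtask monitor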
Optional Lab Exercises: Solaris Only

The next set of lab exercises is optional and may be performed if you have time. These exercises provide additional recovery scenarios for troubleshooting the boot process on Solaris.

Optional Lab: Recovering from Encapsulated, Mirrored Boot Disk Failure (2)

In this lab exercise, you attempt to recover from encapsulated, mirrored boot disk failure. You succeed when you recover the system disk and boot to multiuser mode. The lab script run_root simulates a boot disk failure. Ask your instructor for the location of the run_root script.

1  Important: Mirror the boot disk. This lab requires that the system disk is encapsulated and mirrored. If your system disk is not currently mirrored, then mirror the system disk before continuing. If your system disk is already mirrored, ensure that all the plexes of the system disk volumes, and the volumes themselves, are in the ENABLED/ACTIVE state; that is, there are no synchronization processes running on the volumes on the system disk.
   Note: Make sure that the use-nvramrc? eeprom parameter is set to true when you mirror the system disk, so that the device alias created by VxVM for the mirror disk can be used. Run the eeprom command and view the settings for devalias, such as vx-altboot and vx-rootdisk.

2  If you have not already done so, save a copy of the /etc/system file to /etc/system.preencap. In the new file (/etc/system.preencap), comment out the non-forceload lines related to VxVM (the lines that define the disk to be an encapsulated device). To comment out a line, place an asterisk (*) in front of the line in the /etc/system.preencap file:
   * rootdev:/pseudo/vxio@0:0
   * set vxio:vol_rootdev_is_volume=1

3  From the directory that contains the lab scripts, run the script run_root, and select option 3, "Encapsulated, mirrored boot disk failure - 2":
   Before You Begin: Ensure that the environment variable DG is set to the name of the bootdg disk group.
   echo $DG
   If it is not set, set it before you continue:
   DG="bootdg"
   export DG
   run_root
   1) Lab 1 - Encapsulated, unmirrored boot disk failure
   2) Lab 2 - Encapsulated, mirrored boot disk failure - 1
   3) Optional Lab 3 - Encapsulated, mirrored boot disk failure - 2
   x) Exit
   Your Choice? 3

4  Follow the instructions in the lab script window. This script causes one of the plexes in rootvol to change to the STALE state. The clean plex is missing the /kernel directory, so you cannot boot up the system without recovery. When you are ready, the script reboots the system.

5  Recover the volume rootvol by using the /etc/system.preencap file that you created before the failure. You succeed when the system boots up to multiuser mode.
   In this lab, the original system disk fails to boot because the /kernel directory is missing on the rootvol-01 plex, although the rootvol-01 plex is in a CLEAN state. (This directory has been renamed as /kernel.bak.) The mirror disk fails to boot because the second plex is in STALE mode. To recover, you must boot up on the mirror disk using the partition rather than the volume. For example, assume that the mirror of the system disk is called sysdg02 and you have already changed the use-nvramrc? parameter to true while you were mirroring the system disk so that VxVM created the devalias vx-sysdg02 for the mirror disk.
   a  Run the following command at the ok prompt to boot up to single-user mode:
      ok boot vx-sysdg02 -as
   b  Press Return when prompted for the UNIX and kernel information. When prompted for the name of the system file, enter the name of the file that you copied with non-forceload lines commented out:
      Name of system file [etc/system]: etc/system.preencap
   c  When you are in maintenance mode, check the state of rootvol by using the vxprint command:
      vxprint -g bootdg -ht
      You should notice that rootvol is started (ENABLED/ACTIVE mode), the plex rootvol-01 is ACTIVE, and the plex rootvol-02 is STALE. To prevent the second plex from being resynchronized automatically upon reboot, offline the second plex using:
      vxmend -g bootdg off rootvol-02
   d  To recover using rootvol-01:
      mount -F ufs /dev/vx/dsk/bootdg/rootvol /mnt
      Note: When you run this command, if you receive any errors about not being able to write to /etc/mnttab, ignore them.
      cd /mnt
      mv kernel.bak kernel
      cd /
      umount /mnt
      reboot
      Note: While you are rebooting the system, if you receive a message stating that the root file system could not be repaired and should be checked manually, enter the maintenance mode by providing the root password, and manually check the root file system by executing the fsck -F ufs /dev/vx/rdsk/bootdg/rootvol command. Once the file system check is complete, reboot the system.
   e  Once you boot up to multiuser mode, online the second plex and recover:
      vxmend -g bootdg on rootvol-02
      vxrecover

Notes
In a real-world environment, because the kernel directory is totally missing, you must copy it from the partition that you booted up on (in step d):
   mkdir /mnt/kernel
   cd /kernel; tar cf - . | (cd /mnt/kernel; tar xfBp -)
You could also recover by using the second plex rootvol-02, by offlining the first plex rootvol-01 and setting the second plex rootvol-02 to CLEAN before rebooting. However, if the second plex became STALE before you lost the kernel directory on the first plex, this plex does not contain the most up-to-date data for recovery.
HP-UX Only: Troubleshooting the Boot Process

Note: The boot process troubleshooting labs vary by platform due to the way in which the boot disk is handled by the operating system. This lab section applies to HP-UX only. Labs for Solaris are presented in the previous section.
Note: These labs require console access. If you are working in a Virtual Academy lab environment with no console access, you cannot perform these labs.

Troubleshooting the Boot Process - HP-UX

Note: Before starting this lab, ensure that the system disk is under VxVM control and that there is an alternate LVM boot disk.

Part I

1  Reboot your system using shutdown -ry now and interrupt the automatic boot process.
   shutdown -ry now
   Press any key to interrupt the boot process when the following messages appear:
   Processor is booting from first available device.
   To discontinue, press any key within 10 seconds.
   You should see the "Boot terminated." message followed by the Main menu.

2  Boot the system using the alternative boot disk.
   Main Menu: Enter command or menu> bo alt
   Interact with IPL (Y, N, or Cancel)? n

3  When the system is up, verify that the system is booted on the LVM disk by executing the bdf command. Ensure that the systemdg disk group that is used for the VxVM boot disk is imported on the system.
   bdf
   vxdg list

4  Stop the rootvol volume and change the state of the only plex in the rootvol volume to STALE using the vxmend fix stale command.
   vxvol -g systemdg stop rootvol
   vxmend -g systemdg fix stale rootvol-01

5  Reboot your system using shutdown -ry now. Do not interrupt the boot process. Observe what happens.
   The system is halted when the boot process attempts to start the root volume.

6  Recover the VxVM boot disk using the maintenance mode boot without booting off the LVM system disk.
   CTRL-B
   Service Processor login: <CR> (if necessary)
   Service Processor password: <CR> (if necessary)
   GSP> rs
   Type Y to confirm your intention to restart the system:
   (Y/[N]) y
   GSP> co
   Press any key to interrupt the boot process when the following messages appear:
   Processor is booting from first available device.
   To discontinue, press any key within 10 seconds.
   You should see the "Boot terminated." message followed by the Main menu.
   Main Menu: Enter command or menu> bo pri
   Interact with IPL (Y, N, or Cancel)? y
   ISL> hpux -vm
   VxVM Maintenance Mode boot
   INIT: SINGLE USER MODE
   INIT: Running /sbin/sh
   vx_emerg start nodename
   vxdctl mode
   vxdg list
   vxprint -g systemdg -htr
   vxmend -g systemdg fix clean rootvol-01
   vxvol -g systemdg start rootvol
   vxprint -g systemdg -htr
   shutdown -ry now
   Note: Ignore any error messages related to shutdown scripts.
Part II

1  Ensure that you are booted off the VxVM system disk using the bdf command. Edit the /etc/vx/volboot file and modify the hostid entry to dummy.
   bdf
   vi /etc/vx/volboot
   hostid dummy
   (save and quit)

2  Reboot your system using shutdown -ry now. Do not interrupt the boot process. Observe what happens.
   The system is halted after vxconfigd is started in boot mode. An error message indicates that there is a mismatch with the volboot file.

3  Recover the VxVM boot disk using the maintenance mode boot without booting off the LVM system disk.
   CTRL-B
   Service Processor login: <CR> (if necessary)
   Service Processor password: <CR> (if necessary)
   GSP> rs
   Type Y to confirm your intention to restart the system:
   (Y/[N]) y
   GSP> co
   Press any key to interrupt the boot process when the following messages appear:
   Processor is booting from first available device.
   To discontinue, press any key within 10 seconds.
   You should see the "Boot terminated." message followed by the Main menu.
   Main Menu: Enter command or menu> bo pri
   Interact with IPL (Y, N, or Cancel)? y
   ISL> hpux -vm
   VxVM Maintenance Mode boot
   INIT: SINGLE USER MODE
   INIT: Running /sbin/sh
   cat /etc/vx/volboot
   vx_emerg start nodename
   You should see messages indicating that the mismatch is corrected.
   cat /etc/vx/volboot
   shutdown -ry now
   Note: Ignore any error messages related to shutdown scripts.

4  When the system comes back up, verify that you are running on the VxVM boot disk. Destroy the LVM boot disk on the other internal disk to free up the internal disk for later labs.
   bdf
   vxdestroy_lvmroot -v c1t15d0
   vxdisk -o alldgs list
   (You should see the second internal disk as an uninitialized disk ready to be used.)
Lab 5 Solutions: Volume Maintenance

In this lab, you practice volume maintenance activities, such as changing volume layouts and using the Storage Expert utility. Optional exercises provide additional practice on managing VxVM tasks.

For the Lab Exercises, see Appendix A. For the Lab Solutions, see Appendix B.

Prerequisite Setup

To perform this lab, you need a lab system with Storage Foundation preinstalled, configured, and licensed. In addition to this, you also need at least four disks to be used in a disk group.
Classroom Lab Values

In preparation for this lab, you will need the following information about your lab environment. For your reference, you may record the information here, or refer back to the first lab in the SF Fundamentals section where you initially documented this information.

Object          Sample Value                 Your Value
root password   veritas
Host name       train1
My Data Disks   Solaris: c1t#d0 - c1t#d5
                HP-UX: c4t0d0 - c4t0d5
                AIX: hdisk21 - hdisk26
                Linux: sda - sdf
Changing the Volume Layout

You can use either the VEA interface or the command line interface, whichever you prefer. The solutions for both methods are covered where appropriate. If you use object names other than the ones provided, substitute the names accordingly in the commands.
Note: If you are using VEA, view the properties of the related task after each step to view the underlying command that was issued.

1  Create a disk group called namedg with four disks.
   vxdisksetup -i device_tag (if necessary)
   vxdg init namedg namedg01=device_tag1 namedg02=device_tag2 namedg03=device_tag3 namedg04=device_tag4

2  Create a 20-MB concatenated mirrored volume called namevol1. Create a Veritas file system on the volume and mount it to /name1. If you use VEA to create and mount the file system, ensure that the file system is not added to the file system table.
   VEA
   a  Highlight the disk group, and select Actions->New Volume.
   b  Specify the volume name, the size, a concatenated layout, and select mirrored.
   c  Ensure that "Enable logging" is not checked.
   d  Add a VxFS file system and set the mount point.
   e  Uncheck the Add to file system table option and complete the wizard.
   CLI
   vxassist -g namedg make namevol1 20m layout=mirror
   mkfs -F vxfs /dev/vx/rdsk/namedg/namevol1
   Note: On Linux, use mkfs -t vxfs.
   mkdir /name1 (if it doesn't already exist)
   mount -F vxfs /dev/vx/dsk/namedg/namevol1 /name1

3  Add data to the volume and verify that the file has been added.
   echo "hello name" > /name1/hello
   cat /name1/hello

4  Change the volume layout from its current layout (mirrored) to a nonlayered mirror-stripe with two columns and a stripe unit size of 128 sectors (64K).
   Monitor the progress of the relayout operation, and display the volume layout after each command that you run.
   VEA
   a  Highlight the volume and select Actions->Change Layout.
   b  In the Change Volume Layout dialog box, select a Striped layout, specify two columns, and click OK.
   c  To monitor the progress of the relayout, the Relayout status monitor window is automatically displayed when you start the relayout operation.
   d  View the task properties of the relayout operation. Notice that two commands are issued:
      vxassist -t taskid -g namedg relayout namevol1 layout=mirror-stripe nmirror=2 ncol=2 stripeunit=128
      vxassist -g namedg convert namevol1 layout=mirror-stripe
   CLI
   a  To begin the relayout operation:
      vxassist -g namedg relayout namevol1 layout=mirror-stripe ncol=2 stripeunit=128
   b  To monitor the progress of the task, run:
      vxtask monitor
   c  Run vxprint to display the volume layout. Notice that a layered layout is created:
      vxprint -g namedg -rth
   d  Recall that when you relayout a volume to a striped layout, a layered layout is created first; then you must use vxassist convert to complete the conversion to a nonlayered mirror-stripe:
      vxassist -g namedg convert namevol1 layout=mirror-stripe
   e  Run vxprint to confirm the resulting layout. Notice that the volume is now a nonlayered volume:
      vxprint -g namedg -rth

5  Verify that the file is still accessible.
   cat /name1/hello
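A quick way to see what the convert step in step 4 changes: in the intermediate layered layout, vxprint -rth shows subvolume (sv) records under the data plex; after vxassist convert, the sv records are gone and the plexes reference subdisks (sd) directly. The skeleton below is illustrative only, with most columns omitted; the record names on your system will differ.

   # Layered (after relayout, before convert):
   v  namevol1     ...
   pl namevol1-03  namevol1    ...
   sv namevol1-S01 namevol1-03 ...
   sv namevol1-S02 namevol1-03 ...
   # Nonlayered (after convert):
   v  namevol1     ...
   pl namevol1-01  namevol1    ...
   sd namedg01-01  namevol1-01 ...
   pl namevol1-02  namevol1    ...
   sd namedg02-01  namevol1-02 ...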
6  Unmount the file system on the volume and remove the volume.
   VEA
   a  Highlight the volume, and select Actions->Delete Volume.
   b  In the Delete Volume dialog box, click Yes.
   c  In the Unmount File System dialog box, click Yes.
   CLI
   umount /name1
   vxassist -g namedg remove volume namevol1

Using the Storage Expert Utility

1  Add the directory containing the Storage Expert rules to your PATH environment variable in your .profile file.
   PATH=$PATH:/opt/VRTS/vxse/vxvm
   export PATH

2  Display a description of Storage Expert rule vxse_drl1. What does this rule do?
   vxse_drl1 info
   This rule checks for large mirrored volumes that do not have an associated log.

3  Does Storage Expert rule vxse_drl1 have any user-settable parameters?
   vxse_drl1 list

4  From the command line, create a 100-MB mirrored volume with no log called namevol1 in the namedg disk group. Create a file system on the volume and mount it to /name1.
   vxassist -g namedg make namevol1 100m layout=mirror
   mkfs -F vxfs /dev/vx/rdsk/namedg/namevol1
   Note: On Linux, use mkfs -t vxfs.
   mkdir /name1 (if it does not already exist)
   mount -F vxfs /dev/vx/dsk/namedg/namevol1 /name1
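All Storage Expert rules, including the vxse_drl1 rule used in the remaining steps, share the same command pattern. As a summary (these are the rule keywords exercised in this lab; the attribute override is optional):

   rule_name info                                   # describe what the rule checks
   rule_name list                                   # list the rule's user-settable attributes
   rule_name check                                  # show the current attribute values
   rule_name -g diskgroup run [attribute=value]     # run the rule against a disk group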
    • Storage Expert reports information; the mirrored volume is skipped becausethe volume is lessthan the sizeof volumes tested by the rule. 6 Expand the volume to a size of I CiB. vxresize -g namedg namevoll 19 7 Run Storage Expert rule vxse_drll again on the disk group containing the volume. What does Storage Expert report" vxse drll -g namedg run Storage Expert reports a violation becausethe large mirrored volume docsnot have a log. 8 Add a log to the volume. vxassist -g namedg addlog namevoll 9 Run Storage Expert rule vxse_drll again on the disk group containing thc volume. What docs Storage Expert report'! vxse drll -g namedg run Storage Expert reports that the volume passesthe test, sincethe large mirrored volume now hasa log. 10 What arc the attributes and parameters that Storage Expert usesin running the vxse drll rule" vxse drll list The attribute is mirror_threshold. Storage Expert will warn if a mirror is greater than this sizeand the volume doesnot have a log. vxse drll check The mirror_threshold is a I-GB mirrored volume. 11 Shrink the volume to 100 MB and remove the log. vxresize -g namedg namevoll 100m vxassist -g namedg remove log namevoll 12 Run Storage ExpCI1rule vxse_drll again. When running the rule, specify that you want Storage Expert to test the mirrored volume against a mirror_threshold of 1001,18. What docs Storage Expert report'! vxse drll -g namedg run mirror threshold=lOOm This stepdemonstrates how to specify different attribute values from the command line. Becauseyou set the mirror_threshold parameter to 100 "IB, Storage Expert reports a violation. 13 Unmount the tile system and remove the volume used in this exercise. umount /namel vxassist -g namedg remove volume namevoll B-70 VERITAS Storage Foundation 5.0 for UNIX: Maintenance Copyright' 2000 Svrnantec Corporation All rt'Jnts reserved
Optional Lab Exercises

The next set of lab exercises is optional and may be performed if you have time. These exercises provide additional practice on monitoring tasks.

Optional Lab: Monitoring Tasks

In this optional lab, you track volume relayout processes using the vxtask command and recover from a vxrelayout crash by using VEA or from the command line. To begin, you should have at least four disks in the disk group that you are using.

1  Create a mirror-stripe volume called namevol1 in the namedg disk group, with a size of 1 GB, using the vxassist command. Assign a task tag to the task and run the vxassist command in the background.
   VEA
   a  Highlight a disk group and select Actions->New Volume.
   b  Specify the volume name, the size, a striped layout, and select mirrored.
   c  Ensure that "No layered volumes" is checked.
   Note: You cannot assign a task tag when using VEA.
   CLI
   vxassist -g namedg -b -t task_name make namevol1 1g layout=mirror-stripe

2  View the progress of the task.
   VEA
   Click the Tasks tab at the bottom of the main window to display the task and the percent complete.
   CLI
   vxtask list task_name
   or
   vxtask monitor
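The vxtask list output for the plex attach started by the volume creation resembles the following. This is an illustrative sketch only; the task id, progress counters, and object names will differ on your system:

   TASKID  PTID TYPE/STATE    PCT   PROGRESS
      171     - ATCOPY/R   45.21% 0/2097152/948160 PLXATT namevol1 namevol1-02 namedg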
3  Slow down the task progress rate to insert an I/O delay of 100 milliseconds.
   VEA
   a  Right-click the task in the Tasks tab, and select Throttle Task.
   b  Specify 100 as the Throttling value, and click OK.
   CLI
   vxtask set slow=100 task_name
   View the layout of the volume in the VEA interface.

4  After the volume has been created, use vxassist to relayout the volume to stripe-mirror. Use a stripe unit size of 256K, use two columns, and assign the process to the above task tag.
   VEA
   a  Highlight the volume and select Actions->Change Layout.
   b  In the Change Volume Layout dialog box, select a Striped Mirrored layout.
   c  Change the stripe unit size value to 512 on Solaris, 256 on HP-UX.
   CLI
   vxassist -g namedg -t task_name relayout namevol1 layout=stripe-mirror stripeunit=256k ncol=2

5  In another terminal window, abort the task to simulate a crash during relayout.
   VEA
   In the Relayout status monitor window, click Abort.
   CLI
   vxtask abort task_name
   View the layout of the volume in the VEA interface.

6  Reverse the relayout operation. View the layout of the volume after the reversal of the relayout operation completes. Notice that the stripe unit size is back to the original value but the layout is layered.
   Change the layout to nonlayered.
   VEA
   In the Relayout status monitor window, click Reverse.
   CLI
   vxrelayout -g namedg reverse namevol1
   View the layout of the volume in the VEA interface.
   vxassist -g namedg convert namevol1 layout=mirror-stripe

7  Destroy the namedg disk group.
   VEA
   a  Select the Disk Groups node in the object tree and the namedg disk group in the right pane view.
   b  Select Actions->Destroy Disk Group.
   c  Confirm when prompted.
   CLI
   vxdg destroy namedg
Lab 6 Solutions: Performance Monitoring

In this lab, you analyze Volume Manager I/O operations using the vxstat and the vxtrace utilities.

For the Lab Exercises, see Appendix A. For the Lab Solutions, see Appendix B.

Prerequisite Setup

To perform this lab, you need a lab system with Storage Foundation preinstalled, configured, and licensed. In addition to this, you also need at least six disks to be used in a disk group.
Classroom Lab Values

In preparation for this lab, you will need the following information about your lab environment. For your reference, you may record the information here, or refer back to the first lab in the SF Fundamentals section where you initially documented this information.
Note: On the HP-UX platform, if you have moved the boot disk from an LVM disk to a VxVM disk during the Encapsulation and Rootability lab (Lab 3) or the Troubleshooting the Boot Process lab (Lab 4), your boot disk and second internal disk values will have changed from previous labs. If you have skipped these labs, the values will still be the same. Ensure that you enter the correct values in the following table.

Object                    Sample Value                 Your Value
root password             veritas
Host name                 train1
My Boot Disk              Solaris: c0t0d0
                          HP-UX: c1t15d0
                          AIX: hdisk0
                          Linux: hda
2nd Internal Disk         Solaris: c0t2d0
                          HP-UX: c3t15d0
                          AIX: hdisk1
                          Linux: hdb
My Data Disks             Solaris: c1t#d0 - c1t#d5
                          HP-UX: c4t0d0 - c4t0d5
                          AIX: hdisk21 - hdisk26
                          Linux: sda - sdf
Location of Lab Scripts   /student/labs/sf/sf50
(if any)
Location of the fp        /student/labs/sf/sf50/bin
program
Preparing the Environment for the Performance Labs

If the second internal disk on your system is used in the systemdg disk group, which is the disk group used for the system disk encapsulation, use the following steps to free it up for performance testing. If you do not have a second internal disk or if you cannot use the second internal disk, skip this section.

1  If the system disk is encapsulated and mirrored to the second internal disk, remove the mirrors on the second internal disk for all system disk volumes.
   VEA
   For each volume in bootdg, remove all of the mirrors that use the second internal disk. More specifically, for each volume, two plexes are displayed, and you should remove the newer (-02) plexes from each volume. To remove a mirror, highlight a volume and select Actions->Mirror->Remove.
   CLI
   vxassist -g bootdg remove mirror rootvol !altboot
   vxassist -g bootdg remove mirror swapvol !altboot
   vxassist -g bootdg remove mirror usr !altboot
   vxassist -g bootdg remove mirror var !altboot
   where altboot is the disk media name used for the mirror of the system disk.

2  Remove the second internal disk from the systemdg disk group.
   VEA
   a  Select the systemdg disk group and highlight the empty second internal disk.
   b  Select Actions->Remove Disk From Disk Group.
   CLI
   vxdg -g systemdg rmdisk altboot
   where altboot is the disk media name used for the second internal disk during the mirroring of the system disk.
   Note: If you are working on the HP-UX platform and the second internal disk is configured as an alternative LVM boot disk, ensure that you are booted off the VxVM boot disk and destroy the LVM boot disk using the following command:
   vxdestroy_lvmroot -v c#t#d#
   where c#t#d# is the second internal disk used as an alternative LVM boot disk.

Exploring the vxstat Utility

In this exercise, you analyze the performance of a disk in the testdg disk group under a 32K random I/O load. You use the fp program to generate the I/O load. Ask your instructor for the location of the fp program.

1  Create a non-CDS disk group named testdg that contains one disk. If your system has two internal disks and the second internal disk is available for you to use, use the second internal disk; otherwise use any disk except for your boot disk. Name the disk testdg01.
   Note: In a North American Mobile Academy lab environment, you cannot use the second internal disk during the labs even if the system has a second internal disk.
   Solaris
   vxdisksetup -i device_tag format=sliced (if necessary)
   vxdg init testdg testdg01=device_tag cds=off
   where device_tag is c#t#d# for Solaris.
   HP-UX
   vxdisksetup -i device_tag format=hpdisk (if necessary)
   vxdg init testdg testdg01=device_tag cds=off
   where device_tag is c#t#d# for HP-UX.

2  Determine the maximum volume size that can be created using the single drive. Create a volume named test in the testdg disk group that is the maximum size on the single drive.
   vxassist -g testdg maxsize testdg01
   Maximum volume size: 4151296 (2027Mb)
   vxassist -g testdg make test 4151296 testdg01
   Note that the volume size may be different in your environment.

3  Invoke the vxstat command to begin drive analysis on the test volume. Set the vxstat interval to display statistics every 1 second. Statistics will begin printing every second, and all statistics are displayed as 0 until you begin sending I/O to the volume.
   Note: To be able to analyze the output later, you can direct it to a file, for example /tmp/vxstat.out.
   vxstat -g testdg -i 1 -d test | tee -a /tmp/vxstat.out
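Each one-second interval prints one row of disk statistics; until the fp load starts in the next step, the counters stay at zero. The shape of the output is sketched below (illustrative only; the values change once I/O begins):

                    OPERATIONS          BLOCKS        AVG TIME(ms)
   TYP NAME       READ  WRITE      READ  WRITE      READ  WRITE
   dm  testdg01      0      0         0      0       0.0    0.0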
4  In a different terminal window, change to the directory that contains the fp program.

Note: Make sure that you are using the correct version of the fp program for your platform, for example, fp_sun for Solaris, fp_hp for HP-UX, or fp_linux for Linux.

Display a description of the fp program by running the fp command without any parameters:

cd /location_of_fp
./fp_platform
usage - fileperf path ksize iosize iocnt op [op ...]
path = pathname of file or device on which to run test
ksize = size of test file to create in units of K bytes
iosize = size in Kbytes of each I/O operation
iocnt = number of times to perform each operation
op = [sr sw rr rw]
The operations supported are:
sr  sequential read
sw  sequential write
rr  random read
rw  random write
Fileperf creates a file of ksize*1K bytes and performs iocnt operations of size iosize*1K against the file. If the file exists and is at least ksize*1K bytes in size or if a device is specified for path then the create is skipped.

5  From the directory that contains the fp I/O program, start several invocations of the fp program by using the command:

/location_of_fp/fp_platform /dev/vx/rdsk/testdg/test 2075648 32 4096 rw &

Note that 2075648 is the size of the volume in KB and this command generates random writes of 32K to the volume.

Note: Alternatively, you can use the vi editor to create a simple script that contains ten or more invocations of this fp command. This method can more effectively flood the volume with I/O:

a  Invoke the vi editor and create a file named /tmp/testscript.
vi /tmp/testscript
b  Copy the fp command shown above into the file ten or more times.
i
(Type the fp command shown above and copy it multiple times in the file.)
c  Save the test script file.
[Esc] :wq
d  Change the permissions on the file to be readable and executable by the root user.
chmod 755 /tmp/testscript
e  Run the test script.
/tmp/testscript

6  When you execute the fp command or your test script, the vxstat output in the other terminal window begins to display data. Wait for all the fp commands or the script to finish executing, then stop the vxstat output by typing Ctrl+C on the terminal where you are running vxstat, and analyze the vxstat output to determine the peak performance of the drive. Peak drive performance is reached when the number of I/O operations and the block counts stop increasing in the vxstat output.

7  Destroy the testdg disk group.
vxdg destroy testdg
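Instead of pasting the same line ten times in vi, the test script in step 5 can also be written as a loop. A minimal sketch, assuming the Solaris fp binary (fp_sun) and the 2075648-KB volume created above; adjust /location_of_fp for your environment:

    #!/bin/sh
    # Hypothetical /tmp/testscript: launch ten fp instances in parallel
    # to flood the test volume with 32K random writes, then wait for
    # all of them to finish.
    i=0
    while [ $i -lt 10 ]
    do
        /location_of_fp/fp_sun /dev/vx/rdsk/testdg/test 2075648 32 4096 rw &
        i=`expr $i + 1`
    done
    wait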
Optional Lab Exercises

The next set of lab exercises is optional and may be performed if you have time. These exercises provide additional performance scenarios for exploring performance utilities.

Optional Lab: Analyzing Drive Performance: Scenarios

In this exercise, you analyze drive performance based on sample vxstat output and identify possible improvements to volume layouts. This exercise is theoretical, but designed to help you understand how to interpret vxstat output.

Note: The samples provided are from a Solaris platform. Therefore, 1 block is equivalent to 512 bytes.

Scenario 1

Suppose that you have a disk group named testdg that contains four disks. You have a 100-MB volume, called test, striped across two disks with a stripe unit size of 4 MB. There are three processes performing random reads that are 512K in size on the volume. There are no other volumes in the disk group. When you run a performance test and run vxstat on the disk group, the following output is displayed:

vxstat -g testdg -d
                    OPERATIONS          BLOCKS        AVG TIME(ms)
TYP  NAME         READ   WRITE      READ    WRITE     READ   WRITE
dm   testdg01      107       0    109568        0     50.9     0.0
dm   testdg02       91       0     93184        0     42.5     0.0
dm   testdg03        0       0         0        0      0.0     0.0
dm   testdg04        0       0         0        0      0.0     0.0

1  Analyze the vxstat output. What do you notice?

You have a bottlenecked drive. Most of the I/O operations are going to a single drive.

2  What changes might you make to the volume layout to improve performance?

Because there are additional disks in the disk group that are not used for any other volumes, you can improve the performance by increasing the number of columns:

vxassist -g testdg relayout test ncol=3

When you rerun the performance test, you would expect the vxstat output to look similar to:

vxstat -g testdg -d
                    OPERATIONS          BLOCKS        AVG TIME(ms)
TYP  NAME         READ   WRITE      READ    WRITE     READ   WRITE
dm   testdg01       80       0     81920        0     46.4     0.0
dm   testdg02       64       0     65536        0     45.6     0.0
dm   testdg03       54       0     55296        0     39.6     0.0
dm   testdg04        0       0         0        0      0.0     0.0

Scenario 2

Suppose that you have a disk group named testdg that contains four disks. You have a 100-MB volume, called test, that is concatenated using two disks. There are three processes performing random reads of size 512K on the volume. There are no other volumes in the disk group. When you run a performance test and run vxstat on the disk group, the following output is displayed:

vxstat -g testdg -d
                    OPERATIONS          BLOCKS        AVG TIME(ms)
TYP  NAME         READ   WRITE      READ    WRITE     READ   WRITE
dm   testdg01      165       0    168960        0     57.5     0.0
dm   testdg02       36       0     36864        0     24.7     0.0
dm   testdg03        0       0         0        0      0.0     0.0
dm   testdg04        0       0         0        0      0.0     0.0

1  Analyze the vxstat output. What do you notice?

Notice that most of the I/O is concentrated on the first disk in the concatenated volume. There is no I/O on the other disks in the disk group.

2  What changes might you make to the volume layout to improve performance?

To improve the performance, balance the load across the disks in the disk group by changing the layout from concatenated to striped:

vxassist -g testdg relayout test layout=stripe ncol=3 stripeunit=4m

When you rerun the performance test, you would expect the vxstat output to look similar to:

vxstat -g testdg -d
                    OPERATIONS          BLOCKS        AVG TIME(ms)
TYP  NAME         READ   WRITE      READ    WRITE     READ   WRITE
dm   testdg01       63       0     64512        0     46.5     0.0
dm   testdg02        0       0         0        0      0.0     0.0
dm   testdg03       79       0     80896        0     46.6     0.0
dm   testdg04       56       0     57344        0     39.5     0.0

Scenario 3

Suppose that you have a disk group named testdg that contains four disks. You have two volumes:
A 100-MB volume called test striped across three disks with a stripe unit size of 4 MB
Another 100-MB volume called test2 with a concatenated layout on the disk testdg01, which is also one of the disks used by the volume test.

There are three processes performing random reads of size 128K on the test volume and one process performing random reads of size 512K on the test2 volume. When you run a performance test and run vxstat on the disk group, the following output is displayed:

vxstat -g testdg -d
                    OPERATIONS          BLOCKS        AVG TIME(ms)
TYP  NAME         READ   WRITE      READ    WRITE     READ   WRITE
dm   testdg01      241       0    138496        0     30.8     0.0
dm   testdg02      126       0     32256        0     17.6     0.0
dm   testdg03      132       0     33792        0     18.2     0.0
dm   testdg04        0       0         0        0      0.0     0.0

1  Analyze the vxstat output. What do you notice?

Notice that the disk that has two volumes has a much higher number of I/O operations and a slower response time than the other disks.

2  What changes might you make to the volume layout to improve performance?

You can improve performance by moving one of the volumes off of that disk:

vxassist -g testdg move test2 !testdg01 testdg04

When you rerun the performance test, you would expect the vxstat output to look similar to:

vxstat -g testdg -d
                    OPERATIONS          BLOCKS        AVG TIME(ms)
TYP  NAME         READ   WRITE      READ    WRITE     READ   WRITE
dm   testdg01      128       0     32768        0     23.0     0.0
dm   testdg02      124       0     31744        0     24.3     0.0
dm   testdg03      147       0     37632        0     24.7     0.0
dm   testdg04      100       0    102400        0     25.5     0.0
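When a test run has been captured to a file, as with the tee command earlier, a one-line filter can rank the disks by load. A minimal sketch, assuming the /tmp/vxstat.out capture and the vxstat -d column layout shown above:

    # Rank disks by total operations (read ops + write ops) to spot a
    # bottlenecked drive at a glance; "dm" rows are the disk records.
    awk '$1 == "dm" { print $3 + $4, $2 }' /tmp/vxstat.out | sort -rn | head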
Optional Lab: Analyzing the Application I/O Profile: Scenarios

In this exercise, you analyze the application I/O profile based on sample vxtrace output and identify possible improvements to volume layouts (for example, changing the layout from concatenated to striped, increasing the number of columns, changing the stripe unit size, and so on). This exercise is theoretical, but designed to help you understand how to interpret vxtrace output.

Note: The samples provided are from a Solaris platform. Therefore, 1 block is equivalent to 512 bytes.

Scenario 1

Suppose that you have a disk group named testdg that contains four disks. You have a 100-MB volume, named test, striped across two disks with a stripe unit size of 4K. When you start a trace on the volume, run a performance test on the volume, and then stop the trace on the volume, the following vxtrace output is displayed:

vxtrace -g testdg -d /tmp/appiolab1.out -o dev,disk test
[Ctrl+C]
vxtrace -g testdg -f /tmp/appiolab1.out -o dev,disk | pg
3601 START write vdev test block 115392 len 64 concurrency 1 pid 6948
3602 START write disk c1t8d0s2 op 3601 block 57696 len 8
3603 START write disk c1t9d0s2 op 3601 block 57696 len 8
3604 START write disk c1t8d0s2 op 3601 block 57704 len 8
3605 START write disk c1t9d0s2 op 3601 block 57704 len 8
3606 START write disk c1t8d0s2 op 3601 block 57712 len 8
3607 START write disk c1t9d0s2 op 3601 block 57712 len 8
3608 START write disk c1t8d0s2 op 3601 block 57720 len 8
3609 START write disk c1t9d0s2 op 3601 block 57720 len 8
3602 END write disk c1t8d0s2 op 3601 block 57696 len 8 time 0
3603 END write disk c1t9d0s2 op 3601 block 57696 len 8 time 0
3604 END write disk c1t8d0s2 op 3601 block 57704 len 8 time 1
3606 END write disk c1t8d0s2 op 3601 block 57712 len 8 time 1
3608 END write disk c1t8d0s2 op 3601 block 57720 len 8 time 1
3605 END write disk c1t9d0s2 op 3601 block 57704 len 8 time 1
3607 END write disk c1t9d0s2 op 3601 block 57712 len 8 time 1
3609 END write disk c1t9d0s2 op 3601 block 57720 len 8 time 1
3601 END write vdev test op 3601 block 115392 len 64 time 1

1  Analyze the application I/O profile based on the vxtrace output. Analyze the number of concurrent processes, the application I/O size for each process, and whether the process is performing random or sequential I/O. What do you notice?

There is a single process performing random writes of size 32K on the volume. VxVM must perform four writes to each disk to complete a single I/O due to the small stripe unit size.

2  What changes might you make to the volume layout to improve performance?

To improve performance, increase the stripe unit size to 16K:

vxassist -g testdg relayout test stripeunit=16k

Scenario 2

Suppose that you have a disk group named testdg that contains four disks. You have a 100-MB concatenated volume, named test, on one of the disks. When you start a trace on the volume, run a performance test on the volume, and then stop the trace on the volume, the following vxtrace output is displayed:

vxtrace -g testdg -d /tmp/appiolab2.out -o dev,disk test
[Ctrl+C]
vxtrace -g testdg -f /tmp/appiolab2.out -o dev,disk | pg
6005 START read vdev test block 108256 len 32 concurrency 2 pid 7211
6006 START read disk c1t8d0s2 op 6005 block 108256 len 32
6007 START read vdev test block 59552 len 32 concurrency 3 pid 7217
6008 START read disk c1t8d0s2 op 6007 block 59552 len 32
6004 END read disk c1t8d0s2 op 6003 block 172512 len 32 time 1
6003 END read vdev test op 6003 block 172512 len 32 time 1
6009 START read vdev test block 196352 len 32 concurrency 3 pid 7214
6010 START read disk c1t8d0s2 op 6009 block 196352 len 32
6008 END read disk c1t8d0s2 op 6007 block 59552 len 32 time 1
6007 END read vdev test op 6007 block 59552 len 32 time 1
6011 START read vdev test block 78688 len 32 concurrency 3 pid 7217
6012 START read disk c1t8d0s2 op 6011 block 78688 len 32
6010 END read disk c1t8d0s2 op 6009 block 196352 len 32 time 0
6009 END read vdev test op 6009 block 196352 len 32 time 0
6013 START read vdev test block 151712 len 32 concurrency 3 pid 7214
6014 START read disk c1t8d0s2 op 6013 block 151712 len 32
6006 END read disk c1t8d0s2 op 6005 block 108256 len 32 time 2
6005 END read vdev test op 6005 block 108256 len 32 time 2

1  Analyze the application I/O profile based on the vxtrace output. Analyze the number of concurrent processes, the application I/O size for each process, and whether the process is performing random or sequential I/O. What do you notice?

There are three processes performing random reads of size 16K on the volume.

2  What changes might you make to the volume layout to improve performance?

To improve performance, change the layout to striped with at least three columns and a large stripe unit size:

vxassist -g testdg relayout test layout=stripe ncol=3 stripeunit=256k
OR
vxassist -g testdg relayout test ncol=4 stripeunit=8k
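The per-process view in these answers can be pulled out of a saved trace mechanically. A minimal sketch, assuming the /tmp/appiolab2.out capture above, where the process ID is the last field of each vdev-level START record:

    # Count vdev-level operations per process ID to gauge concurrency.
    vxtrace -g testdg -f /tmp/appiolab2.out -o dev,disk |
        awk '$2 == "START" && $4 == "vdev" { print $NF }' | sort | uniq -c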
Scenario 3

Suppose that you have a disk group named testdg that contains four disks. You have a 100-MB volume, named test, striped across three disks with a stripe unit size of 4K. When you start a trace on the volume, run a performance test on the volume, and then stop the trace on the volume, the following vxtrace output is displayed:

vxtrace -g testdg -d /tmp/appiolab3.out -o dev,disk test
[Ctrl+C]
vxtrace -g testdg -f /tmp/appiolab3.out -o dev,disk | pg
6802 START read vdev test block 194304 len 64 concurrency 2 pid 7487
6803 START read disk c1t8d0s2 op 6802 block 64768 len 8
6804 START read disk c1t9d0s2 op 6802 block 64768 len 8
6805 START read disk c1t10d0s2 op 6802 block 64768 len 8
6806 START read disk c1t8d0s2 op 6802 block 64776 len 8
6807 START read disk c1t9d0s2 op 6802 block 64776 len 8
6808 START read disk c1t10d0s2 op 6802 block 64776 len 8
6809 START read disk c1t8d0s2 op 6802 block 64784 len 8
6810 START read disk c1t9d0s2 op 6802 block 64784 len 8
6795 END read disk c1t9d0s2 op 6793 block 67712 len 8 time 1
6798 END read disk c1t9d0s2 op 6793 block 67720 len 8 time 1
6801 END read disk c1t9d0s2 op 6793 block 67728 len 8 time 1
6794 END read disk c1t8d0s2 op 6793 block 67712 len 8 time 1
6797 END read disk c1t8d0s2 op 6793 block 67720 len 8 time 1
6800 END read disk c1t8d0s2 op 6793 block 67728 len 8 time 1
6796 END read disk c1t10d0s2 op 6793 block 67712 len 8 time 2
6799 END read disk c1t10d0s2 op 6793 block 67720 len 8 time 2
6793 END read vdev test op 6793 block 203136 len 64 time 2
6811 START read vdev test block 169984 len 64 concurrency 2 pid 7484
6812 START read disk c1t10d0s2 op 6811 block 56656 len 8
6813 START read disk c1t8d0s2 op 6811 block 56664 len 8
6814 START read disk c1t9d0s2 op 6811 block 56664 len 8
6815 START read disk c1t10d0s2 op 6811 block 56664 len 8
6816 START read disk c1t8d0s2 op 6811 block 56672 len 8
6817 START read disk c1t9d0s2 op 6811 block 56672 len 8
6818 START read disk c1t10d0s2 op 6811 block 56672 len 8
6819 START read disk c1t8d0s2 op 6811 block 56680 len 8
6820 START read vdev test block 32320 len 64 concurrency 3 pid 7490

1  Analyze the application I/O profile based on the vxtrace output. Analyze the number of concurrent processes, the application I/O size for each process, and whether the process is performing random or sequential I/O. What do you notice?

There are three processes performing random reads of size 32K on the volume.

2  What changes might you make to the volume layout to improve performance?

To improve performance, increase the stripe unit size. To achieve a break-up value of at most 8%, select a stripe unit size by calculating 32K x (100/8), rounded to the nearest power of two (512K):

vxassist -g testdg relayout test stripeunit=512k
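The sizing rule above is simple enough to script. A minimal sketch in Bourne shell, assuming a 32K application I/O size and an 8% break-up target (32 x 100/8 = 400K, rounded up to the next power of two):

    #!/bin/sh
    # Compute a suggested stripe unit from an I/O size (KB) and a
    # break-up target (percent).
    iosize_k=32
    target_pct=8
    raw=`expr $iosize_k \* 100 / $target_pct`    # 400
    su=1
    while [ $su -lt $raw ]
    do
        su=`expr $su \* 2`
    done
    echo "suggested stripeunit=${su}k"           # stripeunit=512k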
Scenario 4

Suppose that you have a disk group named testdg that contains four disks. You have a 100-MB volume, named test, striped across three disks with a stripe unit size of 256K. When you start a trace on the volume, run a performance test on the volume, and then stop the trace on the volume, the following vxtrace output is displayed:

vxtrace -g testdg -d /tmp/appiolab4.out -o dev,disk test
[Ctrl+C]
vxtrace -g testdg -f /tmp/appiolab4.out -o dev,disk | pg
8972 START read vdev test block 0 len 64 concurrency 1 pid 7751
8973 START read disk c1t8d0s2 op 8972 block 0 len 64
8973 END read disk c1t8d0s2 op 8972 block 0 len 64 time 2
8972 END read vdev test op 8972 block 0 len 64 time 2
8974 START read vdev test block 64 len 64 concurrency 1 pid 7751
8975 START read disk c1t8d0s2 op 8974 block 64 len 64
8975 END read disk c1t8d0s2 op 8974 block 64 len 64 time 0
8974 END read vdev test op 8974 block 64 len 64 time 0
8976 START read vdev test block 128 len 64 concurrency 1 pid 7751
8977 START read disk c1t8d0s2 op 8976 block 128 len 64
8977 END read disk c1t8d0s2 op 8976 block 128 len 64 time 0
8976 END read vdev test op 8976 block 128 len 64 time 0

1  Analyze the application I/O profile based on the vxtrace output. Analyze the number of concurrent processes, the application I/O size for each process, and whether the process is performing random or sequential I/O. What do you notice?

There is a single process performing sequential reads of size 32K on the volume.

2  What changes might you make to the volume layout to improve performance?

To improve performance, you must ensure that the single process can use the full bandwidth provided by the striped volume (that is, the I/O size should be equal to the full stripe width). To change the volume layout:

vxassist -g testdg relayout test ncol=2 stripeunit=16k
OR
vxassist -g testdg relayout test ncol=4 stripeunit=8k
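A saved trace can also confirm the application I/O size claimed in these answers. A minimal sketch, assuming the /tmp/appiolab4.out capture above; the len field of a vdev-level START record is in 512-byte blocks:

    # Average application I/O size, in blocks, across the vdev records.
    vxtrace -g testdg -f /tmp/appiolab4.out -o dev,disk |
        awk '$2 == "START" && $4 == "vdev" { n++; s += $9 }
             END { if (n) print s / n, "blocks" }'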
Optional Labs: Measuring Volume I/O Operations

In the following exercises, you determine whether reads or writes occur when VxVM performs various actions.

Note: The solutions provided in this section show sample vxstat outputs from a Solaris platform. You may observe different sizes if you are working on an HP-UX platform. This is because on HP-UX one sector is 1024 bytes, whereas on Solaris, one sector is 512 bytes.

Optional Lab: When Creating a Disk Group

Does VxVM write into the public region of a disk when it creates a disk group?

1  Create a disk group named datadg using six disks.

vxdisksetup -i device_tag (if necessary)
vxdg init datadg datadg00=device_tag1 datadg01=device_tag2 datadg02=device_tag3 datadg03=device_tag4 datadg04=device_tag5 datadg05=device_tag6

2  Determine whether VxVM writes into the public region of a disk when it creates a disk group.

vxstat -g datadg -fs -vpsd
                    OPERATIONS          BLOCKS        AVG TIME(ms)
TYP  NAME         READ   WRITE      READ    WRITE     READ   WRITE
dm   datadg00        0       0         0        0      0.0     0.0
dm   datadg01        0       0         0        0      0.0     0.0
dm   datadg02        0       0         0        0      0.0     0.0
dm   datadg03        0       0         0        0      0.0     0.0
dm   datadg04        0       0         0        0      0.0     0.0
dm   datadg05        0       0         0        0      0.0     0.0

No, VxVM does not write into the public region of a disk when it creates a disk group.

Optional Lab: When Creating a Volume

Does VxVM write into the volume, plexes, subdisks, or disk space when it creates a volume?

1  Reset the read/write counters for datadg. Create a 50-MB, concatenated (RAID-0) volume named datavol1 in datadg. Did reads or writes to the volume, plex, subdisk, or disk occur?

vxstat -g datadg -r
vxstat -g datadg -fs -vpsd
                    OPERATIONS          BLOCKS        AVG TIME(ms)
TYP  NAME         READ   WRITE      READ    WRITE     READ   WRITE
dm   datadg00        0       0         0        0      0.0     0.0
dm   datadg01        0       0         0        0      0.0     0.0
dm   datadg02        0       0         0        0      0.0     0.0
dm   datadg03        0       0         0        0      0.0     0.0
dm   datadg04        0       0         0        0      0.0     0.0
dm   datadg05        0       0         0        0      0.0     0.0

vxassist -g datadg make datavol1 50m
vxstat -g datadg -fs -vpsd datavol1
                    OPERATIONS          BLOCKS        AVG TIME(ms)
TYP  NAME         READ   WRITE      READ    WRITE     READ   WRITE
dm   datadg00        0       0         0        0      0.0     0.0
vol  datavol1        0       0         0        0      0.0     0.0
pl   datavol1-01     0       0         0        0      0.0     0.0
sd   datadg00-01     0       0         0        0      0.0     0.0

No, VxVM does not write into the volume, plexes, subdisks, or disk space when it creates a volume.

2  Reset the read/write counters for datadg. Create a 30-MB, 3-column, striped (RAID-0) volume named datavol2 in datadg. Did reads or writes occur?

vxstat -g datadg -r
vxassist -g datadg make datavol2 30m layout=stripe ncol=3
vxstat -g datadg -fs -vpsd datavol2
                    OPERATIONS          BLOCKS        AVG TIME(ms)
TYP  NAME         READ   WRITE      READ    WRITE     READ   WRITE
dm   datadg01        0       0         0        0      0.0     0.0
dm   datadg02        0       0         0        0      0.0     0.0
dm   datadg03        0       0         0        0      0.0     0.0
vol  datavol2        0       0         0        0      0.0     0.0
pl   datavol2-01     0       0         0        0      0.0     0.0
sd   datadg01-01     0       0         0        0      0.0     0.0
sd   datadg02-01     0       0         0        0      0.0     0.0
sd   datadg03-01     0       0         0        0      0.0     0.0

No, reads or writes did not occur.

3  Reset the read/write counters for datadg. Create a 30-MB, 2-way, mirrored (RAID-1) volume named datavol3 in datadg. Did reads or writes occur? Did any synchronization occur?

vxstat -g datadg -r
vxassist -g datadg make datavol3 30m layout=mirror nmirror=2
vxstat -g datadg -fs -vpsd datavol3
                    OPERATIONS          BLOCKS        AVG TIME(ms)
TYP  NAME         READ   WRITE      READ    WRITE     READ   WRITE
dm   datadg04      240       0     61440        0     22.0     0.0
dm   datadg05        0     240         0    61440      0.0    20.8
vol  datavol3       30       0     61440        0     66.0     0.0
pl   datavol3-01    30       0     61440        0     33.7     0.0
sd   datadg04-01    30       0     61440        0     33.7     0.0
pl   datavol3-02     0      30         0    61440      0.0    27.3
sd   datadg05-01     0      30         0    61440      0.0    27.3

Yes, reads and writes did occur, and synchronization between the mirrors did occur.

4  What type of synchronization occurred between the mirrors?

vxstat -g datadg -fab -vpsd datavol3
                    ATOMIC COPIES              READ-WRITEBACK
TYP  NAME         OPS   BLOCKS  AVG(ms)     OPS   BLOCKS  AVG(ms)
dm   datadg04       0        0      0.0       0        0      0.0
dm   datadg05       0        0      0.0       0        0      0.0
vol  datavol3       0        0      0.0      30    61440     66.0
pl   datavol3-01    0        0      0.0       0        0      0.0
sd   datadg04-01    0        0      0.0       0        0      0.0
pl   datavol3-02    0        0      0.0       0        0      0.0
sd   datadg05-01    0        0      0.0       0        0      0.0

Read-writeback synchronization occurred between the mirrors.

5  Reset the read/write counters for datadg. Create a 100-MB, 3-column, striped, mirrored, and logged volume (RAID 0+1) named datavol4 in datadg. Did reads or writes occur?

vxstat -g datadg -r
vxassist -g datadg make datavol4 100m layout=stripe,mirror,log ncol=3
vxstat -g datadg -fs -fb -vpsd datavol4
                    OPERATIONS        BLOCKS      AVG TIME(ms)     READ-WRITEBACK
TYP  NAME         READ  WRITE     READ   WRITE    READ  WRITE    OPS  BLOCKS AVG(ms)
dm   datadg00      533      2    68224      32     8.6    0.0      0       0    0.0
dm   datadg01        0    534        0   68352     0.0   18.7      0       0    0.0
dm   datadg02        0    533        0   68224     0.0   18.6      0       0    0.0
dm   datadg03        0    533        0   68224     0.0   18.7      0       0    0.0
dm   datadg04      534      0    68352       0     7.6    0.0      0       0    0.0
dm   datadg05      533      0    68224       0     7.5    0.0      0       0    0.0
vol  datavol4      100      0   204800       0    47.1    0.0    100  204800   47.1
pl   datavol4-01     0    100        0  204800     0.0   24.8      0       0    0.0
sd   datadg01-02     0    534        0   68352     0.0   18.7      0       0    0.0
sd   datadg02-02     0    533        0   68224     0.0   18.6      0       0    0.0
sd   datadg03-02     0    533        0   68224     0.0   18.7      0       0    0.0
pl   datavol4-02   100      0   204800       0    16.6    0.0      0       0    0.0
sd   datadg00-02   533      0    68224       0     8.6    0.0      0       0    0.0
sd   datadg04-02   534      0    68352       0     7.6    0.0      0       0    0.0
sd   datadg05-02   533      0    68224       0     7.5    0.0      0       0    0.0
pl   datavol4-03     0      0        0       0     0.0    0.0      0       0    0.0
sd   datadg00-03     0      2        0      32     0.0    0.0      0       0    0.0

Yes, reads and writes occur between the mirrors, and extra writes occur to the disk with a log.

6  Why do writes occur when creating mirrors but not when creating concatenated or striped volumes?

Mirrors must be synchronized to each other in RAID-1.

7  Reset the read/write counters for datadg. Create a 30-MB, 2-way mirrored, 3-column striped, and logged (RAID-1+0) volume named datavol6 in datadg. Did reads or writes occur?

vxstat -g datadg -r
vxassist -g datadg make datavol6 30m layout=stripe-mirror,log nmirror=2 ncol=3
vxstat -g datadg -fs -vpsd datavol6
                    OPERATIONS          BLOCKS        AVG TIME(ms)
TYP  NAME           READ   WRITE     READ    WRITE     READ   WRITE
vol  datavol6          0       0        0        0      0.0     0.0
pl   datavol6-04       0       0        0        0      0.0     0.0
sd   datavol6-S01      0       0        0        0      0.0     0.0
sd   datavol6-S02      0       0        0        0      0.0     0.0
sd   datavol6-S03      0       0        0        0      0.0     0.0

Note: When a layered (stripe-mirror or RAID-1+0) volume is involved, omit the name of the high-level volume to get the statistics.

vxstat -g datadg -fs -vpsd
                    OPERATIONS          BLOCKS        AVG TIME(ms)
TYP  NAME           READ   WRITE     READ    WRITE     READ   WRITE
dm   datadg00         80       0    20480        0     48.1     0.0
dm   datadg01          0      86        0    20576      0.0    35.6
dm   datadg02          0      80        0    20480      0.0    31.2
dm   datadg03          0      80        0    20480      0.0    36.1
dm   datadg04         80       0    20480        0     27.8     0.0
dm   datadg05         80       0    20480        0     29.6     0.0
vol  datavol6          0       0        0        0      0.0     0.0
pl   datavol6-04       0       0        0        0      0.0     0.0
sd   datavol6-S01      0       0        0        0      0.0     0.0
sd   datavol6-S02      0       0        0        0      0.0     0.0
sd   datavol6-S03      0       0        0        0      0.0     0.0
vol  datavol6-L01     10       0    20480        0     98.0     0.0
pl   datavol6-P01      0       0        0        0      0.0     0.0
sd   datadg01-05       0       2        0       32      0.0     0.0
pl   datavol6-P02      0      10        0    20480      0.0    45.0
sd   datadg01-06       0      10        0    20480      0.0    45.0
pl   datavol6-P03     10       0    20480        0     41.0     0.0
sd   datadg04-05      10       0    20480        0     41.0     0.0
vol  datavol6-L02     10       0    20480        0     98.0     0.0
pl   datavol6-P04      0       0        0        0      0.0     0.0
sd   datadg01-07       0       2        0       32      0.0     0.0
pl   datavol6-P05      0      10        0    20480      0.0    41.0
sd   datadg02-05       0      10        0    20480      0.0    41.0
pl   datavol6-P06     10       0    20480        0     45.0     0.0
sd   datadg05-05      10       0    20480        0     45.0     0.0
vol  datavol6-L03     10       0    20480        0    159.0     0.0
pl   datavol6-P07      0       0        0        0      0.0     0.0
sd   datadg01-08       0       2        0       32      0.0     0.0
pl   datavol6-P08      0      10        0    20480      0.0    44.0
sd   datadg03-05       0      10        0    20480      0.0    44.0
pl   datavol6-P09     10       0    20480        0    105.0     0.0
sd   datadg00-07      10       0    20480        0    105.0     0.0

8  Does VxVM write into the volume, plexes, subdisks, or disks when it creates a volume?

Yes, when a mirrored volume is created.

9  Reset the read/write counters for datadg. Create a 5-MB mirrored volume named datavol7 and initialize it to zero. Is the volume's address space written when it is initialized to zero?

vxstat -g datadg -r
vxassist -g datadg make datavol7 5m layout=concat,mirror init=zero
vxstat -g datadg -fs -vpsd datavol7
                    OPERATIONS          BLOCKS        AVG TIME(ms)
TYP  NAME         READ   WRITE      READ    WRITE     READ   WRITE
dm   datadg00        0      42         0    10240      0.0     8.8
dm   datadg02        0      42         0    10240      0.0    11.0
vol  datavol7        0      42         0    10240      0.0    17.1
pl   datavol7-01     0      42         0    10240      0.0     8.8
sd   datadg00-04     0      42         0    10240      0.0     8.8
pl   datavol7-02     0      42         0    10240      0.0    11.0
sd   datadg02-04     0      42         0    10240      0.0    11.0

Yes, the volume's address space is written when it is initialized to zero.

Optional Lab: When Mirroring a Volume or Resynchronizing Mirrors

Does VxVM write into the volume, plexes, subdisks, or disks when it mirrors a volume or resynchronizes mirrors?

1  Create a 50-MB, concatenated (RAID-0) volume named datavol8 in datadg.
vxassist -g datadg make datavol8 50m

2  Reset the read/write counters for datadg. Add a mirror to the datavol8 volume. Did reads and/or writes occur? Does adding a mirror perform atomic-copy or read-writeback?

vxstat -g datadg -r
vxassist -g datadg mirror datavol8
vxstat -g datadg -fs -vpsd datavol8
                    OPERATIONS          BLOCKS        AVG TIME(ms)
TYP  NAME         READ   WRITE      READ    WRITE     READ   WRITE
dm   datadg00      413       0    102400        0      3.6     0.0
dm   datadg02        0     413         0   102400      0.0     6.2
vol  datavol8        0       0         0        0      0.0     0.0
pl   datavol8-01   413       0    102400        0      3.6     0.0
sd   datadg00-06   413       0    102400        0      3.6     0.0
pl   datavol8-02     0     413         0   102400      0.0     6.2
sd   datadg02-06     0     413         0   102400      0.0     6.2

vxstat -g datadg -fab -vpsd datavol8
                    ATOMIC COPIES              READ-WRITEBACK
TYP  NAME         OPS   BLOCKS  AVG(ms)     OPS   BLOCKS  AVG(ms)
dm   datadg00       0        0      0.0       0        0      0.0
dm   datadg02       0        0      0.0       0        0      0.0
vol  datavol8     413   102400      9.8       0        0      0.0
pl   datavol8-01    0        0      0.0       0        0      0.0
sd   datadg00-06    0        0      0.0       0        0      0.0
pl   datavol8-02    0        0      0.0       0        0      0.0
sd   datadg02-06    0        0      0.0       0        0      0.0

Yes, reads and writes occur. Atomic-copy is performed.

3  What is the meaning of atomic copy?

Atomic-copy resynchronization refers to the sequential writing of all blocks of the volume to a plex.
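To watch such a resynchronization while it runs, the interval and synchronization-statistics options seen earlier can be combined. A minimal sketch, assuming the datavol8 volume above; the extra attach here is purely for illustration and adds a third plex:

    # Start another mirror attach in the background, then sample the
    # atomic-copy and read-writeback counters every 2 seconds.
    vxassist -g datadg mirror datavol8 &
    vxstat -g datadg -i 2 -fab datavol8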
Optional Lab: When an Application Performs I/O to a Volume

1  Create a 500-MB, concatenated (RAID-0) volume named datavol9 in datadg.
vxassist -g datadg make datavol9 500m

2  Reset the read/write counters for datadg. Start I/O to the datavol9 volume using dd in the background. While the I/O is ongoing, examine reads or writes. Kill the dd process when you are finished.

vxstat -g datadg -r
dd if=/dev/zero of=/dev/vx/rdsk/datadg/datavol9 bs=1024 &
vxstat -g datadg -i2 -vpsd datavol9
                    OPERATIONS          BLOCKS        AVG TIME(ms)
TYP  NAME         READ   WRITE      READ    WRITE     READ   WRITE
Thu 11 Sep 2003 05:39:55 PM EDT
dm   datadg03        0     420         0      840      0.0     5.9
vol  datavol9        0     420         0      840      0.0     5.9
pl   datavol9-01     0     420         0      840      0.0     5.9
sd   datadg03-04     0     420         0      840      0.0     5.9
Thu 11 Sep 2003 05:39:57 PM EDT
dm   datadg03        0     271         0      542      0.0     5.9
vol  datavol9        0     271         0      542      0.0     5.9
pl   datavol9-01     0     271         0      542      0.0     5.9
sd   datadg03-04     0     271         0      542      0.0     5.9
Thu 11 Sep 2003 05:39:59 PM EDT
dm   datadg03        0     308         0      616      0.0     5.9
vol  datavol9        0     308         0      616      0.0     5.9
pl   datavol9-01     0     308         0      616      0.0     5.9
sd   datadg03-04     0     308         0      616      0.0     5.9

Use Ctrl+C to cancel the vxstat command. If the dd process is still running, kill the dd process:
ps -ef | grep dd
kill -9 process-id

3  Assuming that there is only one process doing I/O to the disk (no parallel I/O) and that there are 512 bytes per block, how would you calculate the I/O throughput to the disk?

No. of blocks x 512 / (No. of I/O operations x average I/O time / 1000) B/sec
Divide by 1024 for KB/sec
Divide by 1024 again for MB/sec

For example, the first interval above shows 420 writes totaling 840 blocks at an average of 5.9 ms: 840 x 512 / (420 x 5.9 / 1000) is approximately 173,560 B/sec, or roughly 170 KB/sec.

4  Illustrate that one I/O to the mirrored volume datavol8 generates two I/Os within the volume.

vxstat -g datadg -r
dd if=/dev/zero of=/dev/vx/rdsk/datadg/datavol8 bs=1024 &
vxstat -g datadg -i2 -vpsd datavol8
                    OPERATIONS          BLOCKS        AVG TIME(ms)
TYP  NAME         READ   WRITE      READ    WRITE     READ   WRITE
Thu 11 Sep 2003 06:02:36 PM EDT
dm   datadg00        0   15967         0    31934      0.0     4.0
dm   datadg02        0   15967         0    31934      0.0     6.0
vol  datavol8        0   15967         0    31934      0.0     9.0
pl   datavol8-01     0   15967         0    31934      0.0     4.0
sd   datadg00-06     0   15967         0    31934      0.0     4.0
pl   datavol8-02     0   15967         0    31934      0.0     6.0
sd   datadg02-06     0   15967         0    31934      0.0     6.0
Thu 11 Sep 2003 06:02:38 PM EDT
dm   datadg00        0     324         0      648      0.0     5.7
dm   datadg02        0     324         0      648      0.0     3.6
vol  datavol8        0     324         0      648      0.0     5.9
pl   datavol8-01     0     324         0      648      0.0     5.7
sd   datadg00-06     0     324         0      648      0.0     5.7
pl   datavol8-02     0     324         0      648      0.0     3.6
sd   datadg02-06     0     324         0      648      0.0     3.6

The numbers are cumulative since the reset. In the example, 324 volume-level writes generated an equal number to each of two plexes.

Use Ctrl+C to cancel the vxstat command. If the dd process is still running, kill the dd process:
ps -ef | grep dd
kill -9 process-id

Optional Lab: When Removing a Volume

Does VxVM write into the volume, plexes, subdisks, or disks when it removes a volume?

1  Reset the read/write counters for datadg. List the volumes in datadg.

vxstat -g datadg -r
vxprint -g datadg -v
TY NAME          ASSOC   KSTATE   LENGTH   PLOFFS  STATE   TUTIL0  PUTIL0
v  datavol1      fsgen   ENABLED  102400   -       ACTIVE  -       -
v  datavol2      fsgen   ENABLED  61440    -       ACTIVE  -       -
v  datavol3      fsgen   ENABLED  61440    -       ACTIVE  -       -
v  datavol4      fsgen   ENABLED  204800   -       ACTIVE  -       -
v  datavol6      fsgen   ENABLED  61440    -       ACTIVE  -       -
v  datavol6-L01  fsgen   ENABLED  20480    -       ACTIVE  -       -
v  datavol6-L02  fsgen   ENABLED  20480    -       ACTIVE  -       -
v  datavol6-L03  fsgen   ENABLED  20480    -       ACTIVE  -       -
v  datavol7      fsgen   ENABLED  10240    -       ACTIVE  -       -
v  datavol8      fsgen   ENABLED  102400   -       ACTIVE  -       -
v  datavol9      fsgen   ENABLED  1024000  -       ACTIVE  -       -

2  Remove the datavol1 volume.
vxassist -g datadg remove volume datavol1

3  Does VxVM write into the volume, plexes, subdisks, or disks when it removes a volume? What does this imply?

vxstat -g datadg -fs -vpsd

No, VxVM does not write into the volume, plexes, subdisks, or disks when it removes a volume. This implies that a volume whose definition gets corrupted or is accidentally deleted can be restructured.

Optional Lab: When Removing a Plex

Does VxVM write into the volume, plexes, subdisks, or disks when it removes a plex?

1  Reset the read/write counters for datadg. List the volumes in datadg.

vxstat -g datadg -r
vxprint -rhtg datadg
v  datavol8     -            ENABLED   ACTIVE   102400  SELECT    -       fsgen
pl datavol8-01  datavol8     ENABLED   ACTIVE   104139  CONCAT    -       RW
sd datadg00-06  datavol8-01  datadg00  262143   104139  0         c1t0d0  ENA
pl datavol8-02  datavol8     ENABLED   ACTIVE   104139  CONCAT    -       RW
sd datadg02-06  datavol8-02  datadg02  283689   104139  0         c1t2d0  ENA

2  Does VxVM write into the volume, plexes, subdisks, or disks when it removes a plex? What does this imply?

vxassist -g datadg remove mirror datavol8
vxstat -g datadg -fs -vpsd datavol8
                    OPERATIONS          BLOCKS        AVG TIME(ms)
TYP  NAME         READ   WRITE      READ    WRITE     READ   WRITE
dm   datadg00        0       0         0        0      0.0     0.0
vol  datavol8        0       0         0        0      0.0     0.0
pl   datavol8-01     0       0         0        0      0.0     0.0
sd   datadg00-06     0       0         0        0      0.0     0.0

No, VxVM does not write into the volume, plexes, subdisks, or disks when it removes a plex. This implies that a plex whose definition gets corrupted or is accidentally deleted can be restructured.

Optional Lab: When Destroying a Disk Group

Does VxVM write when it destroys a disk group?

1  Reset the read/write counters for datadg. Destroy datadg. Did any writes occur?

vxstat -g datadg -r
vxstat -g datadg -fs -vpsd
vxdg destroy datadg

It is impossible to ascertain in VxVM if any writes occur, because the counters are gone with the disk group.

2  Does VxVM write into the public area of the disk when any of the above operations are performed? What does this imply for the administration of volumes?

Only the private region of a disk is updated when volumes are created and destroyed, mirrors added and removed, and so on. This implies that with a current backup of the configuration database, a disk group and volume can be reconstructed.
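One way to keep such a backup is to save the disk group's records in vxmake format before making changes. A minimal sketch, assuming the datadg group from this lab and a hypothetical backup path:

    # Save the configuration records of datadg; vxprint -m emits them
    # in a format that vxmake can read back.
    vxprint -g datadg -hmQq > /var/tmp/datadg.cfg
    # Later, if records are lost, they could be rebuilt from the file:
    # vxmake -g datadg -d /var/tmp/datadg.cfg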
Lab 7: Point-in-Time Copies

In this lab, you perform off-host processing using full-sized instant volume snapshots, create space-optimized instant volume snapshots, and restore a file system using storage checkpoints.

For Lab Exercises, see Appendix A.
For Lab Solutions, see Appendix B.

Lab 7 Solutions: Point-in-Time Copies

In this lab, you perform off-host processing using third-mirror break-off volume snapshots, create space-optimized instant volume snapshots, and restore a file system using storage checkpoints. Optionally, you also create and investigate full-sized instant volume snapshots.

Prerequisite Setup

To perform this lab, you need a lab system with Storage Foundation preinstalled, configured, and licensed. In addition to this, you also need at least four disks to be used in a disk group. Before starting this lab, you should have all the external disks assigned to you already initialized but free to be used in a disk group.

Classroom Lab Values

In preparation for this lab, you will need the following information about your lab environment. For your reference, you may record the information here, or refer back to the first lab in the SF Fundamentals section where you initially documented this information.

Object                                       Sample Value                Your Value
root password                                veritas
Host name                                    train1
Host name of the system sharing
disks with my system                         train2
My Data Disks                                Solaris: c1t#d0 - c1t#d5
                                             HP-UX: c4t0d0 - c4t0d5
                                             AIX: hdisk21 - hdisk26
                                             Linux: sda - sdf

Off-Host Processing Using Third-Mirror Break-off Volume Snapshots

Phase 1: Create, Split, and Deport

1  Identify the name of the system that is sharing access to the same disks as your system. If you are not sure, check with your instructor. Note the name of the partner system here.

Partner system hostname:

2  On your local lab system, create a disk group called namedg with four disks.

VEA
a  Select the Disk Groups node in the object tree and select Actions->New Disk Group.
b  In the New Disk Group wizard, click Next to skip the Welcome page.
c  Type the name of the disk group. Ensure that Enable Cross-platform Data Sharing (CDS) remains checked.
d  Select the disks you want to add to the disk group and click Add to move them to the Selected disks area. Click Next to continue.
e  Confirm the disk selection.
f  Do not select a disk group organization principle when prompted.
g  Click Finish.

CLI
vxdisksetup -i device_tag (if necessary)
vxdg init namedg namedg01=device_tag1 namedg02=device_tag2 namedg03=device_tag3 namedg04=device_tag4

3  Create a 500-MB concatenated volume, namevol1, using a single disk. Create a Veritas file system on the volume and mount the file system on the mount point /name1.

VEA
a  Highlight the disk group and select Actions->New Volume.
b  Let VxVM select the disks to use.
c  Specify the volume name, the size, a concatenated layout, and no mirror.
d  Add a file system and set the mount point. Uncheck the Add to file system table option.

CLI
vxassist -g namedg make namevol1 500m
mkfs -F vxfs /dev/vx/rdsk/namedg/namevol1
Note: On Linux, use mkfs -t.
mkdir /name1 (if it doesn't already exist)
mount -F vxfs /dev/vx/dsk/namedg/namevol1 /name1
Note: On Linux, use mount -t.

4  Add data to the file system using the following command:
echo "Pre-snapshot for name" > /name1/presnap_on_name
and verify that the data has been added.
ls /name1

5  Enable FastResync for the volume namevol1. Can you identify what has changed?

VEA
a  Highlight the namevol1 volume. In the Actions menu, select Instant Snapshot->Enable FastResync.
b  In the Enable FastResync window, accept all the defaults and click OK.
c  Select the namevol1 volume in the object tree. Note the DCO tab in the right pane view. Select the DCO tab and observe the log added to the volume.

CLI
vxsnap -g namedg prepare namevol1
vxprint -g namedg -htr

A DCO log has been added to the volume.

6  Add a mirror to the volume for use as the snapshot. Observe the volume layout. What is the state of the newly added mirror after synchronization completes?

VEA
a  Select the volume, and then select Actions->Instant Snapshot->Add Snapshot Mirror.
b  Let VxVM select the disks to use.
c  Select the volume in the object tree and click the Mirrors tab in the right pane. Note that the type of the newly added mirror is displayed as Snapshot and the status changes from Snap In Progress to Snap Ready when the synchronization completes.

CLI
vxsnap -g namedg addmir namevol1
vxprint -g namedg -htr

The status of the newly added mirror should change from ENABLED/SNAPATT to ENABLED/SNAPDONE after the synchronization is complete.

7  Create a third-mirror break-off snapshot named namesnap1 using the new mirror you just added. Use the vxsnap -g namedg list command to observe the snapshots in the disk group. Can you find similar information in the VEA GUI?

VEA
a  Highlight the volume and select Actions->Instant Snapshot->Create.
b  In the Snapshot Type dialog box, select Break off.
c  Verify one mirror to break off.
d  Confirm that you want to use a mirror.
e  Specify the snapshot name.

In the object tree, the snapshot volumes are displayed under their parent volume. If you select the Volumes node in the object tree, you will notice that the icons for the snapshot volumes are different from standard volumes. The Properties view for a snapshot volume lists the parent volume name.

CLI
vxsnap -g namedg make source=namevol1/newvol=namesnap1/nmirrors=1
vxsnap -g namedg list

The namesnap1 volume is listed as a break-off snapshot of namevol1.

8  Split the snapshot volume into a separate disk group from the original disk group called nameOHPdg:

VEA
Note: Although the solution for VEA has been provided here, on the Solaris platform you may need to perform this step from the command line because of a GUI bug with 5.0 software on this platform.
a  Highlight the disk group and select Actions->Split Disk Group.
b  Type the new disk group name.
c  Select to split the disk group by volumes and move namesnap1 to the selected volumes area. Click OK.

CLI
vxdg split namedg nameOHPdg namesnap1
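Between steps 6 and 7, a script can wait for the snapshot mirror to finish synchronizing before taking the snapshot. A minimal sketch, keyed off the SNAPATT/SNAPDONE states noted above:

    # Poll until the snapshot mirror of namevol1 leaves the SNAPATT
    # state; vxprint shows SNAPDONE when synchronization is complete.
    while vxprint -g namedg -htr namevol1 | grep SNAPATT > /dev/null
    do
        sleep 10
    done
    echo "Snapshot mirror is SNAPDONE; safe to run vxsnap make."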
9  Verify that the nameOHPdg disk group exists and contains namesnap1. First, display the disk groups on the system. You should see the new nameOHPdg disk group displayed. Then, view the volume information for the nameOHPdg disk group.

VEA
a  Select the Disk Groups node in the object tree. The nameOHPdg disk group should be listed as imported.
b  Highlight the nameOHPdg disk group in the object tree and select the Volumes tab in the right pane to observe the volumes in the disk group.

CLI
vxdg list
vxprint -g nameOHPdg -htr

10  Deport the disk group that contains the snapshot volume. If you have a partner system that shares access to the external disks with your system, you can set the new host information using the hostname of the partner system.

VEA
Highlight the disk group and select Actions->Deport Disk Group. (Do not select Deport Options unless you have a partner system.) Click OK and confirm when prompted.

CLI
vxdg deport nameOHPdg

11  View the disk groups on the system.

VEA
Select the Disk Groups node in the object tree. The nameOHPdg disk group is now listed as deported.

CLI
vxdg list

The nameOHPdg disk group is not displayed because it is deported.

Run the vxdisk command to view the status of the disks on the system. Alternatively, you can view the status of the disks in VEA.

VEA
Select the Disks node in the object tree. The disk used for namesnap1 is listed as Deported.

CLI
vxdisk -o alldgs list

This shows that the disk used for namesnap1 is deported. (nameOHPdg) in parentheses shows that the nameOHPdg disk group is deported.

12  Add additional data to the original volume using the following command:
echo "Post-snapshot for name" > /name1/postsnap_on_name
and verify that the data has been added.
ls /name1

Phase 2: Import, Process, and Deport

13  Remote login to the partner system, which will be used as the off-host processing (OHP) system.

Notes:
If you are working on a standalone system, skip this step and use your own system as the partner system.
If you want to continue using the graphical user interface (VEA) on the partner system, you need to connect to the partner system using the VEA client on your local system.

rlogin partner_system_hostname

14  On the off-host processing (OHP) host (your partner system) where the backup or processing is to be performed, import the disk group that contains the snapshot volume.

Note: You may need to rescan the disks using the vxdctl enable command on the OHP host so that the host detects the changes.

VEA
On the OHP host, highlight the nameOHPdg disk group and select Actions->Import Disk Group.

CLI
vxdg import nameOHPdg

View the status of the volume in the nameOHPdg disk group.

VEA
Highlight the nameOHPdg disk group in the object tree and select the Volumes tab in the right pane.

CLI
vxprint -g nameOHPdg -htr

15  To perform off-host processing, you must first start the volume and mount the file system on the off-host processing host. Use the mount point /namesnap1.

VEA
The volume is automatically started if you used the default Start all volumes option while importing the disk group. Highlight the volume and select Actions->File System->Mount File System. Do not add the file system to the file system table.

CLI
vxrecover -g nameOHPdg -s namesnap1
vxprint -g nameOHPdg -htr
mkdir /namesnap1 (if the directory does not already exist)
mount -F vxfs /dev/vx/dsk/nameOHPdg/namesnap1 /namesnap1
Note: On Linux, use mount -t.

16  View and compare the contents of both file systems.

On the local lab system:
ls -l /name1
On the partner (OHP) system:
ls -l /namesnap1

There is one more file in /name1 than in /namesnap1. The file that was written after the snapshot operation (postsnap_on_name) exists in /name1 but not in /namesnap1. The file that was written before the snapshot operation (presnap_on_name) exists in both file systems.

17  Check if you can write to the snapshot file system during off-host processing by creating a new file in the snapshot file system as follows:

echo "Data in snapshot of name" > /namesnap1/data_on_namesnap1
ls -l /namesnap1

18  After completing off-host processing, you are ready to reattach the snapshot volume with the original volume. Unmount the snapshot volume on the off-host processing host.

VEA
Highlight the volume and select Actions->File System->Unmount File System. Confirm when prompted.

CLI
umount /namesnap1

Note: If you have been using your local lab system as the OHP host, you do not need to perform the next three steps (19-21). However, in an actual off-host processing situation, you would perform these steps.

19  On the OHP host, deport the disk group that contains the snapshot volume.

VEA
a  Highlight the disk group and select Actions->Deport Disk Group.
b  Click OK. (Do not select Deport Options unless you are working with a partner's system.)
c  Confirm that you want to deport the disk group.

CLI
vxdg deport nameOHPdg

20  If you have been working on a partner system, exit from the partner system. Alternatively, if you have been using the VEA, disconnect from the partner system.

VEA
a  Select File->Disconnect.
b  Select the partner system and click OK.

CLI
On the OHP host: exit
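Taken together, the OHP-side steps above amount to a short sequence that could be scripted on the backup host. A minimal sketch in Solaris syntax, assuming the disk group and mount point used in this lab; the tar destination is hypothetical:

    #!/bin/sh
    # Sketch of the OHP-side cycle for nameOHPdg and namesnap1.
    vxdctl enable                      # rescan so the deported disks appear
    vxdg import nameOHPdg
    vxrecover -g nameOHPdg -s namesnap1
    fsck -F vxfs /dev/vx/rdsk/nameOHPdg/namesnap1
    mount -F vxfs /dev/vx/dsk/nameOHPdg/namesnap1 /namesnap1
    tar -cf /backup/name1.tar /namesnap1   # the actual processing step
    umount /namesnap1
    vxdg deport nameOHPdg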
    • Phase3: Import, Join, and Resynchronize 21 011 the primary host (your local lab system), rcimport the disk group that contains the snapshot volume: VEA a Highlight the disk group and selectActions->Import Disk Group. b Verify that the Start all volumes option is selected.Click OK. CLI vxdg import nameOHPdg vxdg list 22 Rejoin the disk group that contains the snapshot volume to the disk group that contains the original volume. VEA a Highlight the disk group and selectActions->Join Disk Group. b Ensure that the source disk group is nameOHPdg and the target disk group is namedg. Click OK. Note: If )'OU get an Object not found error in VEA, ignore it. This error is displayed becausethe nameOHPdg no longer exists on the system. CLI vxdg join nameOHPdg namedg vxprint -g namedg -htr 23 At this point you should have the original volume and its snapshot in the same disk group but as separatevolumes. There is still 110 synchronization between the original volume and the snapshot volume. To observe this. the snapshot volume will be mounted again to observe its contents. You would not need to perform this step during a normal off-host processing procedure. Note that if you have been using the CLI. the snapshot volume is initially disabled following the join. a Restart the snapshot volume if necessary. VEA Selectthe snapshot volume, and selectActions->Rccover Volume. eLl vxrecover -g namedg -s namesnapl B-106 VERITAS Storage Foundation 5.0 for UNIX: Maintenance Ccpynqntc 2006 Swnantec corooranon 11.11rights reserved
    • vxprint -g namedg -htr Make sure the snapshot volume is E'ABLED and ACTIVE. b If necessary, run a f le system check on the snapshot volume. Note that this step should not be necessary if you haw cleanly unmounted the file system before the deport on the 0111' host. VEA Selectthe snapshot volume, and select Actions->File System->Check File System. Xote: The Check File System option may be grayed out in VEA if the file system doesnot needa check. CLI Snlarls, fsck -F vxfs /dev/vx/rdsk/namedg/namesnap1 liP-LX l.inux fsck -t vxfs /dev/vx/dsk/namedg/namesnap1 c Mount namesnapl back on the /namesnaplmollnt point. Create the mount point if necessary. VEA Selectthe snapshot volume, and select Actions->File System=c-Muunt File System. Ensure that the file system is not added to the file system table. CLI mkdir /namesnap1 (if the directory doesnot already exist) mount -F vxfs /dev/vx/dsk/namedg/namesnap1 /namesnap1 'ote: On Linux, usemount - t. d View and compare the contents of both Ii Ie systems. ls -1 /name1 /namesnap1 e Unmounr the /namesnapl file system. VEA Selectthe snapshot volume, and select Actions->fih.' System->Unmount file System. Confirm when 8-107Lab 7 Solutions: Point-in-Time Copies Copyright», 2006 Syuianter-Corporation All nqtusre~erved
    • prompted. eLl umount /namesnapl 24 On the:primary host (your local lab system), reattach tile plcxcs ofthe snapshot volume to the original volume and rcsynchronizc their contents. VEA a Select the snapshot volume, and select Actions->Instant Snapshot-> Reattach. b Verify that Volume to attach to is set to namevoll. c Click OK. eLl vxsnap -g namedg reattach namesnapl source=namevoll 25 Remove the snapshot mirror. YEA a Select the namevoll '01111111.', and select Actions->Instant Snapshot-> Remove Snapshot Mirror. b Accept the Automatic mirror selection option and click OK. eLl vxsnap -g namedg rmmir namevoll Using Space-Optimized Instant Volume Snapshots Select a disk in the namedg disk group that is not used by the original volume namevoll. and create a 50-MB volume on this disk to be used as the cache volume. Name the:cache volume namecachevol. Create a cache object called namecache on the cache volume. Ensure that the cache object is started. Note: If you use the VEA to create the cache:object. the autogrow option will be kit at the default value ofo t f . ltyou usc the command line you can change:this setting to on while: creating the cache object. YEA 8-108 VERITAS Storage Foundation 5.0 for UNIX: Maintenance C:ODyngh! I 2U06 Svmautec Corporation All flghls reserved
    • Selectthe namedg disk group in the object tree and selectthe Disk View tab to observe the disk usagein the disk group. Identity a disk that is not usedby any other VxVM object in the disk group. To create the volume: a Uighlight the namedg disk group. Select Acnons=c-New Volume. b Manually selectdisks to usefor this volume. Click Next. c Selectan unused disk, for example namedg04, and add it to the Included section using the > key. Click Next. d Specify the volume name, the size,a concatenated layout, and no mirror. Click :ext. e Selectthe Create as a Snapshot Cache Volume option. Enter the cache object name as namecache. Leave the Region Size as0 to usethe default value. Click "Iext. f Do not add a lile system when prompted. 9 Click Finish to complete the wizard. eLl vxprint -g namedg -htr vxassist -g namedg make namecachevol SOmnamedg## where namedg## is the disk media name of the disk that is unused in the namedg disk group. vxmake -g namedg cache namecache cachevolname=namecachevol autogrow=on vxcache -g namedg start namecache 2 Observe how the cache object and the cache volume is displayed in the disk group. VEA a Selectthe Volumes node in the object tree. Note the icon usedfor the cache volume. b Right click the cache volume in the right pane and select Properties. c :ote the cache object name displayed. Click Cancel. eLl vxprint -g namedg -htr 3 Verify that the namevoll volume is already prepared for instant snapshot operations by displaying information about the DCO log. VEA a Selectthe namevoll volume in the object tree. Lab 7 Solutions: Point-in-Time Copies 8-109 Copyright (!;~2006 Symauter Corporation 11.11nqrus reserved
    • b Click the DCO tab in the right pane view. I f you do not see a DCO tab, you need to prepare the volume for instant snapshot b~' highlighting the volume and selecting Acuons=c-Insraut Snapshot->Enable FastResync. CLl vxprint -g namedg -htr namevol1 'ou should observe a DCO log. 4 Add data to the / name1 file system using the following command: echo "New data before SOS1 for name" > /name1/presos1 oD_name1 and veri Iy that the data is written. Is -1 /namel 5 Create a space-optimized instant snapshot of the namevo11 volume. named namesos 1. using the cache object namecache. VEA a Highlight the namevoll volume and select Actions->Instant Snapshot->Create. b If necessary, click Next on the welcome page. e Select Space optimized as the snapshot type. Click Next. d Select Choose an existing cache object and verify that the name of the cache volume is selected in the Cache objects field. Click Next. e Enter the name of the snapshot volume and click Next. Click Finish to complete the wizard. CLI vxsnap -g namedg make source=namevoll/newvol=namesosl/cache=namecache 6 Display intormution about the snapshot volumes using the VEA. or the vxp r i nt , vxsnap list and vxsnap print commands from the command line. VEA a Select the Volumes node in the object tree. Observe the information provided lor the namesos1volume in the right pane. Note that the layout is specified as Space Optimized, 8-110 VERITAS Storage Foundation 5.0 for UNIX: Maintenance Cupyrlght 0 200fi SyruautecCorporano» All fights fPserved
    • b Note the hierarchy of the objects displayed in the object tree under the Volumes node. ell vxprint -g namedg -htr vxsnap -g namedg print vxsnap -g namedg list 7 Using the command line. verify which snapshots are associated to the cache object. vxcache -g namedg listvol namecache 8 Mount the space optimized snapshot volume namesosl to the /namesosl directory. VEA Selectthe namesosl volume, and selectActions->File System->Mount File System, Enter the mount point and ensure that the file system is not added to the file system table. CLI mkdir /namesosl (if necessary] mount -F vxfs /dev/vx/dsk/namedg/namesosl /namesosl 9 Observe the contents of the /namesosl directory and compare it to the contents of the / namel directory. Is -1 /namesosl /namel The contents of both file systemsshnuld be exactly the sameat this point. 10 Add data to the /namel file system using the following command: echo "New data before 8082 for name" > /namel/presos2 oD_namel and verify that the data is written. 1s -1 /namel 11 Create a second space-optimized instant snapshot of the namevoll volume, named namesos2. using the same cache object namecache. VEA a Highlight the namevo11 volume and selectActions->Instant Snapshot->Create. Lab 7 Solutions: Point-in-Time Copies B-111 COP)'flCJh!~; 2006 Svrnaruec Comoranon All nqnts resevec
    • b If necessary, click :'Iiext on the welcome page. c Select Space optimized as the snapshot type, Click Next. d Select Choose an existing cache object and verify that the name of the cache volume is selected in the Cache objects field. Click Next. e Enter the name of the snapshot volume and click Next. Click Finish to complete the wizard. eLl vxsnap -g namedg make source=namevo11/newvo1=namesos2/cache=namecache 12 Using the command line. verify which snapshots arc associated to the cache! object. vxcache -g namedg 1istvo1 namecache 13 Mount the space optimized snapshot volume namesos2 to the /namesos2 directory, VEA Select the namesos2 volume, and select Actions->File System=c-Mnunt File System. Enter the mount point and ensure that the file system is not added to the file system table. CLI mkdir /namesos2 (if necessary) mount -F vxfs /dev/vx/dsk/namedg/namesos2 /namesos2 14 Observe the contents of the original file system and the two space optimized snapshots, ls -1 /namel /namesosl /namesos2 15 Make the following changes on the file systems: a Remove the data you had on the original file system prior to starting the Using Space-Optimized Instant Volume Snapshots section, If you have followed the lab steps. you need to remove the presnap _ on_name and postsnap_on_name files lrom thc /namelliic system, rm /namel/presnap on name rm /namel/postsnap_on_name b Add new data to the space optimized snapshot volumes using the following commands: echo "New data on nameSOS1" > /namesosl/data on_namesosl 8-112 VERITAS Storage Foundation 5,0 for UNIX: Maintenance CUI'ynghl i'- 201)6 Svo.antec Corpnrauon All nqbts reser.ed
    • echo "New data on nameSOS2" > /namesos2/data on namesos2 16 Observe the contents of the original tile system and the two space optimized snapshots. 1s -1 /name1 /namesos1 /namesos2 17 Assume that you have decided to use the contents of the second space optimized snapshot as the final version of the original file system. Restore the original file system using the second space optimized snapshot. Note that you will have to unmount the original file system to make this change. Mount the original file system back to / namel directory when the restore operation completes. VEA a Highlight the namevo11 volume in the object tree and select Actions->File System=c-Unmounr Fill' System, Confirm when prompted. b Highlight the namevo11 volume in the object tree and select Actions->I nstant Snapshot->Restore. c Verify that the Snapshot field is set to namesos2 and that the Synchronize option is checked. Click OK. d Highlight the namevoll volume in the object tree and select Actions->File System->Mount File System. Enter the mount puint and ensure that the lile system is not added to the file system table. Click OK. eLl umount /name1 vxsnap -g namedg restore namevo11 source=namesos2 mount -F vxfs /dev/vx/dsk/namedg/namevo11 /name1 18 Observe the contents of the original tile system and the two space optimized snapshuts. ls -1 /name1 /namesos1 /namesos2 :ote that the contents of the second space optimized snapshot and the original file system are now the same. whereas the contents of the first space optimized snapshot remain unchanged. 19 Refresh the first space optimized snapshot. namesosl. Note that you will need to unmount the first space optimized snapshot to make this change. Mount the namesosl volume again after its contents are refreshed. VEA 8-113Lab 7 Solutions: Point-in-Time Copies COPY'lql)1 -, 2{)06 Symanter Corrorauoo All rights reserved
a Select the Volumes node in the object tree and highlight the namesos1 volume in the right pane. Select Actions->File System->Unmount File System. Confirm when prompted.
b Highlight the namesos1 volume in the right pane and select Actions->Instant Snapshot->Refresh. Click OK.
c Highlight the namesos1 volume in the right pane and select Actions->File System->Mount File System. Enter the mount point (/namesos1) and ensure that the file system is not added to the file system table. Click OK.

CLI
umount /namesos1
vxsnap -g namedg refresh namesos1 source=namevol1
mount -F vxfs /dev/vx/dsk/namedg/namesos1 /namesos1

20 Observe the contents of the original file system and the two space-optimized snapshots.

ls -l /name1 /namesos1 /namesos2

Note that the contents should all be the same now.

21 Unmount the two space-optimized snapshots and dissociate them from the original volume.

VEA
a Select the Volumes node in the object tree and highlight the namesos1 volume in the right pane. Select Actions->File System->Unmount File System. Confirm when prompted.
b Highlight the namesos1 volume in the right pane and select Actions->Instant Snapshot->Dissociate. Confirm when prompted.
c Highlight the namesos2 volume in the right pane. Select Actions->File System->Unmount File System. Confirm when prompted.
d Highlight the namesos2 volume in the right pane and select Actions->Instant Snapshot->Dissociate. Confirm when prompted.

CLI
umount /namesos1
umount /namesos2
vxsnap -g namedg dis namesos1
vxsnap -g namedg dis namesos2

22 Remove the space-optimized snapshot volumes.
Note: If you want to use the vxassist remove volume command to delete the volume from the command line, you first need to delete the DCO log. Alternatively, you can use the vxedit -g diskgroup -rf rm volume_name command to remove the volume together with the associated DCO log.

VEA
a Highlight the namesos1 volume in the right pane and select Actions->Delete Volume. Confirm when prompted.
b Highlight the namesos2 volume in the right pane and select Actions->Delete Volume. Confirm when prompted.

CLI
vxassist -g namedg remove log namesos1 logtype=dco
vxassist -g namedg remove volume namesos1
vxassist -g namedg remove log namesos2 logtype=dco
vxassist -g namedg remove volume namesos2

Alternatively:
vxedit -g namedg -rf rm namesos1
vxedit -g namedg -rf rm namesos2

23 Remove the cache object with its associated cache volume.

VEA Highlight the cache volume, namecachevol, in the right pane and select Actions->Delete Volume. Confirm when prompted.

CLI
vxedit -g namedg -rf rm namecache

24 Unmount the /name1 file system and remove the original volume, namevol1.

Note: If you want to use the vxassist remove volume command to delete the volume from the command line, you first need to delete the DCO log. Alternatively, you can use the vxedit -g diskgroup -rf rm volume_name command to remove the volume together with the associated DCO log.

VEA
a Highlight the namevol1 volume in the right pane and select Actions->Unmount File System. Confirm when prompted.
b Highlight the namevol1 volume in the right pane and select Actions->Delete Volume. Confirm when prompted.

CLI
vxassist -g namedg -f remove log namevol1 logtype=dco
vxassist -g namedg remove volume namevol1

Alternatively:
vxedit -g namedg -rf rm namevol1
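The lab reuses the cache object namecache created earlier in this exercise. For reference, building the whole chain from scratch on the command line looks roughly like the following. This is a minimal sketch, not a lab step; the names (datavol, mycachevol, mycache, mysos1, mysos2) and sizes are illustrative.

vxsnap -g namedg prepare datavol
vxassist -g namedg make mycachevol 50m init=active
vxmake -g namedg cache mycache cachevolname=mycachevol
vxcache -g namedg start mycache
vxsnap -g namedg make source=datavol/newvol=mysos1/cache=mycache
vxsnap -g namedg make source=datavol/newvol=mysos2/cache=mycache
vxcache -g namedg listvol mycache

One cache object can back any number of space-optimized snapshots in the same disk group, which is why namesos1 and namesos2 in this lab can share namecache.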
Restoring a File System Using Storage Checkpoints

At the beginning of this section you should have a namedg disk group with four unused disks in it.

1 Create a simple 1500m volume called origvol in the namedg disk group.

vxassist -g namedg make origvol 1500m

2 Create a VxFS file system on the volume.

mkfs -F vxfs /dev/vx/rdsk/namedg/origvol

Note: On Linux, use mkfs -t.

3 Make three new mount points: /orig, /checkpt1, and /checkpt2.

mkdir /orig
mkdir /checkpt1
mkdir /checkpt2

4 Mount the file system on /orig.

mount -F vxfs /dev/vx/dsk/namedg/origvol /orig

Note: On Linux, use mount -t.

5 Write a file of size 1M named 4pm in the original file system.

dd if=/dev/zero of=/orig/4pm bs=1024k count=1

6 Create a storage checkpoint named thu_5pm on /orig. Note the output.

fsckptadm -v create thu_5pm /orig

7 Mount the thu_5pm storage checkpoint on the mount point /checkpt1.

mount -F vxfs -o ckpt=thu_5pm /dev/vx/dsk/namedg/origvol:thu_5pm /checkpt1

Note: On Linux, use mount -t.

8 Write some more files in the original file system on /orig, and synchronize the file system using the following commands:

dd if=/dev/zero of=/orig/5pm bs=1024k count=5
dd if=/dev/zero of=/orig/5pm_2 bs=1024k count=5
sync; sync

9 Create a second storage checkpoint, called thu_6pm, on /orig. Note the output.

fsckptadm -v create thu_6pm /orig

10 Mount the second storage checkpoint on the mount point /checkpt2.
mount -F vxfs -o ckpt=thu_6pm /dev/vx/dsk/namedg/origvol:thu_6pm /checkpt2

Note: On Linux, use mount -t.

11 Write some more files in the original file system on /orig, and synchronize the file system using the following commands:

dd if=/dev/zero of=/orig/6pm bs=1024k count=6
dd if=/dev/zero of=/orig/6pm_2 bs=1024k count=6
sync; sync

12 View the checkpoints and the original file system.

ls -l /orig /checkpt1 /checkpt2

13 To prepare to restore from a checkpoint, unmount the original file system and both storage checkpoints.

umount /checkpt1
umount /checkpt2
umount /orig

14 Restore the file system to the thu_6pm storage checkpoint.

fsckpt_restore -l /dev/vx/dsk/namedg/origvol

15 Run the fsckpt_restore command again. Note the output.

The output shows that the former UNNAMED root fileset was removed, and that the second checkpoint (thu_6pm) is now the primary fileset. The thu_6pm fileset is now the fileset that will be mounted by default. The first checkpoint, thu_5pm, still exists, because it was taken earlier than the second checkpoint. When you roll back to a checkpoint, earlier checkpoints still exist, while any checkpoints taken later than thu_6pm would have been lost.

Press Ctrl-D to exit the fsckpt_restore command.

16 Destroy the namedg disk group.

vxdg destroy namedg
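If you want to watch the rollback semantics described in step 15 as they happen, fsckptadm can list the checkpoints at each stage. A minimal sketch, assuming the same names as above; the comments describe the expected result, not captured output:

fsckptadm list /orig
# before the restore: both thu_6pm and thu_5pm are listed
umount /checkpt1; umount /checkpt2; umount /orig
fsckpt_restore -l /dev/vx/dsk/namedg/origvol
# select thu_6pm at the prompt
mount -F vxfs /dev/vx/dsk/namedg/origvol /orig
fsckptadm list /orig
# thu_5pm remains; any checkpoint newer than thu_6pm would be gone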
Optional Lab Exercises

The next set of lab exercises is optional and may be performed if you have time. These exercises provide additional practice in exploring storage checkpoints.

Optional Lab: Storage Checkpoint Behavior

In this exercise, you perform and analyze four types of file system operations:
• A file to be deleted (1k.to_delete)
• A file to be replaced by (new) content (1k.to_replace)
• A file to be enlarged (1k5.to_append)
• A file to be written by databases (10m.db_io: the file remains at the same position with the same size, but some blocks within it are replaced)

1 Create a disk group named xdg with four disks.

vxdg init xdg xdg01=device_tag1 xdg02=device_tag2 xdg03=device_tag3 xdg04=device_tag4

2 Create a 128-MB mirrored volume with a log. Name the volume xvol. Mount the volume at /xvol.

vxassist -g xdg make xvol 128m layout=mirror,log
mkfs -F vxfs /dev/vx/rdsk/xdg/xvol
mkdir /xvol
mount -F vxfs /dev/vx/dsk/xdg/xvol /xvol

3 Add these four new files to the volume and view the files:
• 1K named /xvol/1k.to_delete
• 1K named /xvol/1k.to_replace
• 3B = 1536 bytes named /xvol/1k5.to_append
• 10M named /xvol/10m.db_io

Solaris
mkfile 1k /xvol/1k.to_delete
mkfile 1k /xvol/1k.to_replace
mkfile 3b /xvol/1k5.to_append
mkfile 10m /xvol/10m.db_io

HP-UX
dd if=/dev/zero of=/xvol/1k.to_delete bs=1024 count=1
dd if=/dev/zero of=/xvol/1k.to_replace bs=1024 count=1
dd if=/dev/zero of=/xvol/1k5.to_append bs=1536 count=1
dd if=/dev/zero of=/xvol/10m.db_io bs=1024k count=10
ls -l /xvol
total 20488
-rw------T   1 root     other   10485760 Oct 20 14:11 10m.db_io
-rw------T   1 root     other       1536 Oct 20 14:10 1k5.to_append
-rw------T   1 root     other       1024 Oct 20 14:08 1k.to_delete
-rw------T   1 root     other       1024 Oct 20 14:08 1k.to_replace
drwxr-xr-x   2 root     root          96 Oct 20 14:04 lost+found

Note: This example output is from a Solaris platform. The output may be slightly different on other platforms.

4 Remount /xvol and run ncheck.

mount -F vxfs -o remount /dev/vx/dsk/xdg/xvol /xvol
ncheck -F vxfs -o sector= /dev/vx/rdsk/xdg/xvol

STRUCTURAL   64   999        70-77        <inode_alloc_unit>
STRUCTURAL   65   999   97   2640-2655    <inode_list>
STRUCTURAL   66   999        2696-2703    <inode_alloc_unit>
STRUCTURAL   67   999   99   4224-4351    <inode_list>
STRUCTURAL   68   999        4160-4223    <link_count_tbl>
STRUCTURAL   69   999        2656-2671    <bsd_quota>
STRUCTURAL   70   999        2672-2687    <bsd_quota>
STRUCTURAL   97   999   65   2640-2655    <inode_list>
STRUCTURAL   99   999   67   4224-4351    <inode_list>
UNNAMED      999  4          2768-2769    /1k.to_delete
UNNAMED      999  5          4144-4145    /1k.to_replace
UNNAMED      999  6          2784-2787    /1k5.to_append
UNNAMED      999  7          2800-2801    /10m.db_io
UNNAMED      999  7          4352-24061   /10m.db_io
UNNAMED      999  7          2816-3583    /10m.db_io
UNNAMED      999  999        2690-2691    <attribute_inode>

Note: This example output is from a Solaris platform. The output of the ncheck command may be slightly different on other platforms.

5 Create a storage checkpoint for /xvol named CKPT.

fsckptadm create CKPT /xvol

6 Delete the file 1k.to_delete.

rm /xvol/1k.to_delete

7 Create a new 1K file named 1k.to_replace.

Solaris
mkfile 1k /xvol/1k.to_replace

HP-UX
dd if=/dev/zero of=/xvol/1k.to_replace bs=1024 count=1
8 Copy the 1k5.to_append file to /tmp.

cp /xvol/1k5.to_append /tmp

9 Append the 1k5.to_append file in /tmp to the original 1k5.to_append file in /xvol.

cat /tmp/1k5.to_append >> /xvol/1k5.to_append

10 Use the following Perl command to generate database-like I/O (modifying a block within a database file). The second line opens read/write access to the file without re-creating it or simply appending new data. The third line creates a variable containing 8K of "x" characters. The next line positions the file pointer at an 8K offset from the beginning of the file. The following line writes the new 8K block at this position.

perl -e '
> open(FH, "+< /xvol/10m.db_io") || die;
> $Block = "x" x 8192;
> sysseek(FH, 8192, 0);
> syswrite(FH, $Block, 8192, 0);
> close(FH);'

11 Remount /xvol and run ncheck.

mount -F vxfs -o remount /dev/vx/dsk/xdg/xvol /xvol
ncheck -F vxfs -o sector= /dev/vx/rdsk/xdg/xvol

STRUCTURAL   64   999        70-77         <inode_alloc_unit>
STRUCTURAL   65   999   97   2640-2655     <inode_list>
STRUCTURAL   66   999        2696-2703     <inode_alloc_unit>
STRUCTURAL   67   999   99   4224-4351     <inode_list>
STRUCTURAL   68   999        4160-4223     <link_count_tbl>
STRUCTURAL   69   999        2656-2671     <bsd_quota>
STRUCTURAL   70   999        2672-2687     <bsd_quota>
STRUCTURAL   74   1000       2776-2783     <inode_alloc_unit>
STRUCTURAL   75   1000  76   24096-24111   <inode_list>
STRUCTURAL   76   1000  75   24096-24111   <inode_list>
STRUCTURAL   77   1000       2792-2799     <inode_alloc_unit>
STRUCTURAL   78   1000  79   24192-24319   <inode_list>
STRUCTURAL   79   1000  78   24192-24319   <inode_list>
STRUCTURAL   80   1000       24128-24191   <link_count_tbl>
STRUCTURAL   81   1000       24112-24127   <bsd_quota>
STRUCTURAL   82   1000       24320-24335   <bsd_quota>
STRUCTURAL   97   999   65   2640-2655     <inode_list>
STRUCTURAL   99   999   67   4224-4351     <inode_list>
UNNAMED      999  5          24336-24337   /1k.to_replace
UNNAMED      999  6          24062-24063   /1k5.to_append
UNNAMED      999  6          2784-2787     /1k5.to_append
UNNAMED      999  7          2800-2801     /10m.db_io
UNNAMED      999  7          4352-24061    /10m.db_io
UNNAMED      999  7          2816-3583     /10m.db_io
UNNAMED      999  999        2690-2691     <attribute_inode>
CKPT         1000 4          2768-2769     /1k.to_delete
CKPT         1000 5          4144-4145     /1k.to_replace
CKPT         1000 6          4146-4147     /1k5.to_append
CKPT         1000 7          24352-24367   /10m.db_io

mkdir /ckpt
mount -F vxfs -o ckpt=CKPT /dev/vx/dsk/xdg/xvol:CKPT /ckpt

ls -l /ckpt
total 22
-rw------T   1 root     other   10485760 Jun  9 14:00 10m.db_io
-rw------T   1 root     other       1536 Jun  9 14:00 1k5.to_append
-rw------T   1 root     other       1024 Jun  9 14:00 1k.to_delete
-rw------T   1 root     other       1024 Jun  9 14:00 1k.to_replace
drwxr-xr-x   2 root     root          96 Jun  9 13:59 lost+found

ls -l /xvol
total 20488
-rw------T   1 root     other   10485760 Jun  9 14:16 10m.db_io
-rw------T   1 root     other       3072 Jun  9 14:14 1k5.to_append
-rw------T   1 root     other       1024 Jun  9 14:14 1k.to_replace
drwxr-xr-x   2 root     root          96 Jun  9 13:59 lost+found

Examination of Storage Checkpoint Behavior

The following information is an analysis of the previous output from ncheck:

• 1k.to_delete
The same data blocks (2768-2769) are now mapped to CKPT.

• 1k.to_replace
The old data blocks (4144-4145) were mapped, not copied, to CKPT. The new data blocks (24336-24337) were written to a new location.

• 1k5.to_append
Before checkpointing:
UNNAMED: 2784, 2785, 2786, 2787
After checkpointing and appending data:
UNNAMED: 2784, 2785, 2786, 2787, 24062, 24063
CKPT: 2784, 2785, 4146, 4147
To keep the UNNAMED file system, which is normally the active one, as contiguous as possible, a copy-before-write of the middle block is performed. Otherwise copy-before-write would be unnecessary in favor of simple address mapping.
Note: Blocks 2784-2785 are mapped to both UNNAMED and CKPT. This is not shown in the output of ncheck.

• 10m.db_io
The data file for UNNAMED remains at the same position (2800-2801, 4352-24061, 2816-3583).
Note: These files are fragmented because the required space was not preallocated in one extent.
The new blocks are written to UNNAMED, and therefore the old data must be copied to 24352-24367 (8K) before the new blocks are written. Otherwise copy-before-write would be unnecessary in favor of simple address mapping.

Note: In all cases, inode and directory information is copied before the write.

12 Unmount the checkpoint and the original file system. Destroy the xdg disk group.

umount /ckpt
umount /xvol
vxdg destroy xdg
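The same copy-before-write effect can also be demonstrated without reading sector maps. The following is a hedged sketch, not part of the lab, assuming a fresh VxFS file system mounted at /xvol on a Solaris host with /dev/urandom available; all names are illustrative:

dd if=/dev/zero of=/xvol/file bs=8k count=4
fsckptadm create before /xvol
mkdir -p /ckpt
mount -F vxfs -o ckpt=before /dev/vx/dsk/xdg/xvol:before /ckpt
dd if=/dev/urandom of=/xvol/file bs=8k count=1 conv=notrunc
cmp /xvol/file /ckpt/file
# cmp reports the first difference inside the first 8K block: that block
# was copied to the checkpoint before the in-place write, while the
# unmodified blocks are still shared by simple address mapping.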
Appendix C
Boot Processes and VxVM Start-Up Scripts
Solaris Boot Process

VxVM and the Solaris Boot Process

Solaris Boot Process Overview

In order to troubleshoot and resolve boot disk problems, you must have a conceptual understanding of the Solaris boot process and the associated scripts and files that are involved in booting the system and starting VxVM.

The Solaris boot process can be divided into four main phases:
Phase 1: Boot PROM Phase
Phase 2: Boot Program Phase
Phase 3: Kernel Initialization Phase
Phase 4: The /sbin/init Phase

In the next sections, each of these four phases is described in detail.
Phase 1: Boot PROM Phase

When you boot a Solaris system, the first phase of the boot process is the boot PROM phase. In this phase:
1 The programmable read-only memory (PROM) chip runs self-test diagnostics to identify system information, such as hardware and memory.
2 When you type boot at the OK prompt, the system reads the boot disk label at sector 0.
3 The system then reads the boot block at sectors 1 through 15.
4 The PROM loads the bootblk program from the boot block. The bootblk program is a UFS file system reader that is placed on disk by the installboot program.
Phase 2: Boot Program Phase

The second phase in the Solaris boot process, the boot program phase, begins after the PROM successfully loads the bootblk program from the boot block.
1 The bootblk program loads the secondary boot program, ufsboot, by invoking the command:
/platform/`uname -m`/ufsboot
2 The ufsboot program loads the kernel.
Phase 3: Kernel Initialization Phase

The next phase in the Solaris boot process is the kernel initialization phase. After the ufsboot program loads the kernel:
1 The kernel begins to load the kernel modules.
2 The kernel reads the /etc/system file, including the following entries:
• A rootdev entry: A rootdev entry specifies an alternate root device. The default rootdev value is the physical path name of the device on which the boot program (bootblk) is located.
• forceload entries: forceload entries force modules to be loaded at boot time.
3 The kernel initializes itself and begins the /sbin/init process. After the kernel loads the modules needed to read the root partition, the ufsboot program is unmapped from memory. The kernel continues initializing the system using its own resources.
4 /sbin/init begins.
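For reference, on a system whose root disk has been encapsulated by VxVM, the VxVM-related entries in /etc/system typically resemble the following. This is a representative snippet only; the exact device path in the rootdev line varies by system.

* vxvm_START (do not remove)
forceload: drv/vxdmp
forceload: drv/vxio
forceload: drv/vxspec
rootdev:/pseudo/vxio@0:0
set vxio:vol_rootdev_is_volume=1
* vxvm_END (do not remove)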
Phase 4: The /sbin/init Phase

In the final phase of the Solaris boot process, the /sbin/init process invokes the run control scripts that are used to start VxVM.

Single-user startup scripts, located in /etc/rcS.d, include:
S25vxvm-sysboot
S30rootusr.sh (standard Solaris script)
S35vxvm-startup1
S40standardmounts.sh (standard Solaris script)
S50devfsadm (standard Solaris script)
S70buildmnttab.sh (standard Solaris script)
S85vxvm-startup2
S86vxvm-reconfig

Multiuser startup scripts, located in /etc/rc2.d, include:
S01vxfsldlic
S45vxpbx_exchanged
S50vxvail
S70vxatd
S750vxpal.gridnode
S75vxpal.StorageAgent
S75vxsmfd
S760vxpal.actionagent
S94vxnm-vxnetd
S95vxvm-recover
S96vxrsyncd

Note: Scripts added by VxVM are highlighted in bold. The function of each script is described in the next section.
VxVM Startup: Single-User Scripts

/etc/rcS.d/S25vxvm-sysboot

The S25vxvm-sysboot script:
• Checks to determine if rootdev and /usr are volumes. If rootdev and /usr are volumes, then vxconfigd must successfully start for / and /usr to be accessible.
• Starts the VxVM restore daemon by invoking the command:
vxdmpadm start restore options
Note: By default, the restore daemon checks the health of disabled device node paths (policy=check_disabled) at a polling interval of 300 seconds (interval=300). By using options to the vxdmpadm start restore command, you can change the polling interval (interval=seconds) or change the policy to check all paths (policy=check_all).
• Starts vxconfigd in boot mode by invoking the command:
vxconfigd -m boot
The script includes example option strings to enable different aspects of vxconfigd logging.
• Creates disk access records for all devices
• Scans the volboot file
• Locates and imports the boot disk group
• Starts the rootvol and usr volumes
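As an example of using those options, the restore daemon can be restarted with a different policy and polling interval than the boot-time defaults. The values shown are illustrative:

vxdmpadm stop restore
vxdmpadm start restore policy=check_all interval=60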
/etc/rcS.d/S30rootusr.sh

The S30rootusr.sh script:
• Mounts /usr as read-only
• Checks for any problems with /usr

The /etc/vfstab file is used to mount the /usr file system. If /etc/vfstab includes /dev/vx/dsk/usr and /dev/vx/rdsk/usr as the devices for the /usr file system, then VxVM must be running for the mount to succeed. If /usr fails to mount, then utilities, such as fsck and ls, are not available for use in other scripts.
/etc/rcS.d/S35vxvm-startup1

The S35vxvm-startup1 script:
• Starts special volumes, such as swap, /var, /var/adm, and /usr/kvm
• Sets up dump devices

Note: If the first swap device is a volume, then it is used as the dump device. Dump devices are used to store core files that are created when the system panics. Core file creation and recovery are performed completely outside of VxVM. The swap device must be the first swap device listed in /etc/vfstab. The dump device requires a physical partition underneath the swap volume. VxVM does not have hooks for dumping; therefore, the swap device must be created to enable the dump device to be created. The first swap device is registered as the dump device. The dump device is "registered" by adding and removing the swap device.

/etc/rcS.d/S40standardmounts.sh

The S40standardmounts.sh script:
• Mounts /proc
• Adds the physical swap device
Volume Manager handles all volumes in the /etc/vfstab file that have a file system type of swap as swap volumes.
• Checks and remounts the root file system as read-write
• Checks and remounts /usr as read-write

/etc/rcS.d/S50devfsadm

The S50devfsadm script configures the /dev and /devices trees.

/etc/rcS.d/S70buildmnttab.sh

The S70buildmnttab.sh script mounts file systems that are required to be available in single-user mode: /var, /var/adm, and /var/run.

Note: A swap device is mounted as /var/run.
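For reference, on an encapsulated boot disk the /etc/vfstab entries for the root and swap volumes typically resemble the following. This is a representative sketch only; the device names vary with the boot disk group and volume names.

/dev/vx/dsk/bootdg/rootvol /dev/vx/rdsk/bootdg/rootvol /   ufs   1  no  -
/dev/vx/dsk/bootdg/swapvol -                           -   swap  -  no  -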
/etc/rcS.d/S85vxvm-startup2

The S85vxvm-startup2 script:
• Starts I/O daemons by invoking the command:
vxiod set 10
• Changes vxconfigd from boot mode to enabled mode:
vxdctl enable
The devinfo tree is scanned for new entries.
• Imports all disk groups marked for autoimport
• Initializes DMP by invoking the command:
vxdctl initdmp
• Reattaches drives that were inaccessible when vxconfigd first started:
vxreattach
• Starts (but does not recover) all volumes:
vxrecover -n -s

/etc/rcS.d/S86vxvm-reconfig

The S86vxvm-reconfig script is used to perform operations defined by vxinstall and vxunroot and is used as part of upgrade procedures. This script:
• Uses flag files to determine actions
The /etc/vx/reconfig.d/state.d directory contains entries of prior actions.
The encapsulation process requires a reboot and creates flag files for further actions. If encapsulation is incomplete, you must remove the flag files manually. The root_done flag file indicates that the root disk is already encapsulated, in which case the script can exit without taking any action.
• Adds new disks
Disks selected for initialization by vxinstall are initialized.
• Performs encapsulation
A reboot is required if the root file system or a mounted file system is encapsulated.
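When troubleshooting a system that stalls at this point in the boot sequence, a rough manual equivalent of the work S85vxvm-startup2 performs is to run the same commands by hand, in order, once vxconfigd is running in boot mode. This is a sketch based on the commands listed above:

vxiod set 10     # ensure the I/O daemons are running
vxdctl enable    # move vxconfigd from boot mode to enabled mode
vxdctl initdmp   # initialize DMP
vxreattach       # reattach drives that were inaccessible earlier
vxrecover -n -s  # start, but do not recover, all volumes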
VxVM Startup: Multiuser Scripts

The multiuser startup scripts in /etc/rc2.d, such as S45vxpbx_exchanged, S50vxvail, S70vxatd, S73isisd, S750vxpal.gridnode, and S75vxpal.StorageAgent, start various infrastructure components for VxVM, such as the agents used by the Storage Foundation Management Server or VEA, the VEA server, and so on. The VxFS-related script loads the vxportal kernel module and enables VxFS special features by running the vxenablef command.

/etc/rc2.d/S94vxnm-vxnetd and /etc/rc2.d/S96vxrsyncd

These scripts start various daemons and processes that are used by the Veritas Volume Replicator software. Both of these scripts require a valid VVR license in /etc/vx/licenses/lic.

/etc/rc2.d/S95vxvm-recover

The S95vxvm-recover script:
• Starts recovery and resynchronization on all volumes
• Starts the hot-relocation daemons

To enable hot-relocation notification for an account other than the local root account, you must modify this script. Disabling the hot-relocation daemon vxrelocd results in no hot relocation and no notification.
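For example, to have vxrelocd notify an additional account, the line in S95vxvm-recover that starts the daemon can be edited. A hedged sketch; the exact default line varies by release, and storageadmin is an illustrative account name:

# default invocation inside S95vxvm-recover (approximate):
#   vxrelocd root &
# edited to notify a second account as well:
vxrelocd root storageadmin &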
HP-UX Boot Process

VxVM and the HP-UX Boot Process

HP-UX Boot Process Overview

In order to troubleshoot and resolve problems that affect VxVM and the boot disk group, you must have a conceptual understanding of the HP-UX boot process and the associated scripts and files that are involved in booting the system and starting VxVM.

The HP-UX boot process can be divided into three main phases:
Phase 1: POST Phase
Phase 2: Bootstrap Phase
Phase 3: Initialization Phase

These phases are described in the following sections.
Phase 1: POST Phase

When you boot an HP-UX system, the first phase of the boot process is the POST phase. In this phase, the processor-dependent code (PDC) performs self-test diagnostics and initializes the processor.

If the autoboot and autosearch parameters are enabled in firmware, the boot process will enter the bootstrap phase automatically. If these parameters are not set, or if you interrupt the autoboot process by pressing the Esc key, you can interact with the boot program. When you interact with the boot program, you can:
• Boot using a specific device.
• Search for boot devices.
• Change boot parameters.
Phase 2: Bootstrap Phase

The second phase in the HP-UX boot process is a two-part bootstrap operation:
1 Part one of the bootstrap process begins after the PDC successfully loads and transfers control to the operating system-independent initial system loader (ISL).
2 Part two of the bootstrap process begins when the ISL loads and transfers control to an HP-UX-specific bootstrap loader utility called hpux. The hpux utility loads the HP-UX kernel (/stand/vmunix) and transfers control to the loaded kernel image.
Phase 3: Initialization Phase

After the kernel initializes, it executes the /sbin/pre_init_rc command and then starts the /sbin/init process, which executes the startup scripts found in the /etc/inittab file. The following scripts are executed during the first part of the initialization phase:

/sbin/pre_init_rc: This script checks if root is a VxVM volume by checking the existence of the /sbin/is_vxvmroot file, and executes it if the file exists. The vxconfigd daemon is started in boot mode before the root file system is checked.

/sbin/ioinitrc: This script checks if the system is running from a local root and, if so, mounts the boot file system (/stand). It uses the /sbin/is_vxvmroot file to check if the boot file system is on a VxVM volume.

/sbin/init.d/vxvm-sysboot: This script configures the / and /stand volumes as needed, enables vxconfigd logging, and starts the DMP restore daemon.

/sbin/init.d/vxvm-startup: This script starts some I/O daemons, rebuilds /dev/vx/[r]dsk, imports all disk groups, and starts all volumes that were not started earlier in the boot sequence.
Start-up Scripts: Single User

The VxVM-related startup scripts under the /sbin/rc1.d directory are:

S091vxvm-nodes-check: This script is linked to /sbin/init.d/vxvm-nodes-check. It checks and creates the required VxVM device files.

S092vxvm-startup: This script is linked to /sbin/init.d/vxvm-startup. It starts some I/O daemons, rebuilds /dev/vx/[r]dsk, imports all disk groups, and starts (without any recovery) all volumes that were not started earlier in the boot sequence.

S093vxvm-reconfig: This script is linked to /sbin/init.d/vxvm-reconfig. It carries out any reconfiguration tasks, such as conversions from LVM to VxVM.
Start-up Scripts: Multiuser

The VxVM-related startup scripts under the /sbin/rc2.d directory are:

S096vxvm-recover: This script is linked to /sbin/init.d/vxvm-recover. It attaches stale plexes as needed, executes the vxrecover command, and starts the Volume Manager watch daemons (vxrelocd, vxconfigbackupd, and vxcached).

S994vxnm-vxnetd, S996vradmind, S996vxrsyncd: These scripts are related to VERITAS Volume Replicator and are not used unless there is a valid VVR license.

S700isisd: This script is linked to /sbin/init.d/isisd. It starts the VEA server.

The rest of the scripts (for example, S450vxpbx_exchanged, S500vxvail, S700vxatd, S750vxpal.StorageAgent, S750vxpal.gridnode, S751vxsmfd, and S760vxpal.actionagent) are used to start various infrastructure components for VxVM, such as the agents used for the Storage Foundation Management Server or VEA.
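A quick way to confirm which of these VxVM start-up scripts are present on a given HP-UX host is simply to list them (a basic listing, using the paths described above):

ls /sbin/rc1.d/S09*vxvm* /sbin/rc2.d/S*vx* /sbin/init.d/vxvm*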