VERITAS Storage Foundation 5.0 for UNIX: Fundamentals

Transcript

  • 1. Symantec. VERITAS Storage Foundation 5.0 for UNIX: Fundamentals. 100-002353-A
  • 2. COURSE DEVELOPERS: Gail Ade, Bilge Gerrits.
TECHNICAL CONTRIBUTORS AND REVIEWERS: Jade Arrington, Margy Cassidy, Roy Freeman, Joe Gallagher, Bruce Garner, Tomer Gurantz, Bill Havey, Gene Henriksen, Gerald Jackson, Raymond Karns, Bill Lehman, Bob Lucas, Durivanc Manikhung, Christian Rahanus, Dan Rogers, Kleber Saldanha, Albrecht Scriba, Michel Simoni, Ananda Sirisena, Pete Tuemmes.
Copyright 2006 Symantec Corporation. All rights reserved. Symantec, the Symantec Logo, and VERITAS are trademarks or registered trademarks of Symantec Corporation or its affiliates in the U.S. and other countries. Other names may be trademarks of their respective owners.
THIS PUBLICATION IS PROVIDED "AS IS" AND ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT, ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO BE LEGALLY INVALID. SYMANTEC CORPORATION SHALL NOT BE LIABLE FOR INCIDENTAL OR CONSEQUENTIAL DAMAGES IN CONNECTION WITH THE FURNISHING, PERFORMANCE, OR USE OF THIS PUBLICATION. THE INFORMATION CONTAINED HEREIN IS SUBJECT TO CHANGE WITHOUT NOTICE.
No part of the contents of this book may be reproduced or transmitted in any form or by any means without the written permission of the publisher.
VERITAS Storage Foundation 5.0 for UNIX: Fundamentals. Symantec Corporation, 20330 Stevens Creek Blvd., Cupertino, CA 95014.
  • 3. Table of Contents
Course Introduction: What Is Storage Virtualization?; Introducing VERITAS Storage Foundation; VERITAS Storage Foundation Curriculum
Lesson 1: Virtual Objects - Physical Data Storage; Virtual Data Storage; Volume Manager Storage Objects; Volume Manager RAID Levels
Lesson 2: Installation and Interfaces - Installation Prerequisites; Adding License Keys; VERITAS Software Packages; Installing Storage Foundation; Storage Foundation User Interfaces; Managing the VEA Software
Lesson 3: Creating a Volume and File System - Preparing Disks and Disk Groups for Volume Creation; Creating a Volume; Adding a File System to a Volume; Displaying Volume Configuration Information; Displaying Disk and Disk Group Information; Removing Volumes, Disks, and Disk Groups
Lesson 4: Selecting Volume Layouts - Comparing Volume Layouts; Creating Volumes with Various Layouts; Creating a Layered Volume; Allocating Storage for Volumes
Lesson 5: Making Basic Configuration Changes - Administering Mirrored Volumes; Resizing a Volume; Moving Data Between Systems; Renaming Disks and Disk Groups; Managing Old Disk Group Versions
  • 4. Lesson 6: Administering File Systems - Comparing the Allocation Policies of VxFS and Traditional File Systems; Using VERITAS File System Commands; Controlling File System Fragmentation; Logging in VxFS
Lesson 7: Resolving Hardware Problems - How Does VxVM Interpret Failures in Hardware; Recovering Disabled Disk Groups; Resolving Disk Failures; Managing Hot Relocation at the Host Level
Appendix A: Lab Exercises - Lab 1: Introducing the Lab Environment; Lab 2: Installation and Interfaces; Lab 3: Creating a Volume and File System; Lab 4: Selecting Volume Layouts; Lab 5: Making Basic Configuration Changes; Lab 6: Administering File Systems; Lab 7: Resolving Hardware Problems
Appendix B: Lab Solutions - Solutions for Labs 1 through 7
Glossary
Index
  • 5. Course Introduction
  • 6. What Is Storage Virtualization? Storage Management Issues
[Slide graphic: three servers at different utilization levels: a human resource e-mail server (90% full), a database server (10% full), and a customer order database (50% full).]
Challenges: multiple-vendor hardware; explosive data growth; different application needs; management pressure to increase efficiency; multiple operating systems; rapid change; budgetary constraints.
Problem: The customer order database cannot access unutilized storage. Common solution: Add more storage.
Storage management is becoming increasingly complex due to:
- Storage hardware from multiple vendors
- Unprecedented data growth
- Dissimilar applications with different storage resource needs
- Management pressure to increase efficiency
- Multiple operating systems
- Rapidly changing business climates
- Budgetary and cost-control constraints
To create a truly efficient environment, administrators must have the tools to skillfully manage large, complex, and heterogeneous environments. Storage virtualization helps businesses simplify the complex IT storage environment and gain control of capital and operating costs by providing consistent and automated management of storage.
  • 7. What Is Storage Virtualization?
Virtualization: The logical representation of physical storage across the entire enterprise.
[Slide graphic: multiple consumers access virtualized storage built on physical storage resources.]
Application requirements from storage versus physical aspects of storage:
- Capacity: application requirements, growth potential (physical aspects: disk size, number of disks)
- Performance: throughput, responsiveness (physical aspects: disk seek time, cache hit rate)
- Availability: failure resistance, recovery time (physical aspects: MTBF, path redundancy)
Defining Storage Virtualization
Storage virtualization is the process of taking multiple physical storage devices and combining them into logical (virtual) storage devices that are presented to the operating system, applications, and users. Storage virtualization builds a layer of abstraction above the physical storage so that data is not restricted to specific hardware devices, creating a flexible storage environment. Storage virtualization simplifies management of storage and potentially reduces cost through improved hardware utilization and consolidation. With storage virtualization, the physical aspects of storage are masked to users. Administrators can concentrate less on physical aspects of storage and more on delivering access to necessary data.
Benefits of storage virtualization include:
- Greater IT productivity through the automation of manual tasks and simplified administration of heterogeneous environments
- Increased application return on investment through improved throughput and increased uptime
- Lower hardware costs through the optimized use of hardware resources
  • 8. syrnantec Storage Virtualization: Types Storage-Based JfIfIJ'AIiI'AY Servers Host-Based AiIII1' Server Network-Based AYAYAY Servers Storage ~j,~ Storage ~s.,", Storage Most companies use a combination of these three types of storage virtualization to support their chosen architectures and application requirements. How Is Storage Virtualization Used in Your Environment? The way in which you use storage virtuulization. and the benefits derived from storage virtualization. depend on the nature of your IT infrastructure and your specific application requirements. Three main types of storage virtualization used today arc: Storage-based Host-based Network-based Most companies use a combination of these three types of storage virtualization solutions to support their chosen architecture and application needs. The type of storage virtualization that you use depends on factors. such as the: Heterogeneity of deployed enterprise storage arrays Need for applications to access data contained in multiple storage devices Importance of uptime when replacing or upgrading storage Need for multiple hosts to access data within a single storage device Value of the maturity of technology Investments in a SAN architecture Level of security required Level of scalability needed Inlro-4 Copynghl ·,C ;':OOb Svmantec Conioranon AU flghls reserved VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
  • 9. Storage-Based Storage Virtualization
Storage-based storage virtualization refers to disks within an individual array that are presented virtually to multiple servers. Storage is virtualized by the array itself. For example, RAID arrays virtualize the individual disks (that are contained within the array) into logical LUNs, which are accessed by host operating systems using the same method of addressing as a directly-attached physical disk. This type of storage virtualization is useful under these conditions:
- You need to have data in an array accessible to servers of different operating systems.
- All of a server's data needs are met by storage contained in the physical box.
- You are not concerned about disruption to data access when replacing or upgrading the storage.
The main limitation to this type of storage virtualization is that data cannot be shared between arrays, creating islands of storage that must be managed.
Host-Based Storage Virtualization
Host-based storage virtualization refers to disks within multiple arrays and from multiple vendors that are presented virtually to a single host server. For example, software-based solutions, such as VERITAS Storage Foundation, provide host-based storage virtualization. Using VERITAS Storage Foundation to administer host-based storage virtualization is the focus of this training. Host-based storage virtualization is useful under these conditions:
- A server needs to access data stored in multiple storage devices.
- You need the flexibility to access data stored in arrays from different vendors.
- Additional servers do not need to access the data assigned to a particular host.
- Maturity of technology is a highly important factor to you in making IT decisions.
Note: By combining VERITAS Storage Foundation with clustering technologies, such as VERITAS Cluster Volume Manager, storage can be virtualized to multiple hosts of the same operating system.
Network-Based Storage Virtualization
Network-based storage virtualization refers to disks from multiple arrays and multiple vendors that are presented virtually to multiple servers. Network-based storage virtualization is useful under these conditions:
- You need to have data accessible across heterogeneous servers and storage devices.
- You require central administration of storage across all Network Attached Storage (NAS) systems or Storage Area Network (SAN) devices.
- You want to ensure that replacing or upgrading storage does not disrupt data access.
- You want to virtualize storage to provide block services to applications.
  • 10. syrnarucc VERITAS Storage Foundation VERIT AS Storage Foundation provides host-based storage virtualization for performance, availability, and manageability benefits for enterprise computing environments. High Availability Application Soluti Data Protection Volume Manager and File System Company Business Process VERITAS Cluster Server/Replication ons Storage Foundation for Databases -< VERITAS NetBackup/Backup Exec ~ VERIT AS Storage Foundation Hardware and Operating System Introducing VERITAS Storage Foundation VERITAS storage management solutions address the increasing costs of managing mission-critical data and disk resources in Direct Attached Storage (DAS) and Storage Area Network (SAN) environments. Atthe heart of these solutions is VERITAS Storage Foundation, which includes VERITAS Volume Manager (VxVM). VERITAS File System (VxFS), and other value-added products. Independently, these components provide key benefits. When used together as an integrated solution, VxVM and VxFS deliver the highest possible levels of performance, availability, and manageability for heterogeneous storage environments. Intro-6 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals Cnpyllght ,£ 2006 Syroantec Corporation All r,ghts reserved
  • 11. VERITAS Volume Manager (VxVM)
[Slide graphic: users, applications, and databases access virtual storage resources (volumes) built on physical storage.]
What Is VERITAS Volume Manager?
VERITAS Volume Manager, the industry leader in storage virtualization, is an easy-to-use, online storage management solution for organizations that require uninterrupted, consistent access to mission-critical data. VxVM enables you to apply business policies to configure, share, and manage storage without worrying about the physical limitations of disk storage. VxVM reduces the total cost of ownership by enabling administrators to easily build storage configurations that improve performance and increase data availability. Working in conjunction with VERITAS File System, VERITAS Volume Manager creates a foundation for other value-added technologies, such as SAN environments, clustering and failover, automated management, backup and HSM, and remote browser-based management.
What Is VERITAS File System?
A file system is a collection of directories organized into a structure that enables you to locate and store files. All processed information is eventually stored in a file system. The main purposes of a file system are to:
- Provide shared access to data storage.
- Provide structured access to data.
- Control access to data.
- Provide a common, portable application interface.
- Enable the manageability of data storage.
The value of a file system depends on its integrity and performance.
  • 12. VERITAS Storage Foundation: Benefits
- Manageability: Manage storage and file systems from one interface. Configure storage online across Solaris, HP-UX, AIX, and Linux. Provide additional benefits for array environments, such as inter-array mirroring.
- Availability: Features are implemented to protect against data loss. Online operations lessen planned downtime.
- Performance: I/O throughput can be maximized using volume layouts. Performance bottlenecks can be located and eliminated using analysis tools.
- Scalability: VxVM and VxFS run on 32-bit and 64-bit operating systems. Storage can be deported to larger enterprise platforms.
Benefits of VERITAS Storage Foundation
Commercial system availability now requires continuous uptime in many implementations. Systems must be available 24 hours a day, 7 days a week, and 365 days a year. VERITAS Storage Foundation reduces the cost of ownership by providing scalable manageability, availability, and performance enhancements for these enterprise computing environments.
Manageability
Management of storage and the file system is performed online in real time, eliminating the need for planned downtime. Online volume and file system management can be performed through an intuitive, easy-to-use graphical user interface that is integrated with the VERITAS Volume Manager (VxVM) product. VxVM provides consistent management across Solaris, HP-UX, AIX, Linux, and Windows platforms. VxFS command operations are consistent across Solaris, HP-UX, AIX, and Linux platforms. Storage Foundation provides additional benefits for array environments, such as inter-array mirroring.
Availability
Through software RAID techniques, storage remains available in the event of hardware failure.
  • 13. Hot relocation guarantees the rebuilding of redundancy in the case of a disk failure. Recovery time is minimized with logging and background mirror resynchronization. Logging of file system changes enables fast file system recovery. A snapshot of a file system provides an internally consistent, read-only image for backup, and file system checkpoints provide read-writable snapshots.
Performance
I/O throughput can be maximized by measuring and modifying volume layouts while storage remains online. Performance bottlenecks can be located and eliminated using VxVM analysis tools. Extent-based allocation of space for files minimizes file-level access time. Read-ahead buffering dynamically tunes itself to the volume layout. Aggressive caching of writes greatly reduces the number of disk accesses. Direct I/O performs file I/O directly into and out of user buffers.
Scalability
VxVM runs over a 32-bit and 64-bit operating system. Hosts can be replaced without modifying storage. Hosts with different operating systems can access the same storage. Storage devices can be spanned. VxVM is fully integrated with VxFS so that modifying the volume layout automatically modifies the file system internals. With VxFS, several add-on products are available for maximizing performance in a database environment.
  • 14. Storage Foundation and RAID Arrays: Benefits
With Storage Foundation, you can:
- Reconfigure and resize storage across the logical devices presented by a RAID array.
- Mirror between arrays to improve disaster recovery protection of an array. Use arrays and JBODs.
- Use snapshots with mirrors in different locations for disaster recovery and off-host processing. Use VERITAS Volume Replicator (VVR) to provide hardware-independent replication services.
Benefits of VxVM and RAID Arrays
RAID arrays virtualize individual disks into logical LUNs, which are accessed by host operating systems as "physical devices," that is, using the same method of addressing as a directly-attached physical disk. VxVM virtualizes both the physical disks and the logical LUNs presented by a RAID array. Modifying the configuration of a RAID array may result in changes in SCSI addresses of LUNs, requiring modification of application configurations. VxVM provides an effective method of reconfiguring and resizing storage across the logical devices presented by a RAID array. When using VxVM with RAID arrays, you can leverage the strengths of both technologies:
- You can use VxVM to mirror between arrays to improve disaster recovery protection against the failure of an array, particularly if one array is remote. Arrays can be of different manufacture; that is, one array can be a RAID array and the other a JBOD.
- VxVM facilitates data reorganization and maximizes available resources.
- VxVM improves overall performance by making I/O activity parallel for a volume through more than one I/O path to and within the array.
- You can use snapshots with mirrors in different locations, which is beneficial for disaster recovery and off-host processing.
- If you include VERITAS Volume Replicator (VVR) in your environment, VVR can be used to provide hardware-independent replication services.
  • 15. Storage Foundation Curriculum Path
VERITAS Storage Foundation for UNIX: Fundamentals, followed by VERITAS Storage Foundation for UNIX: Maintenance.
VERITAS Storage Foundation Curriculum
VERITAS Storage Foundation for UNIX: Fundamentals training is designed to provide you with comprehensive instruction on making the most of VERITAS Storage Foundation.
  • 16. Storage Foundation Fundamentals: Overview
- Lesson 1: Virtual Objects
- Lesson 2: Installation and Interfaces
- Lesson 3: Creating a Volume and File System
- Lesson 4: Selecting Volume Layouts
- Lesson 5: Making Basic Configuration Changes
- Lesson 6: Administering File Systems
- Lesson 7: Resolving Hardware Problems
VERITAS Storage Foundation for UNIX: Fundamentals Overview
This training provides comprehensive instruction on operating the file and disk management foundation products: VERITAS Volume Manager (VxVM) and VERITAS File System (VxFS). In this training, you learn how to combine file system and disk management technology to ensure easy management of all storage and maximum availability of essential data.
Objectives
After completing this training, you will be able to:
- Identify VxVM virtual storage objects and volume layouts.
- Install and configure Storage Foundation.
- Configure and manage disks and disk groups.
- Create concatenated, striped, mirrored, RAID-5, and layered volumes.
- Configure volumes by adding mirrors and logs and resizing volumes and file systems.
- Perform file system administration.
- Resolve basic hardware problems.
  • 17. Course Resources
- Lab Exercises (Appendix A)
- Lab Solutions (Appendix B)
- Glossary
Additional Course Resources
Appendix A: Lab Exercises. This section contains hands-on exercises that enable you to practice the concepts and procedures presented in the lessons.
Appendix B: Lab Solutions. This section contains detailed solutions to the lab exercises for each lesson.
Glossary. For your reference, this course includes a glossary of terms related to VERITAS Storage Foundation.
  • 18. Typographic Conventions Used in This Course
The following conventions are used in text and commands:
- Courier New, bold: command input, both syntax and examples. Examples: To display the robot and drive configuration: tpconfig -d. To display disk information: vxdisk -o alldgs list.
- Courier New, plain: command output, command names, directory names, file names, path names, user names, passwords, and URLs when used within regular text paragraphs. Examples: output such as "protocol minimum: 40, protocol maximum: 60, protocol current: 0"; Locate the altnames directory; Go to http://www.symantec.com; Enter the value 300; Log on as user1.
- Courier New, italic (bold or plain): variables in command syntax, input, and output. Variables in command input are italic, plain; variables in command output are italic, bold. Examples: To install the media server: /cdrom_directory/install. To access a manual page: man command_name. To display detailed information for a disk: vxdisk -g disk_group list.
The following conventions are used in graphical user interface descriptions:
- Arrow: menu navigation paths. Example: Select File > Save.
- Initial capitalization: buttons, menus, windows, options, and other interface elements. Examples: Select the Next button. Open the Task Status window. Remove the checkmark from the Print File check box.
- Quotation marks: interface elements with long names. Example: Select the "Include subvolumes in object view window" check box.
  • 19. Lesson 1 Virtual Objects
  • 20. Lesson Introduction
- Lesson 1: Virtual Objects (this lesson)
- Lesson 2: Installation and Interfaces
- Lesson 3: Creating a Volume and File System
- Lesson 4: Selecting Volume Layouts
- Lesson 5: Making Basic Configuration Changes
- Lesson 6: Administering File Systems
- Lesson 7: Resolving Hardware Problems
Lesson Topics and Objectives
After completing this lesson, you will be able to:
- Topic 1: Physical Data Storage. Identify the structural characteristics of a disk that are affected by placing a disk under VxVM control.
- Topic 2: Virtual Data Storage. Describe the structural characteristics of a disk after it is placed under VxVM control.
- Topic 3: Volume Manager Storage Objects. Identify the virtual objects that are created by VxVM to manage data storage, including disk groups, VxVM disks, subdisks, plexes, and volumes.
- Topic 4: Volume Manager RAID Levels. Define VxVM RAID levels and identify virtual storage layout types used by VxVM to remap address space.
  • 21. Physical Data Storage: Physical Disk Structure
Physical storage objects:
- The basic physical storage device that ultimately stores your data is the hard disk.
- When you install your operating system, hard disks are formatted as part of the installation program.
- Partitioning is the basic method of organizing a disk to prepare for files to be written to and retrieved from the disk.
- A partitioned disk has a prearranged storage pattern that is designed for the storage and retrieval of data.
Solaris
A physical disk under Solaris contains the partition table of the disk and the volume table of contents (VTOC) in the first sector (512 bytes) of the disk. The VTOC has at least an entry for the backup partition on the whole disk (partition tag 5, normally partition number 2), so the OS may work correctly with the disk. The VTOC is always a part of the backup partition and may be part of a standard data partition. You can destroy the VTOC using the raw device driver on that partition, making the disk immediately unusable.
[Slide graphic: sector 0 of the disk holds the VTOC; sectors 1-15 of the / partition hold the bootblock; partition 2 (the backup slice) refers to the entire disk, which is divided into partitions (slices).]
  • 22. If the disk contains the partition for the root file system mounted on / (partition tag 2), for example on an OS disk, this root partition contains the bootblock for the first boot stage after the Open Boot PROM within sectors 1-15. Sector 0 is skipped, so there is no overlapping between the VTOC and the bootblock if the root partition starts at the beginning of the disk. The first sector of a file system on Solaris cannot start before sector 16 of the partition. Sector 16 contains the main super block of the file system. Using the block device driver of the file system prevents the VTOC and bootblock from being overwritten by application data.
Note: On Solaris, VxVM 4.1 and later support EFI disks. EFI disks are an Intel-based technology that allows disks to retain BIOS code.
HP-UX
On an HP-UX system, the physical disk is traditionally partitioned using either the whole disk approach or Logical Volume Manager (LVM).
The whole disk approach enables you to partition a disk in five ways: the whole disk is used by a single file system; the whole disk is used as swap area; the whole disk is used as a raw partition; a portion of the disk contains a file system, and the rest is used as swap; or the boot disk contains a 2-MB special boot area, the root file system, and a swap area.
An LVM data disk consists of four areas: Physical Volume Reserved Area (PVRA); Volume Group Reserved Area (VGRA); user data area; and Bad Block Relocation Area (BBRA).
AIX
A native AIX disk does not have a partition table of the kind familiar on many other operating systems, such as Solaris, Linux, and Windows. An application could use the entire unstructured raw physical device, but the first 512-byte sector normally contains information, including a physical volume identifier (pvid), to support recognition of the disk by AIX. An AIX disk is managed by IBM's Logical Volume Manager (LVM) by default. A disk managed by LVM is called a physical volume (PV). A physical volume consists of:
- PV reserved area: A physical volume begins with a reserved area of 128 sectors containing PV metadata, including the pvid.
  • 23. - Volume Group Descriptor Area (VGDA): One or two copies of the VGDA follow. The VGDA contains information describing a volume group (VG), which consists of one or more physical volumes. Included in the metadata in the VGDA is the definition of the physical partition (PP) size, normally 4 MB.
- Physical partitions: The remainder of the disk is divided into a number of physical partitions. All of the PVs in a volume group have PPs of the same size, as defined in the VGDA. In a normal VG, there can be up to 32 PVs; in a big VG, there can be up to 128 PVs.
[Slide graphic: the raw device hdisk3 consists of a physical volume reserved area (128 sectors), the Volume Group Descriptor Areas, and physical partitions of equal size, as defined in the VGDA.]
The term partition is used differently in different operating systems. In many kinds of UNIX, Linux, and Windows, a partition is a variable-sized portion of contiguous disk space that can be formatted to contain a file system. In LVM, a PP is mapped to a logical partition (LP), and one or more LPs from any location throughout the VG can be combined to define a logical volume (LV). A logical volume is the entity that can be formatted to contain a file system (by default either JFS or JFS2). So a physical partition compares in concept more closely to a disk allocation cluster in some other operating systems, and a logical volume plays the role that a partition does in some other operating systems.
Linux
On Linux, a nonboot disk can be divided into one to four primary partitions. One of these primary partitions can be used to contain logical partitions, and it is called the extended partition. The extended partition can have up to 11 logical partitions on a SCSI disk and up to 60 logical partitions on an IDE disk. You can use fdisk to set up partitions on a Linux disk.
  • 24. [Slide graphic: primary partitions 1-3 (/dev/sda1-/dev/sda3 or /dev/hda1-/dev/hda3) and primary partition 4 as the extended partition (/dev/sda4 or /dev/hda4), which contains the logical partitions.]
On a Linux boot disk, the boot partition must be a primary partition and is typically located within the first 1024 cylinders of the drive. On the boot disk, you must also have a dedicated swap partition. The swap partition can be a primary or a logical partition, and it can be located anywhere on the disk. Logical partitions must be contiguous, but they do not need to take up all of the space of the extended partition. Only one primary partition can be extended. The extended partition does not take up any space until it is subdivided into logical partitions.
VERITAS Volume Manager 4.0 for Linux does not currently support most hardware RAID controllers unless they present SCSI device interfaces with names of the form /dev/sdx. The following controllers are supported:
- PERC, on the Dell 1650
- MegaRAID, on the Dell 1650
- ServeRAID, on x440 systems
Compaq array controllers that require the Smart2 and CCISS drivers (which present device paths such as /dev/ida/c#d#p# and /dev/cciss/c#d#p#) are supported for normal use and for rootability.
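As a concrete illustration of the partitioning model described above, the standard Linux fdisk utility can list and edit a disk's partition table. A minimal sketch; the device name /dev/sdb is hypothetical:

    # Print the partition table of the second SCSI disk.
    fdisk -l /dev/sdb

    # Run fdisk interactively to create primary, extended,
    # and logical partitions on that disk.
    fdisk /dev/sdb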
  • 25. Physical Disk Naming
VxVM parses disk names to retrieve connectivity information for disks. Operating systems have different conventions:
- Solaris: /dev/[r]dsk/c1t9d0s2
- HP-UX: /dev/[r]dsk/c3t2d0 (no slice)
- AIX: /dev/hdisk2 (no slice)
- Linux: SCSI disks: /dev/sda[1-4] (primary partitions), /dev/sda[5-16] (logical partitions), /dev/sdbN (on the second disk), /dev/sdcN (on the third disk); IDE disks: /dev/hdaN, /dev/hdbN, /dev/hdcN
Physical Disk Naming
Solaris
You locate and access the data on a physical disk by using a device name that specifies the controller, target ID, and disk number. A typical device name uses the format c#t#d#:
- c# is the controller number.
- t# is the target ID.
- d# is the logical unit number (LUN) of the drive attached to the target.
If a disk is divided into partitions, you also specify the partition number in the device name: s# is the partition (slice) number. For example, the device name c0t0d0s1 is connected to controller number 0 in the system, with a target ID of 0, physical disk number 0, and partition number 1 on the disk.
HP-UX
You locate and access the data on a physical disk by using a device name that specifies the controller, target ID, and disk number. A typical device name uses the format c#t#d#:
- c# is the controller number.
- t# is the target ID.
- d# is the logical unit number (LUN) of the drive attached to the target.
  • 26. For example, the c0t0d0 device name is connected to controller number 0 in the system, with a target ID of 0, and physical disk number 0.
AIX
Every device in AIX is assigned a location code that describes its connection to the system. The general format of this identifier is AB-CD-EF-GH, where the letters represent decimal digits or uppercase letters. The first two characters represent the bus, the second pair identify the adapter, the third pair represent the connector, and the final pair uniquely represent the device. For example, a SCSI disk drive might have a location identifier of 04-01-00-6,0. In this example, 04 means the PCI bus, 01 is the slot number on the PCI bus occupied by the SCSI adapter, 00 means the only or internal connector, and 6,0 means SCSI ID 6, LUN 0. However, this data is used internally by AIX to locate a device. The device name that a system administrator or software uses to identify a device is less hardware dependent. The system maintains a special database called the Object Data Manager (ODM) that contains essential definitions for most objects in the system, including devices. Through the ODM, a device name is mapped to the location identifier. The device names are referred to by special files found in the /dev directory. For example, the SCSI disk identified previously might have the device name hdisk3 (the fourth hard disk identified by the system). The device named hdisk3 is accessed by the file name /dev/hdisk3. If a device is moved so that it has a different location identifier, the ODM is updated so that it retains the same device name, and the move is transparent to users. This is facilitated by the physical volume identifier stored in the first sector of a physical volume. This unique 128-bit number is used by the system to recognize the physical volume wherever it may be attached, because it is also associated with the device name in the ODM.
Linux
On Linux, device names are displayed in the format sdx[N] or hdx[N]. In the syntax, sd refers to a SCSI disk, and hd refers to an EIDE disk. x is a letter that indicates the order of disks detected by the operating system. For example, sda refers to the first SCSI disk, sdb refers to the second SCSI disk, and so on. N is an optional parameter that represents a partition number in the range 1 through 16. For example, sda7 references partition 7 on the first SCSI disk. Primary partitions on a disk are 1, 2, 3, 4; logical partitions have numbers 5 and up. If the partition number is omitted, the device name indicates the entire disk.
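Whatever the platform-specific convention, VxVM discovers these devices and reports them under their disk access names. A minimal sketch using commands that appear in this course; the device name is an example, and the exact access name (for instance, with or without a slice suffix on Solaris) depends on the platform and disk layout:

    # List all disks known to VxVM, including deported disk groups.
    vxdisk -o alldgs list

    # Display detailed information for a single device.
    vxdisk list c1t9d0s2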
  • 27. Physical Data Storage
Note: Throughout this course, the term disk is used to mean either disk or LUN. Whatever the OS sees as a storage device, VxVM sees as a disk.
- Reads and writes on unmanaged physical disks can be a slow process.
- Disk arrays and multipathed disk arrays can improve I/O speed and throughput.
[Slide graphic: applications, databases, and users access physical disks or LUNs. A disk array is a collection of physical disks used to balance I/O across multiple disks; a multipathed disk array provides multiple ports to access disks to achieve performance and availability benefits.]
Disk Arrays
Reads and writes on unmanaged physical disks can be a relatively slow process, because disks are physical devices that require time to move the heads to the correct position on the disk before reading or writing. If all of the read and write operations are performed to individual disks, one at a time, the read-write time can become unmanageable. A disk array is a collection of physical disks. Performing I/O operations on multiple disks in a disk array can improve I/O speed and throughput. Hardware arrays present disk storage to the host operating system as LUNs.
Multipathed Disk Arrays
Some disk arrays provide multiple ports to access disk devices. These ports, coupled with the host bus adapter (HBA) controller and any data bus or I/O processor local to the array, compose multiple hardware paths to access the disk devices. This type of disk array is called a multipathed disk array. You can connect multipathed disk arrays to host systems in many different configurations, such as:
- Connecting multiple ports to different controllers on a single host
- Chaining ports through a single controller on a host
- Connecting ports to different hosts simultaneously
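These multiple paths are consumed by VxVM's dynamic multipathing (DMP) feature, mentioned on the next slide. A minimal sketch of inspecting them; controller names such as c1 are examples and vary by system:

    # List all controllers (HBAs) known to DMP.
    vxdmpadm listctlr all

    # List the enclosures (arrays) that DMP has discovered.
    vxdmpadm listenclosure all

    # Show the paths available through one controller.
    vxdmpadm getsubpaths ctlr=c1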
  • 28. Virtual Data Storage
- Volume Manager creates a virtual layer of data storage.
- Volume Manager volumes appear to applications to be physical disk partitions.
- Volumes have block and character device nodes in the /dev tree: /dev/vx/[r]dsk/...
Multidisk configurations: concatenation, mirroring, striping, RAID-5. High availability: disk group import and deport, hot relocation, dynamic multipathing. Also: disk spanning and load balancing.
Virtual Data Storage
Virtual Storage Management
VERITAS Volume Manager creates a virtual level of storage management above the physical device level by creating virtual storage objects. The virtual storage object that is visible to users and applications is called a volume.
What Is a Volume?
A volume is a virtual object, created by Volume Manager, that stores data. A volume consists of space from one or more physical disks on which the data is physically stored.
How Do You Access a Volume?
Volumes created by VxVM appear to the operating system as physical disks, and applications that interact with volumes work in the same way as with physical disks. All users and applications access volumes as contiguous address space using special device files, in a manner similar to accessing a disk partition. Volumes have block and character device nodes in the /dev tree. You can supply the name of the path to a volume in your commands and programs, in your file system and database configuration files, and in any other context where you would otherwise use the path to a physical disk partition.
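For example, a volume's device nodes can be used wherever a partition's would be. A minimal sketch, assuming a disk group acctdg containing a volume payvol; the -F flag is the Solaris/HP-UX form (Linux uses -t vxfs):

    # Character (raw) device node: create a VxFS file system.
    mkfs -F vxfs /dev/vx/rdsk/acctdg/payvol

    # Block device node: mount the file system.
    mkdir -p /pay
    mount -F vxfs /dev/vx/dsk/acctdg/payvol /pay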
  • 29. Volume Manager Control
When you place a disk under VxVM control, a cross-platform data sharing (CDS) disk layout is used, which ensures that the disk is accessible on different platforms, regardless of the platform on which the disk was initialized.
[Slide graphic: a CDS disk contains OS-reserved areas holding platform blocks, VxVM ID blocks, and AIX and HP-UX coexistence labels, plus a private region and a public region.]
Volume Manager-Controlled Disks
With Volume Manager, you enable virtual data storage by bringing a disk under Volume Manager control. By default in VxVM 4.0 and later, Volume Manager uses a cross-platform data sharing (CDS) disk layout. A CDS disk is consistently recognized by all VxVM-supported UNIX platforms and consists of:
- OS-reserved area: To accommodate platform-specific disk usage, 128K is reserved for disk labels, platform blocks, and platform-coexistence labels.
- Private region: The private region stores information, such as disk headers, configuration copies, and kernel logs, in addition to other platform-specific management areas that VxVM uses to manage virtual objects. The private region represents a small management overhead. Default block/sector size and default private region size by operating system: Solaris, 512 bytes, 65536 sectors (32 MB); HP-UX, 1024 bytes, 32768 sectors (32 MB); AIX, 512 bytes, 65536 sectors (32 MB); Linux, 512 bytes, 65536 sectors (32 MB).
- Public region: The public region consists of the remainder of the space on the disk. The public region represents the available space that Volume Manager can use to assign to volumes and is where an application stores data. Volume Manager never overwrites this area unless specifically instructed to do so.
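Initializing a disk with these regions is typically done with the vxdisksetup utility, covered in a later lesson. A minimal sketch; the device name is hypothetical, and the attribute values shown are examples rather than required settings:

    # Initialize a disk with the default CDS layout.
    /etc/vx/bin/vxdisksetup -i c1t1d0

    # The format and private region length can also be set explicitly.
    /etc/vx/bin/vxdisksetup -i c1t1d0 format=cdsdisk privlen=32m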
  • 30. Comparing CDS and Pre-4.x Disks
- CDS disk (4.x and later default): The private region (metadata) and public region (user data) are created on a single partition. Suitable for moving between different operating systems; not suitable for boot partitions.
- Sliced disk (pre-4.x Solaris default): The private region and public region are created on separate partitions. Not suitable for moving between different operating systems; suitable for boot partitions.
- Simple disk (pre-4.x HP-UX default): The private region and public region are created on the whole disk with specific offsets. Not suitable for moving between different operating systems; suitable for boot partitions. Note: This format is called hpdisk format as of VxVM 4.1 on the HP-UX platform.
Comparing CDS Disks and Pre-4.x Disks
The pre-4.x disk layouts are still available in VxVM 4.0 and later. These layouts are used for bringing the boot disk under VxVM control on operating systems that support that capability. On platforms that support bringing the boot disk under VxVM control, CDS disks cannot be used for boot disks. CDS disks have specific disk layout requirements that enable a common disk layout across different platforms, and these requirements are not compatible with the particular platform-specific requirements of boot disks. Therefore, when placing a boot disk under VxVM control, you must use a pre-4.x disk layout (sliced on Solaris, hpdisk on HP-UX). For nonboot disks, you can convert CDS disks to sliced disks and vice versa by using VxVM utilities. Other disk types, working with boot disks, and transferring data across platforms with CDS disks are topics covered in detail in later lessons.
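One such utility is vxcdsconvert, which converts non-CDS disks and disk groups to the CDS layout; in the other direction, an empty disk can simply be reinitialized with a different format. A rough sketch with hypothetical disk group and disk names; check the manual pages for the full set of options:

    # Convert one non-CDS disk in a disk group to the CDS layout.
    vxcdsconvert -g mydg disk mydg01

    # Reinitialize an empty disk with the sliced layout instead.
    /etc/vx/bin/vxdisksetup -i c1t1d0 format=sliced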
  • 31. Volume Manager Storage Objects
[Slide graphic: the disk group acctdg contains the volumes expvol and payvol. expvol contains the plex expvol-01; payvol contains the plexes payvol-01 and payvol-02. The plexes are built from subdisks (acctdg01-01, acctdg01-02, acctdg02-01, acctdg03-01, acctdg03-02) carved from the VxVM disks acctdg01, acctdg02, and acctdg03, each of which corresponds to a physical disk.]
Disk Groups
A disk group is a collection of VxVM disks that share a common configuration. You group disks into disk groups for management purposes, such as to hold the data for a specific application or set of applications. For example, data for accounting applications can be organized in a disk group called acctdg. A disk group configuration is a set of records with detailed information about related Volume Manager objects in a disk group, their attributes, and their connections. Volume Manager objects cannot span disk groups. For example, a volume's subdisks, plexes, and disks must be derived from the same disk group as the volume. You can create additional disk groups as necessary. Disk groups enable you to group disks into logical collections. Disk groups and their components can be moved as a unit from one host machine to another.
Volume Manager Disks
A Volume Manager (VxVM) disk represents the public region of a physical disk that is under Volume Manager control. Each VxVM disk corresponds to one physical disk. Each VxVM disk has a unique virtual disk name called a disk media name. The disk media name is a logical name used for Volume Manager administrative purposes. Volume Manager uses the disk media name when assigning space to volumes. A VxVM disk is given a disk media name when it is added to a disk group. Default disk media name: diskgroup##
  • 32. You can supply the disk media name or allow Volume Manager to assign a default name. The disk media name is stored with a unique disk ID to avoid name collision. After a VxVM disk is assigned a disk media name, the disk is no longer referred to by its physical address. The physical address (for example, c#t#d# or hdisk#) becomes known as the disk access record.
Subdisks
A VxVM disk can be divided into one or more subdisks. A subdisk is a set of contiguous disk blocks that represent a specific portion of a VxVM disk, which is mapped to a specific region of a physical disk. A subdisk is a subsection of a disk's public region. A subdisk is the smallest unit of storage in Volume Manager. Therefore, subdisks are the building blocks for Volume Manager objects. A subdisk is defined by an offset and a length in sectors on a VxVM disk. Default subdisk name: DMname-##
A VxVM disk can contain multiple subdisks, but subdisks cannot overlap or share the same portions of a VxVM disk. Any VxVM disk space that is not reserved or that is not part of a subdisk is free space. You can use free space to create new subdisks. Conceptually, a subdisk is similar to a partition. Both a subdisk and a partition divide a disk into pieces defined by an offset address and length. Each of those pieces represents a reservation of contiguous space on the physical disk. However, while the maximum number of partitions on a disk is limited by some operating systems, there is no theoretical limit to the number of subdisks that can be attached to a single plex. This number has been limited by default to a value of 4096. If required, this default can be changed using the vol_subdisk_num tunable parameter. For more information on tunable parameters, see the VERITAS Volume Manager Administrator's Guide.
Plexes
Volume Manager uses subdisks to build virtual objects called plexes. A plex is a structured or ordered collection of subdisks that represents one copy of the data in a volume. A plex consists of one or more subdisks located on one or more physical disks. The length of a plex is determined by the last block that can be read or written on the last subdisk in the plex. Default plex name: volume_name-##
Volumes
A volume is a virtual storage device that is used by applications in a manner similar to a physical disk. Due to its virtual nature, a volume is not restricted by the physical size constraints that apply to a physical disk. A VxVM volume can be as large as the total of available, unreserved free physical disk space in the disk group. A volume consists of one or more plexes. Default volume name: volume_name##
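These objects can be created and inspected from the command line as well as from the GUI. A minimal sketch using the object names from the slide; the device names are examples:

    # Initialize a disk group containing one VxVM disk.
    vxdg init acctdg acctdg01=c1t1d0

    # Add a second disk to the disk group.
    vxdg -g acctdg adddisk acctdg02=c1t2d0

    # Create a 1 GB volume; VxVM allocates the subdisks and plex.
    vxassist -g acctdg make payvol 1g

    # Display the volume/plex/subdisk hierarchy of the disk group.
    vxprint -g acctdg -ht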
• 33. Volume Layouts
Volume layout: The way plexes are configured to remap the volume address space through which I/O is redirected.
[Figure: volume layout categories. Disk Spanning: concatenated; striped (RAID-0). Data Redundancy: mirrored; RAID-5. Layered: for example, striped and mirrored (RAID-0+1)]
Volume Manager RAID Levels
RAID
RAID is an acronym for Redundant Array of Independent Disks. RAID is a storage management approach in which an array of disks is created, and part of the combined storage capacity of the disks is used to store duplicate information about the data in the array. By maintaining a redundant array of disks, you can regenerate data in the case of disk failure.
RAID configuration models are classified in terms of RAID levels, which are defined by the number of disks in the array, the way data is spanned across the disks, and the method used for redundancy. Each RAID level has specific features and performance benefits that involve a trade-off between performance and reliability.
Volume Layouts
RAID levels correspond to volume layouts. A volume's layout refers to the organization of plexes in a volume. Volume layout is the way plexes are configured to remap the volume address space through which I/O is redirected at run time. Volume layouts are based on the concepts of disk spanning, redundancy, and resilience.
Disk Spanning
Disk spanning is the combining of disk space from multiple physical disks to form one logical drive. Disk spanning has two forms:
• 34. Concatenation: Concatenation is the mapping of data in a linear manner across two or more disks. In a concatenated volume, subdisks are arranged both sequentially and contiguously within a plex. Concatenation allows a volume to be created from multiple regions of one or more disks if there is not enough space for an entire volume on a single region of a disk.
Striping: Striping is the mapping of data in equally sized chunks alternating across multiple disks. Striping is also called interleaving. In a striped volume, data is spread evenly across multiple disks. Stripes are equally sized fragments that are allocated alternately and evenly to the subdisks of a single plex. There must be at least two subdisks in a striped plex, each of which must exist on a different disk. Configured properly, striping helps not only to balance I/O but also to increase throughput.
Data Redundancy
To protect data against disk failure, the volume layout must provide some form of data redundancy. Redundancy is achieved in two ways:
Mirroring: Mirroring is maintaining two or more copies of volume data. A mirrored volume uses multiple plexes to duplicate the information contained in a volume. Although a volume can have a single plex, at least two are required for true mirroring (redundancy of data). Each of these plexes should contain disk space from different disks for the redundancy to be useful.
Parity: Parity is a calculated value used to reconstruct data after a failure by performing an exclusive OR (XOR) procedure on the data. Parity information can be stored on a disk. If part of a volume fails, the data on that portion of the failed volume can be re-created from the remaining data and parity information.
A RAID-5 volume uses striping to spread data and parity evenly across multiple disks in an array. Each stripe contains a parity stripe unit and data stripe units. Parity can be used to reconstruct data if one of the disks fails. In comparison to the performance of striped volumes, write throughput of RAID-5 volumes decreases, because parity information needs to be updated each time data is written. However, in comparison to mirroring, the use of parity reduces the amount of space required.
Resilience
A resilient volume, also called a layered volume, is a volume that is built on one or more other volumes. Resilient volumes enable the mirroring of data at a more granular level. For example, a resilient volume can be concatenated or striped at the top level and then mirrored at the bottom level. A layered volume is a virtual Volume Manager object that nests other virtual objects inside itself. Layered volumes provide better fault tolerance by mirroring data at a more granular level.
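Each of these layouts can be requested with the layout attribute of vxassist, which is introduced in a later lesson. A minimal sketch, assuming a disk group named datadg with enough disks for each layout:

    # Concatenated (the default layout)
    vxassist -g datadg make concatvol 1g

    # Striped across two columns
    vxassist -g datadg make stripevol 1g layout=stripe ncol=2

    # Mirrored (two plexes)
    vxassist -g datadg make mirrorvol 1g layout=mirror nmirror=2

    # RAID-5 (data plus parity)
    vxassist -g datadg make raidvol 1g layout=raid5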
• 35. Lesson Summary
• Key Points
This lesson described the virtual storage objects that VERITAS Volume Manager uses to manage physical disk storage, including disk groups, VxVM disks, subdisks, plexes, and volumes.
• Reference Materials
VERITAS Volume Manager Administrator's Guide
Labs and solutions for this lesson are located on the following pages:
Appendix A provides complete lab instructions: "Lab 1: Introducing the Lab Environment."
Appendix B provides complete lab instructions and solutions: "Lab 1 Solutions: Introducing the Lab Environment."
Lab 1: Introducing the Lab Environment
In this lab, you are introduced to the lab environment, system, and disks that you will use throughout this course.
For Lab Exercises, see Appendix A. For Lab Solutions, see Appendix B.
  • 37. Lesson 2 Installation and Interfaces
• 38. Lesson Introduction
• Lesson 1: Virtual Objects
• Lesson 2: Installation and Interfaces
• Lesson 3: Creating a Volume and File System
• Lesson 4: Selecting Volume Layouts
• Lesson 5: Making Basic Configuration Changes
• Lesson 6: Administering File Systems
• Lesson 7: Resolving Hardware Problems
Lesson Topics and Objectives
After completing this lesson, you will be able to:
Topic 1: Installation Prerequisites
Identify operating system compatibility and other preinstallation considerations.
Topic 2: Adding License Keys
Obtain license keys, add licenses by using vxlicinst, and view licenses by using vxlicrep.
Topic 3: VERITAS Software Packages
Identify the packages that are included in the Storage Foundation 5.0 software.
Topic 4: Installing Storage Foundation
Install Storage Foundation interactively, by using the installation utility.
Topic 5: Storage Foundation User Interfaces
Describe the three Storage Foundation user interfaces.
Topic 6: Managing the VEA Server
Install, start, and manage the VEA server.
• 39. OS Compatibility
The VERITAS Storage Foundation product line operates on the following operating systems:

SF Version  Solaris         HP-UX          AIX            Linux
5.0         8, 9, 10        11i v2 (0904)  5.2, 5.3       RHEL 4 Update 3, SLES 9 SP3
4.1         8, 9, 10, x86   11i v2 (0904)  No release     RHEL 4 Update 1 (2.6), SLES 9 SP1
4.0         7, 8, 9         No release     5.1, 5.2, 5.3  RHEL 3 Update 2 (i686)
3.5.x       2.6, 7, 8       11.11i (0902)  No release     No release*

* Note: Version 3.2.2 on Linux has functionality equivalent to 3.5 on Solaris.

Installation Prerequisites
OS Version Compatibility
Before installing Storage Foundation, ensure that the version of Storage Foundation that you are installing is compatible with the version of the operating system that you are running. You may need to upgrade your operating system before you install the latest Storage Foundation version.
VERITAS Storage Foundation 5.0 operates on the following operating systems:
Solaris 8 (SPARC Platform 32-bit and 64-bit)
Solaris 9 (SPARC Platform 32-bit and 64-bit)
Solaris 10 (SPARC Platform 64-bit)
September 2004 release of HP-UX 11i version 2.0 or later
AIX 5.2 ML6 (legacy)
AIX 5.3 TL4 with SP4
Red Hat Enterprise Linux 4 (RHEL 4) with Update 3 (2.6.9-34 kernel) on AMD Opteron or Intel Xeon EM64T (x86_64)
SUSE Linux Enterprise Server 9 (SLES 9) with SP3 (2.6.5-7.244, 252 kernels) on AMD Opteron or Intel Xeon EM64T (x86_64)
Check the VERITAS Storage Foundation Release Notes for additional operating system requirements.
• 40. Support Resources
[Screenshot: the VERITAS Support Web site, showing product selection for Storage Foundation for UNIX, technote search, patches, and support services]
http://support.veritas.com
Version Release Differences
With each new release of the Storage Foundation software, changes are made that may affect the installation or operation of Storage Foundation in your environment. By reading the version release notes and installation documentation included with the product, you can stay informed of any changes.
For more information about specific releases of VERITAS Storage Foundation, visit the VERITAS Support Web site at:
http://support.veritas.com
This site contains product and patch information, a searchable knowledge base of technical notes, access to product-specific news groups and e-mail notification services, and other information about contacting technical support staff.
Note: If you open a case with VERITAS Support, you can view updates at:
http://support.veritas.com/viewcase
You can access your case by entering the e-mail address associated with your case and the case number.
• 41. Storage Foundation Licensing
• Licensing utilities are contained in the VRTSvlic package, which is common to all VERITAS products.
• To obtain a license key:
- Create a vLicense account and retrieve license keys online. vLicense is a Web site that you can use to retrieve and manage your license keys.
or
- Complete a License Key Request form and fax it to VERITAS customer support.
• To generate a license key, you must provide your:
- Software serial number
- Customer number
- Order number
Note: You may also need the network and RDBMS platform, system configuration, and software revision levels.
Adding License Keys
You must have your license key before you begin installation, because you are prompted for the license key during the installation process. A new license key is not necessary if you are upgrading Storage Foundation from a previously licensed version of the product.
If you have an evaluation license key, you must obtain a permanent license key when you purchase the product. The VERITAS licensing mechanism checks the system date to verify that it has not been set back. If the system date has been reset, the evaluation license key becomes invalid.
Obtaining a License Key
License keys are delivered on Software License Certificates at the conclusion of the order fulfillment process. The certificate specifies the product keys and the number of product licenses purchased. A single key enables you to install the product on the number and type of systems for which you purchased the license.
License keys are non-node-locked. In a non-node-locked model, one key can unlock a product on different servers regardless of host ID and architecture type. In a node-locked model, a single license is tied to a single specific server; for each server, you need a different key.
• 42. Generating License Keys
http://vlicense.veritas.com
• Access automatic license key generation and delivery.
• Manage and track license key inventory and usage.
• Locate and reissue lost license keys.
• Report, track, and resolve license key issues online.
• Consolidate and share license key information with other accounts.
• To add a license key: vxlicinst
• License keys are installed in: /etc/vx/licenses/lic
• To view installed license key information: vxlicrep
Displayed information includes:
- License key number
- Name of the VERITAS product that the key enables
- Type of license
- Features enabled by the key
Generating License Keys with vLicense
VERITAS vLicense (vlicense.veritas.com) is a self-service online license management system. vLicense supports production license keys only. Temporary, evaluation, or demonstration keys must be obtained through your VERITAS sales representative.
Note: The VRTSvlic package can coexist with previous licensing packages, such as VRTSlic. If you have old license keys installed in /etc/vx/elm, leave this directory on your system. The old and new license utilities can coexist.
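As a quick sketch of the licensing workflow (the key string shown is a placeholder, not a real license; vxlicinst typically prompts for the key when run without options):

    # Install a license key
    vxlicinst
    # Enter your license key: XXXX-XXXX-XXXX-XXXX-XXXX-XXX

    # Report all installed licenses, the products they enable, and their features
    vxlicrep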
• 43. What Gets Installed?
In version 5.0, the default installation behavior is to install all packages in Storage Foundation Enterprise HA. In previous versions, the default behavior was to install only the packages for which you had typed in a license key.
In 5.0, you can choose to install:
• All packages included in Storage Foundation Enterprise HA
or
• All packages included in Storage Foundation Enterprise HA, minus any optional packages, such as documentation and software development kits
VERITAS Software Packages
When you install a product suite, the component product packages are installed automatically. When installing Storage Foundation, be sure to follow the instructions in the product release notes and installation guides.
Package Space Requirements
Before you install any of the packages, confirm that your system has enough free disk space to accommodate the installation. Storage Foundation programs and files are installed in the /, /usr, and /opt file systems. Refer to the product installation guides for a detailed list of package space requirements.
Solaris Note
VxFS often requires more than the default 8K kernel stack size, so entries are added to the /etc/system file. This increases the kernel thread stack size of the system to 24K. The original /etc/system file is copied to /etc/fs/vxfs/system.preinstall.
• 44. Optional Features
VERITAS FlashSnap
- Enables point-in-time copies of data with minimal performance overhead
- Includes disk group split/join, FastResync, and storage checkpointing (in conjunction with VxFS)
VERITAS Volume Replicator
- Enables replication of data to remote locations
- VRTSvrdoc: VVR documentation
VERITAS Cluster Volume Manager
- Used for high availability environments
These features are included in the VxVM package, but they require a separate license.
VERITAS Quick I/O for Databases
- Enables applications to access preallocated VxFS files as raw character devices
VERITAS Cluster File System
- Enables multiple hosts to mount and perform file operations concurrently on the same file
Dynamic Storage Tiering
- Enables support for multivolume file systems by managing the placement of files through policies that control both initial file location and the circumstances under which existing files are relocated
These features are included in the VxFS package, but they require a separate license.
Storage Foundation Optional Features
Several optional features do not require separate packages, only additional licenses. The following optional features are built into Storage Foundation and can be enabled with additional licenses:
VERITAS FlashSnap: FlashSnap facilitates point-in-time copies of data, while enabling applications to maintain optimal performance, by enabling features such as FastResync and disk group split and join functionality. FlashSnap provides an efficient method to perform offline and off-host processing tasks, such as backup and decision support.
VERITAS Volume Replicator: Volume Replicator augments Storage Foundation functionality to enable you to replicate data to remote locations over any IP network. Replicated copies of data can be used for disaster recovery, off-host processing, off-host backup, and application migration. Volume Replicator ensures maximum business continuity by delivering true disaster recovery and flexible off-host processing.
Cluster Functionality: Storage Foundation includes optional cluster functionality that enables Storage Foundation to be used in a cluster environment. A cluster is a set of hosts that share a set of disks; each host is referred to as a node in the cluster. When the cluster functionality is enabled, all of the nodes in the cluster can share VxVM objects. The main benefits of cluster configurations are high availability and off-host processing.
VERITAS Cluster Server (VCS): VCS supplies two major components integral to CFS: the Low Latency Transport (LLT) package and the Group
• 45. Membership and Atomic Broadcast (GAB) package. LLT provides node-to-node communications and monitors network communications. GAB provides cluster state, configuration, and membership services, and it monitors the heartbeat links between systems to ensure that they are active.
VERITAS Cluster File System (CFS): CFS is a shared file system that enables multiple hosts to mount and perform file operations concurrently on the same file.
VERITAS Cluster Volume Manager (CVM): CVM creates the cluster volumes necessary for mounting cluster file systems.
VERITAS Quick I/O for Databases: VERITAS Quick I/O for Databases (referred to as Quick I/O) enables applications to access preallocated VxFS files as raw character devices. This provides the administrative benefits of running databases on file systems without the performance degradation usually associated with databases created on file systems.
Dynamic Storage Tiering (DST): DST enables support for multivolume file systems by managing the placement of files through policies that control both initial file location and the circumstances under which existing files are relocated.
• 46. Installation Menu

Storage Foundation and High Availability Solutions 5.0

Product                                          Version  Installed  Licensed
Veritas Cluster Server                                    no         no
Veritas File System                                       no         no
Veritas Volume Manager                                    no         no
Veritas Volume Replicator                                 no         no
Veritas Storage Foundation                                no         no
Veritas Storage Foundation for Oracle                     no         no
Veritas Storage Foundation for DB2                        no         no
Veritas Storage Foundation for Sybase                     no         no
Veritas Storage Foundation Cluster File System            no         no
Veritas Storage Foundation for Oracle RAC                 no         no

Task Menu:
I) Install/Upgrade a Product       C) Configure an Installed Product
L) License a Product               P) Perform a Preinstallation Check
U) Uninstall a Product             D) View a Product Description
Q) Quit                            ?) Help

Enter a Selection: [I,C,L,P,U,D,Q,?]

Installing Storage Foundation
The Installer is a menu-based installation utility that you can use to install any product contained on the VERITAS Storage Solutions CD-ROM. This utility acts as a wrapper for existing product installation scripts and is most useful when you are installing multiple VERITAS products or bundles, such as VERITAS Storage Foundation or VERITAS Storage Foundation for Databases.
Note: The example on the slide is from a Solaris platform. Some of the products shown on the menu may not be available on other platforms. For example, VERITAS File System is available only as part of Storage Foundation on HP-UX.
Note: The VERITAS Storage Solutions CD-ROM contains an installation guide that describes how to use the installer utility. You should also read all product installation guides and release notes even if you are using the installer utility.
To add the Storage Foundation packages using the installer utility:
1 Log on as superuser.
2 Mount the VERITAS Storage Solutions CD-ROM.
3 Locate and invoke the installer script:
cd /cdrom/CD_name
./installer
4 If the licensing utilities are installed, the product status page is displayed. This list displays the VERITAS products on the CD-ROM and the installation and licensing status of each product. If the licensing utilities are not installed, you receive a message indicating that the installation utility could not determine product status.
• 47. 5 Type I to install a product. Follow the instructions to select the product that you want to install. Installation begins automatically.
When you add Storage Foundation packages by using the installer utility, all packages are installed. If you want to add only a specific package, for example, only the VRTSvmdoc package, then you must add the package manually from the command line.
After installation, the installer creates three text files that can be used for auditing or debugging. The name and location of each file are displayed at the end of the installation, and the files are located in /opt/VRTS/install/logs:
Installation log file: Contains all commands executed during installation, their output, and any errors generated by the commands; used for debugging installation problems and for analysis by VERITAS Support.
Response file: Contains configuration information entered during the procedure; can be used for future installations by running the installer script with the -responsefile option.
Summary file: Contains the output of VERITAS product installation scripts; shows the products that were installed, the locations of the log and response files, and the installation messages displayed.
Methods for Adding Storage Foundation Packages
A first-time installation of Storage Foundation involves adding the software packages and configuring Storage Foundation for first-time use. You can add VERITAS product packages by using one of three methods:
VERITAS Installation Menu (installer): Installs multiple VERITAS products interactively; installs packages and configures Storage Foundation for first-time use.
Product installation scripts (installvm, installfs, installsf): Install individual VERITAS products interactively; install packages and configure Storage Foundation for first-time use.
Native operating system package installation commands (pkgadd on Solaris, swinstall on HP-UX, installp on AIX, rpm on Linux): Install individual packages, for example, when using your own custom installation scripts. First-time Storage Foundation configuration must be run as a separate step; to configure SF, run vxinstall.
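For example, a manual, package-level installation on Solaris might look like the following sketch (the CD-ROM path and the package order are illustrative; check the installation guide for the authoritative package list and ordering):

    # Add the licensing, Volume Manager, and File System packages from the product CD
    cd /cdrom/CD_name/pkgs
    pkgadd -d . VRTSvlic VRTSvxvm VRTSvxfs

    # Then configure Storage Foundation for first-time use
    vxinstall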
• 48. Configuring Storage Foundation
When you install Storage Foundation, you are asked if you want to configure it.
Default Disk Group
• You can set up a system-wide default disk group to which Storage Foundation commands default if you do not specify a disk group.
• If you choose not to set a default disk group at installation, you can set the default disk group later from the command line.
Note: In Storage Foundation 4.0 and later, the rootdg requirement no longer exists.
Enclosure-Based Naming
[Figure: a host with controllers c1 and c2 connected to disk enclosures enc0, enc1, and enc2]
• Standard device naming is based on controllers, for example, c1t0d0s2.
• Enclosure-based naming is based on disk enclosures, for example, enc0.
Configuring Storage Foundation
When you install Storage Foundation, you are asked if you want to configure it during installation. This includes deciding whether to use enclosure-based naming and a default disk group.
What Is Enclosure-Based Naming?
An enclosure, or disk enclosure, is an intelligent disk array, which permits hot-swapping of disks. With Storage Foundation, disk devices can be named for enclosures rather than for the controllers through which they are accessed, as with standard disk device naming (for example, c0t0d0 or hdisk2). Enclosure-based naming allows Storage Foundation to access enclosures as separate physical entities. By configuring redundant copies of your data on separate enclosures, you can safeguard against the failure of one or more enclosures. This is especially useful in a storage area network (SAN) that uses Fibre Channel hubs or fabric switches and when managing the dynamic multipathing (DMP) feature of Storage Foundation. For example, if two paths (c1t99d0 and c2t99d0) exist to a single disk in an enclosure, VxVM can use a single DMP metanode, such as enc0_0, to access the disk.
What Is a Default Disk Group?
The main benefit of creating a default disk group is that Storage Foundation commands default to that disk group if you do not specify a disk group on the command line. defaultdg specifies the default disk group and is an alias for the disk group name that should be assumed if a disk group is not specified in a command.
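To see how multiple physical paths collapse into a single enclosure-based device, you can query DMP. A minimal sketch, assuming an enclosure that VxVM has named enc0; the device and controller names are illustrative:

    # Show the DMP node and all of its subpaths for one device
    vxdmpadm getsubpaths dmpnodename=enc0_0

    # List the paths attached to a specific controller
    vxdmpadm getsubpaths ctlr=c1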
• 49. Storage Foundation Management Server
Storage Foundation 5.0 provides central management capability by introducing the Storage Foundation Management Server (SFMS). With SF 5.0, it is possible to configure an SF host as a managed host or as a standalone host during installation. A Management Server and Authentication Broker must have been set up previously if a managed host is required during installation.
To configure a server as a standalone host during installation, answer "n" when asked if you want to enable SFMS management. You can change a standalone host to a managed host at a later time.
Note: This course does not cover SFMS and managed hosts.
Storage Foundation Management Server
Storage Foundation 5.0 provides central management capability by introducing a Storage Foundation Management Server (SFMS). For more information, refer to the Storage Foundation Management Server Administrator's Guide.
• 50. Verifying Package Installation
To verify package installation, use OS-specific commands:
• Solaris: pkginfo -l VRTSvxvm
• HP-UX: swlist -l product VRTSvxvm
• AIX: lslpp -l VRTSvxvm
• Linux: rpm -qa VRTSvxvm
Verifying Package Installation
If you are not sure whether VERITAS packages are installed, or if you want to verify which packages are installed on the system, you can view information about installed packages by using OS-specific commands to list package information.
Solaris
To list all installed packages on the system:
pkginfo
To restrict the list to installed VERITAS packages:
pkginfo | grep VRTS
To display detailed information about a package:
pkginfo -l VRTSvxvm
HP-UX
To list all installed packages on the system:
swlist -l product
To restrict the list to installed VERITAS packages:
swlist -l product | grep VRTS
To display detailed information about a package:
swlist -l product VRTSvxvm
• 51. AIX
To list all installed packages on the system:
lslpp -l
To restrict the list to installed VERITAS packages, type:
lslpp -l 'VRTS*'
To verify that a particular fileset has been installed, use its name, for example:
lslpp -l VRTSvxvm
Linux
To verify package installation on the system:
rpm -qa | grep VRTS
To verify a specific package installation on the system:
rpm -q[i] package_name
For example, to verify that the VRTSvxvm package is installed:
rpm -q VRTSvxvm
The -i option lists detailed information about the package.
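The detailed queries print the package version and status, which is useful when checking patch levels. An abbreviated, illustrative Solaris example follows; the version string is a placeholder, and the real output contains additional fields:

    pkginfo -l VRTSvxvm
    #    PKGINST:  VRTSvxvm
    #       NAME:  VERITAS Volume Manager
    #    VERSION:  5.0,...
    #     STATUS:  completely installed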
• 52. Storage Foundation User Interfaces
Storage Foundation supports three user interfaces:
• VERITAS Enterprise Administrator (VEA): A GUI that provides access through icons, menus, wizards, and dialog boxes
Note: This course covers using VEA only on a standalone host.
• Command-Line Interface (CLI): UNIX utilities that you invoke from the command line
• Volume Manager Support Operations (vxdiskadm): A menu-driven, text-based interface also invoked from the command line
Note: vxdiskadm provides access only to certain disk and disk group management functions.
Storage Foundation User Interfaces
Storage Foundation supports three user interfaces. Volume Manager objects created by one interface are compatible with those created by the other interfaces.
VERITAS Enterprise Administrator (VEA): VEA is a graphical user interface to Volume Manager and other VERITAS products. VEA provides access to Storage Foundation functionality through visual elements, such as icons, menus, wizards, and dialog boxes. Using VEA, you can manipulate Volume Manager objects and also perform common file system operations.
Command-Line Interface (CLI): The CLI consists of UNIX utilities that you invoke from the command line to perform Storage Foundation and standard UNIX tasks. You can use the CLI not only to manipulate Volume Manager objects, but also to perform scripting and debugging functions. Most of the CLI commands require superuser or other appropriate privileges. The CLI commands perform functions that range from the simple to the complex, and some require detailed user input.
Volume Manager Support Operations (vxdiskadm): The Volume Manager Support Operations interface, commonly called vxdiskadm, is a menu-driven, text-based interface that you can use for disk and disk group administration functions. The vxdiskadm interface has a main menu from which you can select storage management tasks.
A single VEA task may perform multiple command-line tasks.
• 53. VEA: Main Window
[Screenshot: the VEA main window, with the menu bar, quick access bar, and toolbar labeled]
Three ways to access tasks:
1. Menu bar
2. Toolbar
3. Context menu (right-click)
Using the VEA Interface
The VERITAS Enterprise Administrator (VEA) is the graphical user interface for Storage Foundation and other VERITAS products. You can use the Storage Foundation features of VEA to administer disks, volumes, and file systems on local or remote machines.
VEA is a Java-based interface that consists of a server and a client. You must install the VEA server on a UNIX machine that is running VERITAS Volume Manager. The VEA client can run on any machine that supports the Java 1.4 Runtime Environment, which can be Solaris, HP-UX, AIX, Linux, or Windows.
Some Storage Foundation features of VEA include:
Remote Administration
Security
Multiple Host Support
Multiple Views of Objects
Setting VEA Preferences
You can customize general VEA environment attributes through the Preferences window (select Tools->Preferences).
• 54. VEA: Viewing Tasks and Commands
To view the underlying command lines, double-click a task.
[Screenshot: the VEA Task Log Details window, showing the CLI commands executed for a selected task]
Viewing Commands Through the Task Log
The Task Log displays a history of the tasks performed in the current session. Each task is listed with properties, such as the target object of the task, the host, the start time, the task status, and task progress.
Displaying the Task Log window: To display the Task Log, click the Logs tab at the left of the main window.
Clearing the Task History: Tasks are persistent in the Task History window. To remove completed tasks from the window, right-click a task and select Clear All Finished Tasks.
Viewing CLI Commands: To view the command lines executed for a task, double-click a task. The Task Log Details window is displayed for the task. The CLI commands issued are displayed in the Commands Executed field of the Task Details section.
• 55. Command-Line Interface
You can administer CLI commands from the UNIX shell prompt. Commands can be executed individually or combined into scripts. Most commands are located in /usr/sbin. Add this directory to your PATH environment variable to access the commands.
Examples of CLI commands include:
vxassist  Creates and manages volumes
vxprint   Lists VxVM configuration records
vxdg      Creates and manages disk groups
vxdisk    Administers disks under VxVM control
Using the Command-Line Interface
The Storage Foundation command-line interface (CLI) provides commands used for administering Storage Foundation from the shell prompt on a UNIX system. CLI commands can be executed individually for specific tasks or combined into scripts.
The Storage Foundation command set ranges from commands requiring minimal user input to commands requiring detailed user input. Many of the Storage Foundation commands require an understanding of Storage Foundation concepts. Most Storage Foundation commands require superuser or other appropriate access privileges.
CLI commands are detailed in manual pages.
Accessing Manual Pages for CLI Commands
Detailed descriptions of the VxVM and VxFS commands, the options for each utility, and details on how to use them are located in the VxVM and VxFS manual pages. Manual pages are installed by default in /opt/VRTS/man. Add this directory to the MANPATH environment variable, if it is not already added.
To access a manual page, type man command_name. Examples:
man vxassist
man mount_vxfs
Linux Note
On Linux, you must also set the MANSECT and MANPATH variables.
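For a Bourne-style shell, the environment setup described above might look like the following sketch (adjust the paths for your shell and platform):

    PATH=$PATH:/usr/sbin:/opt/VRTS/bin
    MANPATH=$MANPATH:/opt/VRTS/man
    export PATH MANPATH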
  • 56. symantcc The vxdi skadm Interface vxdiskadm Volume Manager Support Operations Menu: volumeManager/Disk 1 Add or initialize one or more disks 2 Encapsulate one or more disks 3 Remove a disk 4 Remove a disk for replacement 5 Replace a failed or removed disk list List disk information Display help about menu ?? Display help about the menuing system q Exit from menus Note: This example is from a Solaris platform. The options may be slightly different on other platforms. Using the vxdiskadm Interface The vxdiskadm command is a CLI command that you can use to launch the Volume Manager Support Operations menu interface. You can use the Volume Manager Support Operations interface, commonly referred to as vxdiskadm. to perform common disk management tasks. The vxdiskadm interface is restricted 10 managing disk objects and does not provide a means of handl ing all other VxVM objects. Each option in the vxdiskadm interface invokes a sequence ofCLI commands. The vxdiskadm interlace presents disk management tasks to the user as a series of questions. or prompts. To start vxdiskadm. you type vxdiskadm at the command line to display the main menu. The vxdiskadm main menu contains a selection of main tasks that you can use to manipulate Volume Manager objects. Each entry in the main menu leads you through a particular task by providing you with information and prompts. Default answers arc provided for many questions, so you can select common answers. The menu also contains options for listing disk information, displaying help information. and quilling the menu interface. The tasks listed in the main menu are covered throughout this training. Options available in the menu differ somewhat by platform. See the vxdiskadm (1m) manual page for more details on how to use vxdiskadm. Note: vxdiskadm can be run only once per host. A lock file prevents multiple instances from running: /var / spool / locks/ .DrSKADO. LOCK. 2-20 Copynqht F; ';:006 Svmamcc Corporation All rights reserved VERITAS Storage Foundation 5.0 for UNIX. Fundamentals
• 57. Installing VEA
Installation administration file (Solaris only): VRTSobadmin
Client packages:
• VRTSobgui, VRTSat, VRTSpbx, VRTSicsco (UNIX)
• windows/VRTSobgui.msi (Windows)
Server packages:
• VRTSob
• VRTSobc33
• VRTSaa
• VRTSccg
• VRTSdsa
• VRTSvail
• VRTSvmpro
• VRTSfspro
• VRTSddlpr
Install the VEA server on a UNIX machine running Storage Foundation. Install the VEA client on any machine that supports the Java 1.4 Runtime Environment (or later).
VEA is installed automatically when you run the SF installation scripts. You can also install VEA by adding packages manually.
Managing the VEA Software
VEA consists of a server and a client. You must install the VEA server on a UNIX machine that is running VERITAS Volume Manager. You can install the VEA client on the same machine or on any other UNIX or Windows machine that supports the Java 1.4 Runtime Environment (or later).
Installing the VEA Server and Client on UNIX
If you install Storage Foundation by using the Installer utility, you are prompted to install both the VEA server and client packages automatically. If you did not install all of the components by using the Installer, you can add the VEA packages separately.
It is recommended that you upgrade VEA to the latest version released with Storage Foundation in order to take advantage of new functionality built into VEA. You can use VEA 4.1 and later to manage 3.5.2 and later releases.
When adding packages manually, you must install the Volume Manager packages (VRTSvlic, VRTSvxvm) and the infrastructure packages (VRTSat, VRTSpbx, VRTSicsco) before installing the VEA server packages. After installation, also add the VEA startup scripts directory, /opt/VRTSob/bin, to the PATH environment variable.
• 58. Starting the VEA Server and Client
Once installed, the VEA server starts up automatically at system startup.
To start the VEA server manually:
1. Log on as superuser.
2. Start the VEA server by invoking the server program:
/opt/VRTSob/bin/vxsvc (on Solaris and HP-UX)
/opt/VRTSob/bin/vxsvcctrl (on Linux)
When the VEA server is started:
/var/vx/isis/vxisis.lock ensures that only one instance of the VEA server is running.
/var/vx/isis/vxisis.log contains server process log messages.
To start the VEA client:
On UNIX: /opt/VRTSob/bin/vea
On Windows: Select Start->Programs->VERITAS->VERITAS Enterprise Administrator.
Starting the VEA Server
In order to use VEA, the VEA server must be running on the UNIX machine to be administered. Only one instance of the VEA server should be running at a time. Once installed, the VEA server starts up automatically at system startup. You can start the VEA server manually by invoking vxsvc (on Solaris and HP-UX), vxsvcctrl (on Linux), or by invoking the startup script itself, for example:
Solaris: /etc/rc2.d/S73isisd start
HP-UX: /sbin/rc2.d/S700isisd start
The VEA client can provide simultaneous access to multiple host machines. Each host machine must be running the VEA server.
Note: Entries for your user name and password must exist in the password file or corresponding Network Information Name Service table on the machine to be administered. Your user name must also be included in the VERITAS administration group (vrtsadm, by default) in the group file or NIS group table. If the vrtsadm entry does not exist, only root can run VEA.
You can configure VEA to connect automatically to hosts when you start the VEA client. In the VEA main window, the Favorite Hosts node can contain a list of hosts that are reconnected by default at the startup of the VEA client.
• 59. Managing VEA
The VEA server program is:
/opt/VRTSob/bin/vxsvc (Solaris and HP-UX)
/opt/VRTSob/bin/vxsvcctrl (Linux)
To confirm that the VEA server is running:
vxsvc -m (Solaris and HP-UX)
vxsvcctrl status (Linux)
To stop and restart the VEA server:
/etc/init.d/isisd restart (Solaris)
/sbin/init.d/isisd restart (HP-UX)
To kill the VEA server process:
vxsvc -k (Solaris and HP-UX)
vxsvcctrl stop (Linux)
To display the VEA version number:
vxsvc -v (Solaris and HP-UX)
vxsvcctrl version (Linux)
Managing the VEA Server
Monitoring VEA Event and Task Logs
You can monitor VEA server events and tasks from the Event Log and Task Log nodes in the VEA object tree. You can also view the VEA log file, which is located at /var/vx/isis/vxisis.log. This file contains trace messages for the VEA server and VEA service providers.
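A quick health check of the VEA server on Solaris might look like the following sketch (the output line is illustrative of the kind of state message that vxsvc -m reports):

    vxsvc -m
    # Current state of server : RUNNING

    # If the server is not running, restart it
    /etc/init.d/isisd restart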
• 60. Lesson Summary
• Key Points
In this lesson, you learned guidelines for a first-time installation of VERITAS Storage Foundation, as well as an introduction to the three interfaces used to manage VERITAS Storage Foundation.
• Reference Materials
- VERITAS Volume Manager Administrator's Guide
- VERITAS Storage Foundation Installation Guide
- VERITAS Storage Foundation Release Notes
- Storage Foundation Management Server Administrator's Guide
Labs and solutions for this lesson are located on the following pages:
Appendix A provides complete lab instructions: "Lab 2: Installation and Interfaces."
Appendix B provides complete lab instructions and solutions: "Lab 2 Solutions: Installation and Interfaces."
Lab 2: Installation and Interfaces
In this lab, you install VERITAS Storage Foundation 5.0 on your lab system. You also explore the Storage Foundation user interfaces, including the VERITAS Enterprise Administrator interface, the vxdiskadm menu interface, and the command-line interface.
For Lab Exercises, see Appendix A. For Lab Solutions, see Appendix B.
  • 61. Lesson 3 Creating a Volume and File System
• 62. Lesson Introduction
• Lesson 1: Virtual Objects
• Lesson 2: Installation and Interfaces
• Lesson 3: Creating a Volume and File System
• Lesson 4: Selecting Volume Layouts
• Lesson 5: Making Basic Configuration Changes
• Lesson 6: Administering File Systems
• Lesson 7: Resolving Hardware Problems
Lesson Topics and Objectives
After completing this lesson, you will be able to:
Topic 1: Preparing Disks and Disk Groups for Volume Creation
Initialize an OS disk as a VxVM disk and create a disk group by using VEA and command-line utilities.
Topic 2: Creating a Volume
Create a concatenated volume by using VEA and from the command line.
Topic 3: Adding a File System to a Volume
Add a file system to and mount an existing volume.
Topic 4: Displaying Volume Configuration Information
Display volume layout information by using VEA and by using the vxprint command.
Topic 5: Displaying Disk and Disk Group Information
View disk and disk group information and identify disk status.
Topic 6: Removing Volumes, Disks, and Disk Groups
Remove a volume, evacuate a disk, remove a disk from a disk group, and destroy a disk group.
• 63. Selecting a Disk Naming Scheme
Types of naming schemes:
• Traditional device naming: OS-dependent and based on physical connectivity information
• Enclosure-based naming: OS-independent, based on the logical name of the enclosure, and customizable
You can select a naming scheme:
• When you run the Storage Foundation installation scripts
• Using the vxdiskadm option "Change the disk naming scheme"
Enclosure-based named disks are displayed in three categories:
Enclosures: enclosurename_#
Disks: Disk_#
Others: Disks that do not return a path-independent identifier to VxVM are displayed in the traditional OS-based format.
Preparing Disks and Disk Groups for Volume Creation
Here are some examples of naming schemes:

Naming Scheme                Example
Traditional                  Solaris: /dev/[r]dsk/c1t9d0s2
                             HP-UX: /dev/[r]dsk/c3t2d0 (no slice)
                             AIX: /dev/hdisk2
                             Linux: /dev/sda, /dev/hda
Enclosure-based              sena0_1, sena0_2, sena0_3, ...
Enclosure-based, customized  englab2, hr1, boston3

Benefits of enclosure-based naming include:
Easier fault isolation: Storage Foundation can more effectively place data and metadata to ensure data availability.
Device-name independence: Storage Foundation is independent of arbitrary device names used by third-party drivers.
Improved SAN management: Storage Foundation can create better location identification information about disks in large disk farms and SANs.
Improved cluster management: In a cluster environment, disk array names on all hosts in a cluster can be the same.
Improved dynamic multipathing (DMP) management: With multipathed disks, the name of a disk is independent of the physical communication paths, avoiding confusion and conflict.
  • 64. symantec ~ Stage 1: ; Initialize disk. J ~ ! Uninitialized : Disk ; Stage 2: Assign disk to disk group. Before Configuring a Disk for Use by VxVM In order to use the space ofa physical disk to build VxVM volumes, you must place the disk under Volume Manager control. Before a disk can be placed under volume Manager control, the disk media must be formatted outside ofVxVM using standard operating system formatting methods. SCSI disks arc usually prcformaued. After a disk is formatted. the disk can be initialized for use by Volume Manager. In other words. disks must be detected by the operating system, before VxVM can detect the disks. Stage One: Initialize a Disk A formatted physical disk is considered uninitialized until it is initialized for use by VxVM. When a disk is initialized. the public and private regions are created. and VM disk header information is written to the private region. Any data or partitions that may have existed on the disk are removed. These disks are under Volume Manager control but cannot be used by Volume Manager until they are added to a disk group. Note: Encapsulation is another method of placing a disk under VxVM control in which existing data on the disk is preserved. This method is covered in a later lesson. Changing the Disk Layout To display or change the default values that are used for initializing disks, select the "Change/display the default disk layouts" option in vxdiskadm: 3-4 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals COPYright ,~, 2006 Svmantec Cornorauon All fights reserved
• 65. For disk initialization, you can change the default format and the default length of the private region. If the attribute settings for initializing disks are stored in the user-created file /etc/default/vxdisk, they apply to all disks to be initialized.
On Solaris, for disk encapsulation, you can additionally change the offset values for both the private and public regions. To make encapsulation parameters different from the default VxVM values, create the user-defined /etc/default/vxencap file and place the parameters in this file.
On HP-UX, when converting LVM disks, you can change the default format and the default private region length. The attribute settings are stored in the /etc/default/vxencap file.
Stage Two: Assign a Disk to a Disk Group
When you add a disk to a disk group, VxVM assigns a disk media name to the disk and maps this name to the disk access name.
Disk media name: A disk media name is the logical disk name assigned to a drive by VxVM. VxVM uses this name to identify the disk for volume operations, such as volume creation and mirroring.
Disk access name: A disk access name represents all UNIX paths to the device. A disk access record maps the physical location to the logical name and represents the link between the disk media name and the disk access name. Disk access records are dynamic and can be re-created when vxdctl enable is run.
The disk media name and disk access name, in addition to the host name, are written to the private region of the disk. Space in the public region is made available for assignment to volumes. Volume Manager has full control of the disk, and the disk can be used to allocate space for volumes.
Whenever the VxVM configuration daemon is started (or vxdctl enable is run), the system reads the private region on every disk and establishes the connections between disk access names and disk media names.
After disks are placed under Volume Manager control, storage is managed in terms of the logical configuration. File systems mount to logical volumes, not to physical partitions. Logical names, such as /dev/vx/[r]dsk/diskgroup/volume_name, replace physical locations, such as /dev/[r]dsk/device_name.
The free space in a disk group refers to the space on all disks within the disk group that has not been allocated as subdisks. When you place a disk into a disk group, its space becomes part of the free space pool of the disk group.
Stage Three: Assign Disk Space to Volumes
When you create volumes, space in the public region of a disk is assigned to the volumes. Some operations, such as removal of a disk from a disk group, are restricted if space on a disk is in use by a volume.
• 66. Disk Group Purposes
Disk groups enable you to:
• Group disks into logical collections for a set of users or applications.
• Easily move groups of disks from one host to another.
• Ease administration of high availability environments through deport and import operations.
[Figure: disk groups, such as sysdg, each grouping a collection of VM disks]
What Is a Disk Group?
A disk group is a collection of physical disks, volumes, plexes, and subdisks that are used for a common purpose. A disk group is created when you place at least one disk in the disk group. When you add a disk to a disk group, a disk group entry is added to the private region header of that disk. Because a disk can have only one disk group entry in its private region header, one disk group does not "know about" other disk groups, and therefore disk groups cannot share resources, such as disk drives, plexes, and volumes. A volume with a plex can belong to only one disk group, and the subdisks and plexes of a volume must be stored in the same disk group.
You can never have an "empty" disk group, because you cannot remove all disks from a disk group without destroying the disk group.
Why Are Disk Groups Needed?
Disk groups assist disk management in several ways:
Disk groups enable the grouping of disks into logical collections for a particular set of users or applications.
Disk groups enable data, volumes, and disks to be easily moved from one host machine to another.
Disk groups ease the administration of high availability environments. Disk drives can be shared by two or more hosts, but they can be accessed by only one host at a time. If one host crashes, the other host can take over its disk groups and therefore its disks.
• 67. System-Wide Reserved Disk Groups
[Figure: the reserved names bootdg, defaultdg, and nodg. On System A, bootdg and defaultdg are aliases for real disk groups, such as sysdg and acctdg; on System B, both bootdg and defaultdg are set to nodg]
nodg is the default value for bootdg and defaultdg.
To display what is set as bootdg or defaultdg:
vxdg bootdg
vxdg defaultdg
To set the default disk group after VxVM installation:
vxdctl defaultdg diskgroup
System-Wide Reserved Disk Groups
VxVM has reserved three disk group names that are used to provide boot disk group and default disk group functionality. The names "bootdg," "defaultdg," and "nodg" are system-wide reserved disk group names and cannot be used as names for any of the disk groups that you set up.
If you choose to place your boot disk under VxVM control, VxVM assigns bootdg as an alias for the name of the disk group that contains the volumes that are used to boot the system.
defaultdg is an alias for the disk group name that should be assumed if the -g option is not specified in a command. You can set defaultdg when you install VERITAS Volume Manager or anytime after installation. By default, both bootdg and defaultdg are set to nodg.
Notes
The definitions of bootdg and defaultdg are written to the volboot file. The definition of bootdg results in a symbolic link from the named bootdg in /dev/vx/dsk and /dev/vx/rdsk.
The rootdg disk group name is no longer a reserved name for VxVM versions after 4.0. If you are upgrading from a version of Volume Manager earlier than 4.0 where the system disk is encapsulated in the rootdg disk group, bootdg is assigned the value of rootdg automatically.
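A short sketch of checking and setting the default disk group (the disk group name datadg is illustrative):

    # Display the current settings; nodg means no default has been set
    vxdg bootdg
    vxdg defaultdg

    # Make datadg the system-wide default disk group
    vxdctl defaultdg datadg

    # Commands such as vxassist now default to datadg when -g is omitted
    vxdg defaultdg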
• 68. To create a disk group, you add a disk to a disk group.
• You can add a single disk or multiple disks.
• You cannot add a disk to more than one disk group.
• Default disk media names vary with the interface used to add the disk to a disk group, but they are conventionally in the format diskgroup##, such as datadg00, datadg01, and so on.
• Disk media names must be unique within a disk group.
• Adding a disk to a disk group makes the disk space available for use in creating Volume Manager volumes.
Creating a Disk Group
A disk must be placed into a disk group before it can be used by VxVM. A disk group cannot exist without having at least one associated disk. When you create a new disk group, you specify a name for the disk group and at least one disk to add to the disk group. The disk group name must be unique for the host machine.
Adding Disks
To add a disk to a disk group, you select an uninitialized disk or a free disk. If the disk is uninitialized, you must initialize it before you can add it to a disk group.
Disk Naming
When you add a disk to a disk group, the disk is assigned a disk media name. The disk media name is a logical name used for VxVM administrative purposes.
Notes on Disk Naming
You can change disk media names after the disks have been added to disk groups. However, if you must change a disk media name, it is recommended that you make the change before using the disk for any volumes. Renaming a disk does not rename the subdisks on the disk, which may be confusing.
Assign logical media names, rather than using the device names, to facilitate transparent logical replacement of failed disks. Assuming that you have a sensible disk group naming strategy, the VEA or vxdiskadm default disk naming scheme is a reasonable policy to adopt.
• 69. Create a disk group or add disks using vxdiskadm: "Add or initialize one or more disks"
Initialize disks:
vxdisksetup -i device_tag [attributes]
vxdisksetup -i Disk_1 (enclosure-based naming)
vxdisksetup -i c2t0d0 (Solaris and HP-UX)
vxdisksetup -i hdisk2 (AIX)
vxdisksetup -i sda2 (Linux)
Initialize the disk group by adding at least one disk:
vxdg init diskgroup disk_name=device_tag
vxdg init datadg datadg01=Disk_1
Add more disks to the disk group:
vxdg -g diskgroup adddisk disk_name=device_tag
vxdg -g datadg adddisk datadg02=Disk_2
Creating a Disk Group: vxdiskadm
From the vxdiskadm main menu, select the "Add or initialize one or more disks" option. Specify the disk group to which the disk should be added. To add the disk to a new disk group, type a name for the new disk group. You use this same menu option to add additional disks to the disk group. To verify that the disk group was created, you can use vxdisk list.
When you add a disk to a disk group, the disk group configuration is copied onto the disk, and the disk is stamped with the system host ID.
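Putting the commands together, a first-time disk group setup from the CLI might look like the following sketch (device tags such as Disk_1 and Disk_2 depend on your naming scheme and hardware):

    # Initialize two disks for VxVM use
    vxdisksetup -i Disk_1
    vxdisksetup -i Disk_2

    # Create the disk group with the first disk, then add the second
    vxdg init datadg datadg01=Disk_1
    vxdg -g datadg adddisk datadg02=Disk_2

    # Verify the result
    vxdg list
    vxdisk list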
• 70. Creating a Disk Group: VEA
[Screenshot: New Disk Group wizard — "Enter a unique name for this disk group, and then select the disks to include." Disks such as Disk_0, Disk_4, Disk_5, Disk_6, and Disk_7 can be moved between the Available and Selected lists with the Add, Add All, Remove, and Remove All buttons.]
Creating a Disk Group: VEA
Select: Disk Groups folder, or a free or uninitialized disk
Navigation path: Actions->New Disk Group
Input:
Group Name: Type the name of the disk group to be created.
Available/Selected disks: Select at least one disk to be placed in the new disk group.
Disk names: To specify a disk media name for the disk that you are placing in the disk group, type a name in the Disk name field. If no disk name is specified, VxVM assigns a default name. If you are adding multiple disks and specify only one disk name, VxVM appends numbers to create unique disk names.
Organization Principle: In an Intelligent Storage Provisioning (ISP) environment, you can organize the disk group based on policies that you set up. This option is covered in a later lesson.
Comment: Any user comments
Create cluster group: Displayed on HP-UX platforms; to create a shared disk group, mark this check box; only applicable in a cluster environment.
Activation Mode: Displayed on HP-UX platforms; applies to cluster environments; possible values are Read write and Read only; the default setting is Read write for non-cluster environments.
Note: When working in a SAN environment, or any environment in which
• 71. multiple hosts may share access to disks, it is recommended that you perform a rescan operation to update the VEA view of the disk status before allocating any disks. From the command line, you can run vxdctl enable.
Adding a Disk: VEA
Select: A free or uninitialized disk
Navigation path: Actions->Add Disk to Disk Group
Input:
Disk Group name: Select an existing disk group.
New disk group: Click the New disk group button to add the disk to a new disk group.
Select the disk to add: You can move disks between the Selected disks and Available disks fields by using the Add and Remove buttons.
Disk Name(s): By default, Volume Manager assigns a disk media name that is based on the disk group name of a disk. You can assign a different name to the disk by typing a name in the Disk name(s) field. If you are adding more than one disk, place a space between each name in the Disk name(s) field.
Comment: Any user comments
When the disk is placed under VxVM control, the Type property changes to Dynamic, and the Status property changes to Imported.
• 72. Creating a Volume: CLI
To create a volume:
vxassist -g diskgroup make volume_name length [attributes]
For example:
vxassist -g datadg make datavol 100m
Block and character (raw) device files are set up that you can use to access the volume:
• Block device file for the volume: /dev/vx/dsk/diskgroup/volume_name
• Character device file for the volume: /dev/vx/rdsk/diskgroup/volume_name
To display volume attributes, use:
vxassist -g diskgroup help showattrs
Creating a Volume
When you create a volume using VEA or CLI commands, you indicate the desired volume characteristics, and VxVM creates the underlying plexes and subdisks automatically. The VxVM interfaces require minimal input if you use default settings. For experienced users, the interfaces also enable you to enter more detailed specifications regarding all aspects of volume creation.
Before You Create a Volume
Before you create a volume, ensure that you have enough disks to support the layout type. A striped volume requires at least two disks. A mirrored volume requires at least one disk for each plex. A mirror cannot be on the same disk that other plexes of the same volume are using.
Creating a Volume: CLI
To create a volume from the command line, you use the vxassist command. In the syntax:
Use the -g option to specify the disk group in which to create the volume.
make is the keyword for volume creation.
volume_name is a name you give to the volume. Specify a meaningful name.
length specifies the number of sectors in the volume. You can specify the length in other units by adding an m, k, g, or t suffix to the length.
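As a quick sketch, the following creates a small volume and then lists the device files that result; the disk group datadg is assumed to exist with sufficient free space:
# Create a 100 MB volume using the default (concatenated) layout.
vxassist -g datadg make datavol 100m
# The block and character device files are created automatically.
ls -l /dev/vx/dsk/datadg/datavol /dev/vx/rdsk/datadg/datavol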
• 73. Creating a Volume: VEA: Assigning Disks
Actions->New Volume
Step 1: Select disks to use for the volume.
[Screenshot: New Volume wizard, Step 1 — lists of available and excluded disks grouped by controller and enclosure, plus Mirror Across, Stripe Across, and Ordered options.]
Creating a Volume: VEA
Select: A disk group
Navigation path: Actions->New Volume
Input:
Disks for this volume: Let VxVM decide (default), or manually select disks to use.
Volume attributes: Specify a volume name, the size of the volume, the type of volume layout, and other layout characteristics. Assign a meaningful name to the volume that describes the data stored in the volume.
File system: Create a file system on the volume and set file system options.
New Volume Wizard Step 1: Assigning Disks to Use for a New Volume
By default, VxVM locates available space on all disks in the disk group and assigns the space to a volume automatically based on the layout you choose. Alternatively, you can choose specific disks; mirror or stripe across controllers, trays, targets, or enclosures; or implement ordered allocation. Ordered allocation is a method of allocating disk space to volumes based on a specific set of VxVM rules.
• 74. Creating a Volume: VEA: Setting Volume Attributes
Step 2: Specify volume attributes.
[Screenshot: New Volume wizard, Step 2 — fields for volume name, comment, and size (with a Max Size button), layout options (Concatenated, Striped, RAID-5, Concatenated Mirrored, Striped Mirrored), mirror info, Enable FastResync, and Initialize zero. Default options change based on the layout type you select.]
New Volume Wizard Step 2: Specifying Attributes for a New Volume
Volume name: Assign a meaningful name to the volume that describes the data stored in the volume.
Size: Specify a size for the volume. The default unit is GB. If you click the Max Size button, VxVM determines the largest size possible for the volume based on the layout selected and the disks to which the volume is assigned. Select a size for the volume based on the volume layout and the space available in the disk group. The size of the volume must be less than or equal to the available free space on the disks. The size specified in the Size field is the usable space in the volume. For a volume with redundancy (RAID-5, mirrored), VxVM allocates additional free space for the volume's parity information (RAID-5) or additional plexes (mirrored). The free space available for constructing a volume of a specific layout is generally less than the total free space in the disk group unless the layout is concatenated or striped with no mirroring or logging.
Layout: Select a layout type from the group of options. The default layout is concatenated.
Concatenated: The volume is created using one or more regions of specified disks.
Striped: The volume is striped across two or more disks. The default number of columns across which the volume is striped is two, and the default stripe unit size is 128 sectors (64K) on Solaris, AIX, and Linux; 128 sectors (128K) on HP-UX. You can specify different values.
• 75. Concatenated Mirrored and Striped Mirrored: These options denote layered volume layouts.
Mirror Info:
Mirrored: Mirroring is recommended. To mirror the volume, mark the Mirrored check box. Only striped or concatenated volumes can be mirrored. RAID-5 volumes cannot be mirrored.
Total mirrors: Type the total number of mirrors for the volume. A volume can have up to 32 plexes; however, the practical limit is 31. One plex is reserved by VxVM to perform restructuring or relocation operations.
Enable logging: To enable logging, mark the Enable logging check box. If you enable logging, a log is created that tracks regions of the volume that are currently being changed by writes. In case of a system failure, the log is used to recover only those regions identified in the log. VxVM creates a dirty region log or a RAID-5 log, depending on the volume layout. If the layout is RAID-5, logging is enabled by default, and VxVM adds an appropriate number of logs to the volume.
Enable FastResync: To enable FastResync, mark the Enable FastResync check box. This option is displayed only if you have licensed the FastResync option.
Initialize zero: To clear the volume before enabling it for general use, mark the Initialize zero check box. For security purposes, you can use the Initialize zero option to overwrite all existing data in the volume area. However, this is time consuming due to all the space that has to be written.
No layered volumes: To prevent the creation of a layered volume, mark the No layered volumes check box. This option ensures that the volume has a nonlayered layout. If a layered layout is selected, this option is ignored.
• 76. Creating a Volume: VEA: Adding a File System During Volume Creation
[Screenshot: New Volume wizard — "No file system" and "Create a file system" options, with Create Options (file system type, block size, New File System Details) and Mount Options (mount point, Create mount point, Read only, Honor setuid, Add to file system table, Mount at boot, fsck pass, Mount File System Details).]
New Volume Wizard Step 3: Creating a Snapshot Cache Volume
A storage cache may be named and shared among several volumes in the same disk group. This is used only for point-in-time copies.
New Volume Wizard Step 4: Creating a File System on a New Volume
When you create a volume, you can place a file system on the volume and specify options for mounting the file system. You can place a file system on a volume when you create a volume or any time after creation. The default option is "No file system." To place a file system on the volume, select the "Create a file system" option and specify:
File system type: Specify the file system type as either vxfs (VERITAS File System) or other OS-supported file system types (UFS on Solaris; HFS on HP-UX; on AIX, JFS and JFS2 are not supported on VxVM volumes). To add a VERITAS file system, the VxFS product must be installed with appropriate licenses.
Create Options:
Compress: If your platform supports file compression, this option compresses the files on your file system (not available on Solaris/HP-UX).
Allocation unit or Block size: Select an allocation unit size (for OS-supported file system types) or a block size (for VxFS file systems).
New File System Details: Click this button to specify additional file-system-specific mkfs options. For VxFS, the only explicitly available additional options are large file support and log size. You can specify other options in the Extra Options field.
• 77. Mount Options:
Mount point: Specify the mount point directory on which to mount the file system. The new file system is mounted immediately after it is created. Leave this field empty if you do not want to mount the file system.
Create mount point: Mark this check box to create the directory if it does not exist. The mount point must be specified.
Read only: Mark this check box to mount the file system as read only.
Honor setuid: Mark this check box to mount the file system with the suid mount option. This option is marked by default.
Add to file system table: Mark this check box to include the file system in the /etc/vfstab file (Solaris), the /etc/fstab file (HP-UX, Linux), or the /etc/filesystems file (AIX).
Mount at boot: Mark this check box to mount the file system automatically whenever the system boots. This option is not displayed on HP-UX.
fsck pass: Specify how many fsck passes will be run if the file system is not clean at mount time.
Mount File System Details: Click this button to specify additional mount options. For VxFS, the explicitly available additional options include disabling Quick I/O, setting directory permissions and owner, and setting caching policy options. You can specify other options, such as quota, in the Extra options field.
• 78. Adding a File System to a Volume
A file system provides an organized structure to facilitate the storage and retrieval of files. You can add a file system to a volume when you create a volume or any time after you create the volume initially. When a file system has been mounted on a volume, the data is accessed through the mount point directory. When data is written to files, it is actually written to the block device file: /dev/vx/dsk/disk_group/volume_name. When fsck is run on the file system, the raw device file is checked: /dev/vx/rdsk/disk_group/volume_name.
Adding a File System After Volume Creation
1. CLI: Create the file system using mkfs (VxFS) or OS-specific file system creation commands.
   VEA: Select Actions->File System->New File System
2. CLI: Create a mount point directory on which to mount the file system.
   VEA: Specify the mount point in the New File System dialog box.
3. CLI: Mount the volume to the mount point by using the mount command.
   VEA: If a file system was previously created on a volume, but not mounted, you can explicitly mount the file system by selecting Actions->File System->Mount File System.
Adding a File System to a Volume: CLI
To add a file system to a volume from the command line, you must create the file system, create a mount point for the file system, and then mount the file system.
Solaris
To create and mount a VxFS file system:
mkfs -F vxfs /dev/vx/rdsk/datadg/datavol
mkdir /data
mount -F vxfs /dev/vx/dsk/datadg/datavol /data
To create and mount a UFS file system:
newfs /dev/vx/rdsk/datadg/datavol
mkdir /data
• 79. mount /dev/vx/dsk/datadg/datavol /data
HP-UX
To create and mount a VxFS file system:
mkfs -F vxfs /dev/vx/rdsk/datadg/datavol
mkdir /data
mount -F vxfs /dev/vx/dsk/datadg/datavol /data
To create and mount an HFS file system:
newfs -F hfs /dev/vx/rdsk/datadg/datavol
mkdir /data
mount -F hfs /dev/vx/dsk/datadg/datavol /data
AIX
To create and mount a VxFS file system using mkfs:
mkfs -V vxfs /dev/vx/rdsk/datadg/datavol
mkdir /data
mount -v vxfs /dev/vx/dsk/datadg/datavol /data
To create and mount a VxFS file system using crfs:
crfs -v vxfs -d /dev/vx/rdsk/datadg/datavol -m /data -A yes
Notes: An uppercase V is used with mkfs; a lowercase v is used with crfs (to avoid conflict with another crfs option). crfs creates the file system, creates the mount point, and updates the file systems file (/etc/filesystems). The -A yes option requests mount at boot. If the file system already exists in /etc/filesystems, you can mount the file system by simply using the syntax: mount mount_point.
Linux
To create and mount a VxFS file system using mkfs:
mkfs -t vxfs /dev/vx/rdsk/datadg/datavol
mkdir /data
mount -t vxfs /dev/vx/dsk/datadg/datavol /data
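After mounting, a quick check that the file system is in place; this assumes the /data mount point used in the examples above:
# Confirm the mount and see capacity in kilobytes.
df -k /data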
• 80. Mounting a File System at Boot
To mount the file system automatically at boot time, edit the OS-specific file system table file to add an entry for the file system. Specify information such as:
• Device to mount: /dev/vx/dsk/datadg/datavol
• Device to fsck: /dev/vx/rdsk/datadg/datavol
• Mount point: /data
• File system type: vxfs
• fsck pass: 1
• Mount at boot: yes
• Mount options: -
Mounting a File System at Boot
Using CLI, if you want the file system to be mounted at every system boot, you must edit the file system table file by adding an entry for the file system. If you later decide to remove the volume, you must remove the entry in the file system table file.
Platform: File System Table File
Solaris: /etc/vfstab
HP-UX: /etc/fstab
AIX: /etc/filesystems
Linux: /etc/fstab
AIX
In AIX, you can use the following commands when working with the file system table file, /etc/filesystems:
To view entries: lsfs mount_point
To change details of an entry, use chfs. For example, to turn off mount at boot: chfs -A no mount_point
In VEA, in the Mount File System dialog, if you mark the "Add to file system table" and "Mount at boot" (not on HP-UX) check boxes, the entry is made in the file system table file automatically. If the volume is later removed through VEA, its corresponding file system table file entry is also removed automatically.
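Combining those fields, a Solaris /etc/vfstab entry for this volume would be a single whitespace-separated line like the sketch below; the values are taken from the list above, and "-" means no extra mount options:
/dev/vx/dsk/datadg/datavol /dev/vx/rdsk/datadg/datavol /data vxfs 1 yes -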
• 81. Displaying Volume Information: CLI
To display volume configuration information:
vxprint -g diskgroup [options]
-vpsd       Select only volumes (v), plexes (p), subdisks (s), or disks (d).
-h          List hierarchies below selected records.
-r          Display related records of a volume containing subvolumes.
-t          Print single-line output records that depend upon the configuration record type.
-l          Display all information from each selected record.
-a          Display all information about each selected record, one record per line.
-A          Select from all active disk groups.
-e pattern  Show records that match an editor pattern.
Displaying Volume Configuration Information
Displaying Volume Layout Information: CLI
The vxprint Command
You can use the vxprint command to display information about how a volume is configured. This command displays records from the VxVM configuration database.
vxprint -g diskgroup [options]
The vxprint command can display information about disk groups, disk media, volumes, plexes, and subdisks. You can specify a variety of options with the command to expand or restrict the information displayed. Only some of the options are presented in this training. For more information about additional options, see the vxprint(1M) manual page.
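For instance, to list just the volumes in a disk group, or the full object hierarchy for one volume in single-line format; datadg and datavol are the example names from earlier slides:
# Volumes only, as single-line records.
vxprint -g datadg -vt
# Full hierarchy for one volume: volume, plexes, and subdisks.
vxprint -g datadg -ht datavol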
• 82. Displaying Volume Information: CLI
vxprint -g datadg -ht | more
dg NAME        NCONFIG    NLOG    MINORS   GROUP-ID
st NAME        STATE      DM_CNT  SPARE_CNT APPVOL_CNT
dm NAME        DEVICE     TYPE    PRIVLEN  PUBLEN   STATE
rv NAME        RLINK_CNT  KSTATE  STATE    PRIMARY  DATAVOLS SRL
rl NAME        RVG        KSTATE  STATE    REM_HOST REM_DG   REM_RLNK
co NAME        CACHEVOL   KSTATE  STATE
sd NAME        PLEX       DISK    DISKOFFS LENGTH   [COL/]OFF DEVICE  MODE
sv NAME        PLEX       VOLNAME NVOLLAYR LENGTH   [COL/]OFF AM/NM   MODE
sc NAME        PLEX       CACHE   DISKOFFS LENGTH   [COL/]OFF DEVICE  MODE
dc NAME        PARENTVOL  LOGVOL
sp NAME        SNAPVOL    DCO

dg datadg      default    default 91000    1000753077.1117.train12
dm datadg01    c1t10d0s2  auto    2048     4191264  -
dm datadg02    c1t11d0s2  auto    2048     4191264  -
dm datadg03    c1t14d0s2  auto    2048     4191264  -
dm datadg04    c1t15d0s2  auto    2048     4191264  -

v  datavol     ...        ENABLED ACTIVE   ...      CONCAT   ...     RW
pl datavol-01  datavol    ENABLED ACTIVE   ...      CONCAT   ...     RW
sd datadg01-01 datavol-01 datadg01 0       21168    0        c1t10d0 ENA

[Callout: To interpret the -ht output, match header lines with output lines.]
Displaying Information for All Volumes
To display the volume, plex, and subdisk record information for a disk group:
vxprint -g diskgroup -ht
In the output, the top few lines indicate the headers that match each type of output line that follows. Each volume is listed along with its associated plexes and subdisks and other VxVM objects.
dg is a disk group.
st is a storage pool (used in Intelligent Storage Provisioning).
dm is a disk.
rv is a replicated volume group (used in VERITAS Volume Replicator).
rl is an rlink (used in VERITAS Volume Replicator).
co is a cache object.
vt is a volume template (used in Intelligent Storage Provisioning).
v is a volume.
pl is a plex.
sd is a subdisk.
sv is a subvolume.
sc is a storage cache.
dc is a data change object.
sp is a snap object.
For more information, see the vxprint(1M) manual page.
• 83. Object Views in the Main Window
[Screenshot: VEA main window grid showing disk groups such as datadg with columns for size, CDS flag, and status (Imported).]
Highlight a volume, and click the tabs to display details.
Displaying Volume Information: VEA
To display information about volumes in VEA, you can select from several different views.
Object Views in the Main Window
You can view volumes and volume details by selecting an object in the object tree and displaying volume properties in the grid:
To view the volumes in a disk group, select a disk group in the object tree and click the Volumes tab in the grid.
To explore detailed components of a volume, select a volume in the object tree and click each of the tabs in the grid.
• 84. Viewing Basic Disk Information: CLI
To display basic information about all disks:
vxdisk -o alldgs list
DEVICE       TYPE           DISK       GROUP     STATUS
c1t10d0s2    auto:cdsdisk   datadg01   datadg    online          (VxVM disks)
c1t11d0s2    auto:cdsdisk   datadg02   datadg    online
c1t12d0s2    auto:cdsdisk   -          -         online          (free disk)
c1t13d0s2    auto:none      -          -         online invalid  (uninitialized disks)
c1t14d0s2    auto:none      -          -         online invalid
c1t15d0s2    auto:none      -          -         online invalid
c1t16d0s2    auto:none      -          -         online invalid
c1t17d0s2    auto:none      -          -         online invalid
Note: In a shared access environment, when displaying disks, run vxdctl enable frequently to rescan for disk changes.
Displaying Disk and Disk Group Information
Displaying Basic Disk Information: CLI
You use the vxdisk list command to display basic information about all disks attached to the system. The vxdisk list command displays the:
Device names for all recognized disks
Type of disk, that is, how a disk is placed under VxVM control
Disk names
Disk group names associated with each disk
Status of each disk
In the output:
A status of online, in addition to entries in the Disk and Group columns, indicates that the disk has been initialized or encapsulated, assigned a disk media name, and added to a disk group. The disk is under Volume Manager control and is available for creating volumes.
A status of online without entries in the Disk and Group columns indicates that the drive has been initialized or encapsulated but is not currently assigned to a disk group.
A status of online invalid indicates that the disk has neither been initialized nor encapsulated by VxVM. The disk is not under VxVM control.
Note: On the HP-UX platform, LVM disks have a type of auto:LVM and a status of LVM.
• 85. To display detailed information for a disk:
vxdisk -g diskgroup list disk_name
vxdisk -g datadg list datadg01
Device:    c1t10d0s2
devicetag: c1t10d0
type:      auto
hostid:    train12
disk:      name=datadg01 id=1000753057.1114.train12
group:     name=datadg id=1000753077.1117.train12
To display a summary for all disks:
vxdisk -s list
To display detailed information about a disk, you use the vxdisk list command with the name of the disk group and disk:
vxdisk -g diskgroup list disk_name
In the output:
Device is the VxVM name for the device access path.
devicetag is the name used by VxVM to refer to the physical disk.
type is how a disk was placed under VM control. auto is the default type.
hostid is the name of the system that currently manages the disk group to which the disk belongs; if blank, no host is currently controlling this group.
disk is the VM disk media name and internal ID.
group is the disk group name and internal ID.
To view a summary of information for all disks, you use the -s option with the vxdisk list command.
Note: The disk name and the disk group name are changeable. The disk ID and disk group ID are never changed as long as the disk group exists or the disk is initialized.
Note: The detailed information displayed by this command is discussed later in the course.
• 86. Keeping Track of Your Disks
By viewing disk information, you can determine if a disk has been initialized and added to a disk group, verify the changes that you make to disks, and keep track of the status and configuration of your disks.
Displaying Disk Information: VEA
The status of a disk can be:
Not Initialized: The disk is not under VxVM control. The disk may be in use as a raw device by an application.
Free: The disk is initialized by VxVM but is not in a disk group. You cannot place a disk in this state using VEA, but VEA recognizes disks that have been initialized through other interfaces.
Foreign: The disk is under the control of another host.
Imported: The disk is in an imported disk group.
Deported: The disk is in a deported disk group.
Disconnected: The disk contains subdisks that are not available because of hardware failure. This status applies to disk media records for which the hardware has been unavailable and has not been replaced within VxVM.
External: The disk is in use by a foreign manager.
Inactive/Import failed: The disk group is not imported but the disks have the same host ID tag as the hostname of the system, for example, if the disk group is deported using the same hostname.
• 87. Viewing Disk Properties: VEA
[Screenshot: Disk Properties window, General tab — CDS: Yes, Status: Imported, Capacity, Unallocated space (select a unit to display capacity and unallocated space in other units), Spare: No, Reserved, Hot use, Allocator Disk: Yes, Comment.]
In VEA, you can also view disk properties in the Disk Properties window. To open the Disk Properties window, right-click a disk and select Properties. The Disk Properties window includes the capacity of the disk and the amount of unallocated space. You can select the units for convenient display in the unit of your choice.
• 88. To display disk groups:
vxdg list
NAME     STATE         ID
datadg   enabled,cds   969583613.1025.cassius
newdg    enabled,cds   971216408.1133.cassius
To display free space in a disk group, use one of these:
vxassist -g diskgroup help space
vxdg -g diskgroup free
Displaying Disk Group Information: CLI
To display disk group information:
Use vxdg list to display disk group names, states, and IDs for all imported disk groups in the system.
Use vxdg free to display free space on each disk. This command displays free space on all disks in all disk groups that the host can detect. Add -g diskgroup to restrict the output to a specific disk group. Note: This command does not show space on spare disks. Reserved disks are displayed with an "r" in the FLAGS column.
Use vxdisk -o alldgs list to display all disk groups, including deported disk groups. For example:
vxdisk -o alldgs list
DEVICE   TYPE           DISK       GROUP      STATUS
Disk_1   auto:cdsdisk   datadg01   datadg     online
Disk_7   auto:cdsdisk   -          (acctdg)   online
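Before you create or grow a volume, these commands give a quick picture of the available space; datadg is the example disk group:
# List imported disk groups.
vxdg list
# Show free space per disk in datadg.
vxdg -g datadg free
# vxassist's view of allocatable space in datadg.
vxassist -g datadg help space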
• 89. Viewing Disk Group Information: VEA
[Screenshot: Disk Group Properties window for datadg — Status: Imported, Id, CDS: Yes, Disks, Volumes, Version: 140, Enabled: Yes, detach and disk group fail policies, allocated sizes, and free space.]
Right-click a disk group and select Properties.
The object tree in the VEA main window contains a Disk Groups node that displays all of the disk groups attached to a host. When you click a disk group, the VxVM objects contained in the disk group are displayed in the grid.
To view additional information about a disk group, right-click a disk group and select Properties. The Disk Group Properties window is displayed. This window contains basic disk group properties, including:
Disk group name, status, ID, and type
Number of disks and volumes
Disk group version
Disk group size and free space
Note: On HP-UX, there is another attribute between the Version and Enabled attributes, which is Shared: No.
• 90. Removing a Volume
• When a volume is removed, the space used by the volume is freed and can be used elsewhere.
• Unmount the file system before removing the volume.
VEA:
• Select the volume that you want to remove.
• Select Actions->Delete Volume.
vxassist remove volume:
vxassist -g diskgroup remove volume volume_name
vxassist -g datadg remove volume datavol
vxedit:
vxedit -g diskgroup -rf rm volume_name
vxedit -g datadg -rf rm datavol
Removing Volumes, Disks, and Disk Groups
Removing a Volume
Only remove a volume if you are sure that you do not need the data in the volume, or if the data is backed up elsewhere. A volume must be closed before it can be removed. For example, if the volume contains a file system, the file system must be unmounted. You must edit the OS-specific file system table file manually in order to remove the entry for the file system and avoid errors at boot. If the volume is used as a raw device, the application, such as a database, must close the device.
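A typical removal sequence, sketched with the example names used throughout this lesson (unmount first, then remove the volume; any file system table entry must still be deleted by hand):
# Unmount the file system that lives on the volume.
umount /data
# Remove the volume and free its space.
vxassist -g datadg remove volume datavol
# Also remove the /data entry from the OS file system table file
# (for example, /etc/vfstab on Solaris) to avoid errors at boot.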
• 91. Evacuating a Disk
Before removing a disk, you may need to evacuate data from the disk to another disk in the disk group.
VEA:
• Select the disk that you want to evacuate.
• Select Actions->Evacuate Disk.
vxdiskadm: "Move volumes from a disk"
CLI:
vxevac -g diskgroup from_disk [to_disk]
vxevac -g datadg datadg01 datadg02
To evacuate to any disk except for datadg03:
vxevac -g datadg datadg02 !datadg03
Evacuating a disk moves the contents of the volumes on a disk to another disk. The contents of a disk can be evacuated only to disks in the same disk group that have sufficient free space.
• 92. Removing a Disk
VEA:
• Select the disk that you want to remove.
• Select Actions->Remove Disk from Disk Group.
vxdiskadm: "Remove a disk"
CLI:
vxdg -g diskgroup rmdisk disk_name
vxdiskunsetup [-C] device_tag
Example (remove the disk from the disk group, and then uninitialize it):
vxdg -g datadg rmdisk datadg02
vxdiskunsetup Disk_2
If you select all disks for removal from the disk group, the disk group is destroyed automatically. You can verify the removal by using the vxdisk list command to display disk information. A deconfigured disk has a status of online invalid and no longer has a disk media name or disk group assignment.
The vxdiskunsetup Command
After the disk has been removed from its disk group, you can remove it from Volume Manager control completely by using the vxdiskunsetup command. This command reverses the configuration of a disk by removing the public and private regions that were created by the vxdisksetup command. The vxdiskunsetup command does not operate on disks that are active members of an imported disk group. This command does not usually operate on disks that appear to be imported by some other host, for example, a host that shares access to the disk. You can use the -C option to force deconfiguration of the disk, removing host locks that may be detected.
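Putting the last two slides together, a full decommission of a disk might look like this sketch; the names are reused from the examples, and the evacuation target must have enough free space:
# Move any volume contents off the disk first.
vxevac -g datadg datadg02 datadg01
# Remove the disk from the disk group.
vxdg -g datadg rmdisk datadg02
# Return the disk to an uninitialized state.
vxdiskunsetup Disk_2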
• 93. Destroying a Disk Group
Destroying a disk group:
• Means that the disk group no longer exists
• Removes all disks
• Is the only method for freeing the last disk in a disk group
VEA: Actions->Destroy Disk Group
CLI: vxdg destroy diskgroup
Example: To destroy the disk group olddg and place its disks in the free disk pool:
vxdg destroy olddg
Destroying a disk group permanently removes a disk group from Volume Manager control, and the disk group ceases to exist. When you destroy a disk group, all of the disks in the disk group are reinitialized as empty disks. Volumes and configuration information about the disk group are removed. Because you cannot remove the last disk in a disk group, destroying a disk group is the only method to free the last disk in a disk group for reuse. A disk group cannot be destroyed if any volumes in that disk group are in use or contain mounted file systems. The bootdg disk group cannot be destroyed.
Caution: Destroying a disk group can result in data loss. Only destroy a disk group if you are sure that the volumes and data in the disk group are not needed.
Destroying a Disk Group: VEA
Select: The disk group to be destroyed
Navigation path: Actions->Destroy Disk Group
Input: Group name: Specify the disk group to be destroyed.
Destroying a Disk Group: CLI
To destroy a disk group from the command line, use the vxdg destroy command.
Note: You can bring back a destroyed disk group by importing it with its dgid.
• 94. Lab 3: Creating a Volume and File System
In this lab, you create new disk groups, simple volumes, and file systems, mount and unmount the file systems, and observe the volume and disk properties. The first exercise uses the VEA interface. The second exercise uses the command-line interface.
For Lab Exercises, see Appendix A. For Lab Solutions, see Appendix B.
Labs and solutions for this lesson are located on the following pages:
Appendix A provides complete lab instructions: "Lab 3: Creating a Volume and File System."
Appendix B provides complete lab instructions and solutions: "Lab 3 Solutions: Creating a Volume and File System."
Key Points
In this lesson, you learned how to create a volume with a file system. This lesson also described device-naming schemes and how to add a disk to a disk group, in addition to how to view configuration information for volumes, disk groups, and disks. In addition, you learned how to remove a volume, disk, and disk group.
Reference Materials
- VERITAS Volume Manager Administrator's Guide
- VERITAS Storage Foundation Installation Guide
  • 95. Lesson 4 Selecting Volume Layouts
• 96. Lesson Introduction
• Lesson 1: Virtual Objects
• Lesson 2: Installation and Interfaces
• Lesson 3: Creating a Volume and File System
• Lesson 4: Selecting Volume Layouts (this lesson)
• Lesson 5: Making Basic Configuration Changes
• Lesson 6: Administering File Systems
• Lesson 7: Resolving Hardware Problems
Lesson Topics and Objectives
Topic — After completing this lesson, you will be able to:
Topic 1: Comparing Volume Layouts — Identify the features, advantages, and disadvantages of volume layouts supported by VxVM.
Topic 2: Creating Volumes with Various Layouts — Create concatenated, striped, and mirrored volumes by using VEA and from the command line.
Topic 3: Creating a Layered Volume — Create layered volumes by using VEA and from the command line.
Topic 4: Allocating Storage for Volumes — Allocate storage for a volume by specifying storage attributes and ordered allocation.
• 97. Concatenated Layout
[Diagram: disk group datadg containing volume datavol — a single plex maps the volume linearly onto subdisks that reside on the VxVM disks.]
Comparing Volume Layouts
Each volume layout has different advantages and disadvantages. For example, a volume can be extended across multiple disks to increase capacity, mirrored on another disk to provide data redundancy, or striped across multiple disks to improve I/O performance. The layouts that you choose depend on the levels of performance and reliability required by your system.
Concatenated Layout
A concatenated volume layout maps data in a linear manner onto one or more subdisks in a plex. Subdisks do not have to be physically contiguous and can belong to more than one VM disk. Storage is allocated completely from one subdisk before using the next subdisk in the span. Data is accessed in the remaining subdisks sequentially until the end of the last subdisk. For example, if you have 14 GB of data, a concatenated volume can logically map the volume address space across subdisks on different disks: addresses 0 GB to 8 GB of volume address space map to the first 8-gigabyte subdisk, and the addresses above 8 GB and up to 14 GB map to the second 6-gigabyte subdisk. An address offset of 12 GB, therefore, maps to an address offset of 4 GB in the second subdisk.
• 98. Striped Layout
[Diagram: disk group datadg, volume datavol with plex datavol-01 — stripe units SU1 through SU12 are interleaved across columns of subdisks on separate VxVM disks.]
Striped Layout
A striped volume layout maps data so that the data is interleaved, or allocated in stripes, among two or more subdisks on two or more physical disks. Data is allocated alternately and evenly to the subdisks of a striped plex. The subdisks are grouped into "columns." Each column contains one or more subdisks and can be derived from one or more physical disks. To obtain the maximum performance benefits of striping, you should not use a single disk to provide space for more than one column. All columns must be the same size. The minimum size of a column should equal the size of the volume divided by the number of columns. The default number of columns in a striped volume is based on the number of disks in the disk group. Data is allocated in equal-sized units, called stripe units, that are interleaved between the columns. Each stripe unit is a set of contiguous blocks on a disk. The stripe unit size can be in units of sectors, kilobytes, megabytes, or gigabytes. The default stripe unit size is 64K, which provides adequate performance for most general purpose volumes. Performance of an individual volume may be improved by matching the stripe unit size to the I/O characteristics of the application using the volume.
• 99. Mirrored Layout
Each plex must have disk space from different disks to achieve redundancy.
[Diagram: disk group datadg, volume datavol with two plexes, datavol-01 and datavol-02, each built from subdisks on different VxVM disks (datadg01, datadg02, datadg03).]
Mirrored Layout
By adding a mirror to a concatenated or striped volume, you create a mirrored layout. A mirrored volume layout consists of more than one plex that duplicate the information contained in a volume. Each plex in a mirrored layout contains an identical copy of the volume data. In the event of a physical disk failure, when the plex on the failed disk becomes unavailable, the system can continue to operate using the unaffected mirrors. Although a volume can have a single plex, at least two plexes are required to provide redundancy of data. Each of these plexes must contain disk space from different disks to achieve redundancy. Volume Manager uses true mirrors, which means that all copies of the data are the same at all times. When a write occurs to a volume, all plexes must receive the write before the write is considered complete. Distribute mirrors across controllers to eliminate the controller as a single point of failure.
• 100. RAID-5 Layout
[Diagram: disk group, volume, and a single plex whose subdisks are striped across the VxVM disks; P marks the parity stripe units.]
P = Parity; a calculated value used to reconstruct data after disk failure.
RAID-5 Layout
A RAID-5 volume layout has the same attributes as a striped plex, but it includes one additional column of data that is used for parity. Parity provides redundancy. Parity is a calculated value used to reconstruct data after a failure. While data is being written to a RAID-5 volume, parity is calculated by performing an exclusive OR (XOR) procedure on the data. The resulting parity is then written to the volume. If a portion of a RAID-5 volume fails, the data that was on that portion of the failed volume can be re-created from the remaining data and parity information. RAID-5 volumes keep a copy of the data and calculated parity in a plex that is striped across multiple disks. Parity is spread equally across columns. Given a five-column RAID-5 layout where each column is 1 GB in size, the RAID-5 volume size is 4 GB: one column of space is devoted to parity, and the remaining four 1-GB columns are used for data. The default stripe unit size for a RAID-5 volume is 16K. Each column must be the same length but may be made from multiple subdisks of variable length. Subdisks used in different columns must not be located on the same physical disk. RAID-5 requires a minimum of three disks for data and parity. When implemented as recommended, an additional disk is required for the log. RAID-5 cannot be mirrored.
• 101. Comparing Volume Layouts
Advantages:
• Concatenation: Removes size restrictions; better utilization of free space; simplified administration
• Striping: Parallel data transfer; load-balancing; improved performance (if properly configured)
• Mirroring: Improved reliability and availability; improved read performance; fast recovery through logging
• RAID-5: Redundancy through parity; requires less space than mirroring; improved read performance; fast recovery through logging
Disadvantages:
• Concatenation: No redundancy; single disk failure causes volume failure
• Striping: No redundancy; single disk failure causes volume failure
• Mirroring: Requires more disk space; slightly slower write performance
• RAID-5: Slower write performance than mirroring; poor performance after a disk failure
Comparing Volume Layouts
Concatenation: Advantages
Removes size restrictions: Concatenation removes the restriction on size of storage devices imposed by physical disk size.
Better utilization of free space: Concatenation enables better utilization of free space on disks by providing for the ordering of available discrete disk space on multiple disks into a single addressable volume.
Simplified administration: System administration complexity is reduced because making snapshots and mirrors uses any size space, and volumes can be increased in size by any available amount.
Concatenation: Disadvantages
No protection against disk failure: Concatenation does not protect against disk failure. A single disk failure results in the failure of the entire volume.
Striping: Advantages
Improved performance through parallel data transfer: Improved performance is obtained by increasing the effective bandwidth of the I/O path to the data. This may be achieved by a single volume I/O operation spanning across a number of disks or by multiple concurrent volume I/O operations to more than one disk at the same time.
Load-balancing: Striping is also helpful in balancing the I/O load from multiuser applications across multiple disks.
• 102. Striping: Disadvantages
No redundancy: Striping alone offers no redundancy or recovery features.
Disk failure: Striping a volume increases the chance that a disk failure results in failure of that volume. For example, if you have three volumes striped across two disks, and one of the disks is used by two of the volumes, then if that one disk goes down, both volumes go down.
Mirroring: Advantages
Improved reliability and availability: With concatenation or striping, failure of any one disk makes the entire plex unusable. With mirroring, data is protected against the failure of any one disk. Mirroring improves the reliability and availability of a striped or concatenated volume.
Improved read performance: Reads benefit from having multiple places from which to read the data.
Mirroring: Disadvantages
Requires more disk space: Mirroring requires twice as much disk space, which can be costly for large configurations. Each mirrored plex requires enough space for a complete copy of the volume's data.
Slightly slower write performance: Writing to volumes is slightly slower, because multiple copies have to be written in parallel. The overall time the write operation takes is determined by the time needed to write to the slowest disk involved in the operation. The slower write performance of a mirrored volume is not generally significant enough to decide against its use. The benefit of the resilience that mirrored volumes provide outweighs the performance reduction.
RAID-5: Advantages
Redundancy through parity: With a RAID-5 volume layout, data can be re-created from remaining data and parity in case of the failure of one disk.
Requires less space than mirroring: RAID-5 stores parity information, rather than a complete copy of the data.
Improved read performance: RAID-5 provides similar improvements in read performance as a normal striped layout.
Fast recovery through logging: RAID-5 logging minimizes recovery time in case of disk failure.
RAID-5: Disadvantages
Slow write performance: The performance overhead for writes can be substantial, because a write can involve much more than simply writing to a data block. A write can involve reading the old data and parity, computing the new parity, and writing the new data and parity. If you have more than twenty percent writes, do not use RAID-5.
Very poor performance after a disk failure: After one column fails, all I/O performance goes down. This is not the case with mirroring, where a disk failure does not have any significant effect on performance.
• 103. Selecting a Layout Type: VEA
Specify volume attributes.
[Screenshot: Specify Volume Attributes window — volume name, comment, size, layout options (Concatenated, Striped, RAID-5, Concatenated Mirrored, Striped Mirrored), Enable FastResync, and Initialize zero.]
Creating Volumes with Various Layouts
You can create volumes with a variety of layouts. In VEA, in the Specify Volume Attributes window, select:
Layout: Select a layout type from the group of options. The default layout is concatenated.
Concatenated: The volume is created using one or more regions of specified disks.
Striped: The volume is striped across two or more disks. The default number of columns across which the volume is striped is two, and the default stripe unit size is 128 sectors (64K) on Solaris, AIX, and Linux; 128 sectors (128K) on HP-UX. You can specify different values.
Concatenated Mirrored and Striped Mirrored: These options denote layered volume layouts.
Mirror Info:
Mirrored: Mirroring is recommended. To mirror the volume, mark the Mirrored check box. Only striped or concatenated volumes can be mirrored. RAID-5 volumes cannot be mirrored.
Total mirrors: Type the total number of mirrors for the volume. A volume can have up to 32 plexes; however, the practical limit is 31. One plex is reserved by VxVM to perform restructuring or relocation operations.
• 104. Concatenated Volume: CLI
To create a concatenated volume:
vxassist -g datadg make datavol 10g
This command creates a concatenated volume called datavol with a length of 10 gigabytes, in the disk group datadg, using any available disks.
Creating a Concatenated Volume: CLI
By default, vxassist creates a concatenated volume that uses one or more sections of disk space. The vxassist command attempts to locate sufficient contiguous space on one disk for the volume. However, if necessary, the volume is spanned across multiple disks. VxVM selects the disks on which to create the volume.
Note: To guarantee that a concatenated volume is created, include the layout=nostripe attribute in the vxassist make command. Without the layout attribute, the default layout is used, which may have been changed by the creation of the /etc/default/vxassist file. For example:
vxassist -g datadg make datavol 10g layout=nostripe
If you want the volume to reside on specific disks, you can designate the disks by adding the disk media names to the end of the command. More than one disk can be specified.
vxassist [-g diskgroup] make volume_name length [disks...]
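For instance, a sketch that pins the concatenated volume to two particular disks; the disk media names come from the earlier examples:
# Create a 10 GB concatenated volume using only datadg02 and datadg03.
vxassist -g datadg make datavol 10g layout=nostripe datadg02 datadg03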
• 105. Striped Volume: CLI
To create a striped volume:
vxassist -g diskgroup make volume_name length layout=stripe [ncol=n] [stripeunit=size] [disks...]
Examples:
vxassist -g acctdg make payvol 2g layout=stripe ncol=3 !acctdg04
vxassist -g acctdg make expvol 2g layout=stripe ncol=3 stripeunit=256k acctdg01 acctdg02 acctdg03
Creating a Striped Volume: CLI
To create a striped volume, you add the layout type and other attributes to the vxassist make command.
layout=stripe designates the striped layout.
ncol=n designates the number of stripes, or columns, across which the volume is created. This attribute has many aliases. For example, you can also use nstripe=n or stripes=n. The minimum number of stripes in a volume is 2 and the maximum is 8. You can edit these minimum and maximum values in /etc/default/vxassist using the min_columns and max_columns attributes.
stripeunit=size specifies the size of the stripe unit to be used. The default is 64K.
To stripe the volume across specific disks, you can specify the disk media names at the end of the command. The order in which disks are listed on the command line does not imply any ordering of disks within the volume layout. To exclude a disk or list of disks, add an exclamation point (!) before the disk media names. For example, !datadg01 specifies that the disk datadg01 should not be used to create the volume.
  • 106. syrnarucc. To mirror a concatenated volume, you add the Layout erni r r o r attribute in the vxassist command. To specify more than two mirrors, you add the nmirror attribute. When creating a mirrored volume. the volume initialization process requires that the mirrors be synchronized. The vxassist command normally waits for the mirrors to be synchronized before returning to the system prompt. To run the process in the background. you add the -b option. Mirrored Volume: ell To create a mirrored volume: vxassist -g diskgroup [-b] make volume name length layout=mirror [nmirror=number] ,Ex~,!,plel!: ! Concatenated i and mirrored i Specify three t mirrors. vxassist -g datadg make datavol Sg layout;mirror vxassist -g datadg make datavol Sg layout=stripe,mirror nmirror=3 Creating a Mirrored and Logged Volume: CLI When you create a mirrored volume, you can add a dirty region log by adding the Loq t ype e d r I attribute: vxassist -g diskgroup [-b] make volume_name length layout=mirror logtype=drl [nlog=n] Specify Loqr.ypee d r I to enable dirty region logging. A log plex that consists of a single subdisk is created. If you plan to mirror the log. you can add more than one log plex by specifying a number of logs using the nLoq e n attribute. where n is the number of logs. To create a concatenated volume that is mirrored and logged: vxassist -g datadg make datavol Sm layout=mirror logtype=drl Note: Dirty regions logs are covered in a later lesson. vxassist -g datadg -b make datavol Sg layout=stripe,mirror nmirror=3 4-12 ; Run process in lbackground. Creating a Mirrored Volume: CLI Copynght ,; ~006 Symantec Corporauoo All fights reserved VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
• 107. Estimating Volume Size: CLI
To determine the largest possible size for a volume:
vxassist -g diskgroup maxsize attributes
Example:
vxassist -g datadg maxsize layout=raid5
Maximum volume size: 376832 (184Mb)
To determine how much a volume can expand:
vxassist -g diskgroup maxgrow volume_name
Example:
vxassist -g datadg maxgrow datavol
Volume datavol can be extended by 366592 to 1677312 (819Mb)
Estimating Volume Size: CLI
The vxassist command can determine the largest possible size for a volume that can currently be created with a given set of attributes. vxassist can also determine how much an existing volume can be extended under the current conditions. The maxsize command does not create the volume but returns an estimate of the maximum volume size. The output value is displayed in sectors, by default. If the volume with the specified attributes cannot be created, an error message is returned:
VxVM vxassist ERROR V-5-1-752 No volume can be created within the given constraints
The maxgrow command does not resize the volume but returns an estimate of how much an existing volume can be expanded. The output indicates the amount by which the volume can be increased and the total size to which the volume can grow. The output is displayed in sectors, by default.
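A common pattern is to ask maxsize first and then create the volume at or below the reported size; in this sketch the 184 MB figure is taken from the example output above:
# What is the largest RAID-5 volume that fits right now?
vxassist -g datadg maxsize layout=raid5
# Create a volume no larger than the reported maximum.
vxassist -g datadg make datavol 184m layout=raid5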
• 108. Observing Volume Layouts in VEA
Highlight a volume and select Actions->Layout View.
[Screenshot: Volume Layout window for a striped volume — volume type, size, state, number of columns and mirrors, stripe unit size, and the plex and subdisks with their sizes, columns, and offsets.]
Select View->Horizontal or View->Vertical to change the orientation of the diagram.
Volume Layout Window
The Volume Layout window displays a graphical view of the selected volume's layout, components, and properties. You can select objects or perform tasks on objects in the Volume Layout window. This window is dynamic, so the objects displayed in this window are updated automatically when the volume's properties change. To display the Volume Layout window, highlight a volume and select Actions->Layout View. The View menu changes the way objects are displayed in this window. Select View->Horizontal to display a horizontal layout and View->Vertical to display a vertical layout.
• 109. Volume to Disk Mapping Window
[Screenshot: Volume to Disk Mapping window for disk group datadg — a grid of volumes against disks, with triangle buttons to show or hide subdisks and dots marking where a volume uses a disk.]
Click a triangle to show or hide subdisks. Click a dot to highlight an intersecting row and column.
The Volume to Disk Mapping window displays a tabular view of volumes and their relationships to underlying disks. To display the Volume to Disk Mapping window, highlight a disk group and select Actions->Disk/Volume Map. To view subdisk layouts, click the triangle button to the left of the disk name, or select View->Expand All. To help identify the row and column headings in a large grid, click a dot in the grid to highlight the intersecting row and column.
• 110. Volume View Window
Highlight a volume and select Actions->Volume View.
[Screenshot: the Volume View window showing datavol01 (Type: Concat, Size: 1.000 GB, Mirrors: 1, Logged: No) and datavol02 (Type: Striped, Size: 1.000 GB, Mirrors: 1, Logged: No) with their underlying subdisks.]
Volume View Window
The Volume View window displays characteristics of the volumes on the disks. To display the Volume View window, select a volume or disk group and select Actions->Volume View. Display options in the Volume View window include:
Expand: Click the Expand button to display detailed information about volumes.
New volume: Click the New Volume button to invoke the New Volume wizard.
• 111. Disk View Window
Highlight a volume and select Actions->Disk View.
[Screenshot: the Disk View window showing the subdisk layouts and free space on disks datadg01 and datadg02, with Expand, Vol Details, and Projection buttons.]
Disk View Window
The Disk View window displays a close-up graphical view of the layout of subdisks in a volume. To display the Disk View window, select a volume or disk group and select Actions->Disk View. Display options in the Disk View window include:
Expand: Click the Expand button to display detailed information about all disks in the Disk View window.
Vol Details: Click the Vol Details button to include volume names, layout types, and volume status for each subdisk.
Projection: Click the Projection button to highlight objects associated with a selected subdisk or volume. Projection shows the relationships between objects by highlighting objects that are related to or part of a specific object.
Caution: You can move subdisks in the Disk View window by dragging subdisk icons to different disks or to gaps within the same disk. Moving subdisks reorganizes volume disk space and must be performed with care.
• 112. Creating a Layered Volume
How Do Layered Volumes Work?
[Diagram: a top-level volume whose plex is built from subvolumes; each subvolume is itself a volume containing mirrored plexes and subdisks on the underlying disks.]
• Volumes are constructed from subvolumes.
• The top-level volume is accessible to applications.
Advantages: improved redundancy and faster recovery times. Disadvantage: requires more VxVM objects.
What Is a Layered Volume?
VxVM provides two ways to mirror your data:
Original VxVM mirroring: With the original method of mirroring, data is mirrored at the plex level. The loss of a disk results in the loss of a complete plex. A second disk failure could result in the loss of a complete volume if the volume has only two mirrors. To recover the volume, the complete volume contents must be copied from backup.
Enhanced mirroring: VxVM 3.0 introduced support for an enhanced type of mirrored volume called a layered volume. A layered volume is a virtual Volume Manager object that mirrors data at a more granular level. To do this, VxVM creates subvolumes from traditional bottom-layer objects, or subdisks. These subvolumes function much like volumes and have their own associated plexes and subdisks. With this method of mirroring, data is mirrored at the column or subdisk level. Loss of a disk results in the loss of a copy of a column or subdisk within a plex. Further disk losses may occur without affecting the complete volume. Only the data contents of the column or subdisk affected by the loss of the disk need to be recovered. This recovery can be performed from an up-to-date mirror of the failed disk.
Note: Only VxVM versions 3.0 and later support layered volumes. To create a layered volume, you must upgrade the disk group that owns the layered volume to version 60 or later.
• 113. How Do Layered Volumes Work?
In a regular mirrored volume, top-level plexes consist of subdisks. In a layered volume, these subdisks are replaced by subvolumes. Each subvolume is associated with a second-level volume. This second-level volume contains second-level plexes, and each second-level plex contains one or more subdisks. In a layered volume, only the top-level volume is accessible as a device for use by applications.
Note: You can also build a layered volume from the bottom up by using the vxmake command. For more information, see the vxmake(1m) manual page.
Layered Volumes: Advantages
Improved redundancy: Layered volumes tolerate disk failure better than nonlayered volumes and provide improved data redundancy.
Faster recovery times: If a disk in a layered volume fails, a smaller portion of the redundancy is lost, and recovery and resynchronization times are usually quicker than for a nonlayered volume that spans multiple drives. For a stripe-mirror volume, recovery of a single subdisk failure requires resynchronization of only the lower plex, not the top-level plex. For a mirror-stripe volume, recovery of a single subdisk failure requires resynchronization of the entire plex (full volume contents) that contains the subdisk.
Layered Volumes: Disadvantages
Requires more VxVM objects: Layered volumes consist of more VxVM objects than nonlayered volumes. Therefore, layered volumes may fill up the disk group configuration database sooner than nonlayered volumes. When the configuration database is full, you cannot create more volumes in the disk group. With SF 5.0, the default size of the private region is 32 MB. Each VxVM object requires about 250 bytes.
Note: On the Solaris platform, in pre-4.x format, the private region size is rounded up to the cylinder boundary. With modern disks with large cylinder sizes, this size can be quite large. The private region can be made larger when a disk is initialized. The size cannot be changed once disks have been initialized.
• 114. Comparing Regular Mirroring with Enhanced Mirroring
Traditional Mirroring
[Diagram: a mirror-stripe volume with two striped plexes; one plex uses subdisks on disk01 and disk03, the other uses subdisks on disk02 and disk04, with a table of two-disk failure combinations and the resulting volume status.]
To understand the purpose and benefits of layered volume layouts, compare regular mirroring with the enhanced mirroring of layered volumes in a disk failure scenario.
Regular Mirroring
The example illustrates a regular mirrored volume layout called a mirror-stripe layout. Data is striped across two disks, disk01 and disk03, to create one plex, and that plex is mirrored and striped across two other disks, disk02 and disk04. If two drives fail, the volume survives 2 out of 6 (1/3) times. As more subdisks are added to each plex, the odds of a traditional volume surviving a two-disk failure approach (but never equal) 50 percent. If a disk fails in a mirror-stripe layout, the entire plex is detached, and redundancy is lost on the entire volume. When the disk is replaced, the entire plex must be brought up-to-date, or resynchronized.
• 115. Layered Volumes
The example illustrates a layered volume layout called a stripe-mirror layout. In this layout, VxVM creates underlying volumes that mirror each subdisk. These underlying volumes are used as subvolumes to create a top-level volume that contains a striped plex of the data. If two drives fail, the volume survives 4 out of 6 (2/3) times. In other words, the use of layered volumes reduces the risk of failure by 50 percent without the need for additional hardware. As more subvolumes are added, the odds of a volume surviving a two-disk failure approach 100 percent. For volume failure to occur, both subdisks that compose a subvolume must fail. If a disk fails, only the failing subdisk must be detached, and only that portion of the volume loses redundancy. When the disk is replaced, only a portion of the volume needs to be recovered, which takes less time.
Failed Subdisks | Stripe-Mirror (Layered) | Mirror-Stripe (Nonlayered)
1 and 2         | Down                    | Down
1 and 3         | Up                      | Up
1 and 4         | Up                      | Down
2 and 3         | Up                      | Down
2 and 4         | Up                      | Up
3 and 4         | Down                    | Down
When two disks fail, the layered volume survives 4/6, or 2/3 of the time.
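To see where the 2/6 and 4/6 figures come from, count the two-disk failure combinations directly (this worked example assumes the four-disk layouts shown above). With four disks there are C(4,2) = 6 equally likely failure pairs. The mirror-stripe volume survives only when both failed disks land in the same plex, pairs {1,3} and {2,4}, giving 2/6 = 1/3. The stripe-mirror volume fails only when both disks of one mirror pair fail, pairs {1,2} and {3,4}, so it survives the remaining 4 of 6 pairs, giving 4/6 = 2/3.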
• 116. Terminology for Mirrored Layouts
The four types of mirroring in VxVM:
• mirror-concat (Non-layered, RAID-0+1)
- The top-level volume contains more than one plex (mirror).
- Plexes are concatenated.
• mirror-stripe (Non-layered, RAID-0+1)
- The top-level volume contains more than one plex (mirror).
- Plexes are striped.
• concat-mirror (Layered, RAID-1+0)
- The top-level volume is a concatenated plex.
- Subvolumes are mirrored.
• stripe-mirror (Layered, RAID-1+0)
- The top-level volume is a striped plex.
- Subvolumes are mirrored.
Layered Volume Layouts
In general, use regular mirrored layouts for smaller volumes and layered layouts for larger volumes. By default in VxVM, a volume larger than 1 GB is created as a layered volume, unless you specify otherwise. Before you create layered volumes, you need to understand the terminology that defines the different types of mirrored layouts in VxVM.
mirror-concat: This layout mirrors data across concatenated plexes. The concatenated plexes can consist of subdisks of different sizes. When you create a simple mirrored volume that is less than 1 GB in size, a nonlayered mirrored volume is created by default.
mirror-stripe: This layout mirrors data across striped plexes. The striped plexes can consist of different numbers of subdisks.
concat-mirror: This volume layout contains a single plex consisting of one or more concatenated subvolumes. Each subvolume consists of two concatenated plexes (mirrors), which consist of one or more subdisks. If you have two subdisks in the top-level plex, a second subvolume is created, which is used as the second concatenated subdisk of the plex. In the VEA interface, the GUI term used for a layered, concatenated layout is Concatenated Mirrored. These volumes require at least two disks.
stripe-mirror: This volume layout stripes data across mirrored volumes. The difference between stripe-mirror and concat-mirror is that the top-level plex is striped rather than concatenated. Each mirrored subvolume must have the same number of disks. In the VEA interface, the GUI term used for a layered, striped layout is Striped Mirrored. Striped Mirrored volumes require at least four disks.
• 117. Creating Layered Volumes
VEA: In the New Volume Wizard, select Concatenated Mirrored or Striped Mirrored as the volume layout.
vxassist make:
vxassist -g datadg make datavol 10g layout=stripe-mirror
vxassist -g datadg make datavol 10g layout=concat-mirror
Note: To create simple mirrored volumes (nonlayered), you can use:
• layout=mirror-concat
• layout=mirror-stripe
Creating a Layered Volume: VEA
In the New Volume wizard, select one of the two layered volume layout types:
Concatenated Mirrored: The Concatenated Mirrored layout refers to a concat-mirror volume.
Striped Mirrored: The Striped Mirrored layout refers to a stripe-mirror volume.
Creating a Layered Volume: CLI
In the vxassist make syntax, you can specify any of the following layout types:
To create layered volumes: layout=concat-mirror or layout=stripe-mirror
To create simple mirrored volumes: layout=mirror-concat or layout=mirror-stripe
For striped volumes, you can specify other attributes, such as ncol=number_of_columns and stripeunit=size.
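Combining these attributes, a four-column layered striped volume with a 64 KB stripe unit could be created as follows (the volume name and sizes are hypothetical):
vxassist -g datadg make datavol 10g layout=stripe-mirror ncol=4 stripeunit=64k
A layout like this needs at least eight disks in the disk group: four columns, each mirrored across two disks.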
• 118. Viewing Layered Volumes
vxprint -rt vol01
v  vol01                     ENABLED ACTIVE ...
pl vol01-03     vol01        ENABLED ACTIVE ...
sv vol01-S01    vol01-03     vol01-L01 1 ...
v2 vol01-L01                 ENABLED ACTIVE ...
p2 vol01-P01    vol01-L01    ENABLED ACTIVE ...
s2 datadg05-02  vol01-P01    datadg05  0 ...
p2 vol01-P02    vol01-L01    ENABLED ACTIVE ...
s2 datadg03-02  vol01-P02    datadg03  0 ...
sv vol01-S02    vol01-03     vol01-L02 1 ...
(Callouts: v and pl are the top-level volume and plex; sv is a subvolume; v2, p2, and s2 are the second-level volume, plexes, and subdisks.)
Viewing a Layered Volume: VEA
To view the layout of a layered volume, you can use any of the methods for displaying volume information, including the:
Object views in the main window
Disk View window
Volume View window
Volume to Disk Mapping window
Volume Layout window
Viewing a Layered Volume: CLI
To view the configuration of a layered volume from the command line, you use the -r option of the vxprint command. The -r option ensures that subvolume configuration information for a layered volume is displayed. The -L option is also useful for displaying layered volume information when used with -r. -L displays related records of a volume containing subvolumes, but grouping is performed under any volume.
• 119. Allocating Storage for Volumes
With storage attributes, you can specify:
• Which storage devices are used by the volume
• How volumes are mirrored across devices
When creating a volume, you can:
• Include specific disks, controllers, enclosures, targets, or trays to be used for the volume.
• Exclude specific disks, controllers, enclosures, targets, or trays from being used for the volume.
• Mirror volumes across specific controllers, enclosures, targets, or trays. (By default, VxVM mirrors across different disks.)
Specifying Storage Attributes for Volumes
VxVM selects the disks on which each volume resides automatically, unless you specify otherwise. To create a volume on specific disks, you can designate those disks when creating a volume. By specifying storage attributes when you create a volume, you can:
Include specific disks, controllers, enclosures, targets, or trays to be used for the volume.
Exclude specific disks, controllers, enclosures, targets, or trays from being used for the volume.
Mirror volumes across specific controllers, enclosures, targets, or trays. (By default, VxVM does not permit mirroring on the same disk.)
By specifying storage attributes, you can ensure a high availability environment. For example, you can permit mirroring of a volume only on disks connected to different controllers and thereby eliminate the controller as a single point of failure.
Note: When creating a volume, all storage attributes that you specify for use must belong to the same disk group. Otherwise, VxVM does not use these storage attributes to create a volume.
• 120. Storage Attributes: Methods
VEA: In the New Volume wizard, select "Manually select disks for use by this volume" and select the disks and storage allocation policy.
CLI: Add storage attributes to vxassist make:
vxassist -g diskgroup make volume_name length [layout=layout] [mirror=ctlr|enclr|target] [!]storage_attributes ...
• Disks: datadg02
• Controllers: ctlr:c2 (prefix with ! to exclude, for example !ctlr:c2)
• Mirror across controllers: mirror=ctlr
• Enclosures: enclr:emc1
• Mirror across enclosures: mirror=enclr
• Targets: target:c2t4
• Mirror across targets: mirror=target
• Trays: c2tray2
For example, to exclude all disks that are on controller c2:
vxassist -g datadg make datavol 5g !ctlr:c2
Specifying Storage Attributes: VEA
You can specify that the volume is to be mirrored or striped across controllers, enclosures, targets, or trays.
Note: A tray is a set of disks within certain Sun arrays. This option may not be available on other platforms.
To exclude a disk, controller, enclosure, target, or tray, you add the exclusion symbol (!) before the storage attribute. For example, to exclude datadg02 from volume creation, you use the format: !datadg02.
For example, to create a volume on specific disks by creating a 5-GB volume called datavol on datadg03 and datadg04:
vxassist -g datadg make datavol 5g datadg03 datadg04
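As an illustrative combination of these attributes (the controller names c1 and c2 are hypothetical), you could restrict a mirrored volume to two specific controllers and force each mirror onto a different one:
vxassist -g datadg make datavol 5g layout=mirror mirror=ctlr ctlr:c1 ctlr:c2
This removes any single controller as a point of failure for the volume.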
• 121. Ordered Allocation
Ordered allocation enables you to control how columns and mirrors are laid out when creating a volume. With ordered allocation, storage administrators can override the built-in allocation defaults.
VEA: In the New Volume wizard, select "Manually select disks for use by this volume." Select the disks and the storage allocation policy and mark the Ordered check box.
CLI: Add the -o ordered option:
vxassist -g diskgroup [-o ordered] make volume_name length [layout=layout] ...
Specifying Ordered Allocation of Storage for Volumes
In addition to specifying which storage devices VxVM uses to create a volume, you can also specify how the volume is distributed on the specified storage. By using the ordered allocation feature of VxVM, you can control how volumes are laid out on specified storage. For example, if you are creating a three-column mirror-stripe volume using six specified disks, VxVM creates column 1 on the first disk, column 2 on the second disk, and column 3 on the third disk. Then, the mirror is created using the fourth, fifth, and sixth specified disks.
Without the ordered allocation option, VxVM selects disks in several ways, including the following:
vxconfigd selects a disk in the group which has no subdisks.
vxconfigd selects subdisks for a striped plex from disks already associated into striped plexes rather than disks associated into concat plexes.
vxconfigd selects a disk with an existing log plex for the log plex of another volume.
VxVM has default methods for space allocation, as indicated by the fsgen usage type (UTYPE) in the vxprint output. Storage administrators can override the built-in defaults. With ordered allocation, VxVM first concatenates subdisks in columns, then groups columns into striped plexes, and finally forms mirrors. Use the -o ordered option with the vxassist make command.
• 122. Ordered Allocation: Example
Specifying the order of columns:
vxassist -g datadg -o ordered make datavol 2g layout=stripe ncol=3 datadg03 datadg02 datadg01
Without using ordered allocation (no guarantee of disk order):
vxassist -g datadg make datavol 2g layout=stripe ncol=3 datadg03 datadg02 datadg01
Example 1: Order of Columns
To create a 10-GB striped volume, called datavol, with three columns striped across three disks:
vxassist -g datadg -o ordered make datavol 10g layout=stripe ncol=3 datadg03 datadg02 datadg01
Because the -o ordered option is specified, column 1 is placed on datadg03, column 2 is placed on datadg02, and column 3 is placed on datadg01. Without the -o ordered option, column 1 would be placed on datadg01, and so on.
Example 2: Order of Mirrors
To create a mirrored volume using datadg02 and datadg04:
vxassist -g datadg -o ordered make datavol 10g layout=mirror datadg04 datadg02
Because the -o ordered option is specified, the first mirror is placed on datadg04, and the second mirror is placed on datadg02. Without this option, the first mirror could be placed on either disk.
Note: There is no logical difference between the mirrors. However, by controlling the order of mirrors, you can associate plex names with specific disks (for example, datavol-01 with datadg04 and datavol-02 with datadg02). This level of control is significant when you perform mirror breakoff and disk group split operations. You can establish conventions that indicate to you which specific disks are used for the mirror breakoff operations.
• 123. Lesson Summary
• Key Points
This lesson described the advantages and disadvantages of volume layouts supported by VxVM. You learned how to create concatenated, striped, mirrored, and layered volumes. In addition, you learned how to allocate storage for a volume by specifying storage attributes and ordered allocation.
• Reference Materials
- VERITAS Volume Manager Administrator's Guide
- VERITAS Storage Foundation Release Notes
Lab 4: Selecting Volume Layouts
In this lab, you create simple concatenated volumes, striped volumes, and mirrored volumes. You also practice creating a layered volume and using ordered allocation while creating volumes.
For Lab Exercises, see Appendix A. For Lab Solutions, see Appendix B.
Labs and solutions for this lesson are located on the following pages:
Appendix A provides complete lab instructions: "Lab 4: Selecting Volume Layouts."
Appendix B provides complete lab instructions and solutions: "Lab 4 Solutions: Selecting Volume Layouts."
  • 125. Lesson 5 Making Basic Configuration Changes
• 126. Lesson Introduction
• Lesson 1: Virtual Objects
• Lesson 2: Installation and Interfaces
• Lesson 3: Creating a Volume and File System
• Lesson 4: Selecting Volume Layouts
• Lesson 5: Making Basic Configuration Changes
• Lesson 6: Administering File Systems
• Lesson 7: Resolving Hardware Problems
Lesson Topics and Objectives
After completing this lesson, you will be able to:
Topic 1: Administering Mirrored Volumes - Add a mirror to and remove a mirror from an existing volume, add a log, and change the volume read policy.
Topic 2: Resizing a Volume - Resize an existing volume by using VEA and from the command line.
Topic 3: Moving Data Between Systems - Deport a disk group from one system and import it on another system.
Topic 4: Renaming Disks and Disk Groups - Rename disks and disk groups.
Topic 5: Managing Old Disk Group Versions - Upgrade disk groups and convert non-CDS disk groups to CDS.
• 127. Administering Mirrored Volumes
Example Array Structure
LUNs are a virtual presentation. Therefore, you have to take the array configuration into account to understand where the actual data is placed.
[Diagram: an array with 14 disk slots; 12 active disks paired into 6 mirrored RAID groups, two spare disks, and twelve array-based LUNs presented to hosts.]
Example Array Structure
In an array, the LUNs are a virtual presentation. Therefore, you cannot know where in the array the actual data will be put. That means you have no control over the physical conditions. The array in the slide contains slots for 14 physical disks, and the configuration places 12 physical disks in the array. These physical disks are paired together into 6 mirrored RAID groups. In each RAID group, two logical units, or LUNs, are created, for twelve in all. These LUNs appear to hosts as SAN-based SCSI disks. The remaining two disks are used as spares in case one of the active disks fails.
• 128. When to Add a Mirror to a Volume
• To add redundancy if it is not provided at the hardware level.
• To eliminate the disk array as a single point of failure (SPOF) by mirroring across arrays.
• To provide disaster recovery across sites when there is a SAN connecting two or more sites.
• To improve concurrent read performance by adding mirrors with different I/O paths.
• To migrate data from one array to another.
[Diagram: a SAN with an old array holding the original data and a new array receiving a mirror of the data during migration.]
When to Add a Mirror to a Volume
Without Storage Foundation, moving data from one array to another requires downtime. Using Storage Foundation, you can mirror to a new array, ensure it is stable, and then remove the plexes from the old array. No downtime is necessary. These are the steps for migrating data using Storage Foundation:
1 Add the new array to the SAN.
2 Mirror volumes to the new array.
3 Remove plexes/LUNs from the old array.
4 Remove the old array.
This is useful in many situations, for example, if a company purchases a new array. With Storage Foundation, you:
1 Add the new array to the SAN.
2 Zone for the server to see the LUNs.
3 Rescan with VEA.
4 Add the LUNs from the new array to the disk group.
5 Mirror the volumes to the new array.
6 Remove the plexes on the old array.
7 Remove the LUNs that are on the old array from the disk group.
This method does not require downtime.
• 129. Adding and Removing Mirrors
Adding a Mirror:
• Only concatenated or striped volumes can be mirrored.
• By default, a mirror is created with the same plex layout as the original volume.
• Each mirror must reside on separate disks.
• All disks must be in the same disk group.
• A volume can have up to 32 plexes, or mirrors.
• Adding a mirror requires plex synchronization.
Removing a Mirror:
When a mirror is removed, the space occupied by that mirror can be used elsewhere.
Adding a Mirror
If a volume was not originally created as a mirrored volume, or if you want to add additional mirrors, you can add a mirror to an existing volume. By default, a mirror is created with the same plex layout as the plex already in the volume. For example, assume that a volume is composed of a single striped plex. If you add a mirror to the volume, VxVM makes that plex striped, as well. You can specify a different layout using VEA or from the command line. A mirrored volume requires at least two disks. You cannot add a mirror to a disk that is already being used by the volume. A volume can have multiple mirrors, as long as each mirror resides on separate disks. Only disks in the same disk group as the volume can be used to create the new mirror. Unless you specify the disks to be used for the mirror, VxVM automatically locates and uses available disk space to create the mirror. A volume can contain up to 32 plexes (mirrors); however, the practical limit is 31. One plex should be reserved for use by VxVM for background repair operations.
Removing a Mirror
When a mirror (plex) is no longer needed, you can remove it. You can remove a mirror to provide free space, to reduce the number of mirrors, or to remove a temporary mirror.
Caution: Removing a mirror results in loss of data redundancy. If a volume has only two plexes, removing one of them leaves the volume unmirrored.
• 130. Adding/Removing Mirrors
VEA:
• Select Actions->Mirror->Add.
• Select Actions->Mirror->Remove.
vxassist mirror:
vxassist -g diskgroup mirror volume_name [layout=layout_type] [disk_name]
vxassist -g datadg mirror datavol
vxassist remove mirror:
vxassist -g diskgroup remove mirror volume_name [!]dm_name
To remove the plex that contains a subdisk from the disk datadg02:
vxassist -g datadg remove mirror datavol !datadg02
To remove the plex that uses any disk except datadg02:
vxassist -g datadg remove mirror datavol datadg02
Adding a Mirror: VEA
Select: The volume to be mirrored
Navigation path: Actions->Mirror->Add
Input:
Number of mirrors to add: Type a number. Default is 1.
Choose the layout: Select from Concatenated or Striped.
Select disks to use: VxVM can select the disks, or you can choose specific disks. You can also mirror or stripe across controllers, trays, targets, or enclosures.
To verify that a new mirror was added, view the total number of copies of the volume as displayed in the main window. The total number of copies is increased by the number of mirrors added.
Adding a Mirror: CLI
To add a mirror onto a specific disk, you specify the disk name in the command:
vxassist -g datadg mirror datavol datadg03
Removing a Mirror: CLI
To remove a mirror, use vxassist remove mirror:
vxassist -g diskgroup remove mirror volume_name
You can also use vxplex:
vxplex -g diskgroup -o rm dis plex_name
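Putting these commands together, the array-migration scenario from earlier in this lesson might look like the following sketch (the disk media names newarray01 and oldarray01 are hypothetical):
vxassist -g datadg mirror datavol newarray01         # mirror the volume onto the new array
vxassist -g datadg remove mirror datavol !oldarray01 # then remove the plex on the old array
Because the new mirror is fully synchronized before the old plex is removed, the data stays online throughout the migration.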
• 131. Adding a Dirty Region Log (DRL) to a Volume
• The log keeps track of changed regions.
• If the system fails, only the changed regions of the volume must be recovered.
• Not enabled by default. When enabled, one log is created.
• You can create additional logs to mirror log data.
VEA:
• Actions->Log->Add
• Actions->Log->Remove
vxassist:
vxassist -g diskgroup addlog volume_name [logtype=drl] [nlog=n] [attributes]
vxassist -g diskgroup remove log volume
vxassist -g datadg addlog datavol logtype=drl
vxassist -g datadg remove log nlog=2 datavol
Adding a Log to a Volume
Logging in VxVM
By enabling logging, VxVM tracks changed regions of a volume. Log information can then be used to reduce plex synchronization times and speed the recovery of volumes after a system failure. Logging is an optional feature, but is highly recommended, especially for large volumes.
Dirty Region Logging
Dirty region logging (DRL) is used with mirrored volume layouts. DRL keeps track of the regions that have changed due to I/O writes to a mirrored volume. Prior to every write, a bit is set in a log to record the area of the disk that is being changed. In case of system failure, DRL uses this information to recover only the portions of the volume that need to be recovered. If DRL is not used and a system failure occurs, all mirrors of the volumes must be restored to a consistent state by copying the full contents of the volume between its mirrors. This process can be lengthy and I/O intensive.
When you enable logging on a mirrored volume, one log plex is created by default. The log plex uses space from disks already used for that volume, or you can specify which disk to use. To enhance performance, you should consider placing the log plex on a disk that is not already in use by the volume. To create a volume that is mirrored and logged:
vxassist -g datadg make datavol 5m layout=mirror logtype=drl
• 132. Volume Read Policies
[Diagram: the read policies. Round Robin alternates read I/O across plexes; Preferred Plex directs all reads to a designated plex; the default Selected Plex method checks whether the volume has exactly one striped plex; Siteread directs reads from a host at Site A to the plexes at Site A.]
Volume Read Policies with Mirroring
One of the benefits of mirrored volumes is that you have more than one copy of the data from which to satisfy read requests. The read policy for a volume determines the order in which plexes are accessed during I/O operations. VxVM provides the following read policies that you can specify to satisfy read requests:
Round robin: VxVM reads each plex in turn in "round-robin" manner for each nonsequential I/O detected. Sequential access causes only one plex to be accessed in order to take advantage of drive or controller read-ahead caching policies. If a read is within 256K of the previous read, then the read is sent to the same plex.
Preferred plex: VxVM reads first from a plex that has been named as the preferred plex. Read requests are satisfied from one specific plex, presumably the plex with the highest performance. If the preferred plex fails, another plex is accessed. For example, if you are mirroring in a campus environment and the local plex is faster than the remote one, setting the local plex as the preferred plex would increase performance.
Selected plex: This is the default read policy. Under the selected plex policy, Volume Manager chooses an appropriate read policy based on the plex configuration to achieve the greatest I/O throughput. If the mirrored volume has exactly one enabled striped plex, the read policy defaults to that plex; otherwise, it defaults to a round-robin read policy.
Siteread: VxVM reads preferentially from plexes at the locally defined site. This is the default policy for volumes in disk groups where site consistency has been enabled.
• 133. Setting the Volume Read Policy
VEA:
• Actions->Set Volume Usage
• Select from Based on layouts, Round robin, or Preferred.
vxvol rdpol:
vxvol -g diskgroup rdpol policy volume_name [plex]
Examples:
• To set the read policy to round robin:
vxvol -g datadg rdpol round datavol
• To set the read policy to read from a preferred plex:
vxvol -g datadg rdpol prefer datavol datavol-02
• To set the read policy to select a plex based on layouts:
vxvol -g datadg rdpol select datavol
Changing the Volume Read Policy: VEA
Select: A volume
Navigation path: Actions->Set Volume Usage
Input: Volume read policy: Select Based on layouts (the default, selected plex method), Round robin, Site local read, or Preferred. If you select Preferred, then you can also select the preferred plex from the list of available plexes.
Changing the Volume Read Policy: CLI
vxvol -g diskgroup rdpol round volume_name
vxvol -g diskgroup rdpol prefer volume_name preferred_plex
vxvol -g diskgroup rdpol select volume_name
• 134. Resizing a Volume
To resize a volume, you can:
• Specify a desired new volume size.
• Add to or subtract from the current volume size.
- Disk space must be available.
- VxVM assigns disk space, or you can specify disks.
Shrinking a volume enables you to use space elsewhere; VxVM returns the space to the free space pool.
If a volume is resized, its file system must also be resized.
• VxFS can be expanded or reduced while mounted.
• UFS/HFS can be expanded, but not reduced. HFS needs to be unmounted to be expanded.
• Ensure that the data manager application supports resizing.
Resizing a Volume
If users require more space on a volume, you can increase the size of the volume. If a volume contains unused space that you need to use elsewhere, you can shrink the volume. When the volume size is increased, sufficient disk space must be available in the disk group. When increasing the size of a volume, VxVM assigns the necessary new space from available disks. By default, VxVM uses space from any disk in the disk group, unless you define specific disks.
Resizing a Volume with a File System
Volumes and file systems are separate virtual objects. When a volume is resized, the size of the raw volume is changed. If a file system exists that uses the volume, the file system must also be resized. When you resize a volume using VEA or the vxresize command, the file system is also resized.
Resizing Volumes with Other Types of Data
For volumes containing data other than file systems, such as raw database data, you must ensure that the data manager application can support the resizing of the data device with which it has been configured.
• 135. Resizing a Volume: Methods
Method   | What Is Resized?
VEA      | Both volume and file system
vxresize | Both volume and file system
vxassist | Volume only
fsadm    | File system only (VxFS only)
Resizing a Volume and File System: Methods
To resize a volume from the command line, you can use either the vxassist command or the vxresize command. Both commands can expand or reduce a volume to a specific size or by a specified amount of space, with one significant difference: vxresize automatically resizes a volume's file system; vxassist does not. When using vxassist, you must resize the file system separately by using the fsadm command.
When you expand a volume, both commands automatically locate available disk space unless you designate specific disks to use. When you shrink a volume, the unused space becomes free space in the disk group. When you resize a volume, you can specify the length of a new volume in sectors, kilobytes, megabytes, or gigabytes. The unit of measure is added as a suffix to the length (s, k, m, or g). If no unit is specified, the default unit is sectors.
• 136. Resizing a Volume: VEA
Highlight a volume, and select Actions->Resize Volume.
[Screenshot: the Resize Volume dialog for datavol01 (current volume size 1500 MB), with fields to Add by, Subtract by, or set a New volume size, a Max Size button, and options to let Volume Manager decide which disks to use or to manually select disks.]
Specify the amount of space to add or subtract, or specify a new volume size. If desired, specify disks to be used for the additional space.
Resizing a Volume and File System: VEA
Select: The volume to be resized
Navigation path: Actions->Resize Volume
Input:
Add by: To increase the volume size by a specific amount of space, input how much space should be added to the volume.
Subtract by: To decrease the volume size by a specific amount of space, input how much space should be removed.
New volume size: To specify a new volume size, input the size.
Max Size: To determine the largest possible size, click Max Size.
Select disks for use by this volume: You can select specific disks to use and specify mirroring and striping options.
Force: You can force the resize if the size is being reduced and the volume is active.
Notes: When you resize a volume, if a VERITAS file system (VxFS) is mounted on the volume, the file system is also resized. The file system is not resized if it is unmounted.
• 137. Resizing a Volume: vxresize
vxresize [-b] [-F fs_type] -g diskgroup volume_name [+|-]new_length
Original volume size: 10 MB
vxresize -g mydg myvol 50m
vxresize -g mydg myvol +10m
vxresize -g mydg myvol 40m
vxresize -g mydg myvol -10m
File System Type | Mounted FS        | Unmounted FS
VxFS             | Expand and shrink | Not allowed
UFS (Solaris)    | Expand only       | Expand only
HFS (HP-UX)      | Not allowed       | Expand only
Resizing a Volume and File System: vxresize
The new_length operand can begin with a plus sign (+) to indicate that the new length is added to the current volume length. A minus sign (-) indicates that the new length is subtracted. -b runs the process in the background. The ability to expand or shrink a file system depends on the file system type and whether the file system is mounted or unmounted.
Example: The size of the volume myvol is 10 MB.
To extend myvol to 50 MB: vxresize -g mydg myvol 50m
To extend myvol by an additional 10 MB: vxresize -g mydg myvol +10m
To shrink myvol back to a length of 40 MB: vxresize -g mydg myvol 40m
To shrink myvol by an additional 10 MB: vxresize -g mydg myvol -10m
• 138. Resizing a Volume: vxassist
vxassist -g diskgroup {growto|growby|shrinkto|shrinkby} volume_name size
Original volume size: 20 MB
vxassist -g datadg growto datavol 40m
vxassist -g datadg growby datavol 10m
vxassist -g datadg shrinkto datavol 30m
vxassist -g datadg shrinkby datavol 10m
Resizing a Volume Only: vxassist
growto: Increases volume to specified length
growby: Increases volume by specified amount
shrinkto: Reduces volume to specified length
shrinkby: Reduces volume by specified amount
Resizing a File System Only: fsadm
You may need to resize a file system to accommodate a change in use, for example, when there is an increased need for space in the file system. You may also need to resize a file system as part of a general reorganization of disk usage, for example, when a large file system is subdivided into several smaller file systems. You can resize a VxFS file system while the file system remains mounted by using the fsadm command:
fsadm [-b newsize] [-r rawdev] mount_point
Using fsadm to resize a file system does not automatically resize the underlying volume. When you expand a file system, the underlying device must be large enough to contain the new larger file system.
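As a sketch of this two-step approach (the volume name, mount point, and sizes are hypothetical; sizes are in sectors, the default unit), you might first grow the volume with vxassist and then grow the mounted VxFS file system to match:
vxassist -g datadg growto datavol 204800                      # grow the volume to 204800 sectors (100 MB)
fsadm -b 204800 -r /dev/vx/rdsk/datadg/datavol /data          # grow the file system to the same size
With vxresize, these two steps would be performed as a single operation.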
• 139. Resizing a Dynamic LUN
• If you resize a LUN in the hardware, you should resize the VxVM disk corresponding to that LUN.
• Disk headers and other VxVM structures are updated to reflect the new size.
• Intended for devices that are part of an imported disk group.
VEA:
• Select the disk that you want to expand.
• Select Actions->Resize Disk.
CLI:
vxdisk [-f] -g diskgroup resize dm_name
Example:
vxdisk -g datadg resize datadg01
Resizing a Dynamic LUN
When you resize a LUN in the hardware, you should resize the VxVM disk corresponding to that LUN. You can use vxdisk resize to update disk headers and other VxVM structures to match a new LUN size. This command does not resize the underlying LUN itself.
• 140. Moving Data Between Systems
[Diagram: Computer A and Computer B, each with its own bootdg on a private SCSI bus, sharing a SCSI bus that holds the acctdg and engdg disk groups plus additional disks that belong to no disk group.]
Example: Disk Groups and High Availability
The example in the diagram represents a high availability environment. In the example, Computer A and Computer B each have their own bootdg on their own private SCSI bus. The two hosts are also on a shared SCSI bus. On the shared bus, each host has a disk group, and each disk group has a set of VxVM disks and volumes. There are additional disks on the shared SCSI bus that have not been added to a disk group. If Computer A fails, then Computer B, which is on the same SCSI bus as disk group acctdg, can take ownership or control of the disk group and all of its components.
• 141. Deporting a Disk Group
What is a deported disk group?
• The disk group and its volumes are unavailable.
• The disks cannot be removed.
• The disk group cannot be accessed until it is imported.
Before deporting a disk group:
• Unmount file systems.
• Stop volumes.
When you deport a disk group, you can specify:
• A new host
• A new disk group name
Deporting a Disk Group
A deported disk group is a disk group over which management control has been surrendered. The objects within the disk group cannot be accessed, its volumes are unavailable, and the disk group configuration cannot be changed. (You cannot access volumes in a deported disk group because the directory containing the device nodes for the volumes is deleted upon deport.) To resume management of the disk group, it must be imported. A disk group cannot be deported if any volumes in that disk group are in use. Before you deport a disk group, you must unmount file systems and stop any volumes in the disk group.
Deporting and Specifying a New Host
When you deport a disk group using VEA or CLI commands, you have the option to specify a new host to which the disk group is imported at reboot. If you know the name of the host to which the disk group will be imported, then you should specify the new host during the operation. If you do not specify the new host, then the disks could accidentally be added to another disk group, resulting in data loss. You cannot specify a new host using the vxdiskadm utility.
Deporting and Renaming
When you deport a disk group using VEA or CLI commands, you also have the option to rename the disk group when you deport it. You cannot rename a disk group when deporting using the vxdiskadm utility.
• 142. Deporting a Disk Group
Select Actions->Deport Disk Group.
[Screenshot: the Deport Disk Group dialog for datadg, with optional New name and New host fields.]
vxdiskadm: "Remove access to (deport) a disk group"
vxdg [-n new_name] [-h hostname] deport diskgroup
Deporting a Disk Group
Disks that were in the disk group now have a state of Deported. If the disk group was deported to another host, the disk state is Foreign.
Note: If you offline the disks, you must manually online the disks before you import the disk group. To online a disk, use the vxdiskadm option "Enable (online) a disk device."
Before deporting a disk group, unmount all file systems used within the disk group that is to be deported, and stop all volumes in the disk group:
umount mount_point
vxvol -g diskgroup stopall
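For example, handing datadg off to a second host might look like this sketch (the mount point /data and the host name hostB are hypothetical):
umount /data
vxvol -g datadg stopall
vxdg -h hostB deport datadg
After the deport completes, only hostB can autoimport the disk group at its next reboot.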
• 143. Importing a Disk Group
Importing a disk group reenables access to the disk group. When you import a disk group, you can:
• Specify a new disk group name.
• Clear host locks.
• Import as temporary.
• Force an import.
Importing a Deported Disk Group
All volumes are stopped by default after importing a disk group and must be started before data can be accessed.
Importing and Renaming
A deported disk group cannot be imported if another disk group with the same name has been created since the disk group was deported. You can import and rename a disk group at the same time.
Importing and Clearing Host Locks
When a disk group is created, the system writes a lock on all disks in the disk group. The lock ensures that dual-ported disks (disks that can be accessed simultaneously by two systems) are not used by both systems at the same time. If a system crashes, the locks stored on the disks remain, and if you try to import a disk group containing those disks, the import fails.
Importing As Temporary
A temporary import does not persist across reboots. A temporary import can be useful, for example, if you need to perform administrative operations on the temporarily imported disk group. VEA does not support temporary import.
Forcing an Import
A disk group import fails if the VxVM configuration daemon cannot find all of the disks in the disk group. If the import fails because a disk has failed, you can force the disk group to be imported. Forcing an import should always be performed with caution.
• 144. Importing a Disk Group
Select Actions->Import Disk Group.
[Screenshot: the Import Disk Group dialog for datadg, with a New name field, a Site Name field, and check boxes to Clear host ID, Force, and Start all volumes.]
Options include:
• Clearing host IDs at import
• Forcing an import
• Starting all volumes
vxdiskadm: "Enable access to (import) a disk group"
vxdg [-ftC] [-n new_name] import diskgroup
vxvol -g diskgroup startall
Importing a Disk Group
By default, when you import a disk group by using VEA, all volumes in the disk group are started automatically. By default, the vxdiskadm import option starts all volumes in the disk group. When you import a disk group from the command line, you must manually start all volumes.
A disk group must be deported from its previous system before it can be imported to the new system. During the import operation, the system checks for host import locks. If any locks are found, you are prompted to clear the locks. To temporarily rename an imported disk group, you use the -t option. This option imports the disk group temporarily and does not set the autoimport flag, which means that the import cannot survive a reboot.
To display all disk groups, including deported disk groups:
vxdisk -o alldgs list
DEVICE     TYPE          DISK      GROUP     STATUS
c1t2d0s2   auto:cdsdisk  datadg01  datadg    online
c1t2d1s2   auto:cdsdisk  -         (acctdg)  online
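Continuing the example, on the receiving host you might import the group, clear stale locks left by a crashed host, and rename it in one step (the new name is hypothetical):
vxdg -C -n newdatadg import datadg
vxvol -g newdatadg startall
The -C flag clears the host locks, and, as noted above, the volumes must then be started manually.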
• 145. Renaming Disks and Disk Groups
Changing the Disk Media Name
VxVM creates a unique disk media name for a disk when you add a disk to a disk group. Sometimes you may need to change a disk name to reflect changes of ownership or use of the disk. Renaming a disk does not change the physical disk device name. The new disk name must be unique within the disk group.
VEA:
• Select the disk that you want to rename.
• Select Actions->Rename Disk.
• Specify the original disk name and the new name.
vxedit rename:
vxedit -g diskgroup rename old_name new_name
Example:
vxedit -g datadg rename datadg01 datadg03
Notes:
• The new disk name must be unique within the disk group.
• Renaming a disk does not automatically rename subdisks on that disk.
Before You Rename a Disk
Before you rename a disk, you should carefully consider the change. VxVM names subdisks based on the disks on which they are located. A disk named datadg01 contains subdisks that are named datadg01-01, datadg01-02, and so on. Renaming a disk does not automatically rename its subdisks. Volumes are not affected when subdisks are named differently from the disks.
• 146. Renaming a Disk Group
In VEA, select Actions->Rename Disk Group.
CLI, two equivalent approaches:
vxdg -n new_name deport diskgroup, then vxdg import new_name
vxdg deport old_name, then vxdg -n new_name import old_name
Renaming a Disk Group
You cannot import or deport a disk group when the target system already has a disk group of the same name. To avoid name collision or to provide a more appropriate name for a disk group, you can rename a disk group. To rename a disk group when moving it from one system to another, you specify the new name during the deport or during the import operation. To rename a disk group without moving the disk group, you must still deport and reimport the disk group on the same system. The VEA interface has a Rename Disk Group menu option. On the surface, this option appears to be simply renaming the disk group. However, the option works by deporting and reimporting the disk group with a new name.
Using the CLI, for example, to rename the disk group datadg to mktdg:
vxdg -n mktdg deport datadg
vxdg import mktdg
vxvol -g mktdg startall
or
vxdg deport datadg
vxdg -n mktdg import datadg
vxvol -g mktdg startall
From the command line, you must restart all volumes in the disk group:
vxvol -g new_name startall
• 147. Managing Old Disk Group Versions
All disk groups have a version number based on the Storage Foundation release. Each disk group version supports a set of features. You must upgrade old disk group versions in order to use new features.
SF Release | Disk Group Version | Supported Disk Group Versions
3.2, 3.5   | 90                 | 20-90
4.0        | 110                | 20-110
4.1        | 120                | 20-120
5.0        | 140                | 20-140
To upgrade the disk group version:
In VEA, select the disk group to be upgraded, then select Actions->Upgrade Disk Group Version.
In CLI, type: vxdg [-T version] upgrade diskgroup
Upgrading a Disk Group
All disk groups have an associated version number. Each VxVM release supports a specific set of disk group versions and can import and perform tasks on disk groups with those versions. Some new features and tasks only work on disk groups with the current disk group version, so you must upgrade existing disk groups in order to perform those tasks. Once you upgrade a disk group, the disk group becomes incompatible with earlier releases of VxVM that do not support the new version. Upgrading the disk group version is an online operation. You cannot downgrade a disk group version.
Displaying the Disk Group Version
In the VEA Disk Group Properties window, if the Current version property is Yes, then the disk group version is current. In CLI, type:
vxdg list newdg
Group: newdg
dgid: 971216408.1133.cassius
version: 140
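For example, to bring a disk group created under an older release up to the current version (the disk group name is hypothetical):
vxdg list datadg | grep version    # check the current version, for example 90
vxdg upgrade datadg                # upgrade to the highest version this release supports
vxdg list datadg | grep version    # confirm the new version, for example 140
Remember that after the upgrade, hosts running older VxVM releases can no longer import the disk group.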
• 148. CDS Disk Groups
CDS disk groups are used for seamless transfer of data between different platforms, for example, for moving copies of data to a backup server that is on a different OS.
• CDS disk groups are created by default as of VxVM 4.x.
• Disk groups created before version 4.x are non-CDS.
CDS attribute: cds=on
DG version: version=110 (or higher)
A CDS disk group cannot have non-CDS disks in it. However, a CDS disk can be added to a non-CDS disk group as long as the disk group version supports it.
Requirements for CDS Disk Groups
The CDS attribute indicates that the disk group can be shared across platforms. CDS disk groups have fields indicating which platform-type created the disk group and which platform-type last imported the disk group, in addition to device quotas.
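To check whether an existing disk group carries the CDS attribute, and, assuming the cds=on syntax shown above, to set it explicitly, the commands might look like this (the disk group name is hypothetical):
vxdg list datadg | grep flags    # a CDS disk group shows the cds flag
vxdg -g datadg set cds=on        # turn the attribute on, provided all disks in the group are CDS disks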
• 149. Converting a Non-CDS Disk Group to a CDS Disk Group
• The disk group must be in good condition.
• Disk groups can be converted while online or offline.
Use the CDS conversion utility vxcdsconvert to convert a VxVM non-CDS disk group to a CDS disk group:
vxcdsconvert [-A] [-d defaultsfile] -g diskgroup [-o novolstop] alignment|alldisks|disk_name|group [attribute]
For example, to convert the disk group olddg to a CDS disk group while its volumes are still online, type:
vxcdsconvert -g olddg -o novolstop group
Requirements for Converting a Non-CDS Disk Group to a CDS Disk Group
The disk group must be in good condition:
No dissociated or disabled objects
No sparse plexes
No volumes requiring recovery or having pending snapshot operations
No objects in an error state
Disk groups can be converted online or offline:
Performing the conversion online, while use of the disk group continues, may greatly increase the amount of time required for conversion.
Performing the conversion offline requires minimal online time.
What Happens When a Disk Group Is Converted?
The following are some other factors to consider when converting a disk group:
The non-CDS disk group is upgraded (using the vxdg upgrade command).
If the non-CDS disk group has one or more disks that are not CDS disks, these disks are converted to CDS disks.
If the non-CDS disk group does not have a CDS-compatible disk group alignment, the objects go through relayout so that they are CDS-compatible.
Applications using disks that require format conversion are terminated for the duration of the disk conversion process (unless novolstop is used). Using novolstop may require objects to be evacuated and then unrelocated.
• 150. Lesson Summary
• Key Points
This lesson described how to add a mirror to and remove a mirror from an existing volume, change the volume read policy, and resize an existing volume. You also learned how to rename disks and disk groups, upgrade disk groups, and convert non-CDS disk groups to CDS.
• Reference Materials
- VERITAS Volume Manager Administrator's Guide
- VERITAS Storage Foundation Release Notes
Lab 5: Making Basic Configuration Changes
This lab provides practice in making basic configuration changes. In this lab, you add mirrors and logs to existing volumes, and change the volume read policy. You also resize volumes, rename disk groups, and move data between systems.
For Lab Exercises, see Appendix A. For Lab Solutions, see Appendix B.
Labs and solutions for this lesson are located on the following pages:
Appendix A provides complete lab instructions: "Lab 5: Making Basic Configuration Changes."
Appendix B provides complete lab instructions and solutions: "Lab 5 Solutions: Making Basic Configuration Changes."
  • 151. Lesson 6 Administering File Systems
• 152. symantec Lesson Introduction
• Lesson 1: Virtual Objects
• Lesson 2: Installation and Interfaces
• Lesson 3: Creating a Volume and File System
• Lesson 4: Selecting Volume Layouts
• Lesson 5: Making Basic Configuration Changes
• Lesson 6: Administering File Systems
• Lesson 7: Resolving Hardware Problems
symantec Lesson Topics and Objectives
Topic / After completing this lesson, you will be able to:
Topic 1: Comparing the Allocation Policies of VxFS and Traditional File Systems - Describe the benefits of VxFS extent-based allocation over traditional block-based allocation.
Topic 2: Using VERITAS File System Commands - Apply the appropriate VxFS commands from the command line.
Topic 3: Controlling File System Fragmentation - Defragment a VxFS file system.
Topic 4: Logging in VxFS - Perform logging in VxFS by using the intent log and the file change log.
• 153. symantec Traditional Block-Based Allocation
Block-based allocation:
• Allocates space to the next rotationally adjacent block
• Allocates blocks at random from a free block map
• Becomes less effective as the file system fills
• Requires extra disk I/O to write metadata
(Diagram: a file allocated one block at a time to scattered blocks n+8, n+13, n+20, and n+21.)
Comparing the Allocation Policies of VxFS and Traditional File Systems
Both VxFS and traditional UNIX file systems, such as UFS, use index tables to store information and location information about blocks used for files. However, VxFS allocation is extent-based, while other file systems are block-based.
Block-based allocation: File systems that use block-based allocation assign disk space to a file one block at a time.
Extent-based allocation: File systems that use extent-based allocation assign disk space in groups of contiguous blocks, called extents.
Example: UFS Block-Based Allocation
UFS allocates space for files one block at a time. When allocating space to a file, UFS uses the next rotationally adjacent block until the file is stored. UFS can perform at a level similar to an extent-based file system on sequential I/O by using a technique called block clustering. In UFS, the maxcontig file system tunable parameter can be used to cluster reads and writes together into groups of multiple blocks. Through block clustering, writes are delayed so that several small writes are processed as one large write. Sequential read requests can be processed as one large read through read-ahead techniques.
Block-based allocation requires extra disk I/O to write file system block structure information, or metadata. Metadata is always written synchronously to disk, which can significantly slow overall file system performance. Over time, block-based allocation produces a fragmented file system with random file access.
• 154. symantec VxFS Extent-Based Allocation
Extent: A set of contiguous blocks
An address-length pair consists of:
• Starting block
• Length of extent
• Extent size is based on the size of I/O write requests.
• When a file expands, another extent is allocated.
• Additional extents are progressively larger, reducing the total number of extents used by a file.
(Diagram: a file allocated as contiguous extents, for example blocks n through n+8, rather than as scattered individual blocks.)
VxFS Extent-Based Allocation
VERITAS File System selects a contiguous range of file system blocks, called an extent, for inclusion in a file. The number of blocks in an extent varies and is based on either the I/O pattern of the application or explicit requests by the user or programmer. Extent-based allocation enables larger I/O operations to be passed to the underlying drivers. VxFS attempts to allocate each file in one extent of blocks. If this is not possible, VxFS attempts to allocate all extents for a file close to each other.
Each file is associated with an index block, called an inode. In an inode, an extent is represented as an address-length pair, which identifies the starting block address and the length of the extent in logical blocks. This enables the file system to directly access any block of the file.
VxFS automatically selects an extent size by using a default allocation policy that is based on the size of I/O write requests. The default allocation policy attempts to balance two goals:
Optimum I/O performance through large allocations
Minimal file system fragmentation through allocation from space available in the file system that best fits the data
The first extent allocated is large enough for the first write to the file. Typically, the first extent is the smallest power of 2 that is larger than the size of the first write, with a minimum extent allocation of 8K. Additional extents are progressively larger, doubling the size of the file with each new extent. This method reduces the total number of extents used by a single file.
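VxFS also provides commands for inspecting and requesting extent allocations directly. A brief sketch using the VxFS getext and setext utilities, assuming a file /mnt/datafile on a mounted VxFS file system (the path, flag usage, and sizes are illustrative rather than a definitive reference):

setext -e 2048 /mnt/datafile   # request a fixed extent size of 2048 blocks for the file
getext /mnt/datafile           # display the file's extent attributes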
• 155. Using VERITAS File System Commands
You can generally use VERITAS File System (VxFS) as an alternative to other disk-based, OS-specific file systems, except for the file systems used to boot the system. File systems used to boot the system are mounted read-only in the boot process, before the VxFS driver is loaded. VxFS can be used in place of:
UNIX File System (UFS) on Solaris, except for root, /usr, /var, and /opt.
Hierarchical File System (HFS) on HP-UX, except for /stand.
Journaled File System (JFS) and Enhanced Journaled File System (JFS2) on AIX, except for root and /usr.
Extended File System Version 2 (EXT2) and Version 3 (EXT3) on Linux, except for root, /boot, /etc, /lib, /var, and /usr.
Using VxFS Commands
• VxFS can be used as the basis for any file system except for file systems used to boot the system.
• Specify directories in the PATH environment variable to access VxFS-specific commands.
• VxFS uses standard file system management syntax:
command [fs_type] [generic_options] [-o VxFS_options] [special|mount_point]
• Use the file system switchout to access VxFS-specific versions of standard commands. Without the file system switchout, the file system type is taken from the default specified in the default file system file. To use VxFS as your default, change this file to contain vxfs.
Location of VxFS Commands
Solaris: /opt/VRTSvxfs/sbin, /usr/lib/fs/vxfs, /etc/fs/vxfs, /opt/VRTS/bin
HP-UX: /opt/VRTS/bin, /sbin/fs, /usr/lbin/fs
AIX: /opt/VRTSvxfs/sbin, /usr/lib/fs/vxfs, /etc/fs/vxfs
Linux: /sbin, /usr/lib/fs/vxfs
Specify these directories in the PATH environment variable.
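For example, to make the VxFS-specific commands available in a Bourne-style shell on Solaris, you might append the directories listed above to PATH (a sketch; substitute the directories for your platform):

PATH=$PATH:/opt/VRTS/bin:/opt/VRTSvxfs/sbin:/usr/lib/fs/vxfs
export PATH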
• 156. General File System Command Syntax
To access VxFS-specific versions, or wrappers, of standard commands, you use the Virtual File System switchout mechanism followed by the file system type, vxfs. The switchout mechanism directs the system to search the appropriate directories for VxFS-specific versions of commands.
Platform / File System Switchout
Solaris: -F vxfs
HP-UX: -F vxfs
AIX: -V vxfs (or -v vxfs when used with crfs)
Linux: -t vxfs
Using VxFS Commands by Default
If you do not use the switchout mechanism, then the file system type is taken from the default specified in the OS-specific default file system file. If you want VERITAS File System to be your default file system type, then you change the default file system file to contain vxfs.
Platform / Default File System File
Solaris: /etc/default/fs
HP-UX: /etc/default/fs
AIX: /etc/vfs
Linux: /etc/default/fs
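On Solaris, for instance, the default file system file typically contains the line LOCAL=ufs, so making VxFS the default is a one-line edit (a sketch, assuming the stock Solaris file contents):

grep LOCAL /etc/default/fs   # typically shows: LOCAL=ufs
# Edit the file so that the line reads LOCAL=vxfs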
• 157. symantec VxFS-Specific mkfs Options
mkfs [fs_type] [-o specific_options] special
-o bsize=n
Sets logical block size
Default: 1024 bytes (1K) in most cases
Cannot be changed after creation
In most cases, the default is best. Resizing the file system does not change the block size.
-o logsize=n
Sets size of logging area
Default depends on file system size.
Default is sufficient for most workloads.
Log size can be changed after creation using fsadm.
-o N
Provides information only
Does not create the file system
-o largefiles|nolargefiles
Supports files >= 2 gigabytes (or >= 8 million files)
Default: largefiles
-o version=n
Specifies layout version
Valid values are 4, 5, 6, and 7.
Default: Version 7
Using mkfs Command Options
You can set a variety of file system properties when you create a VERITAS file system by adding VxFS-specific options to the mkfs command.
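Putting these options together, a hedged example of creating a VxFS file system on an existing volume (the disk group and volume names are illustrative; both options shown match the defaults):

mkfs -F vxfs -o bsize=1024,largefiles /dev/vx/rdsk/datadg/datavol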
  • 158. symantec Other VxFS Commands Mount options: mount -r ... mount -v mount -p Mounts as read only Displays mounted file systems Displays in file system table format (Not on Linux) Mounts all in file system tablemount -a Unmount options: umount Imydata umount -a umount -0 force Irnydata Unmounts a file system Unmounts all mounted file systems Forces an unmount Display file system type: fstyp -v Idev/vx/dsk/datadg/datavol Display free space: df -F vxfs /mydata Identifying File System Type If you do not know the tile system type of a particular tile system. you can determine the tile system type by using the fstyp command. You can use the fstyp command to describe either a mounted or unmounted tile system. In YEA, right-dick a tile system in the object tree, and select Properties. The tile system type is displayed in the File System Properties window. Identifying Free Space To report the number of free disk blocks and inodcs fur a YxFS File System, you use the d f command. The d f command displays the number of free blocks and free inodes in a tile system or directory by examining thc counts kept in the superblocks. Extents smaller than XK may not be usable for all types of allocation, so the df command docs not count tree blocks in extents below 8K when reporting the total number of free blocks. In YEA. right-click a file system. and select Properties to display tree space and usage information. 6-8 COpy light 't: 2006 Svmautnc Corporation All rights reserved VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
• 159. symantec Fragmentation
Degree of fragmentation depends on:
• File system usage
• File system activity patterns
(Diagram: free space in an initial allocation, after fragmentation, and after defragmentation.)
Fragmentation types:
• Directory
• Extent
Controlling File System Fragmentation
In a VERITAS file system, when free resources are initially allocated to files, they are aligned in the most efficient order possible to provide optimal performance. On an active file system, the original order is lost over time as files are created, removed, and resized. As space is allocated and deallocated from files, the available free space becomes broken up into fragments. This means that space has to be assigned to files in smaller and smaller extents. This process is known as fragmentation. Fragmentation leads to degraded performance and availability. VxFS provides online reporting and optimization utilities to enable you to monitor and defragment a mounted file system. These utilities are accessible through the file system administration command, fsadm.
Types of Fragmentation
VxFS addresses two types of fragmentation:
Directory fragmentation: As files are created and removed, gaps are left in directory inodes. This is known as directory fragmentation. Directory fragmentation causes directory lookups to become slower.
Extent fragmentation: As files are created and removed, the free extent map for an allocation unit changes from having one large free area to having many smaller free areas. Extent fragmentation occurs when files cannot be allocated in contiguous chunks and more extents must be referenced to access a file. In a case of extreme fragmentation, a file system may have free space, none of which can be allocated.
• 160. Monitoring Fragmentation
To monitor directory fragmentation:
fsadm -D /mnt1
        Dirs      Total    Immed   Immeds   Dirs to   Blocks to
        Searched  Blocks   Dirs    to Add   Reduce    Reduce
total   486       99       388     6        6         6
A high total in the Dirs to Reduce column indicates fragmentation.
To monitor extent fragmentation:
fsadm -E /home
(In VEA, select File System->Properties->Statistics.)
% Free blocks in extents smaller than 64 blks: 8.35
% Free blocks in extents smaller than 8 blks: 4.16
% blks allocated to extents 64 blks or larger: 45.81
Output displays percentages of free and allocated blocks per extent size.
Running Fragmentation Reports
You can monitor fragmentation in a VERITAS file system by running reports that describe fragmentation levels. You use the fsadm command to run reports on both directory and extent fragmentation. The df command, which reports on file system free space, also provides information useful in monitoring fragmentation.
Interpreting Fragmentation Reports
In general, for optimum performance, the percentage of free space in a file system should not fall below 10 percent. A file system with 10 percent or more free space has less fragmentation and better extent allocation. A badly fragmented file system will have one or more of the following characteristics:
Greater than 5 percent of free space in extents of less than 8 blocks in length
More than 50 percent of free space in extents of less than 64 blocks in length
Less than 5 percent of the total file system size available as free extents in lengths of 64 or more blocks
• 161. symantec Defragmenting a File System
fsadm [-d] [-D] [-e] [-E] [-t time] [-p passes] mount_point
During directory reorganization:
• Valid entries are moved to the front.
• Directories are packed into the inode area.
• Directories are placed before other files.
• Entries are sorted by access time.
Example: fsadm -d -D /mnt1
During extent reorganization:
• Small files are made contiguous.
• Large files are built from large extents.
• Small, recent files are moved near the inodes.
• Large, old files are moved to the end of the allocation unit.
• Free space is clustered in the center of the allocation unit.
Example: fsadm -e -E -s /mnt1
In VEA, highlight a file system, and select Actions->Defrag File System.
VxFS Defragmentation
You can use the online administration utility fsadm to defragment, or reorganize, file system directories and extents. The fsadm utility defragments a file system mounted for read/write access by:
Removing unused space from directories
Making all small files contiguous
Consolidating free blocks for file system use
Only a privileged user can reorganize a file system.
Defragmenting Extents
Entries are sorted by the time of last access.
Other fsadm Defragmentation Options
If you specify both -d and -e, directory reorganization is always completed before extent reorganization. If you use the -D and -E options with the -d and -e options, fragmentation reports are produced both before and after the reorganization. You can use the -t and -p options to control the amount of work performed by fsadm, either in a specified time or by a number of passes. By default, fsadm runs five passes. If both -t and -p are specified, fsadm exits if either of the terminating conditions is reached.
• 162. symantec Scheduling Defragmentation
• The frequency of defragmentation depends on usage, activity patterns, and the importance of performance.
• Run defragmentation on demand or as a cron job:
- Daily or weekly for frequently used file systems
- Monthly for infrequently used file systems
• Adjust defragmentation intervals based on reports.
• To defragment using VEA, highlight a file system and select Actions->Defrag File System.
Scheduling Defragmentation
The best way to ensure that fragmentation does not become a problem is to defragment the file system on a regular basis. The frequency of defragmentation depends on file system usage, activity patterns, and the importance of file system performance. In general, follow these guidelines:
Schedule defragmentation during a time when the file system is relatively idle.
For frequently used file systems, you should schedule defragmentation daily or weekly.
For infrequently used file systems, you should schedule defragmentation at least monthly.
Full file systems tend to fragment and are difficult to defragment. You should consider expanding the file system.
To determine the defragmentation schedule that is best for your system, select what you think is an appropriate interval for running extent reorganization and run the fragmentation reports both before and after the reorganization. If the degree of fragmentation is approaching the bad fragmentation figures, then the interval between fsadm runs should be reduced. If the degree of fragmentation is low, then the interval between fsadm runs can be increased. You should schedule directory reorganization for file systems when the extent reorganization is scheduled. The fsadm utility can run on demand and can be scheduled regularly as a cron job. The defragmentation process can take some time. You receive an alert when the process is complete. A sample cron entry appears below.
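As an illustration, a weekly defragmentation can be driven from cron. A sketch of a crontab entry, assuming the mount point /mnt1 and the Solaris command location (both are placeholders for your environment):

# Run directory and extent reorganization every Sunday at 2:00 a.m.
0 2 * * 0 /opt/VRTS/bin/fsadm -d -e /mnt1 > /var/tmp/fsadm.out 2>&1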
• 163. symantec Testing Performance Using vxbench
vxbench_platform -w workload [options] filename
Example: Sequential write
vxbench_platform -w write -i iosize=8,iocount=131072 /mnt/testfile01
total: 0.560 sec 14623.53 KB/s cpu: 0.10 sys 0.01 user
The output displays elapsed time in seconds, throughput in KB/second, and CPU time for the system and the user in seconds.
Example: Random write
vxbench_platform -w rand_write -i iosize=8,iocount=131072,maxfilesize=1048576 /mnt/testfile01
Note: Separate suboptions using commas with no spaces.
Benchmarking Using vxbench
What Is Benchmarking?
Benchmarking is a testing technique that enables you to measure performance based on a set of standards, or benchmarks. You can use benchmarking techniques to try to predict the performance of a new file system configuration or to analyze the performance of an existing file system.
What Is vxbench?
VERITAS engineering developed a benchmarking tool called vxbench that enables you to create different combinations of I/O workloads. The vxbench program is installed as part of the VRTSspt software installation and exists under the /opt/VRTSspt/FS/VxBench directory.
Notes on Testing Performance
The vxbench program applies a workload to a file system and measures performance based on how long file system operations take. If anything else is using the file system at the same time, then the vxbench performance reports are affected.
For sequential workloads: iosize x iocount = size of the file.
The iosize and maxfilesize parameters are defined in units of 1K; therefore, iosize=8 defines a size of 8K.
• 164. The vxbench_platform Command
In the syntax, you specify the command followed by a type of workload. Valid workloads are:
read         Performs a sequential read of the test files
write        Performs a sequential write of the test files
rand_read    Performs a random read of the test files
rand_write   Performs a random write of the test files
rand_mixed   Performs a mix of random reads and writes
mmap_read    Uses mmap to read the test files
mmap_write   Uses mmap to overwrite the test files
After specifying the type of workload, you can add specific options that characterize the test that you want to perform. Finally, you specify the name of the file on which to run the test. If you specify multiple filenames, vxbench_platform runs tests in parallel to each file, which simulates multiple simultaneous users. If you use the option that specifies multiple threads, then each simulated user runs multiple threads. The total number of I/O threads is the number of users multiplied by the number of threads.
Command Options
By adding options to the vxbench_platform command, you can simulate a wide variety of I/O environments. The following table describes some of these options and their uses. You can display a complete list of vxbench_platform command options by typing vxbench_platform -h.
Option / Use
-h               Prints a detailed help message
-P               Uses processes for users and uses threads for multithreaded I/O (This is the default option.)
-p               Uses processes for users and for multithreaded I/O
-t               Uses threads for users and for multithreaded I/O
-m               Locks I/O buffers in memory
-s               For multiuser tests, only prints summary results
-v               For multithreaded tests, prints per-thread results
-k               Prints throughput in kilobytes/second (This is the default option.)
-M               Prints throughput in megabytes/second
-i [suboptions]  Specifies suboptions describing the test you want to perform
vxbench is included in the VRTSspt package.
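For instance, to measure sequential read throughput against the same test file used in the write examples (the file must already exist and be at least iosize x iocount KB long):

vxbench_platform -w read -i iosize=8,iocount=131072 /mnt/testfile01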
• 165. symantec Logging in VxFS
The intent log records pending file system changes before metadata is changed.
If the system crashes, the intent log is replayed by VxFS fsck.
(Diagram: the intent log and structural files within the file system.)
Role of the Intent Log
A file system may be left in an inconsistent state after a system failure. Recovery of structural consistency requires examination of file system metadata structures. VERITAS File System provides fast file system recovery after a system failure by using a tracking feature called intent logging or journaling. Intent logging is the process by which intended changes to file system metadata are written to a log before changes are made to the file system structure. Once the intent log has been written, the other updates to the file system can be written in any order. In the event of a system failure, the VxFS fsck utility replays the intent log to nullify or complete file system operations that were active when the system failed.
Traditionally, the length of time taken for recovery using fsck was proportional to the size of the file system. For large disk configurations, running fsck is a time-consuming process that checks, verifies, and corrects the entire file system. The VxFS version of the fsck utility performs an intent log replay to recover a file system without completing a full structural check of the entire file system. The time required for log replay is proportional to the log size, not the file system size. Therefore, the file system can be recovered and mounted seconds after a system failure. Intent log recovery is not readily apparent to users or administrators, and the intent log can be replayed multiple times with no adverse effects.
Note: Replaying the intent log may not completely recover the damaged file system structure if the disk suffers a hardware failure. Such situations may require a complete system check using the VxFS fsck utility.
• 166. symantec Maintaining VxFS Consistency
To check file system consistency by using the intent log for the VxFS file system on the volume datavol:
fsck [fs_type] /dev/vx/rdsk/datadg/datavol
To perform a full check without using the intent log:
fsck [fs_type] -o full,nolog /dev/vx/rdsk/datadg/datavol
To check two file systems in parallel using the intent log:
fsck [fs_type] -o p /dev/rdsk/c1t2d0s4 /dev/rdsk/c1t0d0s5
To perform a file system check using the VEA GUI, highlight an unmounted file system, and select Actions->Check File System.
Maintaining File System Consistency
You use the VxFS-specific version of the fsck command to check the consistency of and repair a VxFS file system. The fsck utility replays the intent log by default, instead of performing a full structural file system check, which is usually sufficient to set the file system state to CLEAN. You can also use the fsck utility to perform a full structural recovery in the unlikely event that the log is unusable. The syntax for the fsck command is:
fsck [fs_type] [generic_options] [-y|-Y] [-n|-N] [-o full,nolog] special
For a complete list of generic options, see the fsck(1m) manual page. Some of the generic options include:
Option / Description
-m    Checks, but does not repair, a file system before mounting
-n|N  Assumes a response of no to all prompts by fsck (This option does not replay the intent log and performs a full fsck.)
-v    Echoes the expanded command line but does not execute the command
-y|Y  Assumes a response of yes to all prompts by fsck (If the file system requires a full fsck after the log replay, then a full fsck is performed.)
-o p can only be run with log fsck, not with full fsck.
• 167. symantec Resizing the Intent Log
• Intent log size can be changed using fsadm:
fsadm [-F vxfs] -o logsize=size[,logvol=vol_name] mount_point
(Specify a new log size; optionally place the log on a separate device.)
• Use the fsadm -L mount_point command to get detailed information on the current intent log.
• Larger log sizes may improve performance for intensive synchronous writes, but may increase:
- Recovery time
- Memory requirements
- Log maintenance time
In VEA, highlight a file system, and select Actions->Set Intent Log Options (not available on HP-UX).
Resizing the Intent Log
The VxFS intent log is allocated when the file system is first created. The size of the intent log is based on the size of the file system: the larger the file system, the larger the intent log.
Default log size: Based on file system size; in the range of 256K to 64 MB
Default maximum log size: 64 MB (Version 6 and 7 layouts); 16 MB (Version 4 and 5 layouts)
With the Version 6 disk layout, you can dynamically increase or decrease the intent log size using the log option of the fsadm command. The allocation can be directed to a specified intent logging device, as long as the device exists and belongs to the same volume set as the file system.
Increasing the size of the intent log can improve system performance because it reduces the number of times the log wraps around. However, increasing the intent log size can lead to greater times required for a log replay if there is a system failure.
A large log provides better performance on metadata-intensive workloads. Memory requirements for log maintenance increase as the log size grows. The log size should not be more than 50 percent of the physical memory size of the system.
A small log uses less space on the disk and leaves more room for file data. For example, setting a log size smaller than the default log size may be appropriate for a small floppy device. On small systems, you should ensure that the log size is not greater than half the available swap space.
Note: The logvol= option to place the intent log on a separate volume can only be used with multi-volume file systems (file systems on volume sets).
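For example, a hedged sketch of inspecting and then resizing the intent log of a mounted file system (the mount point and size shown are illustrative):

fsadm -F vxfs -L /mnt1                 # display details of the current intent log
fsadm -F vxfs -o logsize=16384 /mnt1   # set a new intent log size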
• 168. symantec Logging mount Options
mount -F vxfs [-o specific_options]
-o log       All structural changes logged (highest integrity)
-o delaylog  Default; some logging delayed; improves performance
-o tmplog    Most logging delayed; great performance improvement, but changes could be lost
(The options trade integrity against performance, with log at one end and tmplog at the other.)
Controlling Logging Behavior
VERITAS File System provides VxFS-specific logging options that you can use when mounting a file system to alter default logging behavior. By default, when you mount a VERITAS file system, the -o delaylog option is used with the mount command. With this option, some system calls return before the intent log is written. This logging delay improves the performance of the system, and this mode approximates traditional UNIX guarantees for correctness in case of system failures. You can specify other mount options to change logging behavior to further improve performance at the expense of reliability.
Selecting mount Options for Logging
You can add VxFS-specific mount options to the standard mount command using -o in the syntax:
mount [-F vxfs] [generic_options] [-o specific_options] special mount_point
Logging mount options include:
• -o log
• -o delaylog
• -o tmplog
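As an illustration, you might mount one file system with full logging where integrity matters most, and another with tmplog where it holds only re-creatable scratch data (the device and mount point names, including scratchvol, are hypothetical):

mount -F vxfs -o log /dev/vx/dsk/datadg/datavol /mydata
mount -F vxfs -o tmplog /dev/vx/dsk/datadg/scratchvol /scratch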
• 169. symantec Logging and Performance
To select the best logging mode for your environment:
• Understand the different logging options.
• Test sample loads and compare performance results.
• Consider the type of operations performed in addition to the workload.
• Performance of I/O to devices can improve if writes are performed in a particular size, or in a multiple of that size. To specify an I/O size to be used for logging, use the mount option:
-o logiosize=size
• Place the intent log on a separate volume and disk.
Logging and VxFS Performance
In environments where data reliability and integrity is of the highest importance, logging is essential. However, logging does incur performance overhead. If maximum data reliability is less important than maximum performance, then you can experiment with logging mount options. When selecting mount options for logging to try to improve performance, follow the guidelines above.
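For instance, a hedged example that keeps the default delaylog mode but matches the logging I/O size to a device that performs best with 4K writes (the size and names are illustrative):

mount -F vxfs -o delaylog,logiosize=4096 /dev/vx/dsk/datadg/datavol /mydata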
• 170. symantec File Change Log
Tracks changes to files and directories in a file system for use by backup utilities, webcrawlers, search engines, and replication programs. In contrast to the intent log, the FCL is not used for recovery.
Location: mount_point/lost+found/changelog
To activate/deactivate an FCL for a mounted file system:
fcladm on|off mount_point (Default is off.)
To remove an FCL (FCL must be off first):
fcladm rm mount_point
To obtain the current FCL state for a mounted file system:
fcladm state mount_point
To print the file change log:
fcladm print offset mount_point (for example, an offset of 0 prints from the start of the log)
To translate the log entries in inodes to full paths:
vxlsino inode_number mount_point
File Change Log
The VxFS File Change Log (FCL) is another type of log that tracks changes to files and directories in a file system. Applications that can make use of the FCL are those that are typically required to scan an entire file system to discover changes since the last scan, such as backup utilities, webcrawlers, search engines, and replication programs.
The File Change Log records file system changes such as creates, links, unlinks, renaming, data appended, data overwritten, data truncated, extended attribute modifications, holes punched, and other file property updates.
Note: The FCL records only that data has changed, not the actual data. It is the responsibility of the application to examine the files that have changed data to determine which data has changed.
FCL stores changes in a sparse file in the file system namespace. The FCL log file is always located in mount_point/lost+found/changelog.
Comparing the Intent Log and the File Change Log
The intent log is used to speed recovery of the file system after a crash. The FCL has no such role. Instead, the FCL is used to improve the performance of applications. For example, your IT department mandates that all systems undergo a virus scan once a week. The virus scan takes some time, and your system takes a performance hit during the scan. To improve this situation, an FCL could be used with the virus scanner. The virus scanner, if using an FCL, could read the log, find all files on your system that are either new or that have been modified, and scan only those files. FCL is used with NetBackup to greatly improve the speed of incremental backups.
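A short session sketch tying these commands together, assuming a mounted file system at /mnt1 (the mount point is a placeholder):

fcladm on /mnt1        # activate the file change log
fcladm state /mnt1     # confirm that the FCL is on
fcladm print 0 /mnt1   # print log records from the start of the log
fcladm off /mnt1       # deactivate the FCL
fcladm rm /mnt1        # remove the log file (the FCL must be off first)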
• 171. symantec Lesson Summary
• Key Points
This lesson described how to administer file systems using VERITAS File System (VxFS). You learned how to defragment a file system and use the logging capabilities in VxFS.
• Reference Materials
- VERITAS File System Administrator's Guide
- VERITAS Volume Manager Administrator's Guide
symantec Lab 6
Lab 6: Administering File Systems
In this lab, you practice file system administration, including defragmentation and administering the file change log.
For Lab Exercises, see Appendix A. For Lab Solutions, see Appendix B.
Appendix A provides complete lab instructions: "Lab 6: Administering File Systems."
Appendix B provides complete lab instructions and solutions: "Lab 6 Solutions: Administering File Systems."
  • 173. Lesson 7 Resolving Hardware Problems
• 174. symantec Lesson Introduction
• Lesson 1: Virtual Objects
• Lesson 2: Installation and Interfaces
• Lesson 3: Creating a Volume and File System
• Lesson 4: Selecting Volume Layouts
• Lesson 5: Making Basic Configuration Changes
• Lesson 6: Administering File Systems
• Lesson 7: Resolving Hardware Problems
symantec Lesson Topics and Objectives
Topic / After completing this lesson, you will be able to:
Topic 1: How Does VxVM Interpret Failures in Hardware - Interpret failures in hardware.
Topic 2: Recovering Disabled Disk Groups - Recover disabled disk groups.
Topic 3: Resolving Disk Failures - Resolve disk failures.
Topic 4: Managing Hot Relocation at the Host Level - Manage hot relocation at the host level.
• 175. Potential Failures in a Storage Environment
(Diagram: hosts connected through a SAN to disk arrays and JBODs, with the failure points listed below.)
Temporary failures:
• Power cut
• Fibre connection failure
• Complete SAN failure
• SAN switch failure
• HBA card/port failure
Failures that can be permanent or temporary:
• LUN/disk failure
• Complete disk array or JBOD failure
• Site failure
How Does VxVM Interpret Failures in Hardware
VxVM interprets failures in hardware in a variety of ways, depending on the type of failure.
• 176. symantec I/O Error Handling
If the LUN/disk cannot be accessed at all, dynamic multipathing (DMP) disables the path. If there is only one path, the DMP node is disabled.
Identifying I/O Failure
Disk Failure
Data availability and reliability are ensured through most failures if you are using VxVM redundancy features, such as mirroring or RAID-5. If the volume layout is not redundant, loss of a drive may result in loss of data and may require recovery from backup. For I/O failure on a nonredundant volume, VxVM reports the error, but it does not take any further action.
Disk Failure Handling
When a drive becomes unavailable during an I/O operation or experiences uncorrectable I/O errors, the operating system detects SCSI failures and reports them to VxVM. The method that VxVM uses to process the SCSI failure depends on whether the failure occurs on a nonredundant or a redundant volume.
FAILING vs. FAILED Disks
Volume Manager differentiates between FAILING and FAILED drives:
FAILING: If there are uncorrectable I/O failures on the public region of the drive, but VxVM can still access the private region of the drive, the disk is marked as FAILING.
FAILED: If VxVM cannot access the private region or the public region, the disk is marked as FAILED.
The condition flags and object states are described in detail in the Maintenance course.
• 177. symantec Identifying Disabled Disk Groups
VxVM disk and disk group records before the failure:
vxdisk list
DEVICE    TYPE          DISK      GROUP   STATUS
disk0_1   auto:cdsdisk  datadg01  datadg  online
disk0_2   auto:cdsdisk  datadg02  datadg  online
disk0_3   auto:none     -         -       online invalid
vxdg list
NAME      STATE         ID
datadg    enabled,cds   1150193039.58.train1
VxVM disk and disk group records after the failure:
vxdisk list
DEVICE    TYPE          DISK      GROUP   STATUS
disk0_1   auto:cdsdisk  datadg01  datadg  online dgdisabled
disk0_2   auto:cdsdisk  datadg02  datadg  online dgdisabled
disk0_3   auto          -         -       error
vxdg list
NAME      STATE         ID
datadg    disabled      1150193039.58.train1
Identifying Disabled Disk Groups
When disk groups are disabled, the status changes to dgdisabled.
• 178. symantec Identifying Failed Disks
vxdisk list
DEVICE    TYPE          DISK      GROUP   STATUS
disk0_0   sliced        rootdisk  sysdg   online
disk0_1   auto:cdsdisk  datadg01  datadg  online
disk0_2   auto          -         -       error
disk0_3   auto          -         -       error
disk0_4   auto          -         -       error
-         -             datadg02  datadg  failed was:disk0_2
Identifying Failure: Disk Records
When VxVM detaches the disk, it breaks the mapping between the VxVM disk (the disk media record, datadg02) and the disk drive (disk0_2). However, information on the disk media record, such as the disk media name, the disk group, the volumes, plexes, and subdisks on the VxVM disk, and so on, is maintained in the configuration database in the active private regions of the disk group. The output of vxdisk list displays the failed drive as online until the VxVM configuration daemon is forced to reread all the drives in the system and to reset its tables.
To force the VxVM configuration daemon to reread all the drives in the system:
vxdctl enable
After you run this command, the drive status changes to error for the failed drive, and the disk media record changes to failed.
The disk is immediately marked as error state when the public region is not accessible.
• 179. symantec Permanent versus Temporary Failures
• Temporary Failure
- Data on the LUN/disk is still there, only temporarily unavailable.
- When the hardware problem is resolved, in most cases recovery can make use of the pre-existing data.
• Permanent Failure
- The data on the LUN/disk is completely destroyed.
- If the volumes were not redundant, data needs to be restored from backup.
- However, the VxVM objects and the disk group configuration information can be restored.
Disk Failure Types
The basic types of disk failure are permanent and temporary.
Permanent disk failures are failures in which the data on the drive can no longer be accessed for any reason (that is, uncorrectable). In this case, the data on the disk is lost.
Temporary disk failures are disk devices that have failures that are repaired some time later. This type of failure includes a drive that is powered off and back on, or a drive that has a loose SCSI connection that is fixed later. In these cases, the data is still on the disk, but it may not be synchronized with the other disks being actively used in a volume.
• 180. symantec Device Recovery
• As soon as the hardware problem is resolved, the OS recognizes the disk array and the disks.
• DMP automatically detects the change, adds the disk array to the configuration, and enables the DMP paths. This may take up to 300 seconds. If you want to make it faster, you can execute the vxdctl enable command immediately after resolving the hardware problem.
• Relevant messages are logged to the system log. Solaris example:
June 13 12:06:25 train1 vxdmp: [ID 803759 kern.notice] NOTICE: VxVM vxdmp V-5-0-34 added disk array D60JODDA, datype = HDS9500-ALUA
June 13 12:06:25 train1 vxdmp: [ID 736771 kern.notice] NOTICE: VxVM vxdmp V-5-0-148 enabled path 32/0xa0 belonging to the dmpnode 253/0x10
June 13 12:06:25 train1 vxdmp: [ID 899070 kern.notice] NOTICE: VxVM vxdmp V-5-0-147 enabled dmpnode 253/0x10
Recovering Disabled Disk Groups
Device Recovery
As soon as the hardware problem is resolved, the OS recognizes the disk array and the disks. DMP automatically detects the change, adds the disk array to the configuration, and enables the DMP paths. Relevant messages are logged to the system log.
  • 181. I Recovering From Temporary Disk Group Failures The disks still have their private regions. Therefore, there is no need to recover the disk group configuration data. Recover the disk group as follows: 1. Unmount any disabled file systems in the disk group. 2. Deport the disk group. 3. Make sure that the DMP paths are enabled using: vxdisk -0 alldgs list 4. Import the disk group. 5. Start the volumes in the disk group using: vxvol -g diskgroup startall Note that mirrored volumes may go through a synchronization process at the background if they were open at the time of the failure. 6. Carry out file system checks. 7. Mount the file systems. Recovering From Temporary Disk Group Failures The disks still have their private regions. so there is no need to recover the disk group configuration data. Recover the disk group as described in the slide. Lesson 7 Resolving Hardware Problems Copyrigtll 'G 2006 Symaetec Corporation. All rights reserved 7-9
• 182. Recovering From Permanent Disk Group Failures
DMP recovery is again automatic, as in temporary failures. However, this time the disks do not have any private region that holds the disk group configuration data. After the DMP paths are enabled, recover the disk group as follows:
1. Unmount any disabled file systems in the disk group.
2. Deport the disk group. At this point all disk group information is lost except for the configuration backups.
3. Restore the disk group configuration data. Note that mirrored volumes will go through a synchronization process in the background.
4. Re-create the file systems if necessary.
5. Restore data from a backup.
The commands for backing up and restoring the disk group configuration are covered next.
• 183. symantec Disk Group Configuration Backup and Restore
Back up:    vxconfigbackup diskgroup
Precommit:  vxconfigrestore -p diskgroup
Commit:     vxconfigrestore -c diskgroup
Protecting the VxVM Configuration
The disk group configuration backup and restoration feature enables you to back up and restore all configuration data for disk groups, and for volumes that are configured within the disk groups. The vxconfigbackupd daemon monitors changes to the VxVM configuration and automatically records any configuration changes that occur. The vxconfigbackup utility is provided for backing up a VxVM configuration for a disk group. The vxconfigrestore utility is provided for restoring the configuration.
The restoration process has two stages: precommit and commit. In the precommit stage, you can examine the configuration of the disk group that would be restored from the backup. The actual disk group configuration is not permanently restored until you choose to commit the changes.
By default, VxVM configuration data is automatically backed up to the files:
/etc/vx/cbr/bk/diskgroup.dgid/dgid.dginfo
/etc/vx/cbr/bk/diskgroup.dgid/dgid.diskinfo
/etc/vx/cbr/bk/diskgroup.dgid/dgid.binconfig
/etc/vx/cbr/bk/diskgroup.dgid/dgid.cfgrec
Configuration data from a backup enables you to reinstall private region headers of VxVM disks in a disk group, re-create a corrupted disk group configuration, or re-create a disk group and the VxVM objects within it. This process is handled automatically by the vxconfigbackupd daemon.
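A minimal restore walk-through, assuming a disk group named datadg whose backup files already exist under /etc/vx/cbr/bk (the name is illustrative):

vxconfigbackup datadg        # take an explicit backup of the current configuration
vxconfigrestore -p datadg    # precommit: stage the restored configuration for inspection
vxconfigrestore -c datadg    # commit the restoration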
• 184. symantec Disk Failure: Volume States After the Failure
vxprint -g datadg -ht (datadg02 is the failed disk):
DG NAME         NCONFIG      NLOG     MINORS   GROUP-ID
DM NAME         DEVICE       TYPE     PRIVLEN  PUBLEN   STATE
RV NAME         RLINK_CNT    KSTATE   STATE    PRIMARY  DATAVOLS  SRL
RL NAME         RVG          KSTATE   STATE    REM_HOST REM_DG    REM_RLNK
V  NAME         RVG          KSTATE   STATE    LENGTH   READPOL   PREFPLEX  UTYPE
PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID  MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE    MODE
SV NAME         PLEX         VOLNAME  NVOLLAYR LENGTH   [COL/]OFF AM/NM     MODE

dg datadg       default      default  64000    954250803.2005.train06
dm datadg01     disk0_1      auto:cdsdisk 1519 4152640  -
dm datadg02     -            -        -        -        NODEVICE

v  vol01        -            ENABLED  ACTIVE   204800   SELECT    -         fsgen
pl vol01-01     vol01        ENABLED  ACTIVE   205200   CONCAT    -         RW
sd datadg01-01  vol01-01     datadg01 0        205200   0         disk0_1   ENA
pl vol01-02     vol01        DISABLED NODEVICE 205200   CONCAT    -         RW
sd datadg02-01  vol01-02     datadg02 0        205200   0         -         RLOC
v  vol02        -            DISABLED ACTIVE   204800   SELECT    -         fsgen
pl vol02-01     vol02        DISABLED NODEVICE 205200   CONCAT    -         RW
sd datadg02-02  vol02-01     datadg02 205200   205200   0         -         NDEV
Resolving Disk Failures
Volume States After the Failure
As soon as VxVM detects the disk failure, it detaches the disk media record from the disk access record, and the corresponding plex states change to NODEVICE as shown above. At this point, VxVM does not differentiate between a permanent failure and a temporary failure.
• 185. Disk Replacement Tasks
1. Physical replacement: Replace the corrupt disk with a new disk.
2. Logical replacement:
• Replace the disk in VxVM.
• Start disabled volumes.
• Resynchronize redundant volumes.
Replacing a failed or corrupted disk involves both physically replacing the disk and then logically replacing the disk and recovering volumes in VxVM:
Disk replacement: When a disk fails, you replace the corrupt disk with a new disk. The replacement disk cannot already be in a disk group. If you want to use a disk that exists in another disk group, then you must remove the disk from the disk group before you can use it as the replacement disk.
Volume recovery: When a disk fails and is removed for replacement, the plex on the failed disk is disabled until the disk is replaced. Volume recovery involves starting disabled volumes, resynchronizing mirrors, and resynchronizing RAID-5 parity. After successful recovery, the volume is available for use again. Redundant (mirrored or RAID-5) volumes can be recovered by VxVM. Nonredundant (unmirrored) volumes must be restored from backup.
• 186. Physically Replacing a Disk
1. Connect the new disk.
2. Ensure that the operating system recognizes the disk.
3. Get VxVM to recognize the disk: vxdctl enable
4. Verify that VxVM recognizes the disk: vxdisk -o alldgs list
Note: In VEA, use Actions->Rescan to run disk setup commands appropriate for the OS and ensure that VxVM recognizes newly attached hardware.
Adding a New Disk
1. Connect the new disk.
2. Get the operating system to recognize the disk:
Platform / OS-Specific Commands to Recognize a Disk
Solaris: devfsadm; prtvtoc /dev/dsk/device_name
HP-UX: ioscan -fC disk; insf -e
AIX: cfgmgr; lsdev -C -l device_name
Linux: blockdev --rereadpt /dev/xxx
3. Get VxVM to recognize that a failed disk is now working again: vxdctl enable
4. Verify that VxVM recognizes the disk: vxdisk list
After the operating system and VxVM recognize the new disk, you can then use the disk as a replacement disk.
Note: In VEA, use the Actions->Rescan option to run disk setup commands appropriate for the operating system. This option ensures that VxVM recognizes newly attached hardware.
• 187. symantec Logically Replacing a Disk
VEA:
• Select the disk to be replaced.
• Select Actions->Replace Disk.
vxdiskadm: "Replace a failed or removed disk"
CLI:
vxdg -k -g diskgroup adddisk disk_name=device_name
The -k option forces VxVM to take the disk media name of the failed disk and assign it to the new disk. Use with caution.
Example:
vxdg -k -g datadg adddisk datadg01=c1t1d0
Note: You may need to initialize the disk prior to running the vxdg adddisk command:
vxdisksetup -i device_name
Replacing a Disk: VEA
Navigation path: Select the disk to be replaced, and select Actions->Replace Disk.
Input: Select the disk to be used as the new (replacement) disk.
VxVM replaces the disk and attempts to recover volumes.
Replacing a Failed Disk: vxdiskadm
To replace a disk that has already failed or that has already been removed, you select the "Replace a failed or removed disk" option. This process creates a public and private region on the new disk and populates the private region with the disk media name of the failed disk.
Replacing a Disk: CLI
The -k switch forces VxVM to take the disk media name of the failed disk and assign it to the new disk. For example, if the failed disk datadg01 in the datadg disk group was removed, and you want to add the new device c1t1d0 as the replacement disk:
vxdg -k -g datadg adddisk datadg01=c1t1d0
Note: If the disk failure was temporary, the disk still has the private region that would enable VxVM to recognize it. In this case you can use the vxreattach command instead of the vxdg -k adddisk command to reattach the failed disk.
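Taken together, a hedged sketch of the full logical replacement from the command line, assuming the replacement device is c1t1d0 and the failed disk media name is datadg01 in the datadg disk group:

vxdisksetup -i c1t1d0                       # initialize the new device
vxdg -k -g datadg adddisk datadg01=c1t1d0   # assign the old disk media name to it
vxrecover -b -g datadg                      # recover redundant volumes in the background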
• 188. symantec Recovering a Volume
VEA:
• Select the volume to be recovered.
• Select Actions->Recover Volume.
CLI:
vxreattach [-bcr] [device_tag]
• Reattaches disks to a disk group if a disk has a transient failure, such as when a drive is turned off and then turned back on
• -r attempts to recover stale plexes using vxrecover.
vxrecover [-bnpsvV] [-g diskgroup] [volume_name|disk_name]
vxrecover -b -g datadg datavol
Recovering a Volume
The vxreattach Command
The vxreattach utility reattaches disks to a disk group and retains the same media name. This command attempts to find the name of the drive in the private region and to match it to a disk media record that is missing a disk access record. This operation may be necessary if a disk has a transient failure, for example, if a drive is turned off and then back on, or if the Volume Manager starts with some disk drivers unloaded and unloadable.
The vxrecover Command
To perform volume recovery operations from the command line, you use the vxrecover command. The vxrecover program performs plex attach, RAID-5 subdisk recovery, and resynchronize operations for specified volumes (volume_name), or for volumes residing on specified disks (disk_name). You can run vxrecover any time to resynchronize mirrors. For example, after replacing the failed disk datadg01 in the datadg disk group, and adding the new disk c1t1d0s2 in its place, you can attempt to recover the volume datavol:
vxrecover -bs -g datadg datavol
To recover, in the background, any detached subdisks or plexes that resulted from replacement of the disk datadg01 in the datadg disk group:
vxrecover -b -g datadg datadg01
• 189. symantec Resolving Disk Failures - Summary
Permanent Disk Failure:
1. Fix the hardware problem (replace disks, re-cable, change the HBA, and so on).
2. Ensure that the OS recognizes the device.
3. Force VxVM to scan for added devices: vxdctl enable
4-a. Initialize a new drive: vxdisksetup -i device_name
4-b. Attach the disk media name to the new drive: vxdg -g diskgroup -k adddisk disk_name=device_name
5. Recover the redundant volumes: vxrecover
6. Start any non-redundant volumes: vxvol -g diskgroup -f start volume
7. Restore non-redundant volume data from backup.
Temporary Disk Failure:
1. Fix the hardware problem.
2. Ensure that the OS recognizes the device.
3. Force VxVM to scan for added devices: vxdctl enable
4. Reattach the disk media name to the disk access name: vxreattach
5. Recover the redundant volumes: vxrecover
6. Start any non-redundant volumes: vxvol -g diskgroup -f start volume
7. Check data for consistency: fsck -F vxfs /dev/vx/rdsk/diskgroup/volume
• 190. symantec Disk Failure: Volume States After Attaching the Disk
vxprint -g datadg -ht
DG NAME         NCONFIG      NLOG     MINORS   GROUP-ID
DM NAME         DEVICE       TYPE     PRIVLEN  PUBLEN   STATE
RV NAME         RLINK_CNT    KSTATE   STATE    PRIMARY  DATAVOLS  SRL
RL NAME         RVG          KSTATE   STATE    REM_HOST REM_DG    REM_RLNK
V  NAME         RVG          KSTATE   STATE    LENGTH   READPOL   PREFPLEX  UTYPE
PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID  MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE    MODE
SV NAME         PLEX         VOLNAME  NVOLLAYR LENGTH   [COL/]OFF AM/NM     MODE

dg datadg       default      default  64000    954250803.2005.train06
dm datadg01     disk0_1      auto:cdsdisk 1519 4152640  -
dm datadg02     disk0_2      auto:cdsdisk 1519 4152640  -

v  vol01        -            ENABLED  ACTIVE   204800   SELECT    -         fsgen
pl vol01-01     vol01        ENABLED  ACTIVE   205200   CONCAT    -         RW
sd datadg01-01  vol01-01     datadg01 0        205200   0         disk0_1   ENA
pl vol01-02     vol01        DISABLED IOFAIL   205200   CONCAT    -         RW
sd datadg02-01  vol01-02     datadg02 0        205200   0         disk0_2   ENA
v  vol02        -            DISABLED ACTIVE   204800   SELECT    -         fsgen
pl vol02-01     vol02        DISABLED RECOVER  205200   CONCAT    -         RW
sd datadg02-02  vol02-01     datadg02 205200   205200   0         disk0_2   ENA
Volume States After Attaching the Disk Media
After reattaching the disk, volume and plex states are as displayed above. Notice the different states of vol01 and vol02. The vol01 volume can still receive I/O and contains a plex in the IOFAIL state. This indicates that there was a hardware failure underneath the plex while the plex was online. Also notice that the only plex of vol02 has a state of RECOVER. This state means that VxVM believes that the data in this plex will need to be recovered. In a temporary disk failure, where the disk may have been turned off during an I/O stream, the data on that disk may still be valid. Therefore, do not always interpret the RECOVER state in terms of bad data on the disk.
• 191. symantec Disk Failure: Volume States During Recovery
vxprint -g datadg -ht
DG NAME         NCONFIG      NLOG     MINORS   GROUP-ID
DM NAME         DEVICE       TYPE     PRIVLEN  PUBLEN   STATE
RV NAME         RLINK_CNT    KSTATE   STATE    PRIMARY  DATAVOLS  SRL
RL NAME         RVG          KSTATE   STATE    REM_HOST REM_DG    REM_RLNK
V  NAME         RVG          KSTATE   STATE    LENGTH   READPOL   PREFPLEX  UTYPE
PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID  MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE    MODE
SV NAME         PLEX         VOLNAME  NVOLLAYR LENGTH   [COL/]OFF AM/NM     MODE

dg datadg       default      default  64000    954250803.2005.train06
dm datadg01     disk0_1      auto:cdsdisk 1519 4152640  -
dm datadg02     disk0_2      auto:cdsdisk 1519 4152640  -

v  vol01        -            ENABLED  ACTIVE   204800   SELECT    -         fsgen
pl vol01-01     vol01        ENABLED  ACTIVE   205200   CONCAT    -         RW
sd datadg01-01  vol01-01     datadg01 0        205200   0         disk0_1   ENA
pl vol01-02     vol01        ENABLED  STALE    205200   CONCAT    -         WO
sd datadg02-01  vol01-02     datadg02 0        205200   0         disk0_2   ENA
v  vol02        -            DISABLED ACTIVE   204800   SELECT    -         fsgen
pl vol02-01     vol02        DISABLED RECOVER  205200   CONCAT    -         RW
sd datadg02-02  vol02-01     datadg02 205200   205200   0         disk0_2   ENA
Volume States After Recovering Redundant Volumes
When you start the recovery on redundant volumes, the plex that is not synchronized with the mirrored volume has a state of ENABLED and STALE. During the period of synchronization, the stale plex is write-only (WO). After the synchronization is complete, the plex state changes to ENABLED and ACTIVE, and it becomes read-write (RW).
• 192. symantec Intermittent Disk Failures
• VxVM can mark a disk as failing if the disk is experiencing I/O failures but is still accessible.
vxdisk list
DEVICE    TYPE          DISK      GROUP   STATUS
disk0_1   auto:cdsdisk  datadg01  datadg  online
disk0_2   auto:cdsdisk  datadg02  datadg  online failing
• Disks marked as failing are not used for any new volume space.
• To resolve intermittent disk failure problems:
- If any volumes on the failing disk are not redundant, attempt to mirror those volumes:
• If you can mirror the volumes, continue with the procedure for redundant volumes.
• If you cannot mirror the volume, prepare for backup and restore.
- If the volume is mirrored:
• Prevent read I/O from accessing the failing disk by changing the volume read policy.
• Remove the failing disk.
• Replace the disk.
• Set the volume read policy back to the original policy.
Intermittent Disk Failure
Intermittent disk failures are failures that occur off and on and involve problems that cannot be consistently reproduced. Therefore, these types of failures are the most difficult for the operating system to handle and can cause the system to slow down considerably while the operating system attempts to determine the nature of the problem. If you encounter intermittent failures, you should move data off of the disk and remove the disk from the system to avoid an unexpected failure later. The method that you use to resolve intermittent disk failure depends on whether the associated volumes are redundant or nonredundant.
• 193. symantec If volumes are performing writes and each write is taking a long time to succeed because of the intermittent failures, then the system may slow down significantly and fall behind in its work. If this scenario occurs, you may need to forcibly remove the disk and not evacuate the data:

1 Use the vxdiskadm option, "Remove a disk for replacement." With this option, VxVM treats the drive as though it has already failed. The problem with using this command is that all volumes that have only two mirrors (or that have a RAID-5 layout for redundancy) and that are using this drive are no longer redundant until you replace the drive. During this period, if a bad block occurs on the remaining disk, you cannot easily recover and may have to restore from backup. You must also restore all nonredundant volumes using the drive from backup.

2 After you remove the drive, you must replace the drive in the same way as when a drive completely fails. To replace a drive, you can use the vxdiskadm option, "Replace a failed or removed disk."

Note: The state of the disk is set to REMOVED when you use the vxdiskadm option "Remove a disk for replacement." In terms of fixing the drive, the REMOVED state is the same as NODEVICE. You must use the vxdiskadm option, "Replace a failed or removed disk," to replace the drive.

Forced Removal

To forcibly remove a disk and not evacuate the data:
1 Use the vxdiskadm option, "Remove a disk for replacement." VxVM handles the drive as if it has already failed.
2 Use the vxdiskadm option, "Replace a failed or removed disk."

Using the command line:
    vxdg -k -g diskgroup rmdisk disk_name
    vxdisksetup -i new_device_name
    vxdg -k -g diskgroup adddisk disk_name=new_device_name

Lesson 7: Resolving Hardware Problems 7-21
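As a concrete sketch of this command-line sequence, assuming disk media name datadg02 in disk group datadg is being replaced by a new device c1t3d0 (both names are illustrative):

    # Detach the disk media record, keeping its subdisk records (-k).
    vxdg -k -g datadg rmdisk datadg02

    # Initialize the replacement device for VxVM use.
    vxdisksetup -i c1t3d0

    # Bind the old disk media name to the new device, again with -k.
    vxdg -k -g datadg adddisk datadg02=c1t3d0

    # Resynchronize the plexes that lived on the removed disk.
    vxrecover -g datadg -s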
• 194. symantec What Is Hot Relocation?

Hot Relocation: The system automatically reacts to I/O failures on redundant VxVM objects and restores redundancy to those objects by relocating affected subdisks. Subdisks are relocated to disks designated as spare disks or to free space in the disk group. (Slide graphic: subdisks moving from VM disks to spare disks.)

Managing Hot Relocation at the Host Level

What Is Hot Relocation?

Hot relocation is a feature of VxVM that enables a system to automatically react to I/O failures on redundant (mirrored or RAID-5) VxVM objects and restore redundancy and access to those objects. VxVM detects I/O failures on objects and relocates the affected subdisks. The subdisks are relocated to disks designated as spare disks or to free space within the disk group. VxVM then reconstructs the objects that existed before the failure and makes them redundant and accessible again.

Note: VxVM hot relocation is applicable when working with both physical disks and hardware arrays. For example, even with hardware arrays, if you mirror a volume across LUN arrays and one array becomes unusable, it is better to reconstruct a new mirror using the remaining array than to do nothing.

Partial Disk Failure

When a partial disk failure occurs (that is, a failure affecting only some subdisks on a disk), redundant data on the failed portion of the disk is relocated. Existing volumes on the unaffected portions of the disk remain accessible. With partial disk failure, the disk is not removed from VxVM control and is labeled as FAILING, rather than as FAILED. Before removing a FAILING disk for replacement, you must evacuate any remaining volumes on the disk.

Note: Hot relocation is only performed for redundant (mirrored or RAID-5) subdisks on a failed disk. Nonredundant subdisks on a failed disk are not relocated, but the system administrator is notified of the failure.

7-22 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
• 195. symantec Hot-Relocation Process

(Slide graphic: Volumes and VM Disks.) 1. vxrelocd detects disk failure. 2. Administrator is notified by e-mail. 3. Subdisks are relocated to a spare. 4. Volume recovery is attempted.

How Does Hot Relocation Work?

The hot-relocation feature is enabled by default. No system administrator action is needed to start hot relocation when a failure occurs. The vxrelocd daemon starts during system startup and monitors VxVM for failures involving disks, plexes, or RAID-5 subdisks. When a failure occurs, vxrelocd triggers a hot-relocation attempt and notifies the system administrator, through e-mail, of failures and any relocation and recovery actions. The vxrelocd daemon is started from the S95vxvm-recover file (on Solaris), the /etc/rc.d/rc2.d/S02vxvm-recover file (on Linux), or /sbin/rc2.d/S096vxvm-recover (on HP-UX). The argument to vxrelocd is the list of people to e-mail notice of a relocation (the default is root). To disable vxrelocd, you can place a "#" in front of the line in the corresponding start-up file.

A successful hot-relocation process involves:
1 Failure detection: Detecting the failure of a disk, plex, or RAID-5 subdisk
2 Notification: Notifying the system administrator and other designated users and identifying the affected Volume Manager objects
3 Relocation: Determining which subdisks can be relocated, finding space for those subdisks, and relocating the subdisks (The system administrator is notified of the success or failure of these actions. Hot relocation does not guarantee the same layout of data or the same performance after relocation.)
4 Recovery: Initiating recovery procedures, if necessary, to restore the volumes and data (Again, the system administrator is notified of the recovery attempt.)

Lesson 7: Resolving Hardware Problems 7-23
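As a sketch, the notification list is usually changed by editing the daemon's line in the start-up file named above; the extra recipient shown here is illustrative:

    # In the vxvm-recover start-up file (Solaris path shown):
    vxrelocd root storageadmin &

    # To disable hot relocation instead, comment the line out:
    # vxrelocd root &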
• 196. symantec How Is Space Selected?

• Hot relocation attempts to move all subdisks from a failing drive to a single spare destination disk.
• If no disks have been designated as spares, VxVM uses any available free space in the disk group in which the failure occurs.
• If there is not enough spare disk space, a combination of spare disk space and free space is used.
• Free space that you exclude from hot relocation is not used.

How Is Space Selected for Relocation?

When relocating subdisks, VxVM attempts to select a destination disk with the fewest differences from the failed disk. VxVM:
1 Attempts to relocate to the same controller, target, and device as the failed drive
2 Attempts to relocate to the same controller and target, but to a different device
3 Attempts to relocate to the same controller, but to any target and any device
4 Attempts to relocate to a different controller
5 Potentially scatters the subdisks to different disks

A spare disk must be initialized and placed in a disk group as a spare before it can be used for replacement purposes. Hot relocation attempts to move all subdisks from a failing drive to a single spare destination disk, if possible. If no disks have been designated as spares, VxVM automatically uses any available free space in the disk group that is not on a disk already used by the volume. If there is not enough spare disk space, a combination of spare disk space and free space is used. Free space that you exclude from hot relocation is not used. In all cases, hot relocation attempts to relocate subdisks to a spare in the same disk group that is physically closest to the failing or failed disk. When hot relocation occurs, the failed subdisk is removed from the configuration database. The disk space used by the failed subdisk is not recycled as free space.

7-24 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
• 197. symantec Managing Spare Disks

VEA: Actions->Set Disk Usage
vxdiskadm:
• "Mark a disk as a spare for a disk group"
• "Turn off the spare flag on a disk"
• "Exclude a disk from hot-relocation use"
• "Make a disk available for hot-relocation use"
CLI:
To designate a disk as a spare:
    vxedit -g diskgroup set spare=on|off disk_name
To exclude or include a disk for hot relocation:
    vxedit -g diskgroup set nohotuse=on|off disk_name
To force hot relocation to only use spare disks:
    Add spare=only to /etc/default/vxassist

Managing Spare Disks

When you add a disk to a disk group, you can specify that the disk be added to the pool of spare disks available to the hot-relocation feature of VxVM. Any disk in the same disk group can use the spare disk. Try to provide at least one hot-relocation spare disk per disk group. While designated as a spare, a disk is not used in creating volumes unless you specifically name the disk on the command line.

Lesson 7: Resolving Hardware Problems 7-25
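For example, with illustrative disk group and disk media names:

    # Designate datadg03 as a hot-relocation spare.
    vxedit -g datadg set spare=on datadg03

    # Keep the free space on datadg04 out of hot-relocation use.
    vxedit -g datadg set nohotuse=on datadg04

    # Verify the flags; spare disks show "spare" in the STATUS column.
    vxdisk -g datadg list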
• 198. symantec Unrelocating a Disk

VEA:
• Select the disk to be unrelocated.
• Select Actions->Undo Hot Relocation.
vxdiskadm: "Unrelocate subdisks back to a disk"
CLI:
    vxunreloc [-f] [-g diskgroup] [-t tasktag] [-n disk_name] orig_disk_name
• orig_disk_name: Original disk before relocation
• -n disk_name: Unrelocates to a disk other than the original
• -f: Forces unrelocation if exact offsets are not possible

Unrelocating a Disk

The vxunreloc Utility

The hot-relocation feature detects I/O failures in a subdisk, relocates the subdisk, and recovers the plex associated with the subdisk. VxVM also provides a utility that unrelocates a disk; that is, it moves relocated subdisks back to their original disk. After hot relocation moves subdisks from a failed disk to other disks, you can return the relocated subdisks to their original disk locations after the original disk is repaired or replaced. Unrelocation is performed using the vxunreloc utility, which restores the system to the same configuration that existed before a disk failure caused subdisks to be relocated.

7-26 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
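For instance, once the original disk media name datadg02 has been repaired or replaced (names are illustrative):

    # Move the relocated subdisks back to their original disk.
    vxunreloc -g datadg datadg02

    # Or move them to another disk, forcing new offsets if the originals are in use.
    vxunreloc -f -g datadg -n datadg05 datadg02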
• 199. symantec Lesson Summary
• Key Points: This lesson described how to interpret failures in hardware, recover disabled disk groups, resolve disk failures, and manage hot relocation at the host level.
• Reference Materials
  - VERITAS Volume Manager Administrator's Guide
  - VERITAS Storage Foundation Release Notes

symantec Lab 7

Lab 7: Resolving Hardware Problems
In this lab, you practice recovering from a variety of hardware failure scenarios, resulting in disabled disk groups and failed disks. First you recover a temporarily disabled disk group, and then you use a set of interactive lab scripts to investigate and practice recovery techniques.
For Lab Exercises, see Appendix A. For Lab Solutions, see Appendix B.

Labs and solutions for this lesson are located on the following pages:
Appendix A provides complete lab instructions: "Lab 7: Resolving Hardware Problems."
Appendix B provides complete lab instructions and solutions: "Lab 7 Solutions: Resolving Hardware Problems."

Lesson 7: Resolving Hardware Problems 7-27
• 200. 7-28 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
  • 201. Appendix A Lab Exercises
  • 202. A-2 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
• 203. symantec Lab 1

Lab 1: Introducing the Lab Environment
In this lab, you are introduced to the lab environment, system, and disks that you will use throughout this course.
For Lab Exercises, see Appendix A. For Lab Solutions, see Appendix B.

Lab 1: Introducing the Lab Environment
In this lab, you are introduced to the lab environment, the systems, and the disks that you will use throughout this course. You will also record some prerequisite information that will prepare you for the installation of VERITAS Storage Foundation and the labs that follow throughout this course.
The Lab Solutions for this lab are located on the following page: "Lab 1 Solutions: Introducing the Lab Environment," page B-3.

Lab 1: Introducing the Lab Environment A-3
• 204. Lab Environment Introduction
The instructor will describe the classroom environment, review the configuration and layout of the systems, and assign disks for you to use. The content of this activity depends on the type of classroom, hardware, and the operating system(s) deployed.

Lab Prerequisites
Record the following information to be provided by your instructor:

Object                                    Sample Value                 Your Value
root password                             veritas
Host name                                 train1
Domain name                               classroom1.int
Fully qualified hostname (FQHN)           train1.classroom1.int
Host name of the system sharing disks
with my system (my partner system)        train2
My Boot Disk                              Solaris: c0t0d0
                                          HP-UX: c1t15d0
                                          AIX: hdisk0
                                          Linux: hda
2nd Internal Disk                         Solaris: c0t2d0
                                          HP-UX: c3t15d0
                                          AIX: hdisk1
                                          Linux: hdb
My Data Disks                             Solaris: c1t#d0 - c1t#d5
                                          HP-UX: c4t0d0 - c4t0d5
                                          AIX: hdisk21 - hdisk26
                                          Linux: sda - sdf

A-4 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
• 205.
Object                                    Sample Value                 Your Value
Location of Storage Foundation 5.0
Software                                  /student/software/sf/sf50
Location of Lab Scripts                   /student/labs/sf/sf50
Location of the fp program                /student/labs/sf/sf50/bin
Location of VERITAS Storage Foundation
license keys                              /student/software/license/sf50-entr-lic.txt

Lab 1: Introducing the Lab Environment A-5
• 206. Instructor Classroom Setup
Perform the following steps to enable zoning configurations for the Storage Foundation 5-day course (not required for High Availability Fundamentals):
1 Use the course_setup script: Select Classroom. (Setup scripts are all included in Classroom SAN configuration Version 2.) Select Function To Perform:
  1 - Select Zoning by Zone Name
  2 - Select Zoning and Hostgroup Configuration by Course Name
  3 - Select/Check Hostgroup Configuration
2 Select option 3 - Select/Check Hostgroup Configuration. Select HostGroup Configuration to be Configured:
  1 - Standard Mode: 2 or 4 node sharing, No DMP
  2 - DMP Mode: 2 node sharing, switchable between 1 path and 2 path access
  3 - Check active HDS Hostgroup Configuration
3 Select option 2 - DMP Mode. Wait and do not respond to prompts.
4 Exit to the first level menu.
5 Select option 1 - Select Zoning by Zone Name. Select Zoning Configuration Required:
  1 - Mode 1: 6 sets of 2 Systems sharing 12 LUNs, no Tape Library available (HDS DMP Mode - 6 x 2 Systems - Single Path to 12 LUNs)
  2 - Mode 2: 3 sets of 4 Systems sharing 24 LUNs, no Tape Library available (HDS DMP Mode - 6 x 2 Systems - Dual Paths to 12 LUNs)
6 Select option 1 - Mode 1 (single path to 12 LUNs).
7 Select option 4 - Solaris as the OS.
8 Exit out of the course_setup script.
9 Reboot each system using reboot -- -r.

A-6 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
• 207. symantec Lab 2

Lab 2: Installation and Interfaces
In this lab, you install VERITAS Storage Foundation 5.0 on your lab system. You also explore the Storage Foundation user interfaces, including the VERITAS Enterprise Administrator interface, the vxdiskadm menu interface, and the command-line interface.
For Lab Exercises, see Appendix A. For Lab Solutions, see Appendix B.

Lab 2: Installation and Interfaces
In this exercise, you install VERITAS Storage Foundation 5.0 on your lab system. You also explore the VxVM user interfaces, including the VERITAS Enterprise Administrator interface, the vxdiskadm menu interface, and the command-line interface.
The Lab Solutions for this lab are located in Appendix B: "Lab 2 Solutions: Installation and Interfaces."

Prerequisite Setup
To perform this lab, you need a lab system with the appropriate operating system and patch sets pre-installed. At this point there should be no Storage Foundation software installed on the lab system. The lab steps assume that the system has access to the Storage Foundation 5.0 software and that you have a Storage Foundation 5.0 Enterprise demo license key that can be used during installation.

Lab 2: Installation and Interfaces A-7
• 208. Classroom Lab Values
In preparation for this lab, you will need the following information about your lab environment. For your reference, you may record the information here, or refer back to the first lab where you initially documented this information.

Object                                    Sample Value                 Your Value
root password                             veritas
Host name                                 train1
Domain name                               classroom1.int
Fully qualified hostname (FQHN)           train1.classroom1.int
My Boot Disk                              Solaris: c0t0d0
                                          HP-UX: c1t15d0
                                          AIX: hdisk0
                                          Linux: hda
Location of Storage Foundation 5.0
Software                                  /student/software/sf/sf50
Location of VERITAS Storage Foundation
license keys                              /student/software/license/sf50-entr-lic.txt
Location of Lab Scripts                   /student/labs/sf/sf50

A-8 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
• 209. Preinstallation
1 Determine if there are any VRTS or SYMC packages currently installed on your system.
2 Before installing Storage Foundation, save the following important system files into backup files named with a ".preVM" extension. Also, save your boot disk information to a file for later use (do not store the file in /tmp). You may need the boot disk information when you bring the boot disk under VxVM control in a later lab.
3 Are any VERITAS license keys installed on your system? Check for installed licenses.
4 To test if DNS is configured in your environment, check if nslookup resolves the hostname to a fully qualified hostname by typing nslookup hostname. If there is no DNS or if the hostname cannot be resolved to a fully qualified hostname, carry out the following steps:
  a Ensure that the fully qualified hostname is listed in the /etc/hosts file. For example:
      cat /etc/hosts
      192.168.xxx.yyy train#.domain train#
    where domain is the domain name used in the classroom, such as classroom1.int. If the fully qualified hostname is not in the /etc/hosts file, add it as an alias to hostname.
  b Change to the directory containing lab scripts and execute the prepare_ns script. This script ensures that your lab system only uses local files for name resolution.

Lab 2: Installation and Interfaces A-9
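A hedged sketch of these checks on the Solaris platform (the files saved and the boot device are the classroom samples; other platforms use their native package tools):

    # 1. Look for existing VRTS or SYMC packages.
    pkginfo | egrep -i 'VRTS|SYMC'

    # 2. Save key system files and the boot disk layout.
    cp /etc/system /etc/system.preVM
    cp /etc/vfstab /etc/vfstab.preVM
    prtvtoc /dev/rdsk/c0t0d0s2 > /var/tmp/bootdisk_vtoc.preVM

    # 3. Report any installed VERITAS license keys.
    vxlicrep

    # 4. Test name resolution.
    nslookup train1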
• 210. Installing VERITAS Storage Foundation
1 Navigate to the directory containing the Veritas Storage Foundation software. Ask your instructor for the location of the installer script. Using the installer script, run a precheck to determine if your system meets all preinstallation requirements. If any requirements (other than the license software not being installed) are not met, follow the instructions to take any required actions before you continue. Note that you can look into the log file created to see the details of the checks the script performs.
2 Navigate to the directory containing the Veritas Storage Foundation software. Install and perform initial configuration of Storage Foundation (VxVM and VxFS) using the following steps:
  a Start the installer script.
  b Select 1 for the Install/Upgrade a Product option.
  c Select the Veritas Storage Foundation software to install. On the HP-UX platform, confirm that you wish to continue the installation of this version.
  d Enter the name of your system when prompted.
  e Obtain a license key from your instructor and record it here. Type the license key when prompted. License Key: ____________
  f Enter n when you are asked if you want to enter another license key.
  g Select to install All Veritas Storage Foundation packages when prompted.
  h Press Return to scroll through the list of packages.
  i Accept the default of y to configure SF.
    HP-UX: On the HP-UX platform, the installer script starts the software installation without asking any configuration questions. When the software installation is complete, it prompts you to reboot your system. Continue with the configuration using ./installer -configure after the system is rebooted.
  j Do not set up enclosure-based naming for Volume Manager.
• 211.
  k Do not set up a default disk group.
  l Obtain the domain name from your instructor and type the fully qualified host name of your system when prompted. For example: train5.classroom1.int
  m If an error message is displayed that the fully qualified host name could not be queried, press Return to continue.
  n Do not enable Storage Foundation Management Server Management. The system will be a standalone host.
  o Select y to start Storage Foundation processes.
  p Wait while the installation proceeds and processes are started.
  q When the installation script completes, you will be asked to reboot your system. Perform the next lab step (lab step 3) to modify the root profile before rebooting your system.
  r This step is only for the North American Mobile Academy lab environment. If you are working in a different lab environment, skip this step. If you are working in a North American Mobile Academy lab environment with iSCSI disk devices, change to the directory containing the lab scripts and execute the iscsi_setup lab script. This script disables DMP support for iSCSI disks so that they can be recognized correctly by Volume Manager. Only if you are working in a North American Mobile Academy lab environment:
      cd /location_of_lab_scripts
      ./iscsi_setup
3 Check /.profile to ensure that the following paths are present. Note: Your lab systems may already be configured with these environment variable settings. However, in a real-life environment you would need to carry out this step yourself.
4 Reboot your system.

Lab 2: Installation and Interfaces A-11
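A hedged sketch of the installer invocation (steps 1 and 2a) and of the /.profile path settings that step 3 expects; the directories shown are the usual Solaris locations and may differ on other platforms:

    # Run the preinstallation check, then start the installation.
    cd /student/software/sf/sf50
    ./installer -precheck train1
    ./installer

    # Typical additions to /.profile for the VxVM/VxFS commands and manual pages:
    PATH=$PATH:/opt/VRTS/bin:/etc/vx/bin:/usr/lib/vxvm/bin
    MANPATH=$MANPATH:/opt/VRTS/man
    export PATH MANPATH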
• 212. Setting Up VERITAS Enterprise Administrator
1 Is the VEA server running? If not, start it.
2 Start the VEA graphical user interface.
  Note: On some systems, you may need to configure the system to use the appropriate display. For example, if the display is pc1:0, before you run VEA, type:
      DISPLAY=pc1:0
      export DISPLAY
  It is also important that the display itself is configured to accept connections from your client. If you receive permission errors when you try to start VEA, in a terminal window on the display system, type:
      xhost system
  or
      xhost +
  where system is the hostname of the client on which you are running the vea command.
3 In the Select Profile window, click the Manage Profiles button and configure VEA to always start with the Default profile.
4 Click the "Connect to a Host or Domain" link and connect to your system as root. Your instructor provides you with the password.
5 In the left pane (object tree) view, drill down into the system and observe the various categories of VxVM objects.
6 Select the Assistant perspective on the quick access bar and view tasks for systemname/StorageAgent.
7 Using the System perspective, find out what disks are available to the OS.
8 Execute the Disk Scan command and observe the messages on the console view. Click on a message to see the details.
9 What commands were executed by the Disk Scan task?
10 Exit the VEA graphical interface.
11 Create a root-equivalent administrative account named admin1 for use with VEA.

A-12 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
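A minimal sketch of steps 1 and 2, assuming the standard VEA service-control utility and client locations:

    # Check whether the VEA service is running, and start it if necessary.
    /opt/VRTSob/bin/vxsvcctrl status
    /opt/VRTSob/bin/vxsvcctrl start

    # Start the VEA GUI client.
    vea &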
• 213. 12 Test the new account. After you have tested the new account, exit VEA.

Exploring vxdiskadm
1 From the command line, invoke the text-based VxVM menu interface.
2 Display information about the menu or about specific commands.
3 What disks are available to the OS?
4 Exit the vxdiskadm interface.

Lab 2: Installation and Interfaces A-13
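As a sketch of this exploration:

    # Invoke the menu interface.
    vxdiskadm

    # At the main menu prompt:
    #   ?      displays help about the menu or a specific operation
    #   list   displays disk information
    #   q      exits the interface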
• 214. Optional Lab: Accessing CLI Commands
Note: This exercise introduces several commonly used VxVM commands. These commands and associated concepts are explained in detail throughout this course. If you have used Volume Manager before, you may already be familiar with these commands. If you are new to Volume Manager, this exercise aims to show you the amount of information you can get from the manual pages. Note that you do not need to read all of the manual pages for this exercise.
1 From the command line, invoke the VxVM manual pages and read about the vxassist command.
2 What vxassist command parameter creates a VxVM volume?
3 From the command line, invoke the VxVM manual pages and read about the vxdisk command.
4 What disks are available to VxVM?
5 From the command line, invoke the VxVM manual pages and read about the vxdg command.
6 How do you list locally imported disk groups?
7 From the command line, invoke the VxVM manual pages and read about the vxprint command.

Optional Lab: More Installation Exploration
1 When does the VxVM license expire?
2 What is the version and revision number of the installed version of VxVM?
3 Which daemons are running after the system boots under VxVM control?

A-14 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
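A hedged sketch of commands that answer these questions (Solaris package tooling shown; make is the vxassist keyword that creates a volume):

    man vxassist                 # "make" creates a volume
    vxdisk -o alldgs list        # disks available to VxVM
    vxdg list                    # locally imported disk groups
    man vxprint

    vxlicrep                     # license report, including expiration dates
    pkginfo -l VRTSvxvm          # VxVM package version and revision (Solaris)
    ps -ef | grep vx             # typically shows vxconfigd, vxnotify, vxrelocd, vxsvc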
• 215. symantec Lab 3

Lab 3: Creating a Volume and File System
In this lab, you create new disk groups, simple volumes, and file systems, mount and unmount the file systems, and observe the volume and disk properties. The first exercise uses the VEA interface. The second exercise uses the command-line interface.
For Lab Exercises, see Appendix A. For Lab Solutions, see Appendix B.

Lab 3: Creating a Volume and File System
In this lab, you create new disk groups, simple volumes, and file systems, mount and unmount the file systems, and observe the volume and disk properties. The first exercise uses the VEA interface. The second exercise uses the command-line interface.
The Lab Solutions for this lab are located in Appendix B: "Lab 3 Solutions: Creating a Volume and File System."
If you use object names other than the ones provided, substitute the names accordingly in the commands.
Caution: In this lab, do not include the boot disk in any of the tasks.

Prerequisite Setup
To perform this lab, you need a lab system with Storage Foundation pre-installed, configured, and licensed. In addition to this, you also need four empty and unused external disks to be used during the labs.
Note: Although you should not have to perform disk labeling, here are some tips that may help if your disks are not properly formatted:
- On Solaris, use the format command to place a label on any disks that are not properly labeled for use under Solaris. Ask the instructor for details.
- On Linux, if you have problems initializing a disk, you may need to run this command: fdisk /dev/disk. Use options -o and -w to write a new DOS partition table. (The disk may have previously been used with Solaris.)

Lab 3: Creating a Volume and File System A-15
• 216. Classroom Lab Values
In preparation for this lab, you will need the following information about your lab environment. For your reference, you may record the information here, or refer back to the first lab where you initially documented this information.

Object                                    Sample Value                 Your Value
root password                             veritas
Host name                                 train1
My Data Disks                             Solaris: c1t#d0 - c1t#d5
                                          HP-UX: c4t0d0 - c4t0d5
                                          AIX: hdisk21 - hdisk26
                                          Linux: sda - sdf
Prefix to be used with object names       name

A-16 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
• 217. Creating a Volume and File System: VEA
1 Run and log on to the VEA interface as the root user.
2 View all the disk devices on the system. What is the status of the disks assigned to you for the labs?
3 Select an uninitialized disk and initialize it using the VEA. Observe the change in the Status column. What is the status of the disk now?
4 Create a new disk group using the disk you initialized in the previous step. Name the new disk group namedg1. Observe the change in the disk status.
  Note: If you are sharing a disk array, make sure that the prefix you are using for the disk group names is unique.
5 Using VEA, create a new volume of size 1g in namedg1. Name the new volume namevol1. Create a file system on it and make sure that the file system is mounted at boot time to the /name1 directory.
6 Check if the file system is mounted and verify that there is an entry for this file system in the file system table.
7 View the properties of the disk in the namedg1 disk group and note the Capacity and the Unallocated space fields.
8 Try to create a second volume, namevol2, in namedg1 and specify a size slightly larger than the unallocated space on the existing disk in the disk group, for example 4g on the standard Symantec classroom systems. Do not create a file system on the volume. What happens?
9 Add a disk to the namedg1 disk group.
10 Create the same volume, namevol2, in the namedg1 disk group using the same size as in step 8. Do not create a file system.
11 Observe the volumes by selecting the Volumes object in the object tree. Can you tell which volume has a mounted file system?
12 Create a VxFS file system on namevol2 and mount it to the /name2 directory. Ensure that the file system is not mounted at boot time. Check if the /name2 file system is currently mounted and verify that it has not been added to the file system table.

Lab 3: Creating a Volume and File System A-17
• 218. 13 Observe the commands that were executed by VEA during this section of the lab.

Creating a Volume and File System: CLI
1 View all the disk devices on the system. What is the status of the disks assigned to you for the labs?
2 Select an uninitialized disk and initialize it using the CLI. Observe the change in the Status column. What is the status of the disk now?
3 Create a new disk group using the disk you initialized in the previous step. Name the new disk group namedg2. Observe the change in the disk status.
  Note: If you are sharing a disk array, make sure that the prefix you are using for the disk group names is unique.
4 Using the vxassist command, create a new volume of size 1g in namedg2. Name the new volume namevol3.
5 Create a Veritas file system on the namevol3 volume and mount the file system to the /name3 directory. Make sure that the file system is mounted at boot time.
6 Unmount the /name3 file system, verify the unmount, and remount using the mount -a command to mount all file systems in the file system table.
7 Identify the amount of free space in the namedg2 disk group. Try to create a volume in this disk group named namevol4 with a size slightly larger than the available free space, for example 5g on standard Symantec classroom systems. What happens?
  Note: The disk sizes in Symantec Virtual Academy lab environments are slightly less than 2g. Ensure that you use the correct value suitable to your environment instead of the 5g example used here.
8 Initialize a new disk and add it to the namedg2 disk group. Observe the change in free space.
9 Create the same volume, namevol4, in the namedg2 disk group using the same size as in step 7.

A-18 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
• 219. 10 Display volume information for the namedg2 disk group using the vxprint -g namedg2 -htr command. Can you identify which disks are used for which volumes?
11 List the disk groups on your system using the vxdg list command.

Removing Volumes, Disks and Disk Groups: CLI
1 Unmount the /name3 file system and remove it from the file system table.
2 Remove the namevol4 volume in the namedg2 disk group. Observe the disk group configuration information using the vxprint -g namedg2 -htr command.
3 Remove the second disk (namedg202) from the namedg2 disk group. Observe the change in its status.
4 Destroy the namedg2 disk group.
5 Observe the status of the disk devices on the system.

Removing Volumes, Disks and Disk Groups: VEA
1 Unmount both the /name1 and /name2 file systems using VEA. Accept to remove the file systems from the file system table if prompted. Check if the file systems are unmounted and verify that any corresponding entries have been removed from the file system table.
  a Select the File Systems node in the object tree and select the /name1 file system.
  b Select Actions->Unmount File System.
  c Confirm the unmount and select Yes when prompted to remove it from the file system table.
  d Select the /name2 file system. Select Actions->Unmount File System. Confirm the unmount.

Lab 3: Creating a Volume and File System A-19
• 220. Both file systems should disappear from the file system list in VEA. You can use the command line to verify the changes as follows:
  Solaris: mount; cat /etc/vfstab
  HP-UX: mount
  Linux: cat /etc/fstab
The /name1 and /name2 file systems should not be among the mounted file systems, and the file system table should not contain any entries corresponding to these file systems.
2 Remove the namevol2 volume in the namedg1 disk group.
  a Select the Volumes node in the object tree and select the namevol2 volume.
  b Select Actions->Delete Volume. Confirm when prompted.
3 Select the Disk Groups node in the object tree and observe the disks in the namedg1 disk group. Can you identify which disk is empty? The %Used column should show 0% for the unused disk, which is the second disk in the disk group (namedg102).
4 Remove the disk you identified as empty from the namedg1 disk group. Select the empty disk and select Actions->Remove Disk From Disk Group.
5 Observe all the disks on the system. What is the status of the disk you removed from the disk group? Select the Disks node in the object tree and observe the disks in the right pane view. The disk removed in step 4 should be in the Free state.
6 Destroy the namedg1 disk group.
  a Select the Disk Groups node in the object tree and the namedg1 disk group in the right pane view.
  b Select Actions->Destroy Disk Group. Confirm when prompted.
7 Observe all the disks on the system. What is the status of the disks? Select the Disks node in the object tree and observe the disks in the right pane view. If you have followed all the lab steps, you should have four disks in the Free state; they are already initialized but not in a disk group.

A-20 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
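A hedged sketch of the CLI portions of this lab (the device name is illustrative; disk media names follow the default dgname## convention):

    # Create: initialize a disk, then build the disk group, volume, and file system.
    vxdisksetup -i c1t1d0
    vxdg init namedg2 namedg201=c1t1d0
    vxassist -g namedg2 make namevol3 1g
    mkfs -F vxfs /dev/vx/rdsk/namedg2/namevol3
    mkdir /name3
    mount -F vxfs /dev/vx/dsk/namedg2/namevol3 /name3

    # Remove: unmount, delete the volume, pull the disk, destroy the group.
    umount /name3
    vxassist -g namedg2 remove volume namevol4
    vxdg -g namedg2 rmdisk namedg202
    vxdg destroy namedg2

On Linux, mkfs and mount take -t vxfs rather than -F vxfs.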
• 221. symantec Lab 4

Lab 4: Selecting Volume Layouts
In this lab, you create simple concatenated volumes, striped volumes, and mirrored volumes. You also practice creating a layered volume and using ordered allocation while creating volumes.
For Lab Exercises, see Appendix A. For Lab Solutions, see Appendix B.

Lab 4: Selecting Volume Layouts
In this lab, you create simple concatenated volumes, striped volumes, and mirrored volumes. You also practice creating a layered volume and using ordered allocation while creating volumes.
The Lab Solutions for this lab are located in Appendix B: "Lab 4 Solutions: Selecting Volume Layouts."

Prerequisite Setup
To perform this lab, you need a lab system with Storage Foundation pre-installed, configured, and licensed. In addition to this, you also need four empty and unused external disks to be used during the labs.

Lab 4: Selecting Volume Layouts A-21
• 222. Classroom Lab Values
In preparation for this lab, you will need the following information about your lab environment. For your reference, you may record the information here, or refer back to the first lab where you initially documented this information.

Object                                    Sample Value                 Your Value
root password                             veritas
Host name                                 train1
My Data Disks                             Solaris: c1t#d0 - c1t#d5
                                          HP-UX: c4t0d0 - c4t0d5
                                          AIX: hdisk21 - hdisk26
                                          Linux: sda - sdf
Prefix to be used with object names       name

A-22 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
• 223. Creating Volumes with Different Layouts: CLI
1 Add four initialized disks to a disk group called namedg. Verify your action using vxdisk -o alldgs list.
  Note: If you are sharing a disk array, make sure that the prefix you are using for the disk group name is unique.
2 Create a 50-MB concatenated volume in the namedg disk group called namevol1 with one drive.
3 Display the volume layout. What names have been assigned to the plex and subdisks?
4 Remove the volume.
5 Create a 50-MB striped volume on two disks in namedg and specify which two disks to use in creating the volume. Name the volume namevol2. What names have been assigned to the plex and subdisks?
6 Create a 20-MB, two-column striped volume with a mirror in namedg. Set the stripe unit size to 256K. Name the volume namevol3.
7 Create a 20-MB, two-column striped volume with a mirror. Set the stripe unit size to 128K. Select at least one disk that you should not use. Name the volume namevol4. Was the volume created?
8 Create a 20-MB, 3-column striped volume with a mirror. Specify three disks to be used during volume creation. Name the volume namevol4. Was the volume created?
9 Create the same volume specified in the previous step, but without the mirror. What names have been assigned to the plex and subdisks?
10 Remove the volumes created in this exercise.
11 Remove the disk group that was used in this exercise.

Lab 4: Selecting Volume Layouts A-23
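A hedged sketch of some of these steps with vxassist (disk media names are illustrative):

    # Step 2: 50-MB concatenated volume on a named drive.
    vxassist -g namedg make namevol1 50m namedg01

    # Step 5: 50-MB striped volume across two named disks.
    vxassist -g namedg make namevol2 50m layout=stripe ncol=2 namedg01 namedg02

    # Step 6: 20-MB, two-column striped and mirrored volume, 256K stripe unit.
    vxassist -g namedg make namevol3 20m layout=mirror-stripe ncol=2 stripeunit=256k

    # Step 9: the three-column stripe of step 8, without the mirror.
    vxassist -g namedg make namevol4 20m layout=stripe ncol=3 namedg01 namedg02 namedg03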
• 224. Creating Volumes with Different Layouts: VEA
1 If you had exited out of VEA, start it and connect back to your system.
2 Add four initialized disks to a disk group called namedg. Verify your action in the main window.
3 Create a 50-MB concatenated volume in the namedg disk group called namevol1 with one drive.
4 Display the volume layout. Notice the naming convention of the plex and subdisk.
5 Remove the volume.
6 Create a 50-MB striped volume on two disks in namedg, and specify which two disks to use in creating the volume. Name the volume namevol2. View the volume.
7 Create a 20-MB, two-column striped volume with a mirror in namedg. Set the stripe unit size to 256K. Name the volume namevol3. View the volume. Notice that you now have a second plex.
8 Create a 20-MB, two-column striped volume with a mirror. Set the stripe unit size to 128K. Select at least one disk you should not use. Name the volume namevol4. Was the volume created?
9 Create a 20-MB, 3-column striped volume with a mirror. Specify three disks to be used during volume creation. Name the volume namevol4. Was the volume created?
10 Create the same volume specified in step 9, but without the mirror.
  Note: If you did not cancel out of the previous step, then just uncheck the mirrored option and continue the wizard. Was the volume created?
11 Delete all volumes in the namedg disk group.

A-24 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
• 225. 12 View the commands executed by VEA during this section of the lab.

Creating Layered Volumes
You can complete this exercise using either the VEA or the CLI interface. Solutions are provided for both.
Note: In order to perform the tasks in this exercise, you should have at least four disks in the disk group that you are using.
1 First, ensure that any volumes created in the previous labs are removed from the namedg disk group.
2 Create a 100-MB Striped Mirrored volume with no logging. Name the volume namevol1.
3 Create a Concatenated Mirrored volume with no logging called namevol2. The size of the volume should be greater than the size of the largest disk in the disk group; for example, if your largest disk is 4 GB, then create a 6-GB volume.
  Note: If you are working in the Virtual Academy (VA) lab environment, your largest disk will have a size of 2 GB. In this environment, you can use a 3-GB volume size.
4 If you are using VEA, view the commands executed by VEA to create the namevol2 volume during this section of the lab.
5 View the volumes and compare the layouts.
6 Remove all of the volumes in the namedg disk group.

Lab 4: Selecting Volume Layouts A-25
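A hedged sketch of the CLI forms of steps 2 and 3; stripe-mirror and concat-mirror are the standard vxassist keywords for these layered layouts:

    # 100-MB striped-mirrored (layered) volume with no log.
    vxassist -g namedg make namevol1 100m layout=stripe-mirror,nolog

    # 6-GB concatenated-mirrored volume spanning disks (use 3g in the VA environment).
    vxassist -g namedg make namevol2 6g layout=concat-mirror,nolog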
• 226. Using Ordered Allocation While Creating Volumes
You can complete this exercise using either the VEA or the CLI interface. Solutions are provided for both.
Note: In order to perform the tasks in this exercise, you should have at least four disks in the disk group that you are using.
1 Create a 20-MB, two-column striped volume with a mirror in the namedg disk group. Name the volume namevol1.
2 Display the volume layout. How are the disks allocated in the volume? Which disk devices are used?
3 Remove the volume you just made, and re-create it by specifying the four disks in an order different from the original layout. Use the command line to create the volume in this step.
4 Display the volume layout. How are the disks allocated this time?
5 Remove all of the volumes in the namedg disk group.

A-26 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
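A sketch of step 3, assuming illustrative disk media names; with -o ordered, vxassist fills columns and then mirrors strictly in the order the disks are listed:

    vxassist -g namedg -o ordered make namevol1 20m \
        layout=mirror-stripe ncol=2 namedg03 namedg04 namedg01 namedg02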
• 227. Optional Lab: Creating Volumes with User Defaults: CLI
This optional guided practice illustrates how to use the files:
- /etc/default/vxassist
- /etc/default/alt_vxassist
to create volumes with defaults specified by the user. Note that some of the default values may not apply to VEA because VEA uses explicit values for number of columns, stripe unit size, and number of mirrors while creating striped and mirrored volumes.
1 Create two files in /etc/default:
  a Using the vi editor, create a file called vxassist that includes the following:
      # when mirroring create three mirrors
      nmirror=3
  b Using the vi editor, create a file called alt_vxassist that includes the following:
      # use 256K as the default stripe unit size for
      # regular volumes
      stripeunit=256k
2 Use these files when creating the following volumes:
  a Create a 100-MB volume called namevol1 using layout=mirror.
  b Create a 100-MB, two-column striped volume called namevol2 using -d alt_vxassist so that Volume Manager uses the default file.
3 View the layout of these volumes using VEA or by using vxprint -g namedg -htr. What do you notice?
4 Remove any vxassist default files that you created in this optional lab section. The presence of these files can impact subsequent labs where default behavior is assumed.
5 Remove all of the volumes in the namedg disk group.

Lab 4: Selecting Volume Layouts A-27
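A sketch of step 2 under those default files:

    # Uses /etc/default/vxassist, so the mirrored volume gets three plexes.
    vxassist -g namedg make namevol1 100m layout=mirror

    # -d names an alternate defaults file, so the 256k stripe unit applies.
    vxassist -g namedg -d /etc/default/alt_vxassist make namevol2 100m \
        layout=stripe ncol=2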
• 228. A-28 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
• 229. symantec Lab 5

Lab 5: Making Basic Configuration Changes
This lab provides practice in making basic configuration changes. In this lab, you add mirrors and logs to existing volumes, and change the volume read policy. You also resize volumes, rename disk groups, and move data between systems.
For Lab Exercises, see Appendix A. For Lab Solutions, see Appendix B.

Lab 5: Making Basic Configuration Changes
This lab provides practice in making basic configuration changes. In this lab, you add mirrors and logs to existing volumes, and change the volume read policy. You also resize volumes, rename disk groups, and move data between systems.
The Lab Solutions for this lab are located on the following page: "Lab 5 Solutions: Making Basic Configuration Changes," page B-47.

Prerequisite Setup
To perform this lab, you need a lab system with Storage Foundation pre-installed, configured, and licensed. In addition to this, you also need four external disks to be used during the labs. At the beginning of this lab, you should have a disk group called namedg that has four external disks and no volumes in it.

Lab 5: Making Basic Configuration Changes A-29
• 230. Classroom Lab Values
In preparation for this lab, you will need the following information about your lab environment. For your reference, you may record the information here, or refer back to the first lab where you initially documented this information.

Object                                    Sample Value                 Your Value
root password                             veritas
Host name                                 train1
Host name of the system sharing disks
with my system (my partner system)        train2
My Data Disks                             Solaris: c1t#d0 - c1t#d5
                                          HP-UX: c4t0d0 - c4t0d5
                                          AIX: hdisk21 - hdisk26
                                          Linux: sda - sdf
2nd Internal Disk                         Solaris: c0t2d0
                                          HP-UX: c3t15d0
                                          AIX: hdisk1
                                          Linux: hdb
Location of Lab Scripts (if any)          /student/labs/sf/sf50
Prefix to be used with object names       name

A-30 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
• 231. Administering Mirrored Volumes
You can complete this exercise using either the VEA or the CLI interface. Solutions are provided for both.
Note: In order to perform the tasks in this exercise, you should have at least four disks in the disk group that you are using.
1 Ensure that you have a disk group called namedg with four disks in it. If not, create the disk group using four disks.
  Note: If you have completed the previous lab steps, you should already have the namedg disk group with four disks and no volumes.
2 Create a 50-MB, two-column striped volume called namevol1 in namedg.
3 Display the volume layout. How are the disks allocated in the volume? Note the disk devices used for the volume.
4 Add a mirror to namevol1, and display the volume layout. What is the layout of the second plex? Which disks are used for the second plex?
5 Add a dirty region log to namevol1 and specify the disk to use for the DRL. Display the volume layout.
6 Add a second dirty region log to namevol1 and specify another disk to use for the DRL. Display the volume layout.
7 Remove the first dirty region log that you added to the volume. Display the volume layout. Can you control which log was removed?
8 Find out what the current volume read policy for namevol1 is. Change the volume read policy to round robin, and display the volume layout.
9 Remove the original mirror (namevol1-01) from namevol1, and display the volume layout.
10 Remove namevol1.
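A hedged sketch of the CLI forms of steps 4 through 9 (disk media names are illustrative):

    # Step 4: add a mirror.
    vxassist -g namedg mirror namevol1

    # Steps 5-6: add dirty region logs on named disks.
    vxassist -g namedg addlog namevol1 logtype=drl namedg03
    vxassist -g namedg addlog namevol1 logtype=drl namedg04

    # Step 7: remove one log; vxassist chooses which log plex to delete.
    vxassist -g namedg remove log namevol1

    # Step 8: display, then change, the read policy (READPOL column).
    vxprint -g namedg -ht namevol1
    vxvol -g namedg rdpol round namevol1

    # Step 9: dissociate and remove the original plex.
    vxplex -g namedg -o rm dis namevol1-01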
• 232. Resizing a Volume
You can complete this exercise using either the VEA or the CLI interface. Solutions are provided for both.
1 If you have not already done so, remove the volumes created in the previous lab in namedg.
2 Create a 20-MB concatenated mirrored volume called namevol1 in namedg. Create a Veritas file system on the volume and mount it to /name1. Make sure that the file system is not added to the file system table.
3 View the layout of the volume and display the size of the file system.
4 Add data to the volume by creating a file in the file system and verify that the file has been added.
5 Expand the file system and volume to 100 MB. Observe the volume layout to see the change in size. Display the file system size.

Resizing a File System Only: CLI
Note: This exercise should be performed using the command-line interface because the VEA does not allow you to create a file system smaller in size than the underlying volume. You also cannot change the size of the volume and the file system separately using the GUI.
1 Create a 50-MB concatenated volume named namevol2 in the namedg disk group.
2 Create a Veritas file system on the volume by using the mkfs command. Specify the file system size as 40 MB.
3 Create a mount point /name2 on which to mount the file system, if it does not already exist.
4 Mount the newly created file system on the mount point /name2.
5 Verify disk space using the df command (or the bdf command on HP-UX). Observe that the total size of the file system is smaller than the size of the volume.
6 Expand the file system to the full size of the underlying volume using the fsadm -b newsize option.
7 Verify disk space using the df command (or the bdf command on HP-UX).
8 Make a file on the file system mounted at /name2, so that the free space is less than 50 percent of the total file system size.
9 Shrink the file system to 50 percent of its current size. What happens?

A-32 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
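A hedged sketch of the key resizing commands; sizes for mkfs and fsadm are given in 512-byte sectors, their default unit (40 MB = 81920 sectors, 50 MB = 102400 sectors):

    # Step 5 of the first exercise: grow volume and file system together.
    vxresize -g namedg namevol1 100m

    # FS-only exercise, step 2: a 40-MB file system on the 50-MB volume.
    mkfs -F vxfs /dev/vx/rdsk/namedg/namevol2 81920

    # Step 6: grow the file system to the full volume size.
    fsadm -F vxfs -b 102400 /name2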
• 233. Renaming a Disk Group
You can complete this exercise using either the VEA or the CLI interface. Solutions are provided for both.
1 Try to rename the namedg disk group to namedg1 while the /name1 and /name2 file systems are still mounted. Can you do it?
2 Observe the contents of the /dev/vx/rdsk and /dev/vx/dsk directories and their subdirectories. What do you see?
3 Unmount all the mounted file systems in the namedg disk group.
4 Rename the namedg disk group to namedg1. Do not forget to start the volumes in the disk group after the renaming if you are using the command-line interface.
5 Observe the contents of the /dev/vx/rdsk and /dev/vx/dsk directories and their subdirectories. What has changed?
6 Observe the disk media names. Is there any change?
7 Mount the /name1 and /name2 file systems, and observe their contents.

Lab 5: Making Basic Configuration Changes A-33
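A sketch of the rename (step 4), which is done by deporting the group and importing it under the new name, then restarting its volumes:

    umount /name1
    umount /name2
    vxdg deport namedg
    vxdg -n namedg1 import namedg
    vxvol -g namedg1 startall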
• 234. Moving Data Between Systems
You can complete this exercise using either the VEA or the CLI interface. Solutions are provided for both.
Note: If you are sharing a disk array, each participant should make sure that the prefix used for object names is unique.
1 Copy new data to the /name1 and /name2 file systems. For example, copy the /etc/hosts file to /name1 and the /etc/group file to /name2.
2 View all the disk devices on the system.
3 Unmount all file systems in the namedg1 disk group and deport the disk group. Do not give it a new owner. View all the disk devices on the system.
4 Identify the name of the system that is sharing access to the same disks as your system. If you are not sure, check with your instructor. Note the name of the partner system here. Partner system hostname: ____________
5 Using the command-line interface, perform the following steps on your partner system:
  Note: If you are working on a standalone system, skip step a in the following and use your own system as the partner system.
  a Remote login to the partner system.
  b Import the namedg1 disk group on the partner system, start the volumes in the imported disk group, and view all the disk devices on the system.
  c While still logged in to the partner system, mount the /name1 and /name2 file systems. Note that you will need to create the mount directories on the partner system before mounting the file systems. Observe the data in the file systems.
  d Unmount the file systems on your partner system.
  e On your partner system, deport namedg1 and assign your own machine name, for example, train5, as the New host. Exit from the partner system.
6 On your own system, import the disk group and change its name back to namedg. View all the disk devices on the system.
7 Deport the disk group namedg by assigning the ownership to anotherhost. View all the disk devices on the system. Why would you do this?
8 From the command line, display detailed information about one of the disks in the disk group using the vxdisk list device_tag command. Note the hostid field in the output.
9 Import namedg. Were you successful?

A-34 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
• 235. 10 Now import namedg and overwrite the disk group lock. What did you have to do to import it and why?
11 From the command line, display detailed information about the same disk in the disk group as you did in step 8 using the vxdisk list device_tag command. Note the change in the hostid field in the output.
12 Remove all of the volumes in the namedg disk group.

Lab 5: Making Basic Configuration Changes A-35
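A hedged sketch of the deport/import sequence in this exercise (host names follow the classroom samples):

    # Step 3: deport without a new owner.
    vxdg deport namedg1

    # Step 5b, on the partner system: import and start the volumes.
    vxdg import namedg1
    vxvol -g namedg1 startall

    # Step 5e: deport, assigning ownership back to your own host.
    vxdg -h train5 deport namedg1

    # Step 6: import under the old name again.
    vxdg -n namedg import namedg1

    # Steps 7-10: deport to another host, then clear the lock on import.
    vxdg -h anotherhost deport namedg
    vxdg import namedg          # fails: the disk group is locked by anotherhost
    vxdg -C import namedg       # -C clears the host lock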
• 236. Preparation for Defragmenting a Veritas File System Lab
A lab exercise in the next lesson requires that you run a script that sets up files with different size extents. Because the script can take a long time to run, you may want to begin running the script now, so that the necessary environment is created by the next lab time.
1 Identify the device tag for the second internal disk on your lab system. If you do not have a second internal disk or if you cannot use the second internal disk, use one of the external disks allocated to you. Second internal disk (or the external disk used in this lab): ____________
2 Initialize the second internal disk (or the external disk used in this lab) using a non-CDS disk format.
3 Create a non-CDS disk group called testdg using the disk you initialized in step 2.
4 In the testdg disk group, create a 1-GB concatenated volume called testvol, initializing the volume space with zeros using the init=zero option to vxassist.
5 Create a VxFS file system on testvol and mount it on /fs_test.
6 Ask your instructor for the location of the extents.sh script. Run the extents.sh script.
  Note: This script can take about 15 minutes to run.
7 Verify that the VRTSspt software is already installed on your system. If not, ask your instructor for the location of the software and install it.
  Note: Before Storage Foundation 5.0, the VRTSspt software was provided as a separate support utility that needed to be installed by the user. With 5.0, this software is installed as part of the product installation.
8 Ensure that the directory where the vxbench command is located is included in your PATH definition.

A-36 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
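A hedged sketch of steps 2 through 5 on Solaris (the device name is the classroom sample for the second internal disk):

    # Initialize with a non-CDS (sliced) format and build a non-CDS disk group.
    vxdisksetup -i c0t2d0 format=sliced
    vxdg init testdg testdg01=c0t2d0 cds=off

    # 1-GB concatenated volume, zero-filled at creation.
    vxassist -g testdg make testvol 1g init=zero

    # File system and mount point.
    mkfs -F vxfs /dev/vx/rdsk/testdg/testvol
    mkdir /fs_test
    mount -F vxfs /dev/vx/dsk/testdg/testvol /fs_test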
• 237. symantec Lab 6

Lab 6: Administering File Systems
In this lab, you practice file system administration, including defragmentation and administering the file change log.
For Lab Exercises, see Appendix A. For Lab Solutions, see Appendix B.

Lab 6: Administering File Systems
In this lab, you practice file system administration, including defragmentation and administering the file change log.
The Lab Solutions for this lab are located on the following page: "Lab 6 Solutions: Administering File Systems," page B-67.

Setup
To perform this lab, you need a lab system with Storage Foundation pre-installed, configured, and licensed. In addition to this, you also need four external disks and the second internal disk to be used during the labs. If you do not have a second internal disk or if you cannot use the second internal disk, you need five external disks to complete the labs. At the beginning of this lab, you should have a disk group called namedg that has four external disks and no volumes in it. The second internal disk should be empty and unused.
Note: If you are working in a North American Mobile Academy lab environment, you cannot use the second internal disk during the labs. If that is the case, select one of the external disks to complete the lab steps.

Lab 6: Administering File Systems A-37
• 238. Classroom Lab Values
In preparation for this lab, you will need the following information about your lab environment. For your reference, you may record the information here, or refer back to the first lab where you initially documented this information.

Object                                    Sample Value                 Your Value
My Data Disks                             Solaris: c1t#d0 - c1t#d5
                                          HP-UX: c4t0d0 - c4t0d5
                                          AIX: hdisk21 - hdisk26
                                          Linux: sda - sdf
2nd Internal Disk                         Solaris: c0t2d0
                                          HP-UX: c3t15d0
                                          AIX: hdisk1
                                          Linux: hdb
Location of Lab Scripts (if any)          /student/labs/sf/sf50
Prefix to be used with object names       name

A-38 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
• 239. Preparation for Defragmenting a Veritas File System Lab
Note: If you have already performed these steps at the end of the last lab, then you can skip this section and proceed with the Defragmenting a Veritas File System section.
1 Identify the device tag for the second internal disk on your lab system. If you do not have a second internal disk or if you cannot use the second internal disk, use one of the external disks allocated to you. Second internal disk (or the external disk used in this lab): ____________
2 Initialize the second internal disk (or the external disk used in this lab) using a non-CDS disk format.
3 Create a non-CDS disk group called testdg using the disk you initialized in step 2.
4 In the testdg disk group, create a 1-GB concatenated volume called testvol, initializing the volume space with zeros using the init=zero option to vxassist.
5 Create a VxFS file system on testvol and mount it on /fs_test.
6 Ask your instructor for the location of the extents.sh script. Run the extents.sh script.
  Note: This script can take about 15 minutes to run.
7 Verify that the VRTSspt software is already installed on your system. If not, ask your instructor for the location of the software and install it.
  Note: Before Storage Foundation 5.0, the VRTSspt software was provided as a separate support utility that needed to be installed by the user. With 5.0, this software is installed as part of the product installation.
8 Ensure that the directory where the vxbench command is located is included in your PATH definition.

Lab 6: Administering File Systems A-39
• 240. Defragmenting a Veritas File System The purpose of this section is to examine the structure of a fragmented and an unfragmented file system and compare the file system's throughput in each case. The general steps in this exercise are: Make and mount a file system. Examine the structure of the new file system for extents allocated. Then examine a fragmented file system and report the degree of fragmentation in the file system. Use a support utility called vxbench to measure throughput to specific files within the fragmented file system. Defragment the file system, reporting the degree of fragmentation. Repeat executing the vxbench utility using identical parameters to measure throughput to the same files within a relatively unfragmented file system. Compare the total throughput before and after the defragmentation process. 1 In the namedg disk group, create a 1-GB concatenated volume called namevol1. 2 Create a VxFS file system on namevol1 and mount it on /name1. 3 Run a fragmentation report on /name1 to analyze directory and extent fragmentation. Is a newly created, empty file system considered fragmented? In the report, what percentages indicate a file system's fragmentation? 4 What is a fragmented file system? 5 If you were shown the following directory fragmentation report about a file system, what would you conclude?

Directory Fragmentation Report
Dirs       Total     Immed     Immeds     Dirs to     Blocks to
Searched   Blocks    Dirs      to Add     Reduce      Reduce
199185     85482     115118    5407       5473        5655

6 Unmount /name1 and remove namevol1 in the namedg disk group. Note: The following steps use the /fs_test file system to analyze the impact of fragmentation on file system performance. Verify that the extents.sh script has completed before you continue with the rest of this lab. A-40 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
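The fragmentation reports in steps 3 and 5 can be produced with the VxFS fsadm utility. The following is a minimal sketch, assuming the mount point used in this lab; the file-system-type option is -F vxfs on Solaris and HP-UX and -t vxfs on Linux:

   # Directory (-D) and extent (-E) fragmentation reports for a
   # mounted VxFS file system:
   /opt/VRTS/bin/fsadm -F vxfs -D -E /name1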
• 241. 7 Run a fragmentation report on /fs_test to analyze directory and extent fragmentation. Is /fs_test fragmented? Why or why not? What should be done? 8 Use the ls -e command to display the extent attributes of the files in the /fs_test file system. Note that on the Solaris platform you need to use the ls command provided by the VxFS file system software to be able to use the -e option. 9 Measure the sequential read throughput to a particular file, for example, an 8-MB file on an 8K extent (for example, /fs_test/test48), in a fragmented file system using the vxbench utility and record the results. Use an 8K sequential I/O size. Notes: You need to use the vxbench utility that is appropriate for the platform you are working on, for example vxbench_9 on Solaris 9. To identify the appropriate vxbench command, use the ls -l /opt/VRTSspt/FS/VxBench command. If this path is not in your PATH environment variable, use the full path of the command while running the corresponding vxbench utility. Remount the file system before running each I/O test. 10 Repeat the same test for an 8-MB file on an 8-MB extent (for example, using the /fs_test/test58 file). Note that the file system must be remounted between the tests. Can you explain why? 11 Defragment /fs_test and gather summary statistics after each pass through the file system. After the defragmentation completes, determine if /fs_test is fragmented. Why or why not? Note: The defragmentation can take about 5 minutes to complete. 12 Measure the throughput of the unfragmented file system using the vxbench utility on the same files as you did in steps 9 and 10. Is there any change in throughput? Notes: You need to use the vxbench utility that is appropriate for the platform you are working on, for example vxbench_9 on Solaris 9. To identify the appropriate vxbench command, use the ls -l /opt/VRTSspt/FS/VxBench command. If this path is not in your PATH environment variable, use the full path of the command while running the corresponding vxbench utility. The file system must be remounted before each test to clear the read buffers. If you have used external shared disks on a disk array used by other systems for this lab, the performance results may be impacted by the disk
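For steps 9 through 12, the invocations might look like the sketch below. The vxbench workload and sub-option names (-w read, iosize, iocount) are assumptions based on common usage of this support tool and may differ in your vxbench version, so check its help output first. The defragmentation flags (-d directories, -e extents, -s summary, -v verbose) are standard VxFS fsadm options:

   # Remount to clear cached data, then time 8K sequential reads
   # over the 8-MB test file (1024 x 8K I/Os):
   umount /fs_test
   mount -F vxfs /dev/vx/dsk/testdg/testvol /fs_test
   /opt/VRTSspt/FS/VxBench/vxbench_9 -w read \
       -i iosize=8,iocount=1024 /fs_test/test48

   # Step 11: defragment and print summary statistics per pass:
   /opt/VRTS/bin/fsadm -F vxfs -d -e -s -v /fs_test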
• 242. array cache and may not provide a valid comparison between a fragmented and a defragmented file system. 13 What is the difference between an unfragmented and a fragmented file system? 14 Is any one environment more prone to needing defragmentation than another? Reading the File Change Log (FCL) 1 In the namedg disk group, create a new 10-MB volume called namevol1. Create a VxFS file system on namevol1 and mount it on /fcl_test. 2 Turn the FCL on for /fcl_test, and ensure that it is on. 3 Go to the directory that contains the FCL. 4 Display the superblock for /fcl_test. 5 How do you know that there have been no changes in the file system yet? 6 Add some files to /fcl_test. Then remove one of the files you just added. 7 Display the superblock for /fcl_test. 8 How do you know that changes have been made to the file system? 9 Print the contents of the FCL. 10 Which files are identified by the inode numbers that are listed in the Create type? 11 Unmount the /fcl_test file system and remove namevol1. 12 The next two lab sections are optional labs on analyzing and defragmenting fragmented file systems. If you are not planning to carry out the optional labs, unmount the /fs_test file system and destroy the testdg disk group; otherwise, skip this step. A-42 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
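A possible command sequence for the FCL steps, sketched with the VxFS fcladm utility; the print offset (0, which covers the FCL superblock) and the location of the change log under lost+found are assumptions that may vary by release:

   /opt/VRTS/bin/fcladm on /fcl_test        # step 2: turn the FCL on
   /opt/VRTS/bin/fcladm state /fcl_test     # ... and confirm that it is on
   cd /fcl_test/lost+found                  # step 3: directory holding the FCL
   /opt/VRTS/bin/fcladm print 0 /fcl_test   # steps 4/7/9: superblock and records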
• 243. Optional Lab Exercises The next set of lab exercises is optional and may be performed if you have time. These exercises provide additional practice in defragmenting a file system and monitoring fragmentation. Optional Lab: Defragmenting a Veritas File System This section uses the /fs_test file system to analyze the impact of fragmentation on the performance of a variety of I/O types on files using small and large extent sizes. 1 Recreate the fragmented /fs_test file system using the following steps: a Unmount the /fs_test file system. b Recreate a VxFS file system on the testvol volume in testdg. c Mount the file system to /fs_test. d Ask your instructor for the location of the extents.sh script. Run the extents.sh script. Note: This script can take about 15 minutes to run. 2 Run a series of performance tests for a variety of I/O types using the vxbench utility to compare the performance of the files with the 8K extent size (/fs_test/test48) and the 8000K extent size (/fs_test/test58) by performing the following steps. Complete the following table when doing the performance tests. Lab 6: Administering File Systems A-43
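Steps 1b and 1c might look like the following on Solaris (Linux uses -t vxfs rather than -F vxfs); the device paths assume the testdg disk group and testvol volume created earlier in this lab:

   umount /fs_test
   mkfs -F vxfs /dev/vx/rdsk/testdg/testvol
   mount -F vxfs /dev/vx/dsk/testdg/testvol /fs_test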
• 244.
Test Type                          Time (seconds)          Throughput (KB/second)
                                   Before      After       Before       After
                                   Defrag      Defrag      Defrag       Defrag
Sequential reads, 8K extent        2.709       .526        2953.22      15202.10
Sequential reads, 8000K extent     .547        .549        14634.57     14576.20
Random reads, 8K extent            8.268       6.267       967.54       1276.53
Random reads, 8000K extent         6.541       6.468       1223.02      1236.91

Note: Results can vary depending on the nature of the data and the model of array used. No performance guarantees are implied by this lab. 3 Ensure that the directory where the vxbench utility is located is included in your PATH definition. 4 Sequential I/O Test. Note: You must unmount and remount the /fs_test file system before each step to clear and initialize the buffer cache. To test the 8K extent size, run the sequential read test against /fs_test/test48; repeat against /fs_test/test58 for the 8000K extent size. 5 Random I/O Test. To test the 8K extent size, run the random read test against /fs_test/test48; repeat against /fs_test/test58 for the 8000K extent size (see the sketch after this table). 6 Defragment the /fs_test file system. The defragmentation process takes some time. 7 Repeat the vxbench performance tests and complete the table with these performance results. 8 Compare the results of the defragmented file system with the fragmented file system. 9 When finished comparing the results in the previous step, unmount the /fs_test file system and destroy the testdg disk group. A-44 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
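A hedged sketch of the sequential and random read tests in steps 4 and 5; as noted above, remount before every run, and treat the vxbench workload and sub-option names as assumptions that may differ across versions:

   # Sequential reads, 8K I/O size:
   umount /fs_test
   mount -F vxfs /dev/vx/dsk/testdg/testvol /fs_test
   vxbench_9 -w read -i iosize=8,iocount=1024 /fs_test/test48

   # Random reads, 8K I/O size:
   umount /fs_test
   mount -F vxfs /dev/vx/dsk/testdg/testvol /fs_test
   vxbench_9 -w rand_read -i iosize=8,iocount=1024 /fs_test/test48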
• 245. Optional Lab: Additional Defragmenting Practice In this exercise, you monitor and defragment a file system by using the fsadm command. 1 Create a new 2-GB striped volume called namevol1 in the namedg disk group. Create a VxFS file system on namevol1 and mount it on /fs_test. 2 Repeatedly copy a small existing file system to /fs_test using a new target directory name each time until the target file system is approximately 85 percent full. For example, on the Solaris platform:
for i in 1 2 3
> do
> cp -r /opt /fs_test/opt$i
> done
Note: Monitor the file system size using df -k on the Solaris platform and bdf on the HP-UX platform, and CTRL-C out of the for loop when the file system becomes approximately 85 percent full. 3 Delete all files in the /fs_test file system over 10 MB in size. 4 Check the level of fragmentation in the /fs_test file system. 5 Repeat steps 2 and 3 using values 4 5 for i in the loop. Fragmentation of both free space and directories will result. 6 Repeat step 2 using values 6 7 for i. Then delete all files that are smaller than 64K to release a reasonable amount of space. 7 Defragment the file system and display the results. Run fragmentation reports both before and after the defragmentation and display summary statistics after each pass. Compare the fsadm report from step 4 with the final report from the last pass in this step. 8 Unmount the /fs_test file system and remove the namevol1 volume used in this lab. Lab 6: Administering File Systems A-45
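Steps 3 and 6 can be scripted with find; the sketch below sizes files in 512-byte blocks for portability across the platforms in this course (GNU find on Linux also accepts -size +10M and -size -64k):

   # Step 3: delete files larger than 10 MB (20480 x 512-byte blocks):
   find /fs_test -type f -size +20480 -exec rm {} \;

   # Step 6: delete files smaller than 64K (128 x 512-byte blocks):
   find /fs_test -type f -size -128 -exec rm {} \;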
• 246. A-46 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
• 247. Lab 7 Lab 7: Resolving Hardware Problems In this lab, you practice recovering from a variety of hardware failure scenarios, resulting in disabled disk groups and failed disks. First you recover a temporarily disabled disk group, and then you use a set of interactive lab scripts to investigate and practice recovery techniques. For Lab Exercises, see Appendix A. For Lab Solutions, see Appendix B. Each interactive lab script: Sets up the required volumes. Simulates and describes a failure scenario. Prompts you to fix the problem. Finally, a set of optional labs is provided to enable you to investigate disk failures further and to understand the behavior of spare disks and hot relocation. The Lab Solutions for this lab are located in "Lab 7 Solutions: Resolving Hardware Problems." Prerequisite Setup To perform this lab, you need a lab system with Storage Foundation pre-installed, configured, and licensed. In addition, you also need four external disks to be used during the labs. At the beginning of this lab, you should have a disk group called namedg that has four external disks and no volumes in it. Lab 7: Resolving Hardware Problems A-47
• 248. Classroom Lab Values In preparation for this lab, you will need the following information about your lab environment. For your reference, you may record the information here, or refer back to the first lab where you initially documented this information.

Object                        Sample Value                 Your Value
My Data Disks:                Solaris: c1t#d0 - c1t#d5
                              HP-UX: c4t0d0 - c4t0d5
                              AIX: hdisk21 - hdisk26
                              Linux: sda - sdf
Location of Lab Scripts:      /student/labs/sf/sf50
Prefix to be used with        name
object names:

A-48 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
• 249. Recovering a Temporarily Disabled Disk Group 1 Remove all disks except for one (namedg01) from the namedg disk group. 2 Create a 1-GB volume called namevol1 in the namedg disk group. 3 Create a file system on namevol1 and mount it to /name1. 4 Copy the contents of the /etc/default directory to /name1 and display the contents of the file system. 5 Ask your instructor for the location of the faildg_temp script, and note the location here: Script location: _ 6 Start writing to a file in the /name1 file system in the background using the following command: dd if=/dev/zero of=/name1/testfile bs=1024 count=500000 & 7 In one terminal, change to the directory containing the script and, before the I/O completes, execute the faildg_temp namedg command. Notes: The faildg_temp script disables the single path to the disk in the disk group to simulate a hardware failure. This is just a simulation and not a real failure; therefore, the operating system will still be able to see the disk after the failure. The script waits until you are ready with analyzing the failure to re-enable the path to the disk in the disk group. If the I/O you started in step 6 completes before you can simulate the failure, you can start it again to observe the I/O failure. 8 Wait for the I/O to fail, and in another terminal observe the error displayed in the system log. 9 Use the vxdisk -o alldgs list and vxdg list commands to determine the status of the disk group and the disk. 10 What happened to the file system? 11 When you are done with analyzing the impact of the failure, change to the terminal where the faildg_temp script is waiting and enter "e" to correct the temporary failure. Lab 7: Resolving Hardware Problems A-49
• 250. Note: In a real failure scenario, after the hardware recovery, you would need to first verify that the operating system can see the disks and then verify that Volume Manager has detected the change in status. If not, you can force VxVM to scan the disks by executing the vxdctl enable command. This will not be necessary for this lab. 12 Assuming that the failure was due to a temporary fiber disconnection and that the data is still intact, recover the disk group and start the volume. Verify the disk and disk group status using the vxdisk -o alldgs list and vxdg list commands. 13 Remount the file system and verify that the contents are still there. Note that you will need to perform a file system check before you mount the file system. 14 Unmount the file system and remove namevol1. At the end of this section, you should be left with a namedg disk group with a single disk and three initialized disks that are free to be used in a new disk group. Preparation for Disk Failure Labs Overview The following sections use an interactive script to simulate a variety of disk failure scenarios. Your goal is to recover from the problem as described in each scenario. Use your knowledge of VxVM administration, in addition to the VxVM recovery tools and concepts described in the lesson, to determine which steps to take to ensure recovery. After you recover the test volumes, the script verifies your solution and provides you with the result. You succeed when you recover the volumes without corrupting the data. For most of the recovery problems, you can use any of the VxVM interfaces: the command-line interface, the VERITAS Enterprise Administrator (VEA) graphical user interface, or the vxdiskadm menu interface. Lab solutions are provided for only one method. If you have questions about recovery using interfaces not covered in the solutions, see your instructor. Setup Due to the way in which the lab scripts work, it is important to set up your environment as described in this setup section: 1 If your system is set to use enclosure-based naming, then you must turn off enclosure-based naming before running the lab scripts. 2 Create a disk group named testdg and add three disks (preferably of the same size) to the disk group. Assign the following disk media names to the disks: testdg01, testdg02, and testdg03. A-50 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
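Steps 12 and 13 could be performed from the command line as in the following sketch; a disabled disk group is typically recovered by deporting and re-importing it, and the object names match those used in this lab (Solaris mount syntax shown; Linux uses -t vxfs):

   vxdg deport namedg
   vxdg import namedg
   vxrecover -g namedg -s        # start and resynchronize the volumes
   fsck -F vxfs /dev/vx/rdsk/namedg/namevol1
   mount -F vxfs /dev/vx/dsk/namedg/namevol1 /name1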
• 251. Note: If you do not have enough disks, you can destroy disk groups created in other labs (for example, namedg) in order to create the testdg disk group. 3 Before running the automated lab scripts, set the DG environment variable in your root profile to the name of the test disk group that you are using: DG="testdg"; export DG. Rerun your profile by logging out and logging back on, or by manually running it. 4 Ask your instructor for the location of the lab scripts. Note: This lab can only be performed on Solaris, HP-UX, and Linux. Recovering from Temporary Disk Failure In this lab exercise, a temporary disk failure is simulated. Your goal is to recover all of the redundant and nonredundant volumes that were on the failed drive. The lab script run_disks sets up the test volume configuration, simulates a disk failure, and validates your solution for recovering the volumes. Ask your instructor for the location of the run_disks script. Before You Begin: Ensure that the environment variable DG is set to the name of the testdg disk group. For example: DG="testdg" export DG 1 From the directory that contains the lab scripts, run the script run_disks, and select option 1, "Turned off drive (temporary failure)": This script sets up two volumes: test1 with a mirrored layout, test2 with a concatenated layout. Note: If you receive an error message about the /image file system becoming full during volume setup, ignore the error message. This error will not have any impact on further lab steps or lab results. 2 Read the instructions in the lab script window. The script simulates a disk power-off by saving and overwriting the private region on the drive that is used by both volumes. Then, when you are ready to power the disk back on, the script restores the private region as it was before the failure. 3 Assume that the failure was temporary. In a second terminal window, attempt to recover the volumes. 4 After you recover the volumes, type e in the lab script window. The script verifies whether your solution is correct. Lab 7: Resolving Hardware Problems A-51
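One hedged approach to step 3: make VxVM rescan, reattach the disk to its disk group, and resynchronize. The nonredundant volume may additionally need to be started by hand once the disk is back:

   vxdctl enable            # rescan the disks
   vxreattach               # reattach the disk to its original disk group
   vxrecover -g testdg -s   # resynchronize mirrors and start the volumes
   vxprint -g testdg -ht    # verify plex and volume states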
• 252. Recovering from Permanent Disk Failure In this lab exercise, a permanent disk failure is simulated. Your goal is to replace the failed drive and recover the volumes as needed. The lab script run_disks sets up the test volume configuration, simulates a disk failure, and validates your solution for recovering the volumes. Ask your instructor for the location of the run_disks script. Before You Begin: Check to ensure that the environment variable DG is set to the name of the testdg disk group. If DG is not set, set it before you continue: DG="testdg" export DG 1 From the directory that contains the lab scripts, run the script run_disks, and select option 2, "Power failed drive (permanent failure)": This script sets up two volumes: test1 with a mirrored layout, test2 with a concatenated layout. Note: If you receive an error message about the /image file system becoming full during volume setup, ignore the error message. This error will not have any impact on further lab steps or lab results. 2 Read the instructions in the lab script window. The script simulates a disk power-off by saving and overwriting the private region on the drive that is used by both volumes. The disk is detached by VxVM. 3 In a second terminal window, replace the permanently failed drive with either a new disk at the same SCSI location or another disk at another SCSI location. Then, recover the volumes. 4 After you recover the volumes, type e in the lab script window. The script verifies whether your solution is correct. 5 When you have completed this exercise, if the disk device that was originally used during disk failure simulation is in the online invalid state, reinitialize the disk to prepare for later labs. A-52 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
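For step 3, a command-line sketch of replacing the failed drive; the device name c1t1d0 is a placeholder for whichever disk you actually use, and vxdiskadm's "Replace a failed or removed disk" option is the menu-driven equivalent:

   vxdisksetup -i c1t1d0                       # initialize the replacement disk
   vxdg -g testdg -k adddisk testdg01=c1t1d0   # bind it to the old disk media name
   vxrecover -g testdg -s                      # resynchronize and start the volumes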
• 253. Recovering from Intermittent Disk Failure (1) In this lab exercise, intermittent disk failures are simulated, but the system is still OK. Your goal is to move data from the failing drive and remove the failing disk. The lab script run_disks sets up the test volume configuration and validates your solution for resolving the problem. Ask your instructor for the location of the run_disks script. Before You Begin: Check to ensure that the environment variable DG is set to the name of the testdg disk group. If it is not set, set it before you continue: DG="testdg" export DG 1 From the directory that contains the lab scripts, run the script run_disks, and select option 3, "Intermittent Failures (system still ok)": This script sets up two volumes: test1 with a mirrored layout, test2 with a concatenated layout. Note: If you receive an error message about the /image file system becoming full during volume setup, ignore the error message. This error will not have any impact on further lab steps or lab results. 2 Read the instructions in the lab script window. You are informed that the disk drive used by both volumes is experiencing intermittent failures that must be addressed. 3 In a second terminal window, move the data on the failing disk to another disk, and remove the failing disk. 4 After you resolve the problem, type e in the lab script window. The script verifies whether your solution is correct. 5 When you have completed this exercise, add the disk you removed from the disk group back to the testdg disk group so that you can use it in later labs. Lab 7: Resolving Hardware Problems A-53
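For step 3, a hedged sketch assuming the failing disk is testdg02 and the evacuated data is to land on testdg03:

   vxevac -g testdg testdg02 testdg03   # evacuate subdisks off the failing disk
   vxdg -g testdg rmdisk testdg02       # then remove it from the disk group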
• 254. Optional Lab Exercises The next set of lab exercises is optional and may be performed if you have time. These exercises provide additional recovery scenarios, as well as practice in replacing physical drives and working with spare disks. A final activity explores how to use the Support website, which is an excellent troubleshooting resource. Optional Lab: Recovering from Intermittent Disk Failure (2) In this optional lab exercise, intermittent disk failures are simulated, and the system has slowed down significantly, so that it is not possible to evacuate data from the failing disk. The lab script run_disks sets up the test volume configuration and validates your solution for resolving the problem. Ask your instructor for the location of the run_disks script. Before You Begin: Check to ensure that the environment variable DG is set to the name of the testdg disk group. If DG is not set, set it before you continue: DG="testdg" export DG 1 From the directory that contains the lab scripts, run the script run_disks, and select option 4, "Intermittent Failures (system too slow)": This script sets up two volumes: test1 with a mirrored layout, test2 with a concatenated layout. Note: If you receive an error message about the /image file system becoming full during volume setup, ignore the error message. This error will not have any impact on further lab steps or lab results. 2 Read the instructions in the lab script window. You are informed that: The disk drive used by both volumes is experiencing intermittent failures that need to be addressed immediately. The system has slowed down significantly, so it is not possible to evacuate the disk before removing it. 3 In a second terminal window, perform the necessary actions to resolve the problem. 4 After you resolve the problem, type e in the lab script window. The script verifies whether your solution is correct. A-54 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
• 255. Optional Lab: Recovering from Temporary Disk Failure - Layered Volume In this optional lab exercise, a temporary disk failure is simulated. Your goal is to recover all of the volumes that were on the failed drive. The lab script run_disks sets up the test volume configuration and validates your solution for resolving the problem. Ask your instructor for the location of the run_disks script. Before You Begin: Check to ensure that the environment variable DG is set to the name of the testdg disk group. If DG is not set, set it before you continue: DG="testdg" export DG 1 From the directory that contains the lab scripts, run the script run_disks, and select option 5, "Turned off drive with layered volume": This script sets up two volumes: test1 with a concat-mirror layout, test2 with a concatenated layout. Note: If you receive an error message about the /image file system becoming full during volume setup, ignore the error message. This error will not have any impact on further lab steps or lab results. 2 Read the instructions in the lab script window. The script simulates a disk power-off by saving and overwriting the private region on the drive that is used by both volumes. Then, when you are ready to power the disk back on, the script restores the private region as it was before the failure. 3 Assume that the failure was temporary. In a second terminal window, attempt to recover the volumes. 4 After you recover the volumes, type e in the lab script window. The script verifies whether your solution is correct. Lab 7: Resolving Hardware Problems A-55
• 256. Optional Lab: Recovering from Permanent Disk Failure - Layered Volume In this optional lab exercise, a permanent disk failure is simulated. Your goal is to replace the failed drive and recover the volumes as needed. The lab script run_disks sets up the test volume configuration and validates your solution for resolving the problem. Ask your instructor for the location of the run_disks script. Before You Begin: Check to ensure that the environment variable DG is set to the name of the testdg disk group. If DG is not set, set it before you continue: DG="testdg" export DG 1 From the directory that contains the lab scripts, run the script run_disks, and select option 6, "Power failed drive with layered volume": This script sets up two volumes: test1 with a concat-mirror layout, test2 with a concatenated layout. Note: If you receive an error message about the /image file system becoming full during volume setup, ignore the error message. This error will not have any impact on further lab steps or lab results. 2 Read the instructions in the lab script window. The script simulates a disk power-off by saving and overwriting the private region on the drive that is used by both volumes. The disk is detached by VxVM. 3 In a second terminal window, replace the permanently failed drive with either a new disk at the same SCSI location or another disk at another SCSI location. Then, recover the volumes. The rest of this lab exercise includes optional lab instructions where you perform a variety of basic recovery operations. A-56 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
• 257. Optional Lab: Removing a Disk from VxVM Control 1 Destroy the testdg disk group and add the three disks back to the namedg disk group. At this point, you should have one disk group called namedg with four empty disks in it. There should be no volumes in the namedg disk group. If you had destroyed the namedg disk group in previous lab sections, re-create it. 2 In the namedg disk group, create a 100-MB mirrored volume named namevol1. Create a Veritas file system on namevol1 and mount it to the /name1 directory. 3 Display the properties of the volume. In the table, record the device and disk media name of the disks used in this volume. 4 Remove one of the disks that is being used by the volume for replacement. 5 Confirm that the disk was removed. 6 From the command line, check that the state of one of the plexes is DISABLED and REMOVED. 7 If you are not already logged in to VEA, start VEA and connect to your local system. Check the status of the disk that has been removed. 8 Replace the disk back into the namedg disk group. 9 Check the status of the disks. What is the status of the replaced disk? 10 Display volume information. What is the state of the plexes of namevol1? 11 In VEA, what is the status of the replaced disk? What is the status of the volume? 12 From the command line, recover the volume. During and after recovery, check the status of the plex in another command window and in VEA. Lab 7: Resolving Hardware Problems A-57
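A command-line sketch of the remove/replace/recover cycle in steps 4 through 12; namedg02 and c1t2d0 are placeholders for the disk you actually pick, and vxdiskadm's "Remove a disk for replacement" option is the menu-driven alternative:

   vxdg -g namedg -k rmdisk namedg02           # remove the disk, keep its dm record
   vxdisk list                                 # the disk shows as removed
   vxdg -g namedg -k adddisk namedg02=c1t2d0   # replace it back into the group
   vxrecover -g namedg namevol1                # resynchronize the mirror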
• 258. Optional Lab: Replacing Physical Drives (Without Hot Relocation) Note: If you have skipped the previous optional lab section called Removing a Disk from VxVM Control, you may need to destroy testdg and add the three disks back to the namedg disk group before you start this section. If you had destroyed the namedg disk group in previous lab sections, re-create it. 1 Ensure that the namedg disk group has a mirrored volume called namevol1 with a Veritas file system mounted on /name1. If not, create a 100-MB mirrored volume called namevol1 in the namedg disk group, add a VxFS file system to the volume, and mount the file system at the mount point /name1. 2 If the vxrelocd daemon is running, stop it using ps and kill, in order to stop hot relocation from taking place. Verify that the vxrelocd processes are killed before you continue. Notes: If you have executed the run_disks script in the previous lab sections, the vxrelocd daemon may already be killed. There are two vxrelocd processes on the Solaris platform. You must kill both of them at the same time. 3 Next, simulate disk failure by writing over the private region using the overwritepr script followed by the vxdctl disable and vxdctl enable commands. Ask your instructor for the location of the script. While using the script, substitute the appropriate disk device name for one of the disks in use by namevol1; for example, on Linux use sdb, and on Solaris and HP-UX use c1t8d0. 4 When the error occurs, view the status of the disks from the command line. 5 View the status of the volume from the command line. 6 In VEA, what is the status of the disks and volume? Note: On the HP-UX platform, the vxdctl disable command may cause the StorageAgent used by the VEA GUI to hang. If this happens, the VEA GUI does not detect the changes. Use the following command to restart the agent: /opt/VRTSobc/pa133/bin/vxpalctrl -a StorageAgent -c restart 7 Rescan for all attached disks: vxdctl enable A-58 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
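A hedged sketch for steps 2 and 7; the PIDs are placeholders, and on Solaris both vxrelocd processes must be killed together:

   ps -ef | grep vxrelocd
   kill -9 pid1 pid2        # step 2: stop hot relocation

   vxdctl enable            # step 7: make VxVM rescan all attached disks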
• 259. 8 Recover the disk by replacing the private and public regions on the disk. In the command, substitute the appropriate disk device name; for example, on Linux, use sdb. Note: This step is only necessary when you replace the failed disk with a brand new one. If it were a temporary failure, this step would not be necessary. 9 Bring the disk back under VxVM control. 10 Check the status of the disks and the volume. 11 From the command line, recover the volume. 12 Check the status of the disks and the volume to ensure that the disk and volume are fully recovered. 13 Unmount the /name1 file system and remove the namevol1 volume. Optional Lab: Exploring Spare Disk Behavior Note: If you have not already done so, destroy testdg and add the three disks back to the namedg disk group before you start this section. You should have four disks (namedg01 through namedg04) in the disk group namedg. 1 Set all disks to have the spare flag on. 2 Create a 100-MB mirrored volume called sparevol. Is the volume successfully created? Why or why not? 3 Attempt to create the same volume again, but this time specify two disks to use. Do not clear any spare flags on the disks. 4 Remove the sparevol volume. 5 Verify that the relocation daemon (vxrelocd) is running. If not, start it as follows: nohup vxrelocd root & 6 Remove the spare flags from three of the four disks. 7 Create a 100-MB concatenated mirrored volume called spare2vol. 8 Save the output of vxprint -g namedg -thr to a file. Lab 7: Resolving Hardware Problems A-59
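A hedged sketch for steps 8 through 11 above, followed by the spare-flag commands for steps 1, 6, and 8 of this section; the device and disk media names are placeholders:

   vxdisksetup -i c1t8d0                        # step 8: rewrite the private/public regions
   vxdg -g namedg -k adddisk namedg01=c1t8d0    # step 9: back under VxVM control
   vxrecover -g namedg namevol1                 # step 11: recover the volume

   vxedit -g namedg set spare=on namedg01       # step 1: repeat for each of the four disks
   vxedit -g namedg set spare=off namedg02      # step 6: clear the flag on three disks
   vxprint -g namedg -thr > /tmp/spare2vol.out  # step 8: save the baseline output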
• 260. 9 Display the properties of the spare2vol volume. In the table, record the device and disk media name of the disks used in this volume. You are going to simulate disk failure on one of the disks. Decide which disk you are going to fail. 10 Next, simulate disk failure by writing over the private region using the overwritepr script followed by the vxdctl disable and vxdctl enable commands. Ask your instructor for the location of the script. While using the script, substitute the appropriate disk device name for one of the disks in use by spare2vol; for example, on Linux use sdb, and on Solaris and HP-UX use c1t8d0. 11 Run vxprint -g namedg -thr and compare the output to the vxprint output that you saved earlier. What has occurred? Note: You may need to wait a minute or two for the hot relocation to complete. 12 In VEA, view the disks. Notice that the disk is in the disconnected state. Note: On the HP-UX platform, the vxdctl disable command may cause the StorageAgent used by the VEA GUI to hang. If this happens, the VEA GUI does not detect the changes. Use the following command to restart the agent: /opt/VRTSobc/pa133/bin/vxpalctrl -a StorageAgent -c restart 13 Run vxdisk -o alldgs list. What do you notice? 14 Rescan for all attached disks. 15 In VEA, view the status of the disks and the volume. 16 Recover the disk by replacing the private and public regions on the disk. In the command, substitute the appropriate disk device name; for example, on Linux, use sdb. 17 Bring the disk back under VxVM control and into the disk group. 18 In VEA, undo hot relocation for the disk. 19 Wait until the volume is fully recovered before continuing. Check to ensure that the disk and the volume are fully recovered. 20 Remove the spare2vol volume. A-60 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
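The recovery in steps 14 through 18 might look like the following; vxunreloc is the command-line counterpart of the VEA "Undo Hot Relocation" action, and the device and disk media names are placeholders:

   vxdctl enable                                # step 14: rescan for attached disks
   vxdisksetup -i c1t8d0                        # step 16: rewrite the private/public regions
   vxdg -g namedg -k adddisk namedg02=c1t8d0    # step 17: back into the disk group
   vxunreloc -g namedg namedg02                 # step 18: move relocated subdisks back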
• 261. Optional Lab: Using the Support Web Site 1 Access the latest information on VERITAS Storage Foundation. Note: If you are working in the Virtual Academy lab environment, you may not be able to access the Veritas Technical Support web site, because the DNS configuration was changed during software installation by the prepare_ns script. To restore the original DNS configuration, change to the directory containing the lab scripts, execute the restore_ns script, and try to access the web site again. 2 What is the VERITAS Support mission statement? Hint: It is in the Support Handbook (page 3). 3 How many on-site support visits are included in an Extended Support contract? How about with Business Critical Support? Hint: In the Support Handbook, see the table on page 4 and the explanation on page 5. 4 Which AIX platform is supported for Storage Foundation 5.0? 5 Access a recent Hardware Compatibility List for Storage Foundation. Which Brocade switches are supported by VERITAS Storage Foundation and High Availability Solutions 5.0 on Solaris? 6 Where would you locate the patch with Maintenance Pack 1 for VERITAS Storage Solutions and Cluster File Solutions 4.0 for Solaris? 7 Perform this step only if you are working in the Virtual Academy lab environment. If you have executed the restore_ns script to restore the name resolution configuration at the beginning of this lab section in step 1, change to the directory containing the lab scripts and execute the prepare_ns script before you continue. If necessary: cd /script_location ./prepare_ns Lab 7: Resolving Hardware Problems A-61
• 262. A-62 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
  • 263. Appendix B Lab Solutions
• 264. B-2 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
• 265. Lab 1 Lab 1: Introducing the Lab Environment In this lab, you are introduced to the lab environment, systems, and disks that you will use throughout this course. For Lab Exercises, see Appendix A. For Lab Solutions, see Appendix B. Lab 1 Solutions: Introducing the Lab Environment In this lab, you are introduced to the lab environment, the systems, and the disks that you will use throughout this course. You will also record some prerequisite information that will prepare you for the installation of VERITAS Storage Foundation and the labs that follow throughout this course. The Lab Exercises for this lab are located on the following page: Lab 1 Solutions: Introducing the Lab Environment B-3
• 266. Lab Environment Introduction The instructor will describe the classroom environment, review the configuration and layout of the systems, and assign disks for you to use. The content of this activity depends on the type of classroom, hardware, and the operating system(s) deployed. Lab Prerequisites Record the following information to be provided by your instructor:

Object                            Sample Value                 Your Value
root password                     veritas
Host name                         train1
Domain name                       classroom1.int
Fully qualified hostname (FQHN)   train1.classroom1.int
Host name of the system sharing   train2
disks with my system (my
partner system)
My Root Disk:                     Solaris: c0t0d0
                                  HP-UX: c1t15d0
                                  AIX: hdisk0
                                  Linux: hda
2nd Internal Disk:                Solaris: c0t2d0
                                  HP-UX: c3t15d0
                                  AIX: hdisk1
                                  Linux: hdb
My Data Disks:                    Solaris: c1t#d0 - c1t#d5
                                  HP-UX: c4t0d0 - c4t0d5
                                  AIX: hdisk21 - hdisk26
                                  Linux: sda - sdf

B-4 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
• 267.
Location of Storage Foundation    /student/software/sf/sf50
5.0 Software:
Location of Lab Scripts:          /student/labs/sf/sf50
Location of the fp program:       /student/labs/sf/sf50/bin
Location of VERITAS Storage       /student/software/license/sf50-entr-lic.txt
Foundation license keys:

Lab 1 Solutions: Introducing the Lab Environment B-5
• 268. Instructor Classroom Setup Perform the following steps to enable zoning configurations for the Storage Foundation 5-day course (not required for High Availability Fundamentals): 1 Use the course_setup script: Select Classroom. (Setup scripts are all included in Classroom SAN configuration Version 2.) Select Function To Perform: 1 - Select Zoning by Zone Name; 2 - Select Zoning and Hostgroup Configuration by Course Name; 3 - Select/Check Hostgroup Configuration. 2 Select option 3 - Select/Check Hostgroup Configuration. Select HostGroup Configuration to be Configured: 1 - Standard Mode: 2 or 4 node sharing, No DMP; 2 - DMP Mode: 2 node sharing, switchable between 1 path and 2 path access; 3 - Check active HDS Hostgroup Configuration. 3 Select option 2 - DMP Mode. Wait and do not respond to prompts. 4 Exit to the first-level menu. 5 Select option 1 - Select Zoning by Zone Name. Select Zoning Configuration Required: 1 - Mode 1: 6 sets of 2 Systems sharing 12 LUNs, no Tape Library available (HDS DMP Mode - 6 x 2 Systems - Single Path to 12 LUNs); 2 - Mode 2: 3 sets of 4 Systems sharing 24 LUNs, no Tape Library available (HDS DMP Mode - 6 x 2 Systems - Dual Paths to 12 LUNs). 6 Select option 1 - Mode 1 (single path to 12 LUNs). 7 Select option 4 - Solaris as the OS. 8 Exit out of the course_setup script. 9 Reboot each system using reboot -- -r. B-6 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
• 269. Lab 2 Lab 2: Installation and Interfaces In this lab, you install VERITAS Storage Foundation 5.0 on your lab system. You also explore the Storage Foundation user interfaces, including the VERITAS Enterprise Administrator interface, the vxdiskadm menu interface, and the command-line interface. For Lab Exercises, see Appendix A. For Lab Solutions, see Appendix B. Lab 2 Solutions: Installation and Interfaces In this exercise, you install VERITAS Storage Foundation 5.0 on your lab system. You also explore the VxVM user interfaces, including the VERITAS Enterprise Administrator interface, the vxdiskadm menu interface, and the command-line interface. The Lab Exercises for this lab are located on the following page: Prerequisite Setup To perform this lab, you need a lab system with the appropriate operating system and patch sets pre-installed. At this point there should be no Storage Foundation software installed on the lab system. The lab steps assume that the system has access to the Storage Foundation 5.0 software and that you have a Storage Foundation 5.0 Enterprise demo license key that can be used during installation. Lab 2 Solutions: Installation and Interfaces B-7
• 270. Classroom Lab Values In preparation for this lab, you will need the following information about your lab environment. For your reference, you may record the information here, or refer back to the first lab where you initially documented this information.

Object                            Sample Value                 Your Value
root password                     veritas
Host name                         train1
Domain name                       classroom1.int
Fully qualified hostname (FQHN)   train1.classroom1.int
My Boot Disk:                     Solaris: c0t0d0
                                  HP-UX: c1t15d0
                                  AIX: hdisk0
                                  Linux: hda
Location of Storage Foundation    /student/software/sf/sf50
5.0 Software:
Location of VERITAS Storage       /student/software/license/sf50-entr-lic.txt
Foundation license keys:
Location of Lab Scripts:          /student/labs/sf/sf50

B-8 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
• 271. Preinstallation 1 Determine if there are any VRTS or SYMC packages currently installed on your system.
Solaris: pkginfo | grep -i VRTS ; pkginfo | grep -i SYMC
HP-UX: swlist -l product | grep VRTS ; swlist -l product | grep SYMC — Note: If you have chosen to install the VxVM bundle that comes with the 11iv2 operating system software, you will see version 3.5 of the VERITAS Volume Manager software, including the VEA packages.
AIX: lslpp -l 'VRTS*' ; lslpp -l 'SYMC*'
Linux: rpm -qa | grep VRTS ; rpm -qa | grep SYMC
2 Before installing Storage Foundation, save the following important system files into backup files named with a ".preVM" extension. Also, save your boot disk information to a file for later use (do not store the file in /tmp). You may need the boot disk information when you bring the boot disk under VxVM control in a later lab.
Solaris: cp /etc/system /etc/system.preVM ; cp /etc/vfstab /etc/vfstab.preVM ; prtvtoc /dev/rdsk/boot_disk_device_name > /etc/bootdisk.preVM
HP-UX: cp /stand/system /stand/system.preVM ; cp /etc/fstab /etc/fstab.preVM
AIX: cp /etc/filesystems /etc/filesystems.preVM ; cp /etc/vfs /etc/vfs.preVM
Linux: cp /etc/grub.conf /etc/grub.conf.preVM ; cp /etc/modules.conf /etc/modules.conf.preVM
3 Are any VERITAS license keys installed on your system? Check for installed licenses. vxlicrep Note: The vxlicrep utility may not be available on your system at this point. Lab 2 Solutions: Installation and Interfaces B-9
• 272. 4 To test if DNS is configured in your environment, check if nslookup resolves the hostname to a fully qualified hostname by typing nslookup hostname. If there is no DNS or if the host name cannot be resolved to a fully qualified hostname, carry out the following steps: a Ensure that the fully qualified hostname is listed in the /etc/hosts file. For example: cat /etc/hosts 192.168.xxx.yyy train#.domain train# where domain is the domain name used in the classroom, such as classroom1.int. If the fully qualified hostname is not in the /etc/hosts file, add it as an alias to the hostname. b Change to the directory containing the lab scripts and execute the prepare_ns script. This script ensures that your lab system only uses local files for name resolution. cd /location_of_lab_scripts ./prepare_ns B-10 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
• 273. Installing VERITAS Storage Foundation 1 Navigate to the directory containing the Veritas Storage Foundation software. Ask your instructor for the location of the installer script. Using the installer script, run a precheck to determine if your system meets all preinstallation requirements. If any requirements (other than the license software not being installed) are not met, follow the instructions to take any required actions before you continue. Note that you can look into the log file created to see the details of the checks the script performs. cd /software_location ./installer -precheck system where system is the hostname of your lab system. Select the option number for Veritas Storage Foundation when prompted. 2 Navigate to the directory containing the Veritas Storage Foundation software. Install and perform initial configuration of Storage Foundation (VxVM and VxFS) using the following steps: a Start the installer script. cd /software_location ./installer b Select 1 for the Install/Upgrade a Product option. c Select the Veritas Storage Foundation software to install. On the HP-UX platform, confirm that you wish to continue the installation of this version. d Enter the name of your system when prompted. e Obtain a license key from your instructor and record it here. Type the license key when prompted. License Key: _ Enter n when you are asked if you want to enter another license key. f Select to install All Veritas Storage Foundation packages when prompted. g Press Return to scroll through the list of packages. Lab 2 Solutions: Installation and Interfaces B-11
• 274. h Accept the default of y to configure SF. HP-UX: On the HP-UX platform, the installer script starts the software installation without asking any configuration questions. When the software installation is complete, it prompts you to reboot your system. Continue with the configuration using ./installer -configure after the system is rebooted. cd / shutdown -ry now After reboot: cd /software_location ./installer -configure system i Do not set up enclosure-based naming for Volume Manager. j Do not set up a default disk group. k If an error message is displayed that the fully qualified host name could not be queried, press Return to continue. l Obtain the domain name from your instructor and type the fully qualified host name of your system when prompted. For example: train5.classroom1.int m Do not enable Storage Foundation Management Server Management. The system will be a standalone host. n Select y to start Storage Foundation processes. o Wait while the installation proceeds and processes are started. p When the installation script completes, you will be asked to reboot your system. Perform the next lab step (lab step 3) to modify the root profile before rebooting your system. q This step is only for the North American Mobile Academy lab environment. If you are working in a different lab environment, skip this step. If you are working in a North American Mobile Academy lab environment with iSCSI disk devices, change to the directory containing the lab scripts and execute the iscsi_setup lab script. This script disables DMP support for iSCSI disks so that they can be recognized correctly by Volume Manager. Only if you are working in a North American Mobile Academy lab environment: cd /location_of_lab_scripts B-12 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
• 275. ./iscsi_setup 3 Check /.profile to ensure that the following paths are present. Note: Your lab systems may already be configured with these environment variable settings. However, in a real-life environment you would need to carry out this step yourself.
Solaris, AIX:
PATH=$PATH:/usr/lib/vxvm/bin:/opt/VRTSob/bin:/usr/sbin:/opt/VRTS/bin:/opt/VRTSvxfs/sbin
MANPATH=$MANPATH:/opt/VRTS/man
export PATH MANPATH
Linux:
PATH=$PATH:/usr/lib/vxvm/bin:/opt/VRTSob/bin:/usr/sbin:/opt/VRTS/bin:/opt/VRTSvxfs/sbin
MANPATH=$MANPATH:/opt/VRTS/man
export PATH MANPATH
MANSECT=$MANSECT:1m ; export MANSECT
HP-UX:
PATH=/usr/lib/vxvm/bin:/opt/VRTSob/bin:/opt/VRTS/bin:/usr/sbin:/usr/lbin/fs/vxfs4.1:$PATH
export PATH
4 Reboot your system.
Solaris: shutdown -y -i6 -g0
HP-UX: No need to reboot your system because it has already been rebooted after the package installation.
Lab 2 Solutions: Installation and Interfaces B-13
• 276. Setting Up VERITAS Enterprise Administrator 1 Is the VEA server running? If not, start it.
Solaris, HP-UX, AIX: vxsvc -m (to confirm that the server is running); vxsvc (if the server is not already running)
Linux: vxsvcctrl status (to confirm that the server is running); vxsvcctrl start (if the server is not already running)
2 Start the VEA graphical user interface. vea & Note: On some systems, you may need to configure the system to use the appropriate display. For example, if the display is pc1:0, before you run VEA, type: DISPLAY=pc1:0 export DISPLAY It is also important that the display itself is configured to accept connections from your client. If you receive permission errors when you try to start VEA, in a terminal window on the display system, type: xhost system or xhost + where system is the hostname of the client on which you are running the vea command. 3 In the Select Profile window, click the Manage Profiles button and configure VEA to always start with the Default profile. Set the "Start VEA using profile" option to Default and click Close, then click OK to continue. 4 Click the "Connect to a Host or Domain" link and connect to your system as root. Your instructor provides you with the password. Hostname: (For example, train13) Username: root Password: (Your instructor provides the password.) 5 In the left pane (object tree) view, drill down the system and observe the various categories of VxVM objects. 6 Select the Assistant perspective on the quick access bar and view tasks for systemname/StorageAgent. 7 Using the System perspective, find out what disks are available to the OS. B-14 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
• 277. In the System perspective object tree, expand your host and the StorageAgent, then select the Disks node. Examine the Device column in the grid. 8 Execute the Disk Scan command and observe the messages on the console view. Click on a message to see the details. In the VEA System perspective object tree, select your host. Select Actions->Rescan. 9 What commands were executed by the Disk Scan task? Navigate to the Log perspective. Select the Task Log tab in the right pane and double-click the "Scan for new disks" task. 10 Exit the VEA graphical interface. In the VEA main window, select File->Exit. Confirm when prompted. Lab 2 Solutions: Installation and Interfaces B-15
• 278. 11 Create a root-equivalent administrative account named admin1 for use with VEA.
Solaris, Linux: 1. Create a new administrative account named admin1: useradd admin1 passwd admin1 2. Type a password for admin1. 3. Modify the /etc/group file to add the vrtsadm group and specify the root and admin1 users, by using the vi editor: vi /etc/group 4. In the file, move to the location where you want to insert the vrtsadm entry, change to insert mode by typing i, then add the line: vrtsadm::99:root,admin1 5. When you are finished editing, press [Esc] to leave insert mode. 6. Then, save the file and quit: :wq!
HP-UX: 1. Create a new administrative account named admin1 by using SAM or command-line utilities: useradd admin1 passwd admin1 2. Type a password for admin1. 3. Add the vrtsadm group and specify the root and admin1 users as members. Use SAM or modify the /etc/group file by using the vi editor: vi /etc/group 4. In the file, move to the location where you want to insert the vrtsadm entry, change to insert mode by typing i, then add the line: vrtsadm::99:root,admin1 5. When you are finished editing, press [Esc] to leave insert mode. 6. Then, save the file and quit: :wq!
AIX: mkgroup -A vrtsadm useradd -m -G vrtsadm admin1 passwd admin1 (Type the password.)
12 Test the new account. After you have tested the new account, exit VEA. a Launch VEA: vea b Select "Connect to a Host or Domain", and specify the host name: Hostname: (For example, train13) B-16 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
• 279. c Select the "Connect using a different user account" option and click Connect. d Enter the username and password for the new user: User: admin1 Password: (Type the password that you created for admin1.) e After confirming the account, select File->Exit. Exploring vxdiskadm 1 From the command line, invoke the text-based VxVM menu interface. vxdiskadm 2 Display information about the menu or about specific commands. Type ? at any of the prompts within the interface. 3 What disks are available to the OS? Type list at the main menu, and then type all. 4 Exit the vxdiskadm interface. Type q at the prompts until you exit vxdiskadm. Lab 2 Solutions: Installation and Interfaces B-17
• 280. Optional Lab: Accessing CLI Commands Note: This exercise introduces several commonly used VxVM commands. These commands and associated concepts are explained in detail throughout this course. If you have used Volume Manager before, you may already be familiar with these commands. If you are new to Volume Manager, this exercise aims to show you the amount of information you can get from the manual pages. Note that you do not need to read all of the manual pages for this exercise. 1 From the command line, invoke the VxVM manual pages and read about the vxassist command. man vxassist 2 What vxassist command parameter creates a VxVM volume? The make parameter is used in creating a volume. 3 From the command line, invoke the VxVM manual pages and read about the vxdisk command. man vxdisk 4 What disks are available to VxVM? vxdisk -o alldgs list All the available disks are displayed in the list. 5 From the command line, invoke the VxVM manual pages and read about the vxdg command. man vxdg 6 How do you list locally imported disk groups? vxdg list 7 From the command line, invoke the VxVM manual pages and read about the vxprint command. man vxprint B-18 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
• 281. Optional Lab: More Installation Exploration 1 When does the VxVM license expire? vxlicrep | more 2 What is the version and revision number of the installed version of VxVM?
Solaris: pkginfo -l VRTSvxvm — In the output, look at the Version field.
HP-UX: swlist | grep -i vxvm — The version is in the second column of the output.
AIX: lslpp -l VRTSvxvm — In the output, look under the column named Level.
Linux: rpm -qi VRTSvxvm
3 Which daemons are running after the system boots under VxVM control?
Solaris: ps -ef | grep -i vx — vxconfigd, vxrelocd, vxnotify, vxcached, vxesd, vxconfigbackupd, vxsvc, vxpal, vxsmf.bin
HP-UX: ps -ef | grep -i vx — vxconfigd, vxrelocd, vxnotify, vxcached, vxesd, vxconfigbackupd, vxsvc, vxfsd, vxiod, vxpal, vxsmf.bin
Lab 2 Solutions: Installation and Interfaces B-19
• 282. B-20 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
• 283. Lab 3 Lab 3: Creating a Volume and File System In this lab, you create new disk groups, simple volumes, and file systems, mount and unmount the file systems, and observe the volume and disk properties. The first exercise uses the VEA interface. The second exercise uses the command-line interface. For Lab Exercises, see Appendix A. For Lab Solutions, see Appendix B. Lab 3 Solutions: Creating a Volume and File System The Lab Exercises for this lab are located on the following page: If you use object names other than the ones provided, substitute the names accordingly in the commands. Caution: In this lab, do not include the boot disk in any of the tasks. Prerequisite Setup To perform this lab, you need a lab system with Storage Foundation pre-installed, configured, and licensed. In addition, you also need four empty and unused external disks to be used during the labs. Note: Although you should not have to perform disk labeling, here are some tips that may help if your disks are not properly formatted: On Solaris, use the format command to place a label on any disks that are not properly labeled for use under Solaris. Ask the instructor for details. On Linux, if you have problems initializing a disk, you may need to run this command: fdisk /dev/disk. Use options -o and -w to write a new DOS partition table. (The disk may have previously been used with Solaris.) Lab 3 Solutions: Creating a Volume and File System B-21
• 284. Classroom Lab Values
In preparation for this lab, you will need the following information about your lab environment. For your reference, you may record the information here, or refer back to the first lab where you initially documented this information.
Object / Sample Value / Your Value:
root password: veritas
Host name: train1
Data Disks: Solaris: c1t#d0 - c1t#d5; HP-UX: c4t0d0 - c4t0d5; AIX: hdisk21 - hdisk26; Linux: sda - sdf
Prefix to be used with object names: name
B-22 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
• 285. Creating a Volume and File System: VEA
1 Run and log on to the VEA interface as the root user.
vea &
2 View all the disk devices on the system. What is the status of the disks assigned to you for the labs?
a Using the System perspective (StorageAgent view), drill down the object tree, and select the Disks node.
b View the disks in the grid. Normally the disks should be in Not Initialized state.
3 Select an uninitialized disk and initialize it using the VEA. Observe the change in the Status column. What is the status of the disk now?
a Select the disk in the grid, and select Actions->Initialize Disk.
b Verify the selected disk in the Initialize Disk view and click OK.
The status of the disk should change to Free.
4 Create a new disk group using the disk you initialized in the previous step. Name the new disk group namedg1. Observe the change in the disk status.
Note: If you are sharing a disk array, make sure that the prefix you are using for the disk group names is unique.
a Select the newly initialized disk in the grid, and select Actions->New Disk Group.
b In the New Disk Group wizard, click Next to skip the Welcome page.
c Type the name of the disk group. Ensure that Enable Cross-platform Data Sharing (CDS) remains checked. If necessary, make changes to the selected disks, and click Next.
d Confirm the disk selection.
e Do not select a disk group organization principle when prompted. Click Finish.
The status of the disk should change to Imported, and the disk media name and the disk group name should be visible in the disk grid.
5 Using VEA, create a new volume of size 1g in namedg1. Name the new volume namevol1. Create a file system on it and make sure that the file system is mounted at boot time to the /name1 directory.
a Select the Volumes node in the object tree and select Actions->New Volume.
b In the New Volume wizard, click Next on the welcome page.
c Select the disk group name and click Next.
Lab 3 Solutions: Creating a Volume and File System B-23
• 286. d Let Volume Manager decide what disks to use for this volume, and click Next to continue.
e Enter the volume name and size, and leave the other options at their default values. Click Next to continue.
f Leave the "Create as a Snapshot Cache Volume" option unchecked and click Next.
g On the Create File System page, select to create a VxFS file system. Enter the mount point called /name1 and verify that the Add to file system table and (for Solaris) Mount at boot options are checked. Click Next.
h Verify the summary information, and click Finish.
6 Check if the file system is mounted and verify that there is an entry for this file system in the file system table.
Select the File Systems node in the object tree and observe the list of mounted file systems in the right pane view. The /name1 file system should be listed here. Note the "Mounted" and "In File System Table" columns.
You can also use the command line to verify the changes as follows:
Solaris: mount; cat /etc/vfstab
HP-UX, Linux: mount; cat /etc/fstab
The /name1 file system should show as mounted, and there should be a line in the file system table to ensure that it is mounted at boot time.
7 View the properties of the disk in the namedg1 disk group and note the Capacity and the Unallocated space fields.
Select Disks in the object tree, right-click the disk in the namedg1 disk group, and select Properties.
8 Try to create a second volume, namevol2, in namedg1 and specify a size slightly larger than the unallocated space on the existing disk in the disk group, for example 4g on the standard Symantec classroom systems. Do not create a file system on the volume. What happens?
a Select the Volumes node in the object tree and select Actions->New Volume.
b In the New Volume wizard, click Next on the welcome page.
c Select the disk group name and click Next.
d Let Volume Manager decide what disks to use for this volume, and click Next to continue.
B-24 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
• 287. e Enter the volume name and size, and leave the other options at their default values; click Next to continue.
f Leave the Create as a Snapshot Cache Volume option unchecked, and click Next.
g On the Create File System page, leave the "No file system" option checked and click Next.
h Verify the summary information, and click Finish.
You should receive an error indicating that Volume Manager cannot allocate the requested space for the volume, and the volume is not created.
9 Add a disk to the namedg1 disk group.
a Select the disk to be added to the disk group.
b Select Actions->Add Disk to Disk Group.
c Click Next on the Welcome page, verify that the namedg1 disk group is selected and that the disk is listed under Selected disks, and click Next.
d Confirm when prompted.
e Verify the summary information and click Finish.
10 Create the same volume, namevol2, in the namedg1 disk group using the same size as in step 8. Do not create a file system.
a Select the Volumes node in the object tree and select Actions->New Volume.
b In the New Volume wizard, click Next on the welcome page.
c Select the disk group name and click Next.
d Let Volume Manager decide what disks to use for this volume; click Next to continue.
e Enter the volume name and size, and leave the other options at their default values; click Next to continue.
f Leave the Create as a Snapshot Cache Volume option unchecked, and click Next.
g On the Create File System page, leave the "No file system" option checked and click Next.
h Verify the summary information, and click Finish.
This time the volume creation should complete successfully.
11 Observe the volumes by selecting the Volumes object in the object tree. Can you tell which volume has a mounted file system?
Select the Volumes node in the object tree. In the right pane view you should notice that the file system and mount point columns have file system information for namevol1 and not for namevol2.
Lab 3 Solutions: Creating a Volume and File System B-25
• 288. 12 Create a VxFS file system on namevol2 and mount it to the /name2 directory. Ensure that the file system is not mounted at boot time. Check if the /name2 file system is currently mounted and verify that it has not been added to the file system table.
a Select the namevol2 volume and select Actions->File System->New File System.
b Verify that the file system type is vxfs, enter the mount point, uncheck the "Add to file system table" option, and click OK.
c Select the File Systems node in the object tree and observe the list of mounted file systems in the right pane view. The /name2 file system should be listed here. Note the Mounted and In File System Table columns.
You can also use the command line to verify the changes as follows:
Solaris: mount; cat /etc/vfstab
HP-UX, Linux: mount; cat /etc/fstab
The /name2 file system should show as mounted, but there should be no change in the file system table.
13 Observe the commands that were executed by VEA during this section of the lab.
a Select the Logs perspective in the quick access bar.
b Click Task Log in the right pane view.
c Observe the commands executed by VEA during this section of the lab by double-clicking the individual tasks and observing the Task Details view.
Creating a Volume and File System: CLI
1 View all the disk devices on the system. What is the status of the disks assigned to you for the labs?
vxdisk -o alldgs list
If you completed the first section of this lab, you should have two disks in namedg1 in online status. The rest of the disks assigned to you should be in online invalid status. If you have a disk in error status, contact your instructor.
B-26 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
• 289. 2 Select an uninitialized disk and initialize it using the CLI. Observe the change in the Status column. What is the status of the disk now?
vxdisksetup -i device_tag
where device_tag is c#t#d# for Solaris and HP-UX, hdisk# for AIX, and sd# for Linux platforms.
vxdisk -o alldgs list
The status of the disk should change to online, but the DISK and GROUP columns should still be empty.
3 Create a new disk group using the disk you initialized in the previous step. Name the new disk group namedg2. Observe the change in the disk status.
Note: If you are sharing a disk array, make sure that the prefix you are using for the disk group names is unique.
vxdg init namedg2 namedg201=device_tag
where device_tag is c#t#d# for Solaris and HP-UX, hdisk# for AIX, and sd# for Linux platforms.
vxdisk -o alldgs list
The status of the disk is still online, but the DISK and GROUP columns now show the new disk media name and the disk group name respectively.
4 Using the vxassist command, create a new volume of size 1g in namedg2. Name the new volume namevol3.
vxassist -g namedg2 make namevol3 1g
5 Create a Veritas file system on the namevol3 volume, and mount the file system to the /name3 directory.
mkfs -F vxfs /dev/vx/rdsk/namedg2/namevol3
Note: On Linux, use mkfs -t.
mkdir /name3
mount -F vxfs /dev/vx/dsk/namedg2/namevol3 /name3
Note: On Linux, use mount -t.
Lab 3 Solutions: Creating a Volume and File System B-27
• 290. Make sure that the file system is mounted at boot time.
Solaris: vi /etc/vfstab
... /dev/vx/dsk/namedg2/namevol3 /dev/vx/rdsk/namedg2/namevol3 /name3 vxfs 0 yes -
HP-UX, Linux: vi /etc/fstab
... /dev/vx/dsk/namedg2/namevol3 /name3 vxfs rw,largefiles,delaylog 0 2
6 Unmount the /name3 file system, verify the unmount, and remount using the mount -a command to mount all file systems in the file system table.
umount /name3
mount
mount -a
mount
7 Identify the amount of free space in the namedg2 disk group. Try to create a volume in this disk group named namevol4 with a size slightly larger than the available free space, for example 5g on standard Symantec classroom systems. What happens?
Note: The disk sizes in Symantec Virtual Academy lab environments are slightly less than 2g. Ensure that you use the correct value suitable to your environment instead of the 5g example used here.
vxdg -g namedg2 free
The free space is displayed in sectors in the LENGTH column.
vxassist -g namedg2 make namevol4 5g
You should receive an error indicating that Volume Manager cannot allocate the requested space for the volume, and the volume is not created.
8 Initialize a new disk and add it to the namedg2 disk group. Observe the change in free space.
vxdisksetup -i device_tag
vxdg -g namedg2 adddisk namedg202=device_tag
where device_tag is c#t#d# for Solaris and HP-UX, hdisk# for AIX, and sd# for Linux platforms.
vxdg -g namedg2 free
9 Create the same volume, namevol4, in the namedg2 disk group using the same size as in step 7.
vxassist -g namedg2 make namevol4 5g
B-28 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
• 291. Note: The 5g volume size is used as an example here. You may need to use a value more suitable to your lab environment if you are not working in a standard Symantec classroom.
This time the volume creation should complete successfully.
10 Display volume information for the namedg2 disk group using the vxprint -g namedg2 -htr command. Can you identify which disks are used for which volumes?
vxprint -g namedg2 -htr
11 List the disk groups on your system using the vxdg list command.
vxdg list
If you have followed the labs so far, you should have two disk groups listed: namedg1 and namedg2.
Lab 3 Solutions: Creating a Volume and File System B-29
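Condensed, the CLI exercise above is the canonical create sequence. The sketch below repeats it with a placeholder device tag (c2t1d0), and adds vxassist maxsize, which reports the largest volume the disk group can currently hold; verify the exact maxsize behavior against the vxassist manual page on your platform:
    vxdisksetup -i c2t1d0                   # initialize the disk (placeholder tag)
    vxdg init namedg2 namedg201=c2t1d0      # create the disk group
    vxassist -g namedg2 maxsize             # largest volume currently creatable
    vxassist -g namedg2 make namevol3 1g    # create the volume
    mkfs -F vxfs /dev/vx/rdsk/namedg2/namevol3
    mkdir -p /name3
    mount -F vxfs /dev/vx/dsk/namedg2/namevol3 /name3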
• 292. Removing Volumes, Disks, and Disk Groups: CLI
1 Unmount the /name3 file system and remove it from the file system table.
Solaris: umount /name3; vi /etc/vfstab
Navigate to the line with the entry corresponding to the /name3 file system and type dd to delete the line. Type :wq to save and close the file.
HP-UX, Linux: umount /name3; vi /etc/fstab
Navigate to the line with the entry corresponding to the /name3 file system and type dd to delete the line. Type :wq to save and close the file.
2 Remove the namevol4 volume in the namedg2 disk group. Observe the disk group configuration information using the vxprint -g namedg2 -htr command.
vxassist -g namedg2 remove volume namevol4
vxprint -g namedg2 -htr
There should be only the namevol3 volume, and the second disk, namedg202, should be unused.
3 Remove the second disk (namedg202) from the namedg2 disk group. Observe the change in its status.
vxdg -g namedg2 rmdisk namedg202
vxdisk -o alldgs list
Note that the disk is still in online state; it is initialized.
4 Destroy the namedg2 disk group.
vxdg destroy namedg2
5 Observe the status of the disk devices on the system.
vxdisk -o alldgs list
B-30 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
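The removal steps above follow the reverse of the creation order; condensed into one sketch using the lab's names:
    umount /name3                               # release the file system first
    vxassist -g namedg2 remove volume namevol3  # then the volume
    vxdg -g namedg2 rmdisk namedg202            # then any extra disks
    vxdg destroy namedg2                        # finally the disk group itself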
• 293. Removing Volumes, Disks, and Disk Groups: VEA
1 Unmount both the /name1 and /name2 file systems using VEA. Accept to remove the file systems from the file system table if prompted. Check if the file systems are unmounted and verify that any corresponding entries have been removed from the file system table.
a Select the File Systems node in the object tree and select the /name1 file system.
b Select Actions->Unmount File System.
c Confirm the unmount and select Yes when prompted to remove it from the file system table.
d Select the /name2 file system. Select Actions->Unmount File System. Confirm the unmount.
Both file systems should disappear from the file system list in VEA. You can use the command line to verify the changes as follows:
Solaris: mount; cat /etc/vfstab
HP-UX, Linux: mount; cat /etc/fstab
The /name1 and /name2 file systems should not be among the mounted file systems, and the file system table should not contain any entries corresponding to these file systems.
2 Remove the namevol2 volume in the namedg1 disk group.
a Select the Volumes node in the object tree and select the namevol2 volume.
b Select Actions->Delete Volume. Confirm when prompted.
3 Select the Disk Groups node in the object tree and observe the disks in the namedg1 disk group. Can you identify which disk is empty?
The %Used column should show 0% for the unused disk, which is the second disk in the disk group (namedg102).
4 Remove the disk you identified as empty from the namedg1 disk group.
Select the empty disk and select Actions->Remove Disk From Disk Group.
5 Observe all the disks on the system. What is the status of the disk you removed from the disk group?
Select the Disks node in the object tree and observe the disks in the right pane view.
Lab 3 Solutions: Creating a Volume and File System B-31
• 294. The disk removed in step 4 should be in Free state.
6 Destroy the namedg1 disk group.
a Select the Disk Groups node in the object tree and the namedg1 disk group in the right pane view.
b Select Actions->Destroy Disk Group. Confirm when prompted.
7 Observe all the disks on the system. What is the status of the disks?
Select the Disks node in the object tree and observe the disks in the right pane view.
If you have followed all the lab steps, you should have four disks in Free state; they are already initialized but not in a disk group.
B-32 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
• 295. Lab 4
Lab 4: Selecting Volume Layouts
In this lab, you create simple concatenated volumes, striped volumes, and mirrored volumes. You also practice creating a layered volume and using ordered allocation while creating volumes.
Lab 4 Solutions: Selecting Volume Layouts
In this lab, you create simple concatenated volumes, striped volumes, and mirrored volumes. You also practice creating a layered volume and using ordered allocation while creating volumes. The Lab Exercises for this lab are located on the following page:
For Lab Exercises, see Appendix A. For Lab Solutions, see Appendix B.
Prerequisite Setup
To perform this lab, you need a lab system with Storage Foundation pre-installed, configured, and licensed. In addition to this, you also need four empty and unused external disks to be used during the labs.
Lab 4 Solutions: Selecting Volume Layouts B-33
• 296. Classroom Lab Values
In preparation for this lab, you will need the following information about your lab environment. For your reference, you may record the information here, or refer back to the first lab where you initially documented this information.
Object / Sample Value / Your Value:
root password: veritas
Host name: train1
My Data Disks: Solaris: c1t#d0 - c1t#d5; HP-UX: c4t0d0 - c4t0d5; AIX: hdisk21 - hdisk26; Linux: sda - sdf
Prefix to be used with object names: name
B-34 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
• 297. Creating Volumes with Different Layouts: CLI
1 Add four initialized disks to a disk group called namedg. Verify your action using vxdisk -o alldgs list.
Note: If you are sharing a disk array, make sure that the prefix you are using for the disk group name is unique.
a If you have completed the Creating a Volume and File System lab (lab 3), you should already have four initialized disks. If not, initialize four disks for use in Volume Manager:
vxdisksetup -i device_tag
where device_tag is c#t#d# for Solaris and HP-UX, hdisk# for AIX, and sd# for Linux platforms. (Run the above command for any disks that have not been initialized for Volume Manager use and that will be used in this lab.)
b Create a new disk group and add disks:
vxdg init namedg namedg01=device1_tag namedg02=device2_tag namedg03=device3_tag namedg04=device4_tag
Alternatively, you can also create the disk group using a single disk device and then add each additional disk as follows:
vxdg -g namedg adddisk namedg##=device_tag
2 Create a 50-MB concatenated volume in the namedg disk group called namevol1 with one drive.
vxassist -g namedg make namevol1 50m
3 Display the volume layout. What names have been assigned to the plex and subdisks?
To view the assigned names, view the volume using:
vxprint -g namedg -thr | more
4 Remove the volume.
vxassist -g namedg remove volume namevol1
5 Create a 50-MB striped volume on two disks in namedg and specify which two disks to use in creating the volume. Name the volume namevol2.
vxassist -g namedg make namevol2 50m layout=stripe namedg01 namedg02
What names have been assigned to the plex and subdisks?
To view the assigned names, view the volume using:
vxprint -g namedg -thr | more
Lab 4 Solutions: Selecting Volume Layouts B-35
• 298. 6 Create a 20-MB, two-column striped volume with a mirror in namedg. Set the stripe unit size to 256K. Name the volume namevol3.
vxassist -g namedg make namevol3 20m layout=mirror-stripe ncol=2 stripeunit=256k
What do you notice about the plexes?
View the volume using vxprint -g namedg -thr | more. Notice that you now have a second plex.
7 Create a 20-MB, two-column striped volume with a mirror. Set the stripe unit size to 128K. Select at least one disk that you should not use. Name the volume namevol4.
vxassist -g namedg make namevol4 20m layout=mirror-stripe ncol=2 stripeunit=128k !namedg03
Was the volume created?
This operation should fail because there are not enough disks available in the disk group. A two-column striped mirror requires at least four disks.
8 Create a 20-MB, three-column striped volume with a mirror. Specify three disks to be used during volume creation. Name the volume namevol4.
vxassist -g namedg -b make namevol4 20m layout=mirror-stripe ncol=3 namedg01 namedg02 namedg03
Was the volume created?
Again, this operation should fail because there are not enough disks available in the disk group. At least six disks are required for this type of volume configuration.
9 Create the same volume specified in the previous step, but without the mirror.
vxassist -g namedg -b make namevol4 20m layout=stripe ncol=3 namedg01 namedg02 namedg03
What names have been assigned to the plex and subdisks?
To view the assigned names, view the volume using:
vxprint -g namedg -thr | more
10 Remove the volumes created in this exercise.
vxassist -g namedg remove volume namevol2
vxassist -g namedg remove volume namevol3
vxassist -g namedg remove volume namevol4
11 Remove the disk group that was used in this exercise.
vxdg destroy namedg
B-36 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
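The two failures above are simple disk arithmetic: a mirror-stripe needs ncol x nmirror distinct disks, and the lab group has only four. A sketch of the same checks (the demovol names are illustrative, not part of the lab):
    vxassist -g namedg make demovol 20m layout=mirror-stripe ncol=2
    # succeeds: 2 columns x 2 mirrors = 4 disks
    vxassist -g namedg make demovol2 20m layout=mirror-stripe ncol=3
    # fails: 3 columns x 2 mirrors = 6 disks, but only 4 are available
    vxassist -g namedg remove volume demovol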
• 299. Creating Volumes with Different Layouts: VEA
1 If you had exited out of VEA, start it and connect back to your system.
vea &
2 Add four initialized disks to a disk group called namedg. Verify your action in the main window.
a In the System perspective, drill down to the Disks node in the object tree.
b Select a disk, and select Actions->New Disk Group.
c In the New Disk Group wizard, skip the welcome page, specify the disk group name, select the disks you want to use from the Available Disks list, and click Add.
d Click Next, confirm your selection, do not select any Organization Principle, and click Finish.
3 Create a 50-MB concatenated volume in the namedg disk group called namevol1 with one drive.
a Select the namedg disk group, and select Actions->New Volume.
b In the New Volume wizard, let VxVM determine which disks to use.
c Type the name of the volume, and specify a size of 50 MB. Verify that the Concatenated layout is selected in the Layout region.
d Complete the wizard by accepting all remaining defaults to create the volume.
4 Display the volume layout. Notice the naming convention of the plex and subdisk.
a Select the volume in the object tree, and select Actions->Volume View.
b In the Volume View window, click the Expand button. Compare the information in the Volume View window to the information under the Mirrors, Logs, and Subdisks tabs in the right pane of the main window.
5 Remove the volume.
a Select the volume, and select Actions->Delete Volume.
b In the Delete Volume dialog box, click Yes.
6 Create a 50-MB striped volume on two disks in namedg, and specify which two disks to use in creating the volume. Name the volume namevol2.
a Select the namedg disk group, and select Actions->New Volume.
b In the New Volume wizard, select "Manually select disks to use for this volume." Move two disks into the Included box, and then click Next.
Lab 4 Solutions: Selecting Volume Layouts B-37
• 300. c Type the name of the volume, and specify a size of 50 MB.
d Select the Striped option in the Layout region. Verify that the number of columns is 2.
e Complete the wizard by accepting all remaining defaults to create the volume.
View the volume.
a Select the volume, and select Actions->Volume View.
b Close the Volume View window when you are satisfied.
7 Create a 20-MB, two-column striped volume with a mirror in namedg. Set the stripe unit size to 256K. Name the volume namevol3.
a Select the namedg disk group, and select Actions->New Volume.
b In the New Volume wizard, let VxVM determine which disks to use.
c Type the name of the volume, and specify a size of 20 MB.
d Select the Striped option in the Layout region. Verify that the number of columns is 2. Set the Stripe unit size to 256K (512 sectors on Solaris, AIX, and Linux; 256 sectors on HP-UX).
e Mark the Mirrored check box in the Mirror Info region. Complete the wizard by accepting all remaining defaults to create the volume.
View the volume. Notice that you now have a second plex.
a Select the volume, and select Actions->Volume View.
b Close the Volume View window when you are satisfied.
8 Create a 20-MB, two-column striped volume with a mirror. Set the stripe unit size to 128K. Select at least one disk you should not use. Name the volume namevol4.
a Select the namedg disk group, and select Actions->New Volume.
b In the New Volume wizard, select "Manually select disks to use for this volume." Move one disk into the Excluded box, and then click Next.
c Type the name of the volume, and specify a size of 20 MB.
d Select the Striped option in the Layout region. Verify that the number of columns is 2. Set the Stripe unit size to 256 (sectors), or 128K.
e Mark the Mirrored check box in the Mirror Info region. Complete the wizard by accepting all remaining defaults to create the volume.
Was the volume created?
This operation should fail, because there are not enough disks available in the disk group. A two-column striped mirror requires at least four disks.
B-38 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
• 301. 9 Create a 20-MB, three-column striped volume with a mirror. Specify three disks to be used during volume creation. Name the volume namevol4.
a Select the namedg disk group, and select Actions->New Volume.
b In the New Volume wizard, let VxVM determine which disks to use.
c Type the name of the volume, and specify a size of 20 MB.
d Select the Striped option in the Layout region. Change the number of columns to 3.
e Mark the Mirrored check box in the Mirror Info region. Click Next. You receive an error and are not able to complete the wizard.
Was the volume created?
Again, this operation should fail, because there are not enough disks available in the disk group. At least six disks are required for this type of volume configuration.
10 Create the same volume specified in step 9, but without the mirror.
Note: If you did not cancel out of the previous step, then just uncheck the Mirrored option and continue the wizard.
a Select the namedg disk group, and select Actions->New Volume.
b In the New Volume wizard, let VxVM determine which disks to use.
c Type the name of the volume, and specify a size of 20 MB.
d Select the Striped option in the Layout region. Change the number of columns to 3.
e Complete the wizard by accepting all remaining defaults to create the volume.
Was the volume created?
Yes, the volume is created this time.
11 Delete all volumes in the namedg disk group.
a Select the namedg disk group, then select the Volumes tab.
b Highlight all volumes in the window.
c Select Actions->Delete Volume.
d Click Yes To All.
12 View the commands executed by VEA during this section of the lab.
a Select the Logs perspective in the quick access bar.
b Click Task Log in the right pane view.
c Double-click the individual tasks and observe the Task Details view.
Lab 4 Solutions: Selecting Volume Layouts B-39
• 302. Creating Layered Volumes
You can complete this exercise using either the VEA or the CLI interface. Solutions are provided for both.
Note: In order to perform the tasks in this exercise, you should have at least four disks in the disk group that you are using.
1 First, ensure that any volumes created in the previous labs are removed from the namedg disk group.
VEA
a Select the namedg disk group and click the Volumes tab in the right pane view.
b To remove a volume, highlight the volume in the window, and select Actions->Delete Volume.
CLI
vxprint -g namedg -htr | more
For each volume in the namedg disk group:
vxassist -g namedg remove volume volume_name
2 Create a 100-MB Striped Mirrored volume with no logging. Name the volume namevol1.
VEA
a Select the namedg disk group, and select Actions->New Volume.
b In the New Volume wizard, let VxVM determine which disks to use.
c Type the volume name, specify a volume size of 100 MB, and select a Striped Mirrored layout.
d Ensure that the Columns and the Total mirrors fields are both set to the default value of 2.
e Complete the wizard by accepting all remaining defaults to create the volume.
CLI
vxassist -g namedg make namevol1 100m layout=stripe-mirror nmirror=2 ncol=2
3 If you are using VEA, view the commands executed by VEA to create the namevol1 volume during this section of the lab.
B-40 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
• 303. a Select the Logs perspective in the quick access bar.
b Click Task Log in the right pane view.
c Double-click the specific task and observe the Task Details view.
4 Create a Concatenated Mirrored volume with no logging called namevol2. The size of the volume should be greater than the size of the largest disk in the disk group; for example, if your largest disk is 4 GB, then create a 6-GB volume.
Note: If you are working in the Virtual Academy (VA) lab environment, your largest disk will have a size of 2 GB. In this environment, you can use a 3-GB volume size.
VEA
a Select the namedg disk group, and select Actions->New Volume.
b In the New Volume wizard, let VxVM determine which disks to use.
c Type the volume name, an appropriate volume size, and select a Concatenated Mirrored layout.
d Ensure that the Total mirrors field is set to the default value of 2.
e Complete the wizard by accepting all remaining defaults to create the volume.
CLI
vxassist -g namedg -b make namevol2 6g layout=concat-mirror nmirror=2
5 If you are using VEA, view the commands executed by VEA to create the namevol2 volume during this section of the lab.
a Select the Logs perspective in the quick access bar.
b Click Task Log in the right pane view.
c Double-click the specific task and observe the Task Details view.
6 View the volumes and compare the layouts.
VEA
a Highlight the namedg disk group and select Actions->Volume View.
b Click the Expand button in the Volumes window.
You can also highlight each volume in the object tree and view information in the tabs in the right pane.
CLI
vxprint -g namedg -htr | more
Lab 4 Solutions: Selecting Volume Layouts B-41
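To make the structural difference visible, you can also run vxprint against each volume by name; in the -htr output, the sv rows show the subvolumes that make these layouts layered (namevol1 and namevol2 as created above):
    vxprint -g namedg -htr namevol1   # stripe-mirror: striped plex over mirrored subvolumes
    vxprint -g namedg -htr namevol2   # concat-mirror: concatenated plex over mirrored subvolumes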
• 304. 7 Remove all of the volumes in the namedg disk group.
VEA
a Select the namedg disk group, and click the Volumes tab in the right pane view.
b Highlight all volumes in the window.
c Select Actions->Delete Volume.
d Click Yes To All.
CLI
vxassist -g namedg remove volume namevol1
vxassist -g namedg remove volume namevol2
Using Ordered Allocation While Creating Volumes
You can complete this exercise using either the VEA or the CLI interface. Solutions are provided for both.
Note: In order to perform the tasks in this exercise, you should have at least four disks in the disk group that you are using.
1 Create a 20-MB, two-column striped volume with a mirror in the namedg disk group. Name the volume namevol1.
VEA
a Select the namedg disk group, and select Actions->New Volume.
b In the New Volume wizard, let VxVM determine which disks to use.
c Type the name of the volume, and specify a size of 20 MB.
d Select the Striped option in the Layout region. Verify that the number of columns is 2.
e Mark the Mirrored check box in the Mirror Info region. Complete the wizard by accepting all remaining defaults to create the volume.
CLI
vxassist -g namedg make namevol1 20m layout=mirror-stripe ncol=2
B-42 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
• 305. 2 Display the volume layout. How are the disks allocated in the volume? Which disk devices are used?
VEA
a Select the Volumes node in the object tree and namevol1 in the right pane view.
b Select Actions->Layout View. Note the plex number and the column number for each subdisk on each disk.
CLI
vxprint -g namedg -htr
Notice which two disks are allocated to the first plex and which two disks are allocated to the second plex, and record your observation.
3 Remove the volume you just made, and re-create it by specifying the four disks in an order different from the original layout. Use the command line to create the volume in this step.
CLI
vxassist -g namedg remove volume namevol1
vxassist -g namedg -o ordered make namevol1 20m layout=mirror-stripe ncol=2 namedg04 namedg03 namedg02 namedg01
4 Display the volume layout. How are the disks allocated this time?
VEA
a Select the Volumes node in the object tree and namevol1 in the right pane view.
b Select Actions->Layout View. Note the plex number and the column number for each subdisk on each disk.
CLI
vxprint -g namedg -htr
The plexes are now allocated in the order specified on the command line.
Lab 4 Solutions: Selecting Volume Layouts B-43
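With -o ordered, vxassist consumes the disk list in the order given, filling columns first and then mirrors. For the command above, the mapping works out as follows:
    vxassist -g namedg -o ordered make namevol1 20m \
        layout=mirror-stripe ncol=2 namedg04 namedg03 namedg02 namedg01
    # plex 1: column 0 = namedg04, column 1 = namedg03
    # plex 2: column 0 = namedg02, column 1 = namedg01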
• 306. 5 Remove all of the volumes in the namedg disk group.
VEA
a Select the namedg disk group and click the Volumes tab in the right pane view.
b Highlight all volumes in the window.
c Select Actions->Delete Volume.
d Click Yes To All.
CLI
vxassist -g namedg remove volume namevol1
Optional Lab: Creating Volumes with User Defaults: CLI
This optional guided practice illustrates how to use the files:
• /etc/default/vxassist
• /etc/default/alt_vxassist
to create volumes with defaults specified by the user. Note that some of the default values may not apply to VEA, because VEA uses explicit values for number of columns, stripe unit size, and number of mirrors while creating striped and mirrored volumes.
1 Create two files in /etc/default:
cd /etc/default
a Using the vi editor, create a file called vxassist that includes the following:
# when mirroring create three mirrors
nmirror=3
b Using the vi editor, create a file called alt_vxassist that includes the following:
# use 256K as the default stripe unit size for
# regular volumes
stripeunit=256k
B-44 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
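The same two defaults files can be created non-interactively with here-documents instead of vi; a small sketch:
    cat > /etc/default/vxassist <<'EOF'
    # when mirroring create three mirrors
    nmirror=3
    EOF
    cat > /etc/default/alt_vxassist <<'EOF'
    # use 256K as the default stripe unit size for regular volumes
    stripeunit=256k
    EOF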
• 307. 2 Use these files when creating the following volumes:
a Create a 100-MB volume called namevol1 using layout=mirror:
vxassist -g namedg make namevol1 100m layout=mirror
b Create a 100-MB, two-column striped volume called namevol2 using -d alt_vxassist so that Volume Manager uses the alternate defaults file:
vxassist -g namedg -d alt_vxassist make namevol2 100m layout=stripe
3 View the layout of these volumes using VEA or by using vxprint -g namedg -htr. What do you notice?
The first volume should show three plexes rather than the standard two. The second volume should show a stripe size of 256K instead of the standard 64K.
4 Remove any vxassist default files that you created in this optional lab section. The presence of these files can impact subsequent labs where default behavior is assumed.
rm /etc/default/vxassist
rm /etc/default/alt_vxassist
5 Remove all of the volumes in the namedg disk group.
vxassist -g namedg remove volume namevol1
vxassist -g namedg remove volume namevol2
Lab 4 Solutions: Selecting Volume Layouts B-45
• 308. B-46 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
• 309. Lab 5
Lab 5: Making Basic Configuration Changes
This lab provides practice in making basic configuration changes. In this lab, you add mirrors and logs to existing volumes, and change the volume read policy. You also resize volumes, rename disk groups, and move data between systems. For Lab Exercises, see Appendix A. For Lab Solutions, see Appendix B.
Lab 5 Solutions: Making Basic Configuration Changes
This lab provides practice in making basic configuration changes. In this lab, you add mirrors and logs to existing volumes, and change the volume read policy. You also resize volumes, rename disk groups, and move data between systems. The Lab Exercises for this lab are located on the following page:
Prerequisite Setup
To perform this lab, you need a lab system with Storage Foundation pre-installed, configured, and licensed. In addition to this, you also need four external disks to be used during the labs. At the beginning of this lab, you should have a disk group called namedg that has four external disks and no volumes in it.
Lab 5 Solutions: Making Basic Configuration Changes B-47
• 310. Classroom Lab Values
In preparation for this lab, you will need the following information about your lab environment. For your reference, you may record the information here, or refer back to the first lab where you initially documented this information.
Object / Sample Value / Your Value:
root password: veritas
Host name: train1
Host name of the system sharing disks with my system (my partner system): train2
My Data Disks: Solaris: c1t#d0 - c1t#d5; HP-UX: c4t0d0 - c4t0d5; AIX: hdisk21 - hdisk26; Linux: sda - sdf
2nd Internal Disk: Solaris: c0t2d0; HP-UX: c3t15d0; AIX: hdisk1; Linux: hdb
Location of Lab Scripts (if any): /student/labs/sf/sf50
Prefix to be used with object names: name
B-48 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
• 311. Administering Mirrored Volumes
You can complete this exercise using either the VEA or the CLI interface. Solutions are provided for both.
Note: In order to perform the tasks in this exercise, you should have at least four disks in the disk group that you are using.
1 Ensure that you have a disk group called namedg with four disks in it. If not, create the disk group using four disks.
Note: If you have completed the previous lab steps, you should already have the namedg disk group with four disks and no volumes.
VEA
Select the Disk Groups node in the object tree and select namedg.
CLI
vxdisk -o alldgs list
2 Create a 50-MB, two-column striped volume called namevol1 in namedg.
VEA
a Select the namedg disk group, and select Actions->New Volume.
b In the New Volume wizard, let VxVM determine which disks to use.
c Type the volume name, specify a volume size of 50 MB, and select a Striped layout.
d Complete the wizard by accepting all remaining defaults to create the volume.
CLI
vxassist -g namedg make namevol1 50m layout=stripe ncol=2
3 Display the volume layout. How are the disks allocated in the volume? Note the disk devices used for the volume.
VEA
a Select the Volumes node in the object tree and namevol1 in the right pane view.
b Select Actions->Layout View. Note the disk devices used for the first plex.
Lab 5 Solutions: Making Basic Configuration Changes B-49
• 312. CLI
vxprint -g namedg -htr
Notice which two disks are allocated to the first plex and record your observation.
4 Add a mirror to namevol1, and display the volume layout. What is the layout of the second plex? Which disks are used for the second plex?
VEA
a Highlight the volume to be mirrored, and select Actions->Mirror->Add.
b Accept the defaults in the Add Mirror dialog box and click OK.
c Select namevol1 and select Actions->Layout View. Note the disk devices used for the second plex. Note that the default layout used for the second plex is the same as the first plex.
CLI
vxassist -g namedg mirror namevol1
vxprint -g namedg -htr
Note the disk devices used for the second plex. Note that the default layout used for the second plex is the same as the first plex.
5 Add a dirty region log to namevol1 and specify the disk to use for the DRL. Display the volume layout.
VEA
a Highlight the namevol1 volume, and select Actions->Log->Add.
b In the Add Log dialog box, select Manually assign destination disks.
c Select one of the disks and click Add to add it to the Selected disks list.
d Click OK to complete.
Note: If you receive an error message indicating that VEA could not allocate enough space for the log, ignore the message.
e Highlight the volume under the Volumes node in the object tree and click the Logs tab.
CLI
vxassist -g namedg addlog namevol1 logtype=drl namedg01
vxprint -g namedg -rth
B-50 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
• 313. 6 Add a second dirty region log to namevol1 and specify another disk to use for the DRL. Display the volume layout.
VEA
a Highlight the namevol1 volume, and select Actions->Log->Add.
b In the Add Log dialog box, select Manually assign destination disks.
c Select one of the disks and click Add to add it to the Selected disks list.
d Click OK to complete.
Note: If you receive an error message indicating that VEA could not allocate enough space for the log, ignore the message.
e Highlight the volume under the Volumes node in the object tree and click the Logs tab.
CLI
vxassist -g namedg addlog namevol1 logtype=drl namedg02
vxprint -g namedg -rth
7 Remove the first dirty region log that you added to the volume. Display the volume layout. Can you control which log was removed?
VEA
a Highlight the namevol1 volume, and select Actions->Log->Remove.
b In the Remove Log dialog box, select the log you want to remove and click Add to add it to the Selected logs list.
c Click OK and confirm when prompted to complete.
d Highlight the volume under the Volumes node in the object tree and click the Logs tab.
CLI
vxassist -g namedg remove log namevol1 !namedg01
vxprint -g namedg -rth
8 Find out what the current volume read policy for namevol1 is. Change the volume read policy to round robin, and display the volume layout.
VEA
a Right-click the namevol1 volume in the right pane view, and select Properties. Observe the existing value of the Read policy field. It should indicate the default value of Based on layouts.
Lab 5 Solutions: Making Basic Configuration Changes B-51
• 314. b Highlight the namevol1 volume, and select Actions->Set Volume Usage.
c Select the Round robin option and click OK.
d Right-click the namevol1 volume in the right pane view, and select Properties. Observe the value of the Read policy field. It should have changed to Round robin.
CLI
vxprint -g namedg -htr
You should observe that the read policy shows as SELECT, which is the value used for "selected based on layouts."
vxvol -g namedg rdpol round namevol1
vxprint -g namedg -rth
The value of the attribute will change to ROUND.
9 Remove the original mirror (namevol1-01) from namevol1, and display the volume layout.
VEA
a Highlight the namevol1 volume in the object tree, and click the Mirrors tab in the right pane.
b Right-click a plex, and select Actions->Remove Mirror.
c In the Remove Mirror dialog box, click Yes.
d Highlight the namevol1 volume, and select Actions->Layout View.
Note that the DRL log is not removed automatically when you remove the mirror by specifying the plex name.
CLI
vxassist -g namedg remove mirror namevol1 !disk_used_by_original_mirror
vxprint -g namedg -rth
Note that the DRL log will also be removed automatically with this command because the volume is no longer mirrored.
10 Remove namevol1.
VEA
Highlight the namevol1 volume, and select Actions->Delete Volume.
CLI
vxassist -g namedg remove volume namevol1
B-52 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
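For reference, all three read policies are set through vxvol rdpol; a sketch using the lab's names (the plex name namevol1-02 is illustrative):
    vxvol -g namedg rdpol round namevol1              # round-robin across plexes
    vxvol -g namedg rdpol prefer namevol1 namevol1-02 # always read the named plex
    vxvol -g namedg rdpol select namevol1             # default: choose based on layout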
• 315. Resizing a Volume
You can complete this exercise using either the VEA or the CLI interface. Solutions are provided for both.
1 If you have not already done so, remove the volumes created in the previous lab in namedg.
VEA
For each volume in your disk group, highlight the volume, and select Actions->Delete Volume.
CLI
vxassist -g namedg remove volume volume_name
2 Create a 20-MB concatenated mirrored volume called namevol1 in namedg. Create a Veritas file system on the volume and mount it to /name1. Make sure that the file system is not added to the file system table.
VEA
a Highlight the namedg disk group, and select Actions->New Volume.
b Specify a volume name, the size, a concatenated layout, and select mirrored.
c Ensure that "Enable logging" is not checked.
d Add a VxFS file system and set the mount point. Uncheck the Add to file system table option.
e Complete the wizard.
CLI
vxassist -g namedg make namevol1 20m layout=mirror
mkfs -F vxfs /dev/vx/rdsk/namedg/namevol1
Note: On Linux, use mkfs -t.
mkdir /name1 (if necessary)
mount -F vxfs /dev/vx/dsk/namedg/namevol1 /name1
Note: On Linux, use mount -t.
Lab 5 Solutions: Making Basic Configuration Changes B-53
• 316. 3 View the layout of the volume and display the size of the file system.
VEA
Highlight the volume in the object tree and click each of the tabs in the right pane to display information about Mirrors, Logs, and Subdisks. You can also select Actions->Volume View, click the Expand button, and compare the information to the main window. To view the file system size, select the File Systems node in the object tree and observe the Size column for the /name1 file system in the right pane view.
CLI
Solaris, Linux, AIX:
vxprint -g namedg -rth
df -k /name1
HP-UX:
vxprint -g namedg -rth
bdf /name1
4 Add data to the volume by creating a file in the file system and verify that the file has been added.
echo "hello name" > /name1/hello
5 Expand the file system and volume to 100 MB. Observe the volume layout to see the change in size. Display the file system size.
VEA
a Highlight the volume and select Actions->Resize Volume.
b In the Resize Volume dialog box, specify 100 MB in the "New volume size" field, and click OK.
c Right-click the volume and select Properties to observe the change in size.
d For the file system size, select the File Systems node in the object tree and observe the Size column for the /name1 file system.
CLI
Solaris, Linux, AIX:
vxresize -g namedg namevol1 100m
vxprint -g namedg -rth
df -k /name1
HP-UX:
vxresize -g namedg namevol1 100m
vxprint -g namedg -rth
bdf /name1
B-54 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
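vxresize also accepts relative sizes, which avoids computing the new total by hand; a sketch (shrinking works only for VxFS, and only when the file system has enough free space):
    vxresize -g namedg namevol1 +50m   # grow volume and file system by 50 MB
    vxresize -g namedg namevol1 -20m   # shrink both by 20 MB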
• 317. Resizing a File System Only: CLI
Note: This exercise should be performed using the command line interface, because the VEA does not allow you to create a file system smaller in size than the underlying volume. You also cannot change the size of the volume and the file system separately using the GUI.
1 Create a 50-MB concatenated volume named namevol2 in the namedg disk group.
vxassist -g namedg make namevol2 50m
2 Create a Veritas file system on the volume by using the mkfs command. Specify the file system size as 40 MB.
mkfs -F vxfs /dev/vx/rdsk/namedg/namevol2 40m
Note: On Linux, use mkfs -t.
3 Create a mount point /name2 on which to mount the file system, if it does not already exist.
mkdir /name2 (if necessary)
4 Mount the newly created file system on the mount point /name2.
mount -F vxfs /dev/vx/dsk/namedg/namevol2 /name2
Note: On Linux, use mount -t.
Lab 5 Solutions: Making Basic Configuration Changes B-55
• 318. 5 Verify disk space using the df command (or the bdf command on HP-UX). Observe that the total size of the file system is smaller than the size of the volume.
Solaris, Linux, AIX: df -k
HP-UX: bdf
6 Expand the file system to the full size of the underlying volume using the fsadm -b newsize option.
fsadm -b 50m -r /dev/vx/rdsk/namedg/namevol2 /name2
7 Verify disk space using the df command (or the bdf command on HP-UX).
Solaris, Linux, AIX: df -k
HP-UX: bdf
8 Make a file on the file system mounted at /name2, so that the free space is less than 50 percent of the total file system size.
dd if=/dev/zero of=/name2/25_mb bs=1024k count=25
9 Shrink the file system to 50 percent of its current size. What happens?
fsadm -b 25m -r /dev/vx/rdsk/namedg/namevol2 /name2
The command fails. You cannot shrink the file system because blocks are currently in use.
Renaming a Disk Group
You can complete this exercise using either the VEA or the CLI interface. Solutions are provided for both.
1 Try to rename the namedg disk group to namedg1 while the /name1 and /name2 file systems are still mounted. Can you do it?
VEA
a Highlight the namedg disk group, and select Actions->Rename Disk Group.
b Type in the new name and click OK.
You receive an error message indicating that the volumes in the disk group are in use.
Lab 5 Solutions: Making Basic Configuration Changes B-56
• 319. CLI
vxdg -n namedg1 deport namedg
You receive an error message indicating that the volumes in the disk group are in use.
2 Observe the contents of the /dev/vx/rdsk and /dev/vx/dsk directories and their subdirectories. What do you see?
ls -lR /dev/vx/rdsk
This directory contains a subdirectory for each imported disk group, which contains the character devices for the volumes in that disk group.
ls -lR /dev/vx/dsk
This directory contains a subdirectory for each imported disk group, which contains the block devices for the volumes in that disk group.
3 Unmount all the mounted file systems in the namedg disk group.
VEA
a Select the File Systems node in the object tree and highlight the file systems you want to unmount in the right pane view.
b Select Actions->Unmount File System.
c Confirm when prompted.
CLI
umount /name1
umount /name2
4 Rename the namedg disk group to namedg1. Do not forget to start the volumes in the disk group after the renaming if you are using the command line interface.
VEA
a Highlight the namedg disk group, and select Actions->Rename Disk Group.
b Type in the new name and click OK.
CLI
vxdg -n namedg1 deport namedg
vxdg import namedg1
Lab 5 Solutions: Making Basic Configuration Changes B-57
• 320. vxvol -g namedg1 startall
5 Observe the contents of the /dev/vx/rdsk and /dev/vx/dsk directories and their subdirectories. What has changed?
ls -lR /dev/vx/rdsk
ls -lR /dev/vx/dsk
The device subdirectories are rebuilt with the new name of the disk group.
6 Observe the disk media names. Is there any change?
VEA
Select namedg1 in the object tree and click the Disks tab. Observe the Internal name column. There should be no change in disk media names.
CLI
vxdisk -o alldgs list
vxprint -g namedg1 -htr
There should be no change in disk media names.
7 Mount the /name1 and /name2 file systems, and observe their contents.
VEA
For each volume:
a Highlight the volume, and select Actions->File System->Mount File System.
b Type the Mount point and unselect the Add to file system table option.
c Click OK to complete.
CLI
mount -F vxfs /dev/vx/dsk/namedg1/namevol1 /name1
mount -F vxfs /dev/vx/dsk/namedg1/namevol2 /name2
Note: On Linux, use mount -t.
ls -l /name1
ls -l /name2
B-58 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
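Condensed, the whole rename round-trip from the CLI is:
    umount /name1
    umount /name2
    vxdg -n namedg1 deport namedg   # deport under the new name
    vxdg import namedg1
    vxvol -g namedg1 startall       # volumes are not started automatically on import
    ls /dev/vx/dsk/namedg1          # device directories are rebuilt under the new name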
• 321. Moving Data Between Systems
You can complete this exercise using either the VEA or the CLI interface. Solutions are provided for both.
Note: If you are sharing a disk array, each participant should make sure that the prefix used for object names is unique.
1 Copy new data to the /name1 and /name2 file systems. For example, copy the /etc/hosts file to /name1 and the /etc/group file to /name2.
cp /etc/hosts /name1
cp /etc/group /name2
2 View all the disk devices on the system.
VEA
Select the Disks node in the object tree and observe the disks in the right pane. Note the Status column.
CLI
vxdisk -o alldgs list
3 Unmount all file systems in the namedg1 disk group and deport the disk group. Do not give it a new owner. View all the disk devices on the system.
VEA
a Select the File Systems node in the object tree and highlight the file systems you want to unmount.
b Select Actions->Unmount File System.
c Confirm when prompted.
d Select the disk group and select Actions->Deport Disk Group. Click OK.
e Confirm your request when prompted in the Deport Disk Group dialog box.
Select the Disks node in the object tree and observe the disks in the right pane. Note the change in the Status column.
CLI
umount /name1
umount /name2
vxdg deport namedg1
vxdisk -o alldgs list
Lab 5 Solutions: Making Basic Configuration Changes B-59
• 322. 4 Identify the name of the system that is sharing access to the same disks as your system. If you are not sure, check with your instructor. Note the name of the partner system here.
Partner system hostname: _
5 Using the command line interface, perform the following steps on your partner system:
Note: If you are working on a standalone system, skip step a in the following and use your own system as the partner system.
a Remote login to the partner system.
rlogin partner_system_hostname
b Import the namedg1 disk group on the partner system, start the volumes in the imported disk group, and view all the disk devices on the system.
On the partner system:
vxdg import namedg1
vxvol -g namedg1 startall
vxdisk -o alldgs list
c While still logged in to the partner system, mount the /name1 and /name2 file systems. Note that you will need to create the mount directories on the partner system before mounting the file systems. Observe the data in the file systems.
On the partner system:
mkdir /name1
mkdir /name2
mount -F vxfs /dev/vx/dsk/namedg1/namevol1 /name1
mount -F vxfs /dev/vx/dsk/namedg1/namevol2 /name2
Note: On Linux, use mount -t.
ls -l /name1
ls -l /name2
The data should be the same as it was on your own system.
d Unmount the file systems on your partner system.
On the partner system:
umount /name1
umount /name2
e On your partner system, deport namedg1 and assign your own machine name, for example, train5, as the New host.
On the partner system:
B-60 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
• 323. vxdg -h your_system_name deport namedg1
f Exit from the partner system.
Type exit.
6 On your own system, import the disk group and change its name back to namedg. View all the disk devices on the system.
VEA
a Select the disk group under the Disk Groups node and select Actions->Import Disk Group.
b In the Import Disk Group dialog box, type namedg in the New name field, verify that the "Start all volumes" option is checked, and click OK.
c Select the Disks node in the object tree and observe the disks in the right pane. The status should change to Imported.
CLI
vxdg -n namedg import namedg1
vxvol -g namedg startall
vxdisk -o alldgs list
7 Deport the disk group namedg by assigning the ownership to anotherhost. View all the disk devices on the system. Why would you do this?
VEA
a Select the disk group under the Disk Groups node and select Actions->Deport Disk Group.
b In the Deport Disk Group dialog box, check Deport options, and type anotherhost in the New host field.
c Click OK and confirm when prompted.
In the list of disks, the status of the disks in the deported disk group is displayed as Foreign. You would do this to ensure that the disks are not imported accidentally by any system other than the one whose name you assigned to the disks.
CLI
vxdg -h anotherhost deport namedg
vxdisk -o alldgs list
Lab 5 Solutions: Making Basic Configuration Changes B-61
• 324. 8 From the command line, display detailed information about one of the disks in the disk group using the vxdisk list device_tag command. Note the hostid field in the output.
vxdisk list device_tag
where device_tag is c#t#d# for Solaris and HP-UX, hdisk# for AIX, and sd# for Linux platforms.
9 Import namedg. Were you successful?
VEA
a Select the disk group and select Actions->Import Disk Group.
b In the Import Disk Group dialog box, click OK.
This operation should fail, because namedg belongs to another host.
CLI
vxdg import namedg
This operation should fail, because namedg belongs to another host.
10 Now import namedg and overwrite the disk group lock. What did you have to do to import it, and why?
VEA
a Select the disk group and select Actions->Import Disk Group.
b In the Import Disk Group dialog box, mark the Clear host ID check box, verify that the "Start all volumes" option is checked, and click OK.
c Confirm when prompted.
CLI
vxdg -C import namedg
vxvol -g namedg startall
11 From the command line, display detailed information about the same disk in the disk group as you did in step 8 using the vxdisk list device_tag command. Note the change in the hostid field in the output.
vxdisk list device_tag
where device_tag is c#t#d# for Solaris and HP-UX, hdisk# for AIX, and sd# for Linux platforms.
B-62 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
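A sketch of the host-lock behavior explored above (c1t2d0 is a placeholder device tag):
    vxdisk list c1t2d0 | grep hostid   # shows which host currently owns the disk group
    vxdg import namedg                  # fails while the hostid names another system
    vxdg -C import namedg               # clears the host ID and imports
    vxvol -g namedg startall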
• 325. 12 Remove all of the volumes in the namedg disk group.
VEA
a Select the namedg disk group and click the Volumes tab in the right pane view.
b Highlight all volumes in the window.
c Select Actions->Delete Volume.
d Click Yes To All.
CLI
vxassist -g namedg remove volume namevol1
vxassist -g namedg remove volume namevol2
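With only two volumes, two vxassist calls are enough, but the same cleanup can be scripted. A sketch, assuming there are no layered volumes in the disk group (vxprint -v prints one "v" record per volume):
# Remove every volume in namedg; the awk pattern keeps only the
# volume records ("v" in the first column) and prints their names.
for v in `vxprint -g namedg -v | awk '$1 == "v" {print $2}'`
do
    vxassist -g namedg remove volume $v
done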
• 326. Preparation for Defragmenting a Veritas File System Lab
A lab exercise in the next lesson requires that you run a script that sets up files with different size extents. Because the script can take a long time to run, you may want to begin running the script now, so that the necessary environment is created by the next lab time.
1 Identify the device tag for the second internal disk on your lab system. If you do not have a second internal disk or if you cannot use the second internal disk, use one of the external disks allocated to you.
Second internal disk (or the external disk used in this lab): _
2 Initialize the second internal disk (or the external disk used in this lab) using a non-CDS disk format.
Solaris, Linux, AIX
vxdisksetup -i device_tag format=sliced
where device_tag is c#t#d# for Solaris.
HP-UX
Note: Check the status of the second internal disk using the vxdisk list command. If the disk is displayed as an LVM disk, ensure that it is not used by any active LVM volume groups and take it out of LVM control using the pvremove command. If the pvremove command fails due to exported volume group information left on the disk, re-create an LVM header using the force option (pvcreate -f /dev/rdsk/device_name) before using the pvremove command to remove it.
vxdisk list
If necessary:
vgdisplay -v /dev/vg00
pvcreate -f /dev/rdsk/device_tag
pvremove /dev/rdsk/device_tag
where device_tag is the device name of the second internal disk in the format c#t#d#.
vxdctl enable
vxdisk list
vxdisksetup -i device_tag format=hpdisk
where device_tag is c#t#d# for HP-UX.
3 Create a non-CDS disk group called testdg using the disk you initialized in step 2.
vxdg init testdg testdg01=device_tag cds=off
where device_tag is c#t#d# for Solaris and HP-UX.
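To confirm that the new disk group really was created without CDS, you can inspect its flags. A sketch (vxdg list diskgroup prints a flags: line that includes the cds keyword only when CDS is enabled; verify the exact wording on your platform):
# The flags: line should NOT contain "cds" for a cds=off disk group
vxdg list testdg | grep '^flags'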
• 327. 4 In the testdg disk group, create a 1-GB concatenated volume called testvol, initializing the volume space with zeros using the init=zero option to vxassist.
vxassist -g testdg make testvol 1g init=zero
5 Create a VxFS file system on testvol and mount it on /fs_test.
mkfs -F vxfs /dev/vx/rdsk/testdg/testvol
Note: On Linux, use mkfs -t.
mkdir /fs_test
mount -F vxfs /dev/vx/dsk/testdg/testvol /fs_test
Note: On Linux, use mount -t.
6 Ask your instructor for the location of the extents.sh script. Run the extents.sh script.
Note: This script can take about 15 minutes to run.
/student/labs/sf/sf50/extents.sh
7 Verify that the VRTSspt software is already installed on your system. If not, ask your instructor for the location of the software and install it.
Note: Before Storage Foundation 5.0, the VRTSspt software was provided as a separate support utility that needed to be installed by the user. With 5.0, this software is installed as part of the product installation.
Solaris
pkginfo | grep VRTSspt
Linux
rpm -qa | grep VRTSspt
AIX
lslpp -l | grep VRTSspt
HP-UX
swlist -l product | grep VRTSspt
8 Ensure that the directory where the vxbench command is located is included in your PATH definition.
echo $PATH | grep -i vxbench
If necessary:
export PATH=$PATH:/opt/VRTSspt/FS/VxBench
• 329. Lab 6: Administering File Systems
In this lab, you practice file system administration, including defragmentation and administering the File Change Log.
For Lab Exercises, see Appendix A. For Lab Solutions, see Appendix B.
Lab 6 Solutions: Administering File Systems
In this lab, you practice file system administration, including defragmentation and administering the File Change Log. The Lab Exercises for this lab are located on the following page:
Setup
To perform this lab, you need a lab system with Storage Foundation pre-installed, configured, and licensed. In addition to this, you also need four external disks and the second internal disk to be used during the labs. If you do not have a second internal disk or if you cannot use the second internal disk, you need five external disks to complete the labs.
At the beginning of this lab, you should have a disk group called namedg that has four external disks and no volumes in it. The second internal disk should be empty and unused.
Note: If you are working in a North American Mobile Academy lab environment, you cannot use the second internal disk during the labs. If that is the case, select one of the external disks to complete the lab steps.
• 330. Classroom Lab Values
In preparation for this lab, you will need the following information about your lab environment. For your reference, you may record the information here, or refer back to the first lab where you initially documented this information.

Object                        Sample Value                 Your Value
My Data Disks:                Solaris: c1t#d0 - c1t#d5
                              HP-UX: c4t0d0 - c4t0d5
                              AIX: hdisk21 - hdisk26
                              Linux: sda - sdf
2nd Internal Disk:            Solaris: c0t2d0
                              HP-UX: c3t15d0
                              AIX: hdisk1
                              Linux: hdb
Location of Lab Scripts       /student/labs/sf/sf50
(if any):
Prefix to be used with        name
object names:
• 331. Preparation for Defragmenting a Veritas File System Lab
Note: If you have already performed these steps at the end of the last lab, then you can skip this section and proceed with the Defragmenting a Veritas File System section.
1 Identify the device tag for the second internal disk on your lab system. If you do not have a second internal disk or if you cannot use the second internal disk, use one of the external disks allocated to you.
Second internal disk (or the external disk used in this lab): _
2 Initialize the second internal disk (or the external disk used in this lab) using a non-CDS disk format.
Solaris
vxdisksetup -i device_tag format=sliced
where device_tag is c#t#d# for Solaris.
HP-UX
Note: Check the status of the second internal disk using the vxdisk list command. If the disk is displayed as an LVM disk, ensure that it is not used by any active LVM volume groups and take it out of LVM control using the pvremove command. If the pvremove command fails due to exported volume group information left on the disk, re-create an LVM header using the force option (pvcreate -f /dev/rdsk/device_name) before using the pvremove command to remove it.
vxdisk list
If necessary:
vgdisplay -v /dev/vg00
pvcreate -f /dev/rdsk/device_tag
pvremove /dev/rdsk/device_tag
where device_tag is the device name of the second internal disk in the format c#t#d#.
vxdctl enable
vxdisk list
vxdisksetup -i device_tag format=hpdisk
where device_tag is c#t#d# for HP-UX.
3 Create a non-CDS disk group called testdg using the disk you initialized in step 2.
vxdg init testdg testdg01=device_tag cds=off
where device_tag is c#t#d# for Solaris and HP-UX.
4 In the testdg disk group, create a 1-GB concatenated volume called testvol, initializing the volume space with zeros using the init=zero option to vxassist.
• 332. vxassist -g testdg make testvol 1g init=zero
5 Create a VxFS file system on testvol and mount it on /fs_test.
mkfs -F vxfs /dev/vx/rdsk/testdg/testvol
Note: On Linux, use mkfs -t.
mkdir /fs_test
mount -F vxfs /dev/vx/dsk/testdg/testvol /fs_test
Note: On Linux, use mount -t.
6 Ask your instructor for the location of the extents.sh script. Run the extents.sh script.
Note: This script can take about 15 minutes to run.
/student/labs/sf/sf50/extents.sh
7 Verify that the VRTSspt software is already installed on your system. If not, ask your instructor for the location of the software and install it.
Note: Before Storage Foundation 5.0, the VRTSspt software was provided as a separate support utility that needed to be installed by the user. With 5.0, this software is installed as part of the product installation.
pkginfo | grep VRTSspt      (Solaris)
swlist -l product | grep VRTSspt      (HP-UX)
8 Ensure that the directory where the vxbench command is located is included in your PATH definition.
echo $PATH | grep -i vxbench
If necessary:
export PATH=$PATH:/opt/VRTSspt/FS/VxBench
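Because the vxbench binary name varies by platform and OS release (vxbench_9, vxbench_11.23_pa64, and so on), a small sketch like the following can select whatever binary is installed under /opt/VRTSspt/FS/VxBench. The variable name VXBENCH is just an example, not part of the lab environment:
# Pick the first vxbench binary found for this platform
VXBENCH=`ls /opt/VRTSspt/FS/VxBench/vxbench_* 2>/dev/null | head -1`
echo "Using: ${VXBENCH:-no vxbench binary found}"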
• 333. Defragmenting a Veritas File System
The purpose of this section is to examine the structure of a fragmented and an unfragmented file system and compare the file system's throughput in each case. The general steps in this exercise are:
Make and mount a file system
Examine the structure of the new file system for extents allocated
Then examine a fragmented file system and report the degree of fragmentation in the file system
Use a support utility called vxbench to measure throughput to specific files within the fragmented file system
Defragment the file system, reporting the degree of fragmentation
Repeat executing the vxbench utility using identical parameters to measure throughput to the same files within a relatively unfragmented file system
Compare the total throughput before and after the defragmentation process
1 In the namedg disk group, create a 1-GB concatenated volume called namevol1.
vxassist -g namedg make namevol1 1g
2 Create a VxFS file system on namevol1 and mount it on /name1.
mkfs -F vxfs /dev/vx/rdsk/namedg/namevol1
Note: On Linux, use mkfs -t.
mount -F vxfs /dev/vx/dsk/namedg/namevol1 /name1
Note: On Linux, use mount -t.
3 Run a fragmentation report on /name1 to analyze directory and extent fragmentation. Is a newly created, empty file system considered fragmented? In the report, what percentages indicate a file system's fragmentation?
fsadm -D -E /name1
Directory Fragmentation Report
         Dirs      Total    Immed   Immeds   Dirs to   Blocks to
         Searched  Blocks   Dirs    to Add   Reduce    Reduce
total    2         0        2       0        0         0
Extent Fragmentation Report
  Total    Average      Average      Total
  Files    File Blks    # Extents    Free Blks
  0        0            0            1030827
blocks used for indirects: 0
% Free blocks in extents smaller than 64 blks: 0.01
• 334. % Free blocks in extents smaller than 8 blks: 0.00
% blks allocated to extents 64 blks or larger: 0.00
Free Extents By Size
1: 1   2: 1   4: 2   8: 2
16: 1   32: 2   64: 1   128: 2
256: 1   512: 2   1024: 1   2048: 0
4096: 1   8192: 1   16384: 0   32768: 1
65536: 1   131072: 1   262144: 1   524288: 1
1048576: 0   2097152: 0   4194304: 0   8388608: 0
16777216: 0   33554432: 0   67108864: 0   134217728: 0
268435456: 0   536870912: 0   1073741824: 0   2147483648: 0
A newly created file system with no files or directories cannot be fragmented. The following table displays the percentages you should observe in the output of the fragmentation report to determine if a file system with files and directories is fragmented.

Percentage                                              Unfragmented   Badly Fragmented
% of free blocks in extents smaller than 64 blocks     < 5%           > 50%
% of free blocks in extents smaller than 8 blocks      < 1%           > 5%
% of blocks allocated to extents 64 blocks or larger   > 5%           < 5%

4 What is a fragmented file system?
A fragmented file system is a file system where the free space is in relatively small extents scattered throughout different allocation units within the file system.
5 If you were shown the following extent fragmentation report about a file system, what would you conclude?
Directory Fragmentation Report
         Dirs      Total    Immed    Immeds   Dirs to   Blocks to
         Searched  Blocks   Dirs     to Add   Reduce    Reduce
total    199185    85482    115118   5407     5473      5655
• 335. A high total in the Dirs to Reduce column indicates that the directories are not optimized. This file system's directories should be optimized by directory defragmentation.
6 Unmount /name1 and remove namevol1 in the namedg disk group.
umount /name1
vxassist -g namedg remove volume namevol1
Note: The following steps use the /fs_test file system to analyze the impact of fragmentation on file system performance. Verify that the extents.sh script has completed before you continue with the rest of this lab.
7 Run a fragmentation report on /fs_test to analyze directory and extent fragmentation. Is /fs_test fragmented? Why or why not? What should be done?
fsadm -D -E /fs_test
Directory Fragmentation Report
         Dirs      Total    Immed   Immeds   Dirs to   Blocks to
         Searched  Blocks   Dirs    to Add   Reduce    Reduce
total    2         0        2       1        0         0
Extent Fragmentation Report
  Total    Average      Average      Total
  Files    File Blks    # Extents    Free Blks
  55       5102         641          750037
blocks used for indirects: 640
% Free blocks in extents smaller than 64 blks: 33.44
% Free blocks in extents smaller than 8 blks: 18.89
% blks allocated to extents 64 blks or larger: 42.07
Free Extents By Size
1: 16891   2: 11505   4: 25446   8: 10868
16: 1384   32: 2   64: 0   128: 0
256: 10   512: 0   1024: 1   2048: 0
4096: 1   8192: 0   16384: 0   32768: 1
65536: 1   131072: 1   262144: 1   524288: 0
1048576: 0   2097152: 0   4194304: 0   8388608: 0
16777216: 0   33554432: 0   67108864: 0   134217728: 0
268435456: 0   536870912: 0   1073741824: 0   2147483648: 0
The Dirs to Reduce column is 0. Therefore, the directories do not need to be optimized. But the extents do need to be optimized, because:
• 336. % Free blocks in extents smaller than 64 blks: 33.44 (< 50%) - OK
% Free blocks in extents smaller than 8 blks: 18.89 (> 5%) - Not OK
% blks allocated to extents 64 blks or larger: 42.07 (> 5%) - OK
Therefore, the file system's extents should be defragmented.
8 Use the ls -le command to display the extent attributes of the files in the /fs_test file system. Note that on the Solaris platform you need to use the ls command provided by the VxFS file system software to be able to use the -e option.
/usr/lib/fs/vxfs/bin/ls -le /fs_test      (Solaris)
ls -le /fs_test
-rw-r--r-- 1 root other 2048000 Jul 14 17:57 test42 res 0 ext 2
-rw-r--r-- 1 root other 4096000 Jul 14 17:57 test44 res 0 ext 4
-rw-r--r-- 1 root other 6144000 Jul 14 17:57 test46 res 0 ext 6
-rw-r--r-- 1 root other 8192000 Jul 14 17:57 test48 res 0 ext 8
-rw-r--r-- 1 root other 8192000 Jul 14 17:57 test50
-rw-r--r-- 1 root other 2048000 Jul 14 17:57 test52 res 0 ext 2
-rw-r--r-- 1 root other 4096000 Jul 14 17:57 test54 res 0 ext 4
-rw-r--r-- 1 root other 6144000 Jul 14 17:57 test56 res 0 ext 6
-rw-r--r-- 1 root other 8192000 Jul 14 17:57 test58 res 0 ext 8000
Two files that will be used in the performance tests have been highlighted in the sample output provided here: test48 (8K extent size) and test58 (8000K extent size).
9 Measure the sequential read throughput to a particular file, for example, an 8-MB file on an 8K extent (for example, /fs_test/test48), in a fragmented file system using the vxbench utility and record the results. Use an 8K sequential I/O size.
Notes:
You need to use the vxbench utility that is appropriate for the platform you are working on, for example vxbench_9 on Solaris 9. To identify the appropriate vxbench command, use the ls -l /opt/VRTSspt/FS/VxBench command.
If this path is not in your PATH environment variable, use the full path of the command while running the corresponding vxbench utility.
• 337. Remount the file system before running each I/O test.
Solaris
mount -F vxfs -o remount /dev/vx/dsk/testdg/testvol /fs_test
/opt/VRTSspt/FS/VxBench/vxbench_9 -w read -i iosize=8k,iocount=1000 /fs_test/test48
HP-UX
mount -F vxfs -o remount /dev/vx/dsk/testdg/testvol /fs_test
/opt/VRTSspt/FS/VxBench/vxbench_11.23_pa64 -w read -i iosize=8k,iocount=1000 /fs_test/test48
A sample output is provided here as an example:
total: 7.147 sec 1119.40 KB/s cpu: 0.12 sys 0.00 user
10 Repeat the same test for an 8-MB file on an 8000K extent (for example, using the /fs_test/test58 file). Note that the file system must be remounted between the tests. Can you explain why?
The file system must be remounted to clear the read buffers.
Solaris
mount -F vxfs -o remount /dev/vx/dsk/testdg/testvol /fs_test
/opt/VRTSspt/FS/VxBench/vxbench_9 -w read -i iosize=8k,iocount=1000 /fs_test/test58
HP-UX
mount -F vxfs -o remount /dev/vx/dsk/testdg/testvol /fs_test
/opt/VRTSspt/FS/VxBench/vxbench_11.23_pa64 -w read -i iosize=8k,iocount=1000 /fs_test/test58
A sample output is provided here as an example:
total: 0.206 sec 38911.83 KB/s cpu: 0.17 sys 0.01 user
11 Defragment /fs_test and gather summary statistics after each pass through the file system. After the defragmentation completes, determine if /fs_test is fragmented. Why or why not?
Note: The defragmentation can take about 5 minutes to complete.
fsadm -e -E -s /fs_test
Extent Fragmentation Report
  Total    Average      Average      Total
  Files    File Blks    # Extents    Free Blks
  55       5102         641          750037
blocks used for indirects: 640
% Free blocks in extents smaller than 64 blks: 33.44
% Free blocks in extents smaller than 8 blks: 18.89
• 338. % blks allocated to extents 64 blks or larger: 42.07
Free Extents By Size
1: 16891   2: 11505   4: 25446   8: 10868
16: 1384   32: 2   64: 0   128: 0
256: 10   512: 0   1024: 1   2048: 0
4096: 1   8192: 0   16384: 0   32768: 1
65536: 1   131072: 1   262144: 1   524288: 0
1048576: 0   2097152: 0   4194304: 0   8388608: 0
16777216: 0   33554432: 0   67108864: 0   134217728: 0
268435456: 0   536870912: 0   1073741824: 0   2147483648: 0
Pass 1 Statistics
         Extents    Reallocations   Ioctls   Errors
         Searched   Attempted       Issued   FileBusy   NoSpace   Total
total    35210      16151           45       0          0         0
Pass 2 Statistics
         Extents    Reallocations   Ioctls   Errors
         Searched   Attempted       Issued   FileBusy   NoSpace   Total
total    18296      8643            33       33         0         33
Extent Fragmentation Report
  Total    Average      Average      Total
  Files    File Blks    # Extents    Free Blks
  55       5102         333          744605
blocks used for indirects: 608
% Free blocks in extents smaller than 64 blks: 8.89
% Free blocks in extents smaller than 8 blks: 0.93
% blks allocated to extents 64 blks or larger: 46.94
Free Extents By Size
1: 2173   2: 38   4: 1161   8: 1122
16: 1104   32: 1021   64: 994   128: 989
256: 605   512: 5   1024: 3   2048: 0
4096: 0   8192: 0   16384: 0   32768: 0
65536: 1   131072: 0   262144: 1   524288: 0
1048576: 0   2097152: 0   4194304: 0   8388608: 0
16777216: 0   33554432: 0   67108864: 0   134217728: 0
268435456: 0   536870912: 0
  • 339. 1073741824: 2147483648: oo The file system no longer needsto he defragmentcd, because: %, Free blocks in extents smaller than 64 blks: 8.89 «50'10) - OK (much better than before) 'Yt,Free blocks in extents smaller than 8 blks: 0.93 «I %) - OK (milch better than before) 01., blks allocated to extents 64 blks or larger: 46.94 (>5%) - OK (slightl~' better than before) 12 Measure the throughput of the untragmented file system using the vxbench utility on the same files as you did in steps 9 and 10, Is there any change in throughput" Notes: You need to use the vxbench utility that is appropriate for the platform you are working on. for example vxbench_9 on Solaris 9, To identify the appropriate vxbench command. use the 1 s -1 / opt /VRTSs pt / FS/ VxBench command. If this path is not in your PATH environment variable. use the tullpath of the command while running the corresponding vxbench utility. The file system must be remounted before each test to clear the read buffers. If you have used external shared disks on a disk array used by other systems for this lab. the performance results may be impacted by the disk array cache and may not provide a valid comparison between a fragmented and defragmented file system. Solaris mount -F vxfs -0 remount /dev/vx/dsk/testdg/testvol /fs - test /opt/VRTSspt/FS/VxBeneh/vxbeneh - 9 -w read -i iosize=8k.ioeount=1000 /fs - test/test48 -- HP-UX mount -F vxfs -0 remount /dev/vx/dsk/testdg/testvol /fs - test /opt/VRTSspt/FS/VxBeneh/vxbeneh_11.23_pa64 -w read -i iosize=8k.ioeount=1000 /fs - test/test48 A sample output is provided here as an example: total: 0.241 see 33187.31 KB/s epu: 0.13 sys 0.01 user Lab 6 Solutions: Administering File Systems 8-77 Copwiqbt D 2U06 Symantec Corporation. Alillghl5 reserved I
• 340. Solaris
mount -F vxfs -o remount /dev/vx/dsk/testdg/testvol /fs_test
/opt/VRTSspt/FS/VxBench/vxbench_9 -w read -i iosize=8k,iocount=1000 /fs_test/test58
HP-UX
mount -F vxfs -o remount /dev/vx/dsk/testdg/testvol /fs_test
/opt/VRTSspt/FS/VxBench/vxbench_11.23_pa64 -w read -i iosize=8k,iocount=1000 /fs_test/test58
A sample output is provided here as an example:
total: 0.202 sec 39650.48 KB/s cpu: 0.18 sys 0.00 user
There is an improvement in throughput for both cases, but the improvement is highest for the file using small extent sizes (that is, for /fs_test/test48).
13 What is the difference between an unfragmented and a fragmented file system?
A fragmented file system has free space scattered throughout the file system in relatively small extents, whereas an unfragmented file system has free space in just a few relatively large extents.
14 Is any one environment more prone to needing defragmentation than another?
Yes. Volatile environments, in which files are grown, shrunk, erased, and moved, with ownership changes, and so on, are prone to fragmentation. Stable environments, such as Oracle databases and logs, have very little impact on the supporting file system, so they require infrequent defragmentation.
Reading the File Change Log (FCL)
1 In the namedg disk group, create a new 10-MB volume called namevol1. Create a VxFS file system on namevol1 and mount it on /fcl_test.
vxassist -g namedg make namevol1 10m
mkfs -F vxfs /dev/vx/rdsk/namedg/namevol1
Note: On Linux, use mkfs -t.
mkdir /fcl_test
mount -F vxfs /dev/vx/dsk/namedg/namevol1 /fcl_test
Note: On Linux, use mount -t.
2 Turn the FCL on for /fcl_test, and ensure that it is on.
• 341. fcladm on /fcl_test
fcladm state /fcl_test
ON
3 Go to the directory that contains the FCL.
cd /fcl_test/lost+found
ls
changelog
4 Display the superblock for /fcl_test.
fcladm print 0 /fcl_test
5 How do you know that there have been no changes in the file system yet?
The superblock offset (foff) and the end of the FCL file (loff) are the same number.
6 Add some files to /fcl_test. Then remove one of the files you just added.
cd /fcl_test
touch a b c
rm b
7 Display the superblock for /fcl_test.
fcladm print 0 /fcl_test
8 How do you know that changes have been made to the file system?
The superblock offset (foff) and the end of the FCL file (loff) are different numbers.
9 Print the contents of the FCL.
fcladm print 1024 /fcl_test
The fields are Change Type, Inode Number, Inode Generation, and Timestamp. The Unlink and Rename types list the name of the file on the following line, preceded by the parent's inode number.
10 Which files are identified by the inode numbers that are listed in the Create type?
vxlsino inode_number /fcl_test
11 Unmount the /fcl_test file system and remove namevol1.
cd /
umount /fcl_test
• 342. vxassist -g namedg remove volume namevol1
A consolidated sketch of the FCL session from this section follows step 12 below.
12 The next two lab sections are optional labs on analyzing and defragmenting fragmented file systems. If you are not planning to carry out the optional labs, unmount the /fs_test file system and destroy the testdg disk group; otherwise, skip this step.
umount /fs_test
vxdg destroy testdg
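The sketch below replays the FCL workflow from this section end to end. The fcladm keywords shown (on, state, print) are the ones used above; the final off is the documented keyword for deactivating the log, so verify it against the fcladm(1M) man page on your platform:
fcladm on /fcl_test              # activate the File Change Log
fcladm state /fcl_test           # should report ON
touch /fcl_test/a /fcl_test/b    # generate create records
rm /fcl_test/b                   # generate an unlink record
fcladm print 0 /fcl_test         # superblock: compare foff and loff
fcladm print 1024 /fcl_test      # records begin after the 1-KB superblock
fcladm off /fcl_test             # deactivate the log when finished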
• 343. Optional Lab Exercises
The next set of lab exercises is optional and may be performed if you have time. These exercises provide additional practice in defragmenting a file system and monitoring fragmentation.
Optional Lab: Defragmenting a Veritas File System
1 This section uses the /fs_test file system to analyze the impact of fragmentation on the performance of a variety of I/O types on files using small and large extent sizes. Recreate the fragmented /fs_test file system using the following steps:
a Unmount the /fs_test file system.
umount /fs_test
b Recreate a VxFS file system on the testvol volume in testdg.
mkfs -F vxfs /dev/vx/rdsk/testdg/testvol
Note: On Linux, use mkfs -t.
c Mount the file system to /fs_test.
mount -F vxfs /dev/vx/dsk/testdg/testvol /fs_test
Note: On Linux, use mount -t.
d Ask your instructor for the location of the extents.sh script. Run the extents.sh script.
Note: This script can take about 15 minutes to run.
/student/labs/sf/sf50/extents.sh
2 Run a series of performance tests for a variety of I/O types using the vxbench utility to compare the performance of the files with the 8K extent size (/fs_test/test48) and the 8000K extent size (/fs_test/test58) by performing the following steps. Complete the following table when doing the performance tests.
• 344.
Test Type                        Time (seconds)        Throughput (KB/second)
                                 Before    After       Before      After
                                 Defrag    Defrag      Defrag      Defrag
Sequential reads, 8K extent      2.709     .526        2953.22     15202.10
Sequential reads, 8000K extent   .547      .549        14634.57    14576.20
Random reads, 8K extent          8.268     6.267       967.54      1276.53
Random reads, 8000K extent       6.541     6.468       1223.02     1236.91

Note: Results can vary depending on the nature of the data and the model of array used. No performance guarantees are implied by this lab.
3 Ensure that the directory where the vxbench utility is located is included in your PATH definition.
export PATH=$PATH:/opt/VRTSspt/FS/VxBench
4 Sequential I/O Test
Note: You must unmount and remount the file system /fs_test before each step to clear and initialize the buffer cache.
To test the 8K extent size:
mount -F vxfs -o remount /dev/vx/dsk/testdg/testvol /fs_test
Note: On Linux, use mount -t.
vxbench_platform -w read -i iosize=8k,iocount=1000 /fs_test/test48
To test the 8000K extent size:
mount -F vxfs -o remount /dev/vx/dsk/testdg/testvol /fs_test
Note: On Linux, use mount -t.
vxbench_platform -w read -i iosize=8k,iocount=1000 /fs_test/test58
5 Random I/O Test
To test the 8K extent size:
• 345. mount -F vxfs -o remount /dev/vx/dsk/testdg/testvol /fs_test
Note: On Linux, use mount -t.
vxbench_platform -w rand_read -i iosize=8k,iocount=1000,maxfilesize=8000 /fs_test/test48
To test the 8000K extent size:
mount -F vxfs -o remount /dev/vx/dsk/testdg/testvol /fs_test
Note: On Linux, use mount -t.
vxbench_platform -w rand_read -i iosize=8k,iocount=1000,maxfilesize=8000 /fs_test/test58
6 Defragment the /fs_test file system. The defragmentation process takes some time.
Solaris, Linux, AIX
/opt/VRTSvxfs/sbin/fsadm -e -E -d -D -s /fs_test
HP-UX
/usr/lbin/fs/vxfs5.0/fsadm -e -E -d -D -s /fs_test
7 Repeat the vxbench performance tests and complete the table with these performance results.
8 Compare the results of the defragmented file system with the fragmented file system.
9 When finished comparing the results in the previous step, unmount the /fs_test file system and destroy the testdg disk group.
umount /fs_test
vxdg destroy testdg
Optional Lab: Additional Defragmenting Practice
In this exercise, you monitor and defragment a file system by using the fsadm command.
1 Create a new 2-GB striped volume called namevol1 in the namedg disk group. Create a VxFS file system on namevol1 and mount it on /fs_test.
vxassist -g namedg make namevol1 2g layout=stripe
mkfs -F vxfs /dev/vx/rdsk/namedg/namevol1
• 346. Note: On Linux, use mkfs -t.
mkdir /fs_test (if the directory does not already exist)
mount -F vxfs /dev/vx/dsk/namedg/namevol1 /fs_test
Note: On Linux, use mount -t.
2 Repeatedly copy a small existing file system to /fs_test using a new target directory name each time until the target file system is approximately 85 percent full. For example, on the Solaris platform:
for i in 1 2 3
do
cp -r /opt /fs_test/opt$i
done
Note: Monitor the file system size using df -k on the Solaris platform and bdf on the HP-UX platform, and CTRL-C out of the for loop when the file system becomes approximately 85 percent full. (A variant that stops automatically is sketched after this lab.)
3 Delete all files in the /fs_test file system over 10 MB in size.
find /fs_test -size +20480b -exec rm {} \;      (Solaris)
find /fs_test -size +20480 -exec rm {} \;       (HP-UX)
4 Check the level of fragmentation in the /fs_test file system.
fsadm -D -E /fs_test
5 Repeat steps 2 and 3 using values 4 5 for i in the loop. Fragmentation of both free space and directories will result.
6 Repeat step 2 using values 6 7 for i. Then delete all files that are smaller than 64K to release a reasonable amount of space.
find /fs_test -type f -size -64k -exec rm {} \;     (Solaris)
find /fs_test -type f -size -128 -exec rm {} \;     (HP-UX)
7 Defragment the file system and display the results. Run fragmentation reports both before and after the defragmentation and display summary statistics after each pass. Compare the fsadm report from step 4 with the final report from the last pass in this step.
fsadm -e -E -d -D -s /fs_test
8 Unmount the /fs_test file system and remove the namevol1 volume used in this lab.
umount /fs_test
vxassist -g namedg remove volume namevol1
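The copy loop in step 2 requires a manual CTRL-C once the file system passes 85 percent. A sketch of a variant that stops on its own follows; it parses Solaris-style df -k output (capacity in the fifth field of the second line), so the awk field positions would need adjusting for bdf on HP-UX:
# Keep copying /opt into /fs_test until capacity reaches 85%
i=1
while [ `df -k /fs_test | awk 'NR == 2 {print $5}' | tr -d '%'` -lt 85 ]
do
    cp -r /opt /fs_test/opt$i
    i=`expr $i + 1`
done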
• 347. Lab 7: Resolving Hardware Problems
In this lab, you practice recovering from a variety of hardware failure scenarios, resulting in disabled disk groups and failed disks. First you recover a temporarily disabled disk group, and then you use a set of interactive lab scripts to investigate and practice recovery techniques.
For Lab Exercises, see Appendix A. For Lab Solutions, see Appendix B.
Lab 7 Solutions: Resolving Hardware Problems
In this lab, you practice recovering from a variety of hardware failure scenarios, resulting in disabled disk groups and failed disks. First you recover a temporarily disabled disk group, and then you use a set of interactive lab scripts to investigate and practice recovery techniques. Each interactive lab script:
Sets up the required volumes
Simulates and describes a failure scenario
Prompts you to fix the problem
Finally, a set of optional labs is provided to enable you to investigate disk failures further and to understand the behavior of spare disks and hot relocation.
The Lab Exercises for this lab are located on the following page:
Prerequisite Setup
To perform this lab, you need a lab system with Storage Foundation pre-installed, configured, and licensed. In addition to this, you also need four external disks to be used during the labs. At the beginning of this lab, you should have a disk group called namedg that has four external disks and no volumes in it.
• 348. Classroom Lab Values
In preparation for this lab, you will need the following information about your lab environment. For your reference, you may record the information here, or refer back to the first lab where you initially documented this information.

Object                      Sample Value                Your Value
My Data Disks:              Solaris: c1t#d0 - c1t#d5
                            HP-UX: c4t0d0 - c4t0d5
                            AIX: hdisk21 - hdisk26
                            Linux: sda - sdf
Location of Lab Scripts:    /student/labs/sf/sf50
Prefix to be used with      name
object names:
• 349. Recovering a Temporarily Disabled Disk Group
1 Remove all disks except for one (namedg01) from the namedg disk group.
vxdg -g namedg rmdisk namedg04
vxdg -g namedg rmdisk namedg03
vxdg -g namedg rmdisk namedg02
2 Create a 1g volume called namevol1 in the namedg disk group.
vxassist -g namedg make namevol1 1g
3 Create a file system on namevol1 and mount it to /name1.
mkfs -F vxfs /dev/vx/rdsk/namedg/namevol1
mkdir /name1
mount -F vxfs /dev/vx/dsk/namedg/namevol1 /name1
4 Copy the contents of the /etc/default directory to /name1 and display the contents of the file system.
cp -r /etc/default /name1
ls -lR /name1
5 Ask your instructor for the location of the faildg_temp script, and note the location here:
Script location: _
6 Start writing to a file in the /name1 file system in the background using the following command:
dd if=/dev/zero of=/name1/testfile bs=1024 count=500000 &
7 In one terminal, change to the directory containing the script and, before the I/O completes, execute the faildg_temp namedg command.
Notes:
The faildg_temp script disables the single path to the disk in the disk group to simulate a hardware failure. This is just a simulation and not a real failure; therefore, the operating system will still be able to see the disk after the failure.
The script waits until you are ready, after analyzing the failure, to re-enable the path to the disk in the disk group.
If the I/O you started in step 6 completes before you can simulate the failure, you can start it again to observe the I/O failure.
cd /script_location
./faildg_temp namedg
Disabling device_tag
• 350. Enter e when you are ready for the disks to be re-enabled:
8 Wait for the I/O to fail, and in another terminal observe the error displayed in the system log.
Solaris, Linux, AIX
tail -f /var/adm/messages
HP-UX
tail -f /var/adm/syslog/syslog.log
9 Use the vxdisk -o alldgs list and vxdg list commands to determine the status of the disk group and the disk.
vxdisk -o alldgs list
vxdg list
The disk group should show as disabled, and the disk status should change to online dgdisabled.
10 What happened to the file system?
The file system is also disabled.
11 When you are done with analyzing the impact of the failure, change to the terminal where the faildg_temp script is waiting and enter "e" to correct the temporary failure.
Note: In a real failure scenario, after the hardware recovery, you would need to first verify that the operating system can see the disks and then verify that Volume Manager has detected the change in status. If not, you can force VxVM to scan the disks by executing the vxdctl enable command. This will not be necessary for this lab.
On the terminal where the faildg_temp script is waiting:
Enter e when you are ready for the disks to be re-enabled: e
12 Assuming that the failure was due to a temporary fiber disconnection and that the data is still intact, recover the disk group and start the volume. Verify the disk and disk group status using the vxdisk -o alldgs list and vxdg list commands.
umount /name1
vxdg deport namedg
vxdg import namedg
vxvol -g namedg startall
vxdisk -o alldgs list
vxdg list
• 351. The disk group should now be enabled, and the disk status should change back to online.
13 Remount the file system and verify that the contents are still there. Note that you will need to perform a file system check before you mount the file system.
fsck -F vxfs /dev/vx/rdsk/namedg/namevol1
mount -F vxfs /dev/vx/dsk/namedg/namevol1 /name1
ls -lR /name1
14 Unmount the file system and remove namevol1. At the end of this section you should be left with a namedg disk group with a single disk and three initialized disks that are free to be used in a new disk group.
umount /name1
vxassist -g namedg remove volume namevol1
Preparation for Disk Failure Labs
Overview
The following sections use an interactive script to simulate a variety of disk failure scenarios. Your goal is to recover from the problem as described in each scenario. Use your knowledge of VxVM administration, in addition to the VxVM recovery tools and concepts described in the lesson, to determine which steps to take to ensure recovery. After you recover the test volumes, the script verifies your solution and provides you with the result. You succeed when you recover the volumes without corrupting the data.
For most of the recovery problems, you can use any of the VxVM interfaces: the command line interface, the VERITAS Enterprise Administrator (VEA) graphical user interface, or the vxdiskadm menu interface. Lab solutions are provided for only one method. If you have questions about recovery using interfaces not covered in the solutions, see your instructor.
Setup
Due to the way in which the lab scripts work, it is important to set up your environment as described in this setup section:
1 If your system is set to use enclosure-based naming, then you must turn off enclosure-based naming before running the lab scripts.
2 Create a disk group named testdg and add three disks (preferably of the same size) to the disk group. Assign the following disk media names to the disks: testdg01, testdg02, and testdg03.
vxdisksetup -i device_tag (if necessary)
vxdg init testdg testdg01=device_tag1 testdg02=device_tag2 testdg03=device_tag3
• 352. Note: If you do not have enough disks, you can destroy disk groups created in other labs (for example, namedg) in order to create the testdg disk group.
3 Before running the automated lab scripts, set the DG environment variable in your root profile to the name of the test disk group that you are using:
Solaris, HP-UX
vi /.profile
DG=testdg; export DG
Linux
vi /root/.bashrc
DG=testdg; export DG
Rerun your profile by logging out and logging back on, or by manually running it.
4 Ask your instructor for the location of the lab scripts.
Note: This lab can only be performed on Solaris, HP-UX, and Linux.
Recovering from Temporary Disk Failure
In this lab exercise, a temporary disk failure is simulated. Your goal is to recover all of the redundant and nonredundant volumes that were on the failed drive. The lab script run_disks sets up the test volume configuration, simulates a disk failure, and validates your solution for recovering the volumes. Ask your instructor for the location of the run_disks script.
Before You Begin: Ensure that the environment variable DG is set to the name of the testdg disk group. For example:
DG="testdg"
export DG
1 From the directory that contains the lab scripts, run the script run_disks, and select option 1, "Turned off drive (temporary failure)":
./run_disks
1) Lab 1 - Turned off drive (temporary failure)
2) Lab 2 - Power failed drive (permanent failure)
3) Lab 3 - Intermittent Failures (system still ok)
4) Optional Lab 4 - Intermittent Failures (system too slow)
5) Optional Lab 5 - Turned off drive with layered volume
6) Optional Lab 6 - Power failed drive with layered volume
x) Exit
Your Choice? 1
• 353. This script sets up two volumes:
test1 with a mirrored layout
test2 with a concatenated layout
Note: If you receive an error message about the /image file system becoming full during volume setup, ignore the error message. This error will not have any impact on further lab steps or lab results.
2 Read the instructions in the lab script window. The script simulates a disk power-off by saving and overwriting the private region on the drive that is used by both volumes. Then, when you are ready to power the disk back on, the script restores the private region as it was before the failure.
3 Assume that the failure was temporary. In a second terminal window, attempt to recover the volumes. Assume that the drive that was turned off and then back on was c1t2d0 for a Solaris or HP-UX system or sdb for a Linux system (the actual device name will vary by system).
Note: When performing recovery procedures, run vxprint and vxdisk list often to see what is changing after issuing recovery commands:
vxprint -g testdg -htr
vxdisk -o alldgs list
To recover from the temporary failure:
a Ensure that the operating system recognizes the device:
Solaris
devfsadm
Note: Because you have not changed the SCSI location of the drive, running devfsadm may not be necessary. However, running this command verifies the existence and validity of the disk label. Prior to Solaris 7, you can use drvconfig and disks.
HP-UX
ioscan -C disk
insf -e
Linux
partprobe /dev/sdb
b Verify that the operating system recognizes the device:
Solaris
prtvtoc /dev/rdsk/c1t2d0s2
HP-UX
ioscan -fnC disk (Verify that the disk is in CLAIMED state.)
Linux
fdisk -l /dev/sdb
• 354. c Force the VxVM configuration daemon to reread all of the drives in the system:
vxdctl enable
d Reattach the device to the disk media record:
vxreattach
e Recover the volumes:
vxrecover
f Start the nonredundant volume:
vxvol -g testdg -f start test2
4 After you recover the volumes, type e in the lab script window. The script verifies whether your solution is correct.
Recovering from Permanent Disk Failure
In this lab exercise, a permanent disk failure is simulated. Your goal is to replace the failed drive and recover the volumes as needed. The lab script run_disks sets up the test volume configuration, simulates a disk failure, and validates your solution for recovering the volumes. Ask your instructor for the location of the run_disks script.
Before You Begin: Check to ensure that the environment variable DG is set to the name of the testdg disk group:
echo $DG
If DG is not set, set it before you continue:
DG="testdg"
export DG
1 From the directory that contains the lab scripts, run the script run_disks, and select option 2, "Power failed drive (permanent failure)":
./run_disks
1) Lab 1 - Turned off drive (temporary failure)
2) Lab 2 - Power failed drive (permanent failure)
3) Lab 3 - Intermittent Failures (system still ok)
4) Optional Lab 4 - Intermittent Failures (system too slow)
5) Optional Lab 5 - Turned off drive with layered volume
• 355. 6) Optional Lab 6 - Power failed drive with layered volume
x) Exit
Your Choice? 2
This script sets up two volumes:
test1 with a mirrored layout
test2 with a concatenated layout
Note: If you receive an error message about the /image file system becoming full during volume setup, ignore the error message. This error will not have any impact on further lab steps or lab results.
2 Read the instructions in the lab script window. The script simulates a disk power-off by saving and overwriting the private region on the drive that is used by both volumes. The disk is detached by VxVM.
3 In a second terminal window, replace the permanently failed drive either with a new disk at the same SCSI location or with another disk at another SCSI location. Then, recover the volumes. Assume that the failed disk is testdg02 (c1t2d0 for a Solaris or HP-UX system or sdb for a Linux system) and the new disk used to replace it is c1t3d0 for a Solaris or HP-UX system or sdd for a Linux system (the actual device name will vary by system), which is originally uninitialized.
Note: When performing recovery procedures, run vxprint and vxdisk list often to see what is changing after issuing recovery commands:
vxprint -g testdg -htr
vxdisk -o alldgs list
To recover from the permanent failure:
a Initialize the new drive:
Solaris, HP-UX
vxdisksetup -i c1t3d0
Linux
vxdisksetup -i sdd
b Attach the disk media name (testdg02) to the new drive:
Solaris, HP-UX
vxdg -g testdg -k adddisk testdg02=c1t3d0
Linux
vxdg -g testdg -k adddisk testdg02=sdd
c Recover the volumes:
• 356. vxrecover
d Start the nonredundant volume:
vxvol -g testdg -f start test2
Alternatively, you can use the vxdiskadm menu interface:
a Invoke vxdiskadm:
vxdiskadm
b From the vxdiskadm main menu, select the option, "Replace a failed or removed disk." When prompted, select c1t3d0 for a Solaris or HP-UX system or sdd for a Linux system to initialize and replace testdg02.
Note: If you receive an error while using vxdiskadm about a vxprint operation requiring a disk group, ignore the error.
c Start the nonredundant volume:
vxvol -g testdg -f start test2
4 After you recover the volumes, type e in the lab script window. The script verifies whether your solution is correct.
5 When you have completed this exercise, if the disk device that was originally used during disk failure simulation is in online invalid state, reinitialize the disk to prepare for later labs. For example:
vxdisksetup -i device_tag
Recovering from Intermittent Disk Failure (1)
In this lab exercise, intermittent disk failures are simulated, but the system is still OK. Your goal is to move data from the failing drive and remove the failing disk. The lab script run_disks sets up the test volume configuration and validates your solution for resolving the problem. Ask your instructor for the location of the run_disks script.
Before You Begin: Check to ensure that the environment variable DG is set to the name of the testdg disk group:
echo $DG
If it is not set, set it before you continue:
DG="testdg"
export DG
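Since every run_disks exercise depends on DG, a one-line guard at the top of your session can catch a missing setting before the script does. A sketch using the standard shell parameter check:
# Abort with a message if DG is unset or empty
: ${DG:?"set DG to your test disk group before running run_disks"}
echo "Using disk group: $DG"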
• 357. 1 From the directory that contains the lab scripts, run the script run_disks, and select option 3, "Intermittent Failures (system still ok)":
./run_disks
1) Lab 1 - Turned off drive (temporary failure)
2) Lab 2 - Power failed drive (permanent failure)
3) Lab 3 - Intermittent Failures (system still ok)
4) Optional Lab 4 - Intermittent Failures (system too slow)
5) Optional Lab 5 - Turned off drive with layered volume
6) Optional Lab 6 - Power failed drive with layered volume
x) Exit
Your Choice? 3
This script sets up two volumes:
test1 with a mirrored layout
test2 with a concatenated layout
Note: If you receive an error message about the /image file system becoming full during volume setup, ignore the error message. This error will not have any impact on further lab steps or lab results.
2 Read the instructions in the lab script window. You are informed that the disk drive used by both volumes is experiencing intermittent failures that must be addressed.
3 In a second terminal window, move the data on the failing disk to another disk, and remove the failing disk. Assume that testdg02 (c1t2d0 for a Solaris or HP-UX system or sdb for a Linux system, and with plex test1-01 from the mirrored volume test1) is the drive experiencing intermittent problems (the actual device name will vary by system).
Note: When performing recovery procedures, run vxprint and vxdisk list often to see what is changing after issuing recovery commands:
vxprint -g testdg -htr
vxdisk -o alldgs list
To recover:
a Set the read policy to read from a preferred plex that is not on the failing drive before evacuating the disk. This technique prevents VxVM from accessing the failing drive during a read, if possible:
vxvol -g testdg rdpol prefer test1 test1-02
• 358. b Evacuate data from the failing drive to one or more other drives by using the vxdiskadm menu interface. Invoke vxdiskadm:
vxdiskadm
c From the vxdiskadm main menu, select the option, "Move volumes from a disk." Evacuate the volumes on testdg02 to another disk in the disk group, such as testdg03.
d Remove the failing disk by using the vxdiskadm menu interface. From the vxdiskadm main menu, select the option, "Remove a disk." Remove the disk testdg02.
e Set the volume read policy back to the original read policy:
vxvol -g testdg rdpol select test1
Note: In this exercise, you still succeed even if you do not change the read policy or you do not remove the failing disk after evacuation.
Warning: If the lab is repeated and a disk that has been used as a replacement disk in a previous lab is now used as a new disk to replace the failing disk without moving the volumes, the test results may succeed although they should fail. If this happens, remove the volume called image in the testdg disk group and re-run the lab.
4 After you resolve the problem, type e in the lab script window. The script verifies whether your solution is correct.
5 When you have completed this exercise, add the disk you removed from the disk group back to the testdg disk group so that you can use it in later labs. For example:
vxdg -g testdg adddisk testdg02=device_tag
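As an aside, the evacuation that vxdiskadm performs in step c can also be done directly from the command line with the vxevac utility. A sketch equivalent to moving everything off testdg02 onto testdg03 (disk names from this lab):
# Evacuate all subdisks from testdg02; naming testdg03 restricts the
# destination (omit it to let VxVM choose any disk with free space)
vxevac -g testdg testdg02 testdg03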
• 359. Optional Lab Exercises
The next set of lab exercises is optional and may be performed if you have time. These exercises provide additional recovery scenarios, as well as practice in replacing physical drives and working with spare disks. A final activity explores how to use the Support website, which is an excellent troubleshooting resource.
Optional Lab: Recovering from Intermittent Disk Failure (2)
In this optional lab exercise, intermittent disk failures are simulated, and the system has slowed down significantly, so that it is not possible to evacuate data from the failing disk. The lab script run_disks sets up the test volume configuration and validates your solution for resolving the problem. Ask your instructor for the location of the run_disks script.
Before You Begin: Check to ensure that the environment variable DG is set to the name of the testdg disk group:
echo $DG
If DG is not set, set it before you continue:
DG="testdg"
export DG
1 From the directory that contains the lab scripts, run the script run_disks, and select option 4, "Intermittent Failures (system too slow)":
./run_disks
1) Lab 1 - Turned off drive (temporary failure)
2) Lab 2 - Power failed drive (permanent failure)
3) Lab 3 - Intermittent Failures (system still ok)
4) Optional Lab 4 - Intermittent Failures (system too slow)
5) Optional Lab 5 - Turned off drive with layered volume
6) Optional Lab 6 - Power failed drive with layered volume
x) Exit
Your Choice? 4
This script sets up two volumes:
test1 with a mirrored layout
test2 with a concatenated layout
Note: If you receive an error message about the /image file system becoming full during volume setup, ignore the error message. This error will not have any impact on further lab steps or lab results.
2 Read the instructions in the lab script window. You are informed that:
• 360. The disk drive used by both volumes is experiencing intermittent failures that need to be addressed immediately.
The system has slowed down significantly, so it is not possible to evacuate the disk before removing it.
3 In a second terminal window, perform the necessary actions to resolve the problem. Assume that testdg02 (c1t2d0 for a Solaris or HP-UX system or sdb for a Linux system, and with plex test1-01 from the mirrored volume test1) is the drive experiencing intermittent problems (the actual device name will vary by system).
Note: When performing recovery procedures, run vxprint and vxdisk list often to see what is changing after issuing recovery commands:
vxprint -g testdg -htr
vxdisk -o alldgs list
To recover:
a Remove the failing disk for replacement by using the vxdiskadm menu interface. Invoke vxdiskadm:
vxdiskadm
b From the vxdiskadm main menu, select the option, "Remove a disk for replacement." Remove the disk testdg02. Do not use a replacement disk yet.
Note: If you receive an error while using vxdiskadm about a vxprint operation requiring a disk group, ignore the error.
c To ensure that you have an uninitialized new disk to use as the replacement disk, you may need to copy zeros to the beginning of the failing disk and then uninitialize it before using it as the replacement disk. To carry out this task, you can use the /script_location/bin/cleandisk device_tag command, where script_location is the home directory from which you are running the automated lab scripts. For example:
Solaris, HP-UX
/script_location/bin/cleandisk c1t2d0
Linux
/script_location/bin/cleandisk sdb
d Replace the failed disk with a new disk by using the vxdiskadm menu interface. From the vxdiskadm main menu, select the option, "Replace a failed or removed disk." Select an uninitialized disk to replace testdg02.
Note: If you receive an error while using vxdiskadm about a
• 361. vxprint operation requiring a disk group, ignore the error.
e Start the nonredundant volume:
vxvol -g testdg -f start test2
4 After you resolve the problem, type e in the lab script window. The script verifies whether your solution is correct.
Optional Lab: Recovering from Temporary Disk Failure - Layered Volume
In this optional lab exercise, a temporary disk failure is simulated. Your goal is to recover all of the volumes that were on the failed drive. The lab script run_disks sets up the test volume configuration and validates your solution for resolving the problem. Ask your instructor for the location of the run_disks script.
Before You Begin: Check to ensure that the environment variable DG is set to the name of the testdg disk group:
echo $DG
If DG is not set, set it before you continue:
DG="testdg"
export DG
1 From the directory that contains the lab scripts, run the script run_disks, and select option 5, "Turned off drive with layered volume":
./run_disks
1) Lab 1 - Turned off drive (temporary failure)
2) Lab 2 - Power failed drive (permanent failure)
3) Lab 3 - Intermittent Failures (system still ok)
4) Optional Lab 4 - Intermittent Failures (system too slow)
5) Optional Lab 5 - Turned off drive with layered volume
6) Optional Lab 6 - Power failed drive with layered volume
x) Exit
Your Choice? 5
This script sets up two volumes:
test1 with a concat-mirror layout
test2 with a concatenated layout
Note: If you receive an error message about the /image file system becoming full during volume setup, ignore the error message. This error will not have any impact on further lab steps or lab results.
• 362. 2 Read the instructions in the lab script window. The script simulates a disk power-off by saving and overwriting the private region on the drive that is used by both volumes. Then, when you are ready to power the disk back on, the script restores the private region as it was before the failure.
3 Assume that the failure was temporary. In a second terminal window, attempt to recover the volumes. Assume that the drive that was turned off and then back on was c1t2d0 for a Solaris or HP-UX system or sdb for a Linux system (the actual device name will vary by system).
Note: When performing recovery procedures, run vxprint and vxdisk list often to see what is changing after issuing recovery commands:
vxprint -g testdg -htr
vxdisk -o alldgs list
To recover from the temporary failure:
a Ensure that the operating system recognizes the device:
Solaris
devfsadm
Note: Because you have not changed the SCSI location of the drive, running devfsadm may not be necessary. However, running this command verifies the existence and validity of the disk label. Prior to Solaris 7, you can use drvconfig and disks.
HP-UX
ioscan -C disk
insf -e
Linux
partprobe /dev/sdb
b Verify that the operating system recognizes the device:
Solaris
prtvtoc /dev/rdsk/c1t2d0s2
HP-UX
ioscan -fnC disk (Verify that the disk is in CLAIMED state.)
Linux
fdisk -l /dev/sdb
c Force the VxVM configuration daemon to reread all of the drives in the system:
vxdctl enable
d Reattach the device to the disk media record:
vxreattach
• 363. e Recover the volumes:
vxrecover
f Start the nonredundant volume:
vxvol -g testdg -f start test2
4 After you recover the volumes, type e in the lab script window. The script verifies whether your solution is correct.
Optional Lab: Recovering from Permanent Disk Failure - Layered Volume
In this optional lab exercise, a permanent disk failure is simulated. Your goal is to replace the failed drive and recover the volumes as needed. The lab script run_disks sets up the test volume configuration and validates your solution for resolving the problem. Ask your instructor for the location of the run_disks script.
Before You Begin: Check to ensure that the environment variable DG is set to the name of the testdg disk group:
echo $DG
If DG is not set, set it before you continue:
DG="testdg"
export DG
1 From the directory that contains the lab scripts, run the script run_disks, and select option 6, "Power failed drive with layered volume":
./run_disks
1) Lab 1 - Turned off drive (temporary failure)
2) Lab 2 - Power failed drive (permanent failure)
3) Lab 3 - Intermittent Failures (system still ok)
4) Optional Lab 4 - Intermittent Failures (system too slow)
5) Optional Lab 5 - Turned off drive with layered volume
6) Optional Lab 6 - Power failed drive with layered volume
x) Exit
Your Choice? 6
This script sets up two volumes:
• 364. test1 with a concat-mirror layout
test2 with a concatenated layout
Note: If you receive an error message about the /image file system becoming full during volume setup, ignore the error message. This error will not have any impact on further lab steps or lab results.
2 Read the instructions in the lab script window. The script simulates a disk power-off by saving and overwriting the private region on the drive that is used by both volumes. The disk is detached by VxVM.
3 In a second terminal window, replace the permanently failed drive with either a new disk at the same SCSI location or another disk at another SCSI location. Then, recover the volumes. Assume that the failed disk is testdg02 (c1t2d0 for a Solaris or HP-UX system or sdb for a Linux system) and the new disk used to replace it is c1t3d0 for a Solaris or HP-UX system or sdd for a Linux system, which is originally uninitialized (actual device names will vary by system).
Note: When performing recovery procedures, run vxprint and vxdisk list often to see what is changing after issuing recovery commands:
vxprint -g testdg -htr
vxdisk -o alldgs list
To recover from the permanent failure:
a Initialize the new drive:
Solaris, HP-UX: vxdisksetup -i c1t3d0
Linux: vxdisksetup -i sdd
b Attach the disk media name (testdg02) to the new drive:
Solaris, HP-UX: vxdg -g testdg -k adddisk testdg02=c1t3d0
Linux: vxdg -g testdg -k adddisk testdg02=sdd
c Recover the volumes:
vxrecover
d Start the nonredundant volume:
vxvol -g testdg -f start test2
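Before moving on, you can confirm that the replacement succeeded by rechecking the configuration with the commands already introduced (device names are examples):
vxdisk -o alldgs list    # the new device should now carry the testdg02 media name
vxprint -g testdg -htr   # the plexes of test1 should return to the ENABLED/ACTIVE state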
• 365. Alternatively, you can use the vxdiskadm menu interface:
a Invoke vxdiskadm:
vxdiskadm
b From the vxdiskadm main menu, select the option, "Replace a failed or removed disk." When prompted, select c1t3d0 for a Solaris or HP-UX system or sdd for a Linux system to initialize and replace testdg02.
Note: If you receive an error while using vxdiskadm about a vxprint operation requiring a disk group, ignore the error.
c Start the nonredundant volume:
vxvol -g testdg -f start test2
4 After you recover the volumes, type e in the lab script window. The script verifies whether your solution is correct.
5 When you have completed this exercise, if the disk device that was originally used during disk failure simulation is in the online invalid state, reinitialize the disk to prepare for later labs. For example:
vxdisksetup -i device_tag
The rest of this lab exercise includes optional lab instructions where you perform a variety of basic recovery operations.
Optional Lab: Removing a Disk from VxVM Control
1 Destroy the testdg disk group and add the three disks back to the namedg disk group. At this point you should have one disk group called namedg with four empty disks in it. There should be no volumes in the namedg disk group. If you had destroyed the namedg disk group in previous lab sections, re-create it.
vxdg destroy testdg
vxdg init namedg namedg01=device_tag1 (if the disk group does not exist)
vxdg -g namedg adddisk namedg02=device_tag2 namedg03=device_tag3 namedg04=device_tag4
2 In the namedg disk group, create a 100-MB mirrored volume named namevol1. Create a Veritas file system on namevol1 and mount it at the /name1 directory.
vxassist -g namedg make namevol1 100m layout=mirror
mkfs -F vxfs /dev/vx/rdsk/namedg/namevol1
Note: On Linux, use mkfs -t.
• 366. mkdir /name1 (if necessary)
mount -F vxfs /dev/vx/dsk/namedg/namevol1 /name1
Note: On Linux, use mount -t.
3 Display the properties of the volume. In the table, record the device and disk media name of the disks used in this volume.
vxprint -g namedg -thr
For example, the volume namevol1 uses namedg02 and namedg04:
Solaris, HP-UX:
Disk 1: device tag c1t2d0, disk media name namedg02
Disk 2: device tag c1t3d0, disk media name namedg04
Linux:
Disk 1: device tag sde, disk media name namedg02
Disk 2: device tag sdf, disk media name namedg04
4 Remove one of the disks that is being used by the volume for replacement.
vxdg -g namedg -k rmdisk namedg02
5 Confirm that the disk was removed.
vxdisk -o alldgs list
6 From the command line, check that the state of one of the plexes is DISABLED and REMOVED.
vxprint -g namedg -thr
7 If you are not already logged in to VEA, start VEA and connect to your local system. Check the status of the disk that has been removed.
In VEA, the disk is shown as disconnected, because the disk has been removed for replacement.
8 Replace the disk back into the namedg disk group.
vxdg -g namedg -k adddisk namedg02=device_tag
where device_tag is c#t#d# for Solaris and HP-UX, hdisk# for AIX, and sd# for Linux platforms.
9 Check the status of the disks. What is the status of the replaced disk?
vxdisk -o alldgs list
The status of the disk is ONLINE.
10 Display volume information. What is the state of the plexes of namevol1?
vxprint -g namedg -thr
The plex using the disk you removed and replaced is marked RECOVER.
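If you want only the plex records rather than the full object hierarchy, vxprint can select them directly; a short sketch using its plex-selection option:
# List only the plex records of the disk group in tabular form
vxprint -g namedg -pt
# Narrow the output to the plexes of namevol1 by name pattern
vxprint -g namedg -pt | grep namevol1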
• 367. 11 In VEA, what is the status of the replaced disk? What is the status of the volume?
The disk is reconnected; its status shows Imported as normal. Select the volume in the left pane, and click the Mirrors tab in the right pane. The plex is marked recoverable.
12 From the command line, recover the volume. During and after recovery, check the status of the plex in another command window and in VEA.
vxrecover
In VEA, the status of the plex changes to Recovering, and eventually to Attached. With vxprint, the status of the plex changes to STALE and eventually to ACTIVE.
Optional Lab: Replacing Physical Drives (Without Hot Relocation)
Note: If you have skipped the previous optional lab section called Removing a Disk from VxVM Control, you may need to destroy testdg and add the three disks back to the namedg disk group before you start this section. If you had destroyed the namedg disk group in previous lab sections, re-create it. If necessary:
vxdg destroy testdg
vxdg init namedg namedg01=device_tag1 (if the disk group does not exist)
vxdg -g namedg adddisk namedg02=device_tag2 namedg03=device_tag3 namedg04=device_tag4
1 Ensure that the namedg disk group has a mirrored volume called namevol1 with a Veritas file system mounted on /name1. If not, create a 100-MB mirrored volume called namevol1 in the namedg disk group, add a VxFS file system to the volume, and mount the file system at the mount point /name1. If necessary:
vxassist -g namedg make namevol1 100m layout=mirror
mkfs -F vxfs /dev/vx/rdsk/namedg/namevol1
Note: On Linux, use mkfs -t.
mkdir /name1 (if necessary)
mount -F vxfs /dev/vx/dsk/namedg/namevol1 /name1
• 368. Note: On Linux, use mount -t.
2 If the vxrelocd daemon is running, stop it using ps and kill, in order to stop hot relocation from taking place. Verify that the vxrelocd processes are killed before you continue.
Notes: If you have executed the run_disks script in the previous lab sections, the vxrelocd daemon may already be killed. There are two vxrelocd processes on the Solaris platform. You must kill both of them at the same time.
ps -ef | grep vxrelocd
kill -9 pid1 [pid2]
ps -ef | grep vxrelocd
3 Next, simulate disk failure by writing over the private region using the overwritepr script followed by the vxdctl disable and vxdctl enable commands. Ask your instructor for the location of the script. While using the script, substitute the appropriate disk device name for one of the disks in use by namevol1; for example, on Linux use sdb, and on Solaris and HP-UX use c1t8d0.
cd /script_location
./overwritepr device_tag
vxdctl disable
vxdctl enable
4 When the error occurs, view the status of the disks from the command line.
vxdisk -o alldgs list
The physical device is no longer associated with the disk media name and the disk group.
5 View the status of the volume from the command line.
vxprint -g namedg -thr
The plex displays a status of DISABLED NODEVICE.
6 In VEA, what is the status of the disks and volume?
Note: On the HP-UX platform, the vxdctl disable command may cause the StorageAgent used by the VEA GUI to hang. If this happens, the VEA GUI does not detect the changes. Use the following command to restart the agent:
/opt/VRTSobc/pal33/bin/vxpalctrl -a StorageAgent -c restart
The status of the disk is Disconnected, and the volume has a Recoverable status for the plex.
7 Rescan for all attached disks:
• 369. vxdctl enable
8 Recover the disk by replacing the private and public regions on the disk. In the command, substitute the appropriate disk device name; for example, on Linux, use sdb:
vxdisksetup -i device_tag
Note: This step is only necessary when you replace the failed disk with a brand new one. If it were a temporary failure, this step would not be necessary.
9 Bring the disk back under VxVM control:
vxdg -g namedg -k adddisk disk_name=device_tag
where disk_name is the disk media name of the failed disk and device_tag is the device name of the disk device used to replace the failed one.
10 Check the status of the disks and the volume.
vxdisk -o alldgs list
vxprint -g namedg -thr
11 From the command line, recover the volume.
vxrecover
12 Check the status of the disks and the volume to ensure that the disk and volume are fully recovered.
vxdisk -o alldgs list
vxprint -g namedg -thr
13 Unmount the /name1 file system and remove the namevol1 volume.
umount /name1
vxassist -g namedg remove volume namevol1
Optional Lab: Exploring Spare Disk Behavior
Note: If you have not already done so, destroy testdg and add the three disks back to the namedg disk group before you start this section. If necessary:
vxdg destroy testdg
vxdg -g namedg adddisk namedg02=device_tag2 namedg03=device_tag3 namedg04=device_tag4
1 You should have four disks (namedg01 through namedg04) in the disk group namedg. Set all disks to have the spare flag on.
vxedit -g namedg set spare=on namedg01
vxedit -g namedg set spare=on namedg02
vxedit -g namedg set spare=on namedg03
vxedit -g namedg set spare=on namedg04
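Equivalently, the four vxedit commands can be collapsed into a Bourne shell loop; a small sketch:
for d in namedg01 namedg02 namedg03 namedg04
do
    vxedit -g namedg set spare=on $d
done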
• 370. 2 Create a 100-MB mirrored volume called sparevol.
vxassist -g namedg make sparevol 100m layout=mirror
Is the volume successfully created? Why or why not?
No, the volume is not created, and you receive the error:
... Cannot allocate space for size block volume
The volume is not created because all disks are set as spares, and vxassist or VEA does not find enough free space to create the volume.
3 Attempt to create the same volume again, but this time specify two disks to use. Do not clear any spare flags on the disks.
vxassist -g namedg make sparevol 100m layout=mirror namedg03 namedg04
Notice that VxVM overrides its default and applies the two spare disks to the volume because the two disks were specified by the administrator.
4 Remove the sparevol volume.
vxassist -g namedg remove volume sparevol
5 Verify that the relocation daemon (vxrelocd) is running. If not, start it as follows:
vxrelocd root &
6 Remove the spare flags from three of the four disks.
vxedit -g namedg set spare=off namedg01
vxedit -g namedg set spare=off namedg02
vxedit -g namedg set spare=off namedg03
7 Create a 100-MB concatenated mirrored volume called spare2vol.
vxassist -g namedg make spare2vol 100m layout=mirror
8 Save the output of vxprint -g namedg -thr to a file.
vxprint -g namedg -thr > /tmp/savedvxprint
9 Display the properties of the spare2vol volume. In the table, record the device and disk media name of the disks used in this volume. You are going to simulate disk failure on one of the disks. Decide which disk you are going to fail.
• 371. For example, the volume spare2vol uses namedg01 and namedg02:
Disk 1: device tag c1t2d0, disk media name namedg01
Disk 2: device tag c1t3d0, disk media name namedg02
10 Next, simulate disk failure by writing over the private region using the overwritepr script followed by the vxdctl disable and vxdctl enable commands. Ask your instructor for the location of the script. While using the script, substitute the appropriate disk device name for one of the disks in use by spare2vol; for example, on Linux use sdb, and on Solaris and HP-UX use c1t8d0.
cd /script_location
./overwritepr device_tag
vxdctl disable
vxdctl enable
11 Run vxprint -g namedg -rth and compare the output to the vxprint output that you saved earlier. What has occurred?
Note: You may need to wait a minute or two for the hot relocation to complete.
Hot relocation has taken place. The failed disk has a status of NODEVICE. VxVM has relocated the mirror of the failed disk onto the designated spare disk.
12 In VEA, view the disks. Notice that the disk is in the disconnected state.
Note: On the HP-UX platform, the vxdctl disable command may cause the StorageAgent used by the VEA GUI to hang. If this happens, the VEA GUI does not detect the changes. Use the following command to restart the agent:
/opt/VRTSobc/pal33/bin/vxpalctrl -a StorageAgent -c restart
13 Run vxdisk -o alldgs list. What do you notice?
This disk is displayed as a failed disk.
14 Rescan for all attached disks.
vxdctl enable
15 In VEA, view the status of the disks and the volume.
Highlight the volume and click each of the tabs in the right pane. You can also select Actions->Volume View and Actions->Disk View to view status information.
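Before recovering, you can record exactly what hot relocation changed by capturing the current configuration and diffing it against the file you saved in step 8; for example:
vxprint -g namedg -thr > /tmp/aftervxprint
diff /tmp/savedvxprint /tmp/aftervxprint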
• 372. 16 Recover the disk by replacing the private and public regions on the disk. In the command, substitute the appropriate disk device name; for example, on Linux, use sdb:
vxdisksetup -i device_tag
17 Bring the disk back under VxVM control and into the disk group.
vxdg -g namedg -k adddisk namedg##=device_tag
18 In VEA, undo hot relocation for the disk.
Right-click the disk group and select Undo Hot Relocation. In the dialog box, select the disk for which you want to undo hot relocation and click OK. After the task completes, the alert on the disk group should be removed.
Alternatively, from the command line, run:
vxunreloc -g namedg namedg##
where namedg## is the disk media name of the failed and replaced disk.
19 Wait until the volume is fully recovered before continuing. Check to ensure that the disk and the volume are fully recovered.
vxdisk -o alldgs list
vxprint -g namedg -thr
Note: The vxprint command shows the subdisk with the UR tag.
20 Remove the spare2vol volume.
vxassist -g namedg remove volume spare2vol
Optional Lab: Using the Support Web Site
1 Access the latest information on VERITAS Storage Foundation.
Note: If you are working in the Virtual Academy lab environment, you may not be able to access the Veritas Technical Support web site, because the DNS configuration was changed during software installation by the prepare_ns script. To restore the original DNS configuration, change to the directory containing the lab scripts, execute the restore_ns script, and try to access the web site again. If necessary:
cd /script_location
./restore_ns
Go to the VERITAS Technical Support Web site at http://support.veritas.com. From the Select Product Family menu, select Storage Foundation. From the Select Product menu, select Storage Foundation for UNIX.
• 373. On the next window, click the "documents published in the last 30 days" link. This shows you any information that has been published for Storage Foundation in the last 30 days.
2 What is the VERITAS Support mission statement? Hint: It is in the Support Handbook (page 3).
Select the Support Handbook link at the bottom of the page. On page 3: "We will provide world-class technical expertise acting as the customer advocate to maximize their investment in VERITAS solutions."
3 How many on-site support visits are included in an Extended Support contract? How about with Business Critical Support? Hint: In the Support Handbook, see the table on page 4 and the explanation on page 5.
Extended Support: No on-site support visits are included.
Business Critical Support: Six on-site support visits are included.
4 Which AIX platform is supported for Storage Foundation 5.0?
Select the Compatibility & Reference link under the Support Resources title. Set the Show Document Types drop-down list to Manuals and Documentation. Set the Show Results For drop-down list to 5.0 (AIX). Select the VERITAS Storage Foundation (tm) 5.0 - Release Notes (AIX). See the Supported software section, which contains the following information:
- AIX 5.2 ML6 (legacy)
- AIX 5.3 TL4 with SP4
Veritas 5.0 products also operate on AIX 5.3 with SP3, but you must install an AIX interim fix. See the following TechNote for information on downloads, service pack availability, and other important issues related to this release: http://support.veritas.com/docs/282024
5 Access a recent Hardware Compatibility List for Storage Foundation. Which Brocade switches are supported by VERITAS Storage Foundation and High Availability Solutions 5.0 on Solaris?
Select the Compatibility & Reference tab. Click the appropriate HCL link.
6 Where would you locate the patch with Maintenance Pack 1 for VERITAS Storage Solutions and Cluster File Solutions 4.0 for Solaris?
Select the Software Updates & Downloads link on the left navigation bar and locate the patch.
7 Perform this step only if you are working in the Virtual Academy lab environment. If you have executed the restore_ns script to restore the name resolution configuration at the beginning of this lab section in step 1,
• 374. change to the directory containing the lab scripts and execute the prepare_ns script before you continue. If necessary:
cd /script_location
./prepare_ns
• 375. Glossary
A
access control list (ACL) A list of users or groups who have access privileges to a specified file. A file may have its own ACL or may share an ACL with other files. ACLs allow detailed access permissions for multiple users and groups.
active/active disk arrays This type of multipathed disk array enables you to access a disk in the disk array through all the paths to the disk simultaneously.
active/passive disk arrays This type of multipathed disk array allows one path to a disk to be designated as primary and used to access the disk at any time.
agent A process that manages predefined VERITAS Cluster Server (VCS) resource types. Agents bring resources online, take resources offline, and monitor resources to report any state changes to VCS. When an agent is started, it obtains configuration information from VCS and periodically monitors the resources and updates VCS with the resource status.
AIX coexistence label Data on disk that identifies the disk to the AIX volume manager (LVM) as being controlled by VxVM. The contents have no relation to VxVM ID blocks.
alert An indication that an error or failure has occurred on an object on the system. When an object fails or experiences an error, an alert icon appears.
alert icon An icon that indicates that an error or failure has occurred on an object on the system. Alert icons usually appear in the status area of the VEA main window and on the affected object's group icon.
allocation unit A basic structural component of VxFS. The VxFS Version 4 and later file system layout divides the entire file system space into fixed-size allocation units. The first allocation unit starts at block zero, and all allocation units are a fixed length of 32K blocks.
application volume A volume created by the Intelligent Storage Provisioning (ISP) feature of VERITAS Volume Manager (VxVM).
associate The process of establishing a relationship between VxVM objects; for example, a subdisk that has been created and defined as having a starting point within a plex is referred to as being associated with that plex.
associated plex A plex associated with a volume.
associated subdisk A subdisk associated with a plex.
asynchronous writes A delayed write in which the data is written to a page in the system's page cache, but is not written to disk before the write returns to the caller. This improves performance, but carries the risk of data loss if the system crashes before the data is flushed to disk.
atomic operation An operation that either succeeds completely or fails and leaves everything as it was before the operation was started. If the operation succeeds, all aspects of the operation take effect at once and the intermediate states of change are invisible. If any aspect of the operation fails, then the operation aborts without leaving partial changes.
attached A state in which a VxVM object is both associated with another object and enabled for use.
• 376. attribute Allows the properties of a LUN to be defined in an arbitrary conceptual space, such as a manufacturer or location.
B
back-rev disk group A disk group created using a version of VxVM released prior to the release of CDS. Adding CDS functionality rolls over to the latest disk group version number.
block The minimum unit of data transfer to or from a disk or array.
Block-Level Incremental Backup (BLI Backup) A VERITAS backup capability that does not store and retrieve entire files. Instead, only the data blocks that have changed since the previous backup are backed up.
boot disk A disk used for booting purposes. This disk may be under VxVM control for some operating systems.
boot disk group A disk group that contains the disks from which the system may be booted.
bootdg A reserved disk group name that is an alias for the name of the boot disk group.
browse dialog box A dialog box that is used to view and/or select existing objects on the system. Most browse dialog boxes consist of a tree and grid.
buffered I/O During a read or write operation, data usually goes through an intermediate file system buffer before being copied between the user buffer and disk. If the same data is repeatedly read or written, this file system buffer acts as a cache, which can improve performance. See direct I/O and unbuffered I/O.
button A window control that the user clicks to initiate a task or display another object (such as a window or menu).
C
capability A feature that is provided by a volume. For example, a volume may exhibit capabilities, such as performance and reliability, to various degrees. Applies to the ISP feature of VxVM.
CDS disk A disk whose contents and attributes are such that the disk can be used for CDS as part of a CDS disk group. In contrast, a non-CDS disk can neither be used for CDS nor be part of a CDS disk group.
CDS disk group A VxVM disk group whose contents and attributes are such that the disk group can be used to provide for cross-platform data sharing. In contrast, a non-CDS disk group (that is, a back-rev disk group or a current-rev disk group) cannot be used for cross-platform data sharing. A CDS disk group can only contain CDS disks.
CFS VERITAS Cluster File System.
check box A control button used to select optional settings. A check mark usually indicates that a check box is selected.
children Objects that belong to an object group.
clean node shutdown The ability of a node to leave the cluster gracefully when all access to shared volumes has ceased.
clone pool A storage pool that contains one or more full-sized instant volume snapshots of volumes within a data pool. Applies to the ISP feature of VxVM.
cluster A set of host machines (nodes) that share a set of disks.
• 377. cluster file system A VxFS file system mounted on a selected volume in cluster (shared) mode.
cluster manager An externally-provided daemon that runs on each node in a cluster. The cluster managers on each node communicate with each other and inform VxVM of changes in cluster membership.
cluster mounted file system A shared file system that enables multiple hosts to mount and perform file operations on the same file. A cluster mount requires a shared storage device that can be accessed by other cluster mounts of the same file system. Writes to the shared device can be performed concurrently from any host on which the cluster file system is mounted. To be a cluster mount, a file system must be mounted using the mount -o cluster option. See local mounted file system.
cluster-shareable disk group A disk group in which the disks are shared by multiple hosts (also referred to as a shared disk group).
column A set of one or more subdisks within a striped plex. Striping is achieved by allocating data alternately and evenly across the columns within a plex.
command log A log file that contains a history of VEA tasks performed in the current session and previous sessions. Each task is listed with the task originator, the start/finish times, the task status, and the low-level commands used to perform the task.
concatenation A layout style characterized by subdisks that are arranged sequentially and contiguously.
configuration copy A single copy of a configuration database.
configuration database A set of records containing detailed information on existing VxVM objects (such as disk and volume attributes). A single copy of a configuration database is called a configuration copy.
contiguous file A file in which data blocks are physically adjacent on the underlying media.
Cross-platform Data Sharing (CDS) Sharing data between heterogeneous systems (such as Sun and HP), where each system has direct access to the physical devices used to hold the data and understands the data on the physical device.
current-rev disk group A disk group created using a version of VxVM providing CDS functionality; however, the CDS attribute is not set. If the CDS attribute is set for the disk group, the disk group is called a CDS disk group.
CVM The cluster functionality of VERITAS VxVM.
D
data blocks Blocks that contain the actual data belonging to files and directories.
data change object (DCO) A VxVM object that is used to manage information about the FastResync maps in the DCO log volume. Both a DCO object and a DCO log volume must be associated with a volume to implement Persistent FastResync on that volume.
data pool The first storage pool that is created within a disk group. Applies to the ISP feature of VxVM.
data stripe This represents the usable data portion of a stripe and is equal to the stripe minus the parity region.
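To make the concatenation entry above concrete: a concatenated volume is what vxassist builds by default, and the layout can also be requested explicitly. A minimal sketch using hypothetical disk group and volume names:
# Subdisks are allocated sequentially and contiguously across the disks in mydg
vxassist -g mydg make concatvol 1g layout=concat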
• 378. data synchronous writes A form of synchronous I/O that writes the file data to disk before the write returns, but only marks the inode for later update. If the file size changes, the inode will be written before the write returns. In this mode, the file data is guaranteed to be on the disk before the write returns, but the inode modification times may be lost if the system crashes.
DCO log volume A special volume that is used to hold Persistent FastResync change maps.
defragmentation Reorganizing data on disk to keep file data blocks physically adjacent so as to reduce access times.
detached A state in which a VxVM object is associated with another object, but not enabled for use.
Device Discovery Layer (DDL) A facility of VxVM for discovering disk attributes needed for VxVM DMP operation.
device name The device name or address used to access a physical disk, such as c0t0d0s2 on Solaris, c0t0d0 on HP-UX, hdisk1 on AIX, and hda on Linux. In a SAN environment, it is more convenient to use enclosure-based naming, which forms the device name by concatenating the name of the enclosure (such as enc0) with the disk's number within the enclosure, separated by an underscore (for example, enc0_2). The term disk access name can also be used to refer to a device name.
dialog box A window in which the user submits information to VxVM. Dialog boxes can contain selectable buttons and fields that accept information.
direct extent An extent that is referenced directly by an inode.
direct I/O An unbuffered form of I/O that bypasses the file system's buffering of data. With direct I/O, the file system transfers data directly between the disk and the user-supplied buffer. See buffered I/O and unbuffered I/O.
dirty region logging The procedure by which VxVM monitors and logs modifications to a plex. A bitmap of changed regions is kept in an associated subdisk called a log subdisk.
disabled path A path to a disk that is not available for I/O. A path can be disabled due to real hardware failures or if the user has used the vxdmpadm disable command on that controller.
discovered direct I/O Discovered Direct I/O behavior is similar to direct I/O and has the same alignment constraints, except writes that allocate storage or extend the file size do not require writing the inode changes before returning to the application.
disk A collection of read/write data blocks that are indexed and can be accessed fairly quickly. Each disk has a universally unique identifier.
disk access name The name used to access a physical disk. The c#t#d#s# syntax identifies the controller, target address, disk, and partition. The term device name can also be used to refer to the disk access name.
disk access records Configuration records used to specify the access path to particular disks. Each disk access record contains a name, a type, and possibly some type-specific information, which is used by VxVM in deciding how to access and manipulate the disk that is defined by the disk access record.
• 379. disk array A collection of disks logically arranged into an object. Arrays tend to provide benefits, such as redundancy or improved performance.
disk array serial number This is the serial number of the disk array; it is usually printed on the disk array cabinet or can be obtained by issuing a vendor-specific SCSI command to the disks on the disk array. This number is used by the DMP subsystem to uniquely identify a disk array.
disk controller The controller (HBA) connected to the host or the disk array that is represented as the parent node of the disk by the operating system; it is called the disk controller by the multipathing subsystem of VxVM. For example, if a disk is represented by the device name:
/devices/sbus@1f,0/QLGC,isp@2,10000/sd@8,0:c
then the disk controller for the disk sd@8,0:c is:
QLGC,isp@2,10000
This controller (HBA) is connected to the host.
disk enclosure An intelligent disk array that usually has a backplane with a built-in Fibre Channel loop, and which permits hot-swapping of disks.
disk group A collection of disks that are under VxVM control and share a common configuration. A disk group configuration is a set of records containing detailed information on existing VxVM objects (such as disk and volume attributes) and their relationships. Each disk group has an administrator-assigned name and an internally defined unique ID.
disk group ID A unique identifier used to identify a disk group.
disk ID A universally unique identifier that is given to each disk and can be used to identify the disk, even if it is moved.
disk media name A logical or administrative name chosen for the disk, such as disk03. The term disk name is also used to refer to the disk media name.
disk media record A configuration record that identifies a particular disk, by disk ID, and gives that disk a logical (or administrative) name.
disk name A logical or administrative name chosen for a disk that is under the control of VxVM, such as disk03. The term disk media name is also used to refer to a disk name.
dissociate The process by which any link that exists between two VxVM objects is removed. For example, dissociating a subdisk from a plex removes the subdisk from the plex and adds the subdisk to the free space pool.
dissociated plex A plex dissociated from a volume.
dissociated subdisk A subdisk dissociated from a plex.
distributed lock manager A lock manager that runs on different systems and ensures consistent access to distributed resources.
dock To separate or attach the main window and a subwindow.
Dynamic Multipathing (DMP) The method that VxVM uses to manage two or more hardware paths directing I/O to a single drive.
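As an illustration of the Dynamic Multipathing entry above, the paths behind a disk can be inspected from the command line; a hedged sketch with an example device name (verify vxdmpadm syntax on your release):
# The output of vxdisk list includes a numpaths section listing each path and its state
vxdisk list c1t2d0
# List the controllers known to the DMP subsystem
vxdmpadm listctlr all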
• 380. E
enabled path A path to a disk that is available for I/O.
encapsulation A process that converts existing partitions on a specified disk to volumes. If any partitions contain file systems, the file system table entries are modified so that the file systems are mounted on volumes instead. Encapsulation is not applicable on some systems.
enclosure A disk array.
enclosure-based naming An alternative disk naming method, beneficial in a SAN environment, which forms the device name by concatenating the name of the enclosure (such as enc0) with the disk's number within the enclosure, separated by an underscore (for example, enc0_2).
extent A group of contiguous file system data blocks that are treated as a unit. An extent is defined by a starting block and a length.
extent attributes The extent allocation policies associated with a file.
external quotas file A quotas file (named quotas) must exist in the root directory of a file system for quota-related commands to work. See internal quotas file and quotas file.
F
fabric mode disk A disk device that is accessible on a Storage Area Network (SAN) through a Fibre Channel switch.
FastResync A fast resynchronization feature that is used to perform quick and efficient resynchronization of stale mirrors, and to increase the efficiency of the snapshot mechanism.
Fibre Channel A collective name for the fiber optic technology that is commonly used to set up a Storage Area Network (SAN).
file system A collection of files organized together into a structure. The UNIX file system is a hierarchical structure consisting of directories and files.
file system block The fundamental minimum size of allocation in a file system. This is equivalent to the ufs fragment size.
file system snapshot An exact copy of a mounted file system at a specific point in time. Used to perform online backups.
fileset A collection of files within a file system.
fixed extent size An extent attribute associated with overriding the default allocation policy of the file system.
fragmentation The ongoing process on an active file system in which the file system is spread further and further along the disk, leaving unused gaps or fragments between areas that are in use. This leads to degraded performance because the file system has fewer options when assigning a file to an extent.
free disk pool Disks that are under VxVM control, but that do not belong to a disk group.
free space An area of a disk under VxVM control that is not allocated to any subdisk or reserved for use by any other VxVM object.
free subdisk A subdisk that is not associated with any plex and has an empty putil[0] field.
• 381. G
gap A disk region that does not contain VxVM objects (subdisks).
GB Gigabyte (2^30 bytes or 1024 megabytes).
graphical view A window that displays a graphical view of objects. In VEA, the graphical views include the Object View window and the Volume Layout Details window.
grid A tabular display of objects and their properties. The grid lists VxVM objects, disks, controllers, or file systems. The grid displays objects that belong to the group icon that is currently selected in the object tree. The grid is dynamic and constantly updates its contents to reflect changes to objects.
group icon The icon that represents a specific object group.
GUI Graphical User Interface.
H
hard limit The hard limit is an absolute limit on system resources for individual users for file and data block usage on a file system. See quotas.
host A machine or system.
hostid A string that identifies a host to VxVM. The hostid for a host is stored in its volboot file, and is used in defining ownership of disks and disk groups.
hot relocation A technique of automatically restoring redundancy and access to mirrored and RAID-5 volumes when a disk fails. This is done by relocating the affected subdisks to disks designated as spares and/or free space in the same disk group.
hot swap Refers to devices that can be removed from, or inserted into, a system without first turning off the power supply to the system.
HP-UX coexistence label Data on disk that identifies the disk to the HP volume manager (LVM) as being controlled by VxVM. The contents of this label are identical to the contents of the VxVM ID block.
I
I/O clustering The grouping of multiple I/O operations to achieve better performance.
indirect address extent An extent that contains references to other extents, as opposed to file data itself. A single indirect address extent references indirect data extents. A double indirect address extent references single indirect address extents.
indirect data extent An extent that contains file data and is referenced through an indirect address extent.
initiating node The node on which the system administrator is running a utility that requests a change to VxVM objects. This node initiates a volume reconfiguration.
inode A unique identifier for each file within a file system, which also contains metadata associated with that file.
inode allocation unit A group of consecutive blocks that contain inode allocation information for a given fileset. This information is in the form of a resource summary and a free inode map.
Intelligent Storage Provisioning (ISP) ISP enables you to organize and manage your physical storage by creating application volumes. ISP creates volumes from available storage with the required capabilities that you specify.
• 382. To achieve this, ISP selects storage by referring to the templates for creating volumes.
intent The intent of an ISP application volume is a conceptualization of its purpose as defined by its characteristics and implemented by a template.
intent logging A logging scheme that records pending changes to the file system structure. These changes are recorded in a circular intent log file.
internal quotas file VxFS maintains an internal quotas file for its internal usage. The internal quotas file maintains counts of blocks and inodes used by each user. See external quotas file and quotas.
J
JBOD The common name for an unintelligent disk array, which may or may not support the hot-swapping of disks. The name is derived from "just a bunch of disks."
K
K Kilobyte (2^10 bytes or 1024 bytes).
L
large file A file larger than 2 gigabytes. VxFS supports files up to two terabytes in size.
large file system A file system more than 2 gigabytes in size. VxFS supports file systems up to 32 terabytes in size.
latency For file systems, this typically refers to the amount of time it takes a given file system operation to return to the user.
launch To start a task or open a window.
local mounted file system A file system mounted on a single host. The single host mediates all file system writes to storage from other clients. To be a local mount, a file system cannot be mounted using the mount -o cluster option. See cluster mounted file system.
log plex A plex used to store a RAID-5 log. The term log plex may also be used to refer to a dirty region logging plex.
log subdisk A subdisk that is used to store a dirty region log.
LUN Logical Unit Number. Each disk in an array has a LUN. Disk partitions may also be assigned a LUN.
M
main window The main VEA window. This window contains a tree and grid that display volumes, disks, and other objects on the system. The main window also has a menu bar and a toolbar.
master node A node that is designated by the software as the "master" node. Any node is capable of being the master node. The master node coordinates certain VxVM operations.
mastering node The node to which a disk is attached. This is also known as a disk owner.
MB Megabyte (2^20 bytes or 1024 kilobytes).
menu A list of options or tasks. A menu item is selected by pointing to the item and clicking the mouse.
menu bar A bar that contains a set of menus for the current window. The menu bar is typically placed across the top of a window.
metadata Structural data describing the attributes of files on a disk.
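The large file and large file system entries above correspond to an explicit VxFS creation option; a hedged sketch reusing the volume name from the lab exercises (on Linux, use mkfs -t):
# Allow files larger than 2 GB on the new file system
mkfs -F vxfs -o largefiles /dev/vx/rdsk/namedg/namevol1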
• 383. mirror A duplicate copy of a volume and the data therein (in the form of an ordered collection of subdisks). Each mirror is one copy of the volume with which the mirror is associated. The terms mirror and plex can be used synonymously.
mirroring A layout technique that mirrors the contents of a volume onto multiple plexes. Each plex duplicates the data stored on the volume, but the plexes themselves may have different layouts.
multipathing Where there are multiple physical access paths to a disk connected to a system, the disk is called multipathed. Any software residing on the host (for example, the DMP driver) that hides this fact from the user is said to provide multipathing functionality.
multivolume file system A single file system that has been created over multiple volumes, with each volume having its own properties.
N
node In an object tree, a node is an element attached to the tree. In a cluster environment, a node is a host machine in a cluster.
node abort A situation where a node leaves a cluster (on an emergency basis) without attempting to stop ongoing operations.
node join The process through which a node joins a cluster and gains access to shared disks.
nonpersistent FastResync A form of FastResync that cannot preserve its maps across reboots of the system because it stores its change map in memory.
O
object An entity that is defined to and recognized internally by VxVM. The VxVM objects are: volume, plex, subdisk, disk, and disk group. There are actually two types of disk objects: one type for the physical aspect of the disk and the other for the logical aspect.
object group A group of objects of the same type. Each object group has a group icon and a group name. In VxVM, object groups include disk groups, disks, volumes, controllers, free disk pool disks, uninitialized disks, and file systems.
object location table (OLT) The information needed to locate important file system structural elements. The OLT is written to a fixed location on the underlying media (or disk).
object location table replica A copy of the OLT in case of data corruption. The OLT replica is written to a fixed location on the underlying media (or disk).
object tree A dynamic hierarchical display of VxVM objects and other objects on the system. Each node in the tree represents a group of objects of the same type.
Object View Window A window that displays a graphical view of the volumes, disks, and other objects in a particular disk group. The objects displayed in this window are automatically updated when object properties change. This window can display detailed or basic information about volumes and disks.
P
page file A fixed-size block of virtual address space that can be mapped onto any of the physical addresses available on a system.
• 384. parity A calculated value that can be used to reconstruct data after a failure. While data is being written to a RAID-5 volume, parity is also calculated by performing an exclusive OR (XOR) procedure on data. The resulting parity is then written to the volume. If a portion of a RAID-5 volume fails, the data that was on that portion of the failed volume can be re-created from the remaining data and the parity.
parity stripe unit A RAID-5 volume storage region that contains parity information. The data contained in the parity stripe unit can be used to help reconstruct regions of a RAID-5 volume that are missing because of I/O or disk failures.
partition The standard division of a physical disk device, as supported directly by the operating system and disk drives.
path When a disk is connected to a host, the path to the disk consists of the Host Bus Adapter (HBA) on the host, the SCSI or fiber cable connector, and the controller on the disk or disk array. These components constitute a path to a disk. A failure on any of these results in DMP trying to shift all I/Os for that disk onto the remaining (alternate) paths.
pathgroup In the case of disks that are not multipathed by vxdmp, VxVM will see each path as a disk. In such cases, all paths to the disk can be grouped. This way only one of the paths from the group is made visible to VxVM.
persistent FastResync A form of FastResync that can preserve its maps across reboots of the system by storing its change map in a DCO log volume on disk.
persistent state logging A logging type that ensures that only active mirrors are used for recovery purposes and prevents failed mirrors from being selected for recovery. This is also known as kernel logging.
physical disk The underlying storage device, which may or may not be under VxVM control.
platform block Data placed in sector 0, which contains OS-specific data for a variety of platforms that require its presence for proper interaction with each of those platforms. The platform block allows a disk to masquerade as if it was initialized by each of the specific platforms.
plex A duplicate copy of a volume and the data therein (in the form of an ordered collection of subdisks). Each plex is one copy of the volume with which the plex is associated.
preallocation The preallocation of space for a file so that disk blocks will physically be part of a file before they are needed. Enabling an application to preallocate space for a file guarantees that a specified amount of space will be available for that file, even if the file system is otherwise out of space.
primary fileset A fileset that contains the files that are visible and accessible to users.
primary path In active/passive type disk arrays, a disk can be bound to one particular controller on the disk array or owned by a controller. The disk can then be accessed using the path through this particular controller.
private disk group A disk group in which the disks are accessed by only one specific host.
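Persistent FastResync, defined above, requires a DCO and DCO log volume to be associated with the volume. In Storage Foundation 5.0 this is typically done with vxsnap prepare; treat the exact syntax as an assumption to verify, and mydg/myvol as hypothetical names:
# Associate a DCO log volume so the FastResync change map survives reboots
vxsnap -g mydg prepare myvol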
• 385. private region A region of a physical disk used to store private, structured VxVM information. The private region contains a disk header, a table of contents, and a configuration database. The table of contents maps the contents of the disk. The disk header contains a disk ID. All data in the private region is duplicated for extra reliability.
properties window A window that displays detailed information about a selected object.
public region A region of a physical disk managed by VxVM that contains available space and is used for allocating subdisks.
Q
Quick I/O file A regular VxFS file that is accessed using the ::cdev:vxfs:: extension.
Quick I/O for Databases Quick I/O is a VERITAS File System feature that improves database performance by minimizing read/write locking and eliminating double buffering of data. This allows online transactions to be processed at speeds equivalent to that of using raw disk devices, while keeping the administrative benefits of file systems.
QuickLog VERITAS QuickLog is a high-performance mechanism for receiving and storing intent log information for VxFS file systems. QuickLog increases performance by exporting intent log information to a separate physical volume.
quotas Quota limits on system resources for individual users for file and data block usage on a file system. See hard limit and soft limit.
quotas file The quotas commands read and write the external quotas file to get or change usage limits. When quotas are turned on, the quota limits are copied from the external quotas file to the internal quotas file. See external quotas file, internal quotas file, and quotas.
R
radio buttons A set of buttons used to select optional settings. Only one radio button in the set can be selected at any given time. These buttons toggle on or off.
RAID A Redundant Array of Independent Disks (RAID) is a disk array set up with part of the combined storage capacity used for storing duplicate information about the data stored in that array. This makes it possible to regenerate the data if a disk failure occurs.
read-writeback mode A recovery mode in which each read operation recovers plex consistency for the region covered by the read. Plex consistency is recovered by reading data from blocks of one plex and writing the data to all other writable plexes.
reservation An extent attribute associated with preallocating space for a file.
root disk The disk containing the root file system. This disk may be under VxVM control.
root disk group In versions of VxVM prior to 4.0, a special private disk group had to exist on the system. The root disk group was always named rootdg. This requirement does not apply to VxVM 4.x and higher.
root file system The initial file system mounted as part of the UNIX kernel startup sequence.
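The quota entries above map onto the VxFS quota utilities; a hedged sketch, assuming an external quotas file already exists in the file system root and using a hypothetical user name:
# Turn on quotas for a mounted VxFS file system
vxquotaon /name1
# Edit the soft and hard limits for a user
vxedquota user1
# Report current usage and limits for that user
vxquota -v -u user1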
• 386. root partition The disk region on which the root file system resides.
root volume The VxVM volume that contains the root file system, if such a volume is designated by the system configuration.
rootability The ability to place the root file system and the swap device under VxVM control. The resulting volumes can then be mirrored to provide redundancy and allow recovery in the event of disk failure.
rule A statement written in the VERITAS ISP language that specifies how a volume is to be created.
S
scroll bar A sliding control that is used to display different portions of the contents of a window.
Search window The VEA search tool. The Search window provides a set of search options that can be used to search for objects on the system.
secondary path In active/passive type disk arrays, the paths to a disk other than the primary path are called secondary paths. A disk is supposed to be accessed only through the primary path until it fails, after which ownership of the disk is transferred to one of the secondary paths.
sector A unit of size, which can vary between systems. A sector is commonly 512 bytes.
sector size Sector size is an attribute of a disk drive (or SCSI LUN for an array-type device) that is set when the drive is formatted. Sectors are the smallest addressable unit of storage on the drive, and are the units in which the device performs I/O.
shared disk group A disk group in which the disks are shared by multiple hosts (also referred to as a cluster-shareable disk group).
shared VM disk A VxVM disk that belongs to a shared disk group.
shared volume A volume that belongs to a shared disk group and is open on more than one node at the same time.
shortcut menu A context-sensitive menu that only appears when you click a specific object or area.
slave node A node that is not designated as a master node.
slice The standard division of a logical disk device. The terms partition and slice are sometimes used synonymously.
snapshot A point-in-time copy of a volume (volume snapshot) or a file system (file system snapshot).
snapped file system A file system whose exact image has been used to create a snapshot file system.
snapshot file system An exact copy of a mounted file system at a specific point in time. Used to do online backups. See file system snapshot.
soft limit The soft limit is lower than a hard limit. The soft limit can be exceeded for a limited time. There are separate time limits for files and blocks. See hard limit and quotas.
spanning A layout technique that permits a volume (and its file system or database) that is too large to fit on a single disk to span across multiple physical disks.
sparse plex A plex that is not as long as the volume or that has holes (regions of the plex that do not have a backing subdisk).
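A snapshot file system, as defined above, is created at mount time with the snapof option; a hedged sketch in which snapvol and /namesnap are hypothetical (on Linux, use mount -t vxfs):
# Mount snapvol as a point-in-time image of the file system mounted on /name1
mount -F vxfs -o snapof=/name1 /dev/vx/dsk/namedg/snapvol /namesnap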
• 387. splitter A bar that separates two panes of a window (such as the object tree and the grid). A splitter can be used to adjust the sizes of the panes.
status area An area of the main window that displays an alert icon when an object fails or experiences some other error.
Storage Area Network (SAN) A networking paradigm that provides easily reconfigurable connectivity between any subset of computers, disk storage, and interconnecting hardware such as switches, hubs, and bridges.
storage checkpoint A facility that provides a consistent and stable view of a file system or database image and keeps track of modified data blocks since the last checkpoint.
storage pool A policy-based container within a disk group in VxVM, for use by ISP, that contains LUNs and volumes.
storage pool definition A grouping of template sets that defines the characteristics of a storage pool. Applies to the ISP feature of VxVM.
storage pool policy Defines how a storage pool behaves when more storage is required, and when you try to create volumes whose capabilities are not permitted by the current templates. Applies to the ISP feature of VxVM.
storage pool set A bundled definition of the capabilities of a data pool and its clone pools. Applies to the ISP feature of VxVM.
stripe A set of stripe units that occupy the same positions across a series of columns.
stripe size The sum of the stripe unit sizes that compose a single stripe across all columns being striped.
stripe unit Equally sized areas that are allocated alternately on the subdisks (within columns) of each striped plex. In an array, this is a set of logically contiguous blocks that exist on each disk before allocations are made from the next disk in the array. A stripe unit may also be referred to as a stripe element.
stripe unit size The size of each stripe unit. The default stripe unit size is 32 sectors (16K). A stripe unit size has also historically been referred to as a stripe width.
striping A layout technique that spreads data across several physical disks using stripes. The data is allocated alternately to the stripes within the subdisks of each plex.
structural fileset A special fileset that stores the structural elements of a VxFS file system in the form of structural files. These files define the structure of the file system and are visible only when using utilities such as the file system debugger.
subdisk A consecutive set of contiguous disk blocks that form a logical disk segment. Subdisks can be associated with plexes to form volumes.
super-block A block containing critical information about the file system, such as the file system type, layout, and size. The VxFS super-block is always located 8192 bytes from the beginning of the file system and is 8192 bytes long.
swap area A disk region used to hold copies of memory pages swapped out by the system pager process.
swap volume A VxVM volume that is configured for use as a swap area.
synchronous writes A form of synchronous I/O that writes the file data to disk, updates the inode times, and writes the updated inode to disk. When the write returns to the caller, both the data and the inode have been written to disk.
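The striping entries above translate directly into vxassist attributes; a minimal sketch with hypothetical names, where ncol sets the number of columns and stripeunit overrides the 32-sector default:
vxassist -g mydg make stripevol 1g layout=stripe ncol=3 stripeunit=64k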
• 388. T
task properties window: A window that displays detailed information about a task listed in the Task Request Monitor window.
Task Request Monitor: A window that displays a history of tasks performed in the current VEA session. Each task is listed with the task originator, the task status, and the start/finish times for the task.
TB: Terabyte (2^40 bytes, or 1024 gigabytes).
template: A meaningful collection of ISP rules that provide a capability for a volume. Also known as a volume template.
template set: Consists of related capabilities and templates that have been collected together for convenience to create ISP volumes.
throughput: For file systems, this typically refers to the number of I/O operations in a given unit of time.
toolbar: A set of buttons used to access VEA windows. These include another main window, a task request monitor, an alert monitor, a search window, and a customize window.
transaction: A set of configuration changes that succeed or fail as a group, rather than individually. Transactions are used internally to maintain consistent configurations.
tree: A dynamic hierarchical display of objects on the system. Each node in the tree represents a group of objects of the same type.
U
ufs: The UNIX file system type. Used as a parameter in some commands.
UFS: The UNIX file system; derived from the 4.2 Berkeley Fast File System.
unbuffered I/O: I/O that bypasses the file system cache to increase I/O performance. This is similar to direct I/O, except when a file is extended. For direct I/O, the inode is written to disk synchronously; for unbuffered I/O, the inode update is delayed. See buffered I/O and direct I/O.
uninitialized disks: Disks that are not under VxVM control.
user template: Consists of related capabilities and templates that have been collected together for convenience for creating ISP application volumes.
V
VCS: VERITAS Cluster Server.
VEA: VERITAS Enterprise Administrator graphical user interface.
VM disk: A disk that is both under VxVM control and assigned to a disk group. VM disks are sometimes referred to as Volume Manager disks or simply disks. In the graphical user interface, VM disks are represented iconically as cylinders labeled D.
VMSA: Volume Manager Storage Administrator, an earlier version of the VxVM GUI used prior to VxVM version 3.5.
volboot file: A small file that is used to store the host ID of the system on which VxVM is installed and the values of bootdg and defaultdg.
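Because the volboot file records the host ID and the bootdg/defaultdg values, those values can be inspected and changed with vxdctl. A minimal sketch follows; the disk group name is hypothetical.

    # Display the contents of the volboot file, including the host ID.
    vxdctl list

    # Record datadg as the system's default disk group in the volboot file.
    vxdctl defaultdg datadg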
• 389. volume: A virtual disk or entity that is made up of portions of one or more physical disks. A volume represents an addressable range of disk blocks used by applications such as file systems or databases. A volume is a collection of from one to 32 plexes.
volume configuration device: The volume configuration device (/dev/vx/config) is the interface through which all configuration changes to the volume device driver are performed.
volume device driver: The driver that forms the virtual disk drive between the application and the physical device driver level. The volume device driver is accessed through a virtual disk device node whose character device nodes appear in /dev/vx/rdsk, and whose block device nodes appear in /dev/vx/dsk.
volume event log: The volume event log device (/dev/vx/event) is the interface through which volume driver events are reported to the utilities.
Volume Layout Window: A window that displays a graphical view of a volume and its components. The objects displayed in this window are not automatically updated when the volume's properties change.
volume set: A volume set allows several volumes to be treated as a single object with one logical I/O interface. Applies to the ISP feature of VxVM.
volume template: A meaningful collection of ISP rules that provide a capability for a volume. Also known as a template.
Volume to Disk Mapping Window: A window that displays a tabular view of volumes and their underlying disks. This window can also display details such as the subdisks and gaps on each disk.
vxconfigd: The VxVM configuration daemon, which is responsible for making changes to the VxVM configuration. This daemon must be running before VxVM operations can be performed.
vxfs: The VERITAS File System type. Used as a parameter in some commands.
VxFS: VERITAS File System.
VxVM: VERITAS Volume Manager.
VxVM ID block: Data on disk that indicates the disk is under VxVM control. The VxVM ID block provides dynamic VxVM private region location, GUID, and other information.
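As a short illustration of the device nodes named in the volume device driver entry: utilities that expect a raw device use the character nodes under /dev/vx/rdsk, while those that expect a block device use the nodes under /dev/vx/dsk. The names below are hypothetical, and the -F vxfs form is the Solaris/HP-UX syntax.

    # Build a VxFS file system on the raw (character) device node.
    mkfs -F vxfs /dev/vx/rdsk/datadg/datavol

    # Mount the file system through the block device node.
    mount -F vxfs /dev/vx/dsk/datadg/datavol /data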
• 391. Index

Files and Directories
/dev/vx/dsk 3-7
/dev/vx/rdsk 3-7
/etc/default/fs 6-6
/etc/default/vxassist 4-10
/etc/filesystems 3-17, 3-19, 3-20
/etc/fs/vxfs 6-5
/etc/fstab 3-17, 3-20
/etc/rc.d/rc2.d/S95vxvm-recover 7-23
/etc/rc2.d/S50isisd 2-22
/etc/system 2-7
/etc/vfs 6-6
/etc/vfstab 3-17, 3-20
/etc/vx/elm 2-6
/opt/VRTS/bin 6-5
/opt/VRTS/install/logs 2-11
/opt/VRTS/man 2-19
/opt/VRTSob/bin 2-21
/opt/VRTSvxfs/sbin 6-5
/sbin 6-5
/sbin/fs 6-5
/usr/lib/fs/vxfs 6-5
/var/vx/isis/vxisis.log 2-23

A
address-length pair 6-4
AIX disk 1-4
AIX physical volume 1-4
array 1-9

B
backing up the VxVM configuration 7-11
Bad Block Relocation Area 1-4
BBRA 1-4
block clustering 6-3
block device file 3-18
block-based allocation 6-3
bootdg 3-7, 3-33

C
CDS 1-11
CDS disk 1-12
CDS disk groups
  converting disk groups 5-25
CDS disk layout 1-11
cfgmgr 7-14
chfs 3-20
CLI 2-16
CLI commands in VEA 2-18
cluster 2-8
Cluster File System 2-9
cluster group 3-10
cluster management 3-3
column 4-4
command line interface 2-16, 2-19
concatenated 3-14, 4-9
Concatenated Mirrored 3-15, 4-9, 4-23
concatenated volume 1-16, 4-3
  creating 4-10
concatenation 1-16
  advantages 4-7
  disadvantages 4-7
configuration backup and restoration 7-11
configuration database 7-6
controller 1-7
creating a layered volume 4-18
creating a volume 3-12
  CLI 3-12
crfs 3-19
cron 6-12
cross-platform data sharing 1-11
  converting disk groups 5-25
  requirements for CDS disk groups 5-24

D
data change object 3-22
data redundancy 1-16
databases on file systems 2-9
defaultdg 2-12, 3-7
• 392. defragmentation
  scheduling 6-12
defragmenting a file system 6-11
deporting a disk group
  and renaming 5-17
  to new host 5-17
  VEA 5-18
destroying a disk group 3-33
  CLI 3-33
  VEA 3-33
devfsadm 7-14
device path 3-25
devicetag 3-25
directory fragmentation 6-9
dirty region logging 5-7
disaster recovery Intro-10
disk
  adding new 7-14
  adding to a disk group in VEA 3-11
  AIX 1-4
  configuring for VxVM 3-4
  displaying summary information 3-25
  failing 7-4
  forced removal 7-21
  HP-UX 1-4
  Linux 1-5
  naming 1-7
  recognizing by operating system 7-14
  removing in VEA 3-32
  replacing failed in vxdiskadm 7-15
  replacing in CLI 7-15
  replacing in VEA 7-15
  uninitializing 3-32
  unrelocating 7-26
  viewing in CLI 3-24
  viewing information about 3-26
disk access name 3-5
disk access record 1-14, 3-5
disk array 1-9
  multipathed 1-9
disk enclosure 2-12
disk failure 7-4
  partial 7-22
  permanent 7-7
  resolving intermittent failure 7-20
  temporary 7-7
disk failure handling 7-4
disk group
  adding a disk in VEA 3-11
  clearing host locks 5-19
  creating 3-8
  creating in VEA 3-10
  creating in vxdiskadm 3-9
  definition 1-13
  deporting 5-17
  destroying 3-33
  destroying in CLI 3-33
  destroying in VEA 3-33
  displaying deported 3-28
  displaying free space in 3-28
  displaying properties for 3-28
  forcing an import 5-19
  high availability 3-6
  importing 5-19
  importing and renaming 5-19
  importing as temporary in CLI 5-20
  purpose 1-13, 3-6
  renaming in VEA 5-22
  reserved names 3-7
  shared 3-10
disk group configuration 1-13
disk group ID 3-25
disk group properties
  viewing 3-29
Disk Group Properties window 3-29
disk group versions 5-23
disk ID 3-25
disk initialization 3-4
disk layout 1-11
  changing 3-4
disk media name 1-13, 3-5, 3-8, 7-6
  default 1-13
disk media record 7-6
disk name 3-25
disk naming 3-8
  AIX 1-8
  HP-UX 1-7
  Linux 1-8
  Solaris 1-7
disk properties 3-27
disk replacement 7-13
disk spanning 1-15
disk status
  Deported 3-26
  Disconnected 3-26
  External 3-26
  Free 3-26
  Imported 3-26
  Not Initialized 3-26
  online 3-24
  online invalid 3-24
• 393. disk structure 1-3
Disk View window 4-17
disks
  adding to a disk group 3-8
  displaying detailed information 3-25
  evacuating data 3-31
  renaming 5-21
  uninitialized 3-4
dynamic LUN resizing 5-15
dynamic multipathing 2-12, 3-3

E
ENABLED state 7-19
encapsulation 3-4
enclosure 2-12
enclosure-based naming 2-12
  benefits 3-3
error disk status 7-6
evacuating a disk 3-31
exclusive OR 4-6
EXT2 6-5
EXT3 6-5
Extended File System 6-5
extended partition 1-5
extent 6-4
extent fragmentation 6-9
extent-based allocation 6-3, 6-4
extents
  defragmenting 6-11

F
FAILED disks 7-4
FAILING disks 7-4
FastResync 3-15
fcl 6-20
fdisk 1-5
Fibre Channel 2-12
file change log 6-20
  compared to intent log 6-20
file system
  adding to a volume 3-16, 3-18
  adding to a volume in CLI 3-18
  consistency checking 6-16
  defragmenting 6-11
  file change log 6-20
  fragmentation 6-9
  fragmentation reports 6-10
  fragmentation types 6-9
  intent log 6-15
  intent log resizing 6-17
  logging and performance 6-19
  logging options 6-18
  mounting at boot 3-20
  resizing 5-14
  resizing in VEA 5-12
  resizing methods 5-11
file system free space
  identifying 6-8
file system type 3-16, 6-8
FlashSnap 2-8
forced removal of a disk 7-21
fragmentation 6-9
  directory 6-9
  extent 6-9
free space pool 3-5
fsadm 5-14, 6-9, 6-10
fsck 6-15, 6-16
fsck pass 3-17

G
group name 3-25

H
HFS 6-5
Hierarchical File System 6-5
high availability 2-8, 5-16
host locks
  clearing 5-19
hostid 3-25
hot relocation
  definition 7-22
  failure detection 7-23
  notification 7-23
  process 7-23
  recovery 7-23
  selecting space 7-24
  unrelocating a disk 7-26
• 394. HP-UX disk 1-4

I
I/O failure
  identifying 7-4
importing a disk group
  and renaming 5-19
  forcing 5-19
  VEA 5-20
initialize zero 3-15
inode 6-4
insf 7-14
installation log file 2-11
installation menu 2-10
installer 2-11
installfs 2-11
installing VxVM 2-10
  package space requirements 2-7
  verifying on AIX 2-15
  verifying on HP-UX 2-14
  verifying on Linux 2-15
  verifying on Solaris 2-14
  verifying package installation 2-14
installp 2-11
installsf 2-11
installvm 2-11
Intelligent Storage Provisioning 3-10, 3-22
intent log
  resizing 6-17
intent logging 6-15
interfaces 2-16
  command line interface 2-16
  VERITAS Enterprise Administrator 2-16
  vxdiskadm 2-16
intermittent disk failure
  resolving 7-20
ioscan 7-14
iosize 6-13

J
JFS 1-5, 6-5
JFS2 6-5
Journaled File System 6-5
journaling 6-15

K
kernel issues and VxFS 2-7

L
layered volume 1-16, 4-18
  advantages 4-19
  creating 4-18
  creating in CLI 4-23
  creating in VEA 4-23
  disadvantages 4-19
  preventing creation 3-15
  viewing in CLI 4-24
  viewing in VEA 4-24
layered volume layouts 4-22
licensing 2-5
  generating a license key 2-6
  obtaining a license key 2-5
  vLicense 2-6
Linux disk 1-5
listing installed packages 2-14
load balancing 4-7
location code 1-8
logging 3-15, 5-7
  and VxFS performance 6-19
  for mirrored volumes 5-7
logging options for a file system 6-18
logical unit number 1-7
Logical Volume Manager 1-4
logtype 4-12
lsdev 7-14
lsfs 3-20
lslpp 2-15
LUN 1-7
  and resizing VxVM structures 5-15
LVM 1-4

M
man 2-19
• 395. manual pages 2-19
maxfilesize 6-13
metadata 6-3
mirror
  adding 5-5
  adding in CLI 5-6
  adding in VEA 5-6
  removing 5-5
mirror-concat 4-22
mirrored volume 1-16, 4-5
  creating 4-12
mirroring 1-16
  advantages 4-8
  disadvantages 4-8
  enhanced 4-18
mirroring a volume 3-15, 4-9
mirrors
  adding 4-12
mirror-stripe layout 4-20
mkdir 3-18
mkfs 3-18
mkfs options 6-7
mmap 6-14
mount 3-18, 6-18
mount at boot 3-17
  CLI 3-20
mount options
  delaylog 6-18
  log 6-18
  tmplog 6-18
mount point 3-17
moving a disk
  vxdiskadm 5-15
multipathed disk array 1-9

N
naming disks
  defaults 3-8
ncol 4-11
New Volume wizard 3-13
newfs 3-18
nlog 4-12
nmirror 4-12
node 2-8
NODEVICE state 7-21
nodg 3-7
nostripe 4-10

O
Object Data Manager 1-8
off-host processing 2-8
online disk status 7-6
online invalid status 3-24
online status 3-24
operating system versions 2-3
ordered allocation 4-27
  order of columns 4-28
  order of mirrors 4-28
organization principle 3-10

P
packages
  listing 2-14
  space requirements 2-7
parity 1-16, 4-6
partial disk failure 7-22
partition 1-7
PATH 6-5
permanent disk failure 7-7
physical disk
  naming 1-7
Physical Volume Reserved Area 1-4
pkgadd 2-11
pkginfo 2-14
plex 1-14, 4-5
  definition 1-14
  naming 1-14
plex name
  default 1-14
Preferences window 2-17
primary partition 1-5
private region 1-11, 3-4, 7-4
private region size 1-11
  AIX 1-11
  HP-UX 1-11
  Linux 1-11
  Solaris 1-11
projection 4-17
prtvtoc 7-14
• 396. public region 1-11, 1-13, 7-4
PVRA 1-4

Q
Quick I/O 2-9

R
RAID 1-15
RAID array
  benefits with VxVM Intro-10
RAID levels 1-15
RAID-5 column 4-6
RAID-5 volume 1-16, 4-6
random read 6-14
random write 6-14
raw device file 3-18
read policy 5-8
  changing in CLI 5-9
  changing in VEA 5-9
  preferred plex 5-8
  round robin 5-8
  selected plex 5-8
recovering a volume
  VEA 7-16
recovering volumes
  and volume states 7-19
redundancy 1-16
relocating subdisks 7-24
REMOVED state 7-21
removing a disk
  forced 7-21
  VEA 3-32
removing a volume 3-30
renaming a disk 5-21
renaming a disk group 5-22
replacing a disk 7-13
  CLI 7-15
  VEA 7-15
replacing a failed disk
  vxdiskadm 7-15
replicated volume group 3-22
Rescan option 7-14
resilience 1-16
resilient volume 1-16
resizing a dynamic LUN 5-15
resizing a file system 5-14
resizing a volume 5-10
  VEA 5-12
  with vxassist 5-14
  with vxresize 5-13
resizing a volume and file system 5-11
resizing a volume with a file system 5-10
response file 2-11
rlink 3-22
rpm 2-11, 2-15

S
S95vxvm-recover 7-23
SAN 2-12
SAN management 3-3
selected plex read policy 5-8
sequential read 6-14
sequential write 6-14
size of a volume 3-14
slice 1-7
sliced disk 1-12
snap object 3-22
software packages 2-7
space requirements 2-7
spare disks
  managing 7-25
STALE state 7-19
storage
  allocating for volumes 4-25
storage area network 2-12
storage attributes
  specifying for volumes 4-25
  specifying in VEA 4-26
storage cache 3-22
stripe unit 4-4, 4-6
  default size 3-14, 4-9
striped 3-14, 4-9
Striped Mirrored 3-15, 4-9, 4-23
striped volume 1-16, 4-4
  creating 4-11
stripe-mirror 4-22
stripe-mirror layout 4-21
stripeunit 4-11
• 397. striping 1-16
  advantages 4-7
  disadvantages 4-8
subdisk 1-14
  definition 1-14
subdisk name
  default 1-14
subvolume 4-18
summary file 2-11
support for VxVM 2-4
swinstall 2-11
swlist 2-14

T
target 1-7
Task History window 2-18
tasks
  clearing history 2-18
technical support for VxVM 2-4
temporary disk failure 7-7
true mirror 4-5
true mirroring 1-16
type 3-25

U
UFS 6-5
  allocation 6-3
uninitialized disks 3-4
UNIX File System 6-5
unrelocating a disk 7-26
upgrading a disk group version 5-23
user interfaces 2-16

V
VEA 2-16
  adding a disk to a disk group 3-11
  adding a mirror 5-6
  changing volume read policy 5-9
  clearing task history 2-18
  creating a disk group 3-10
  creating a layered volume 4-23
  creating a volume 3-13
  deporting a disk group 5-18
  destroying a disk group 3-33
  disk properties 3-27
  Disk View window 4-17
  displaying volumes 3-23
  importing a disk group 5-20
  installing 2-21
  installing the server and client 2-21
  monitoring events and tasks 2-23
  multiple views of objects 2-17
  recovering a volume 7-16
  remote administration 2-17
  removing a disk 3-32
  replacing a disk 7-15
  resizing a volume 5-12
  scanning disks 7-14
  security 2-17
  setting preferences 2-17
  starting 2-22, 2-23
  Task History window 2-18
  viewing a layered volume 4-24
  viewing CLI commands 2-18
  viewing disk group properties 3-29
  viewing disk information 3-26
  Volume Layout window 4-14
  Volume to Disk Mapping window 4-15
  Volume View window 4-16
VERITAS Cluster File System 2-9
VERITAS Cluster Server 2-8
VERITAS Enterprise Administrator 2-16, 2-17
VERITAS File System 6-5
VERITAS Quick I/O for Databases 2-9
VERITAS Volume Manager 2-9
VERITAS Volume Replicator Intro-10, 3-22
versioning
  and disk groups 5-23
VGDA 1-5
VGRA 1-4
virtual storage objects 1-10
vLicense 2-6
vol_subdisk_num 1-14
volboot 3-7
volume 1-10, 3-5
  accessing 1-10
  adding a file system 3-16
  adding a file system in CLI 3-18
  adding a mirror 5-5
  adding a mirror in VEA 5-6
  creating 3-12
  creating a layered volume 4-18
  creating in CLI 3-12
  creating in VEA 3-13
  creating layered in CLI 4-23
  creating layered in VEA 4-23
  creating mirrored and logged 4-12
  creating on specific disks 4-26
  definition 1-10, 1-14
  disk requirements 3-12
  estimating size 4-13
  expanding the size 5-10
  layered layouts 4-22
  logging 3-15
  mirroring 3-15, 4-9
  recovering in VEA 7-16
  reducing the size 5-10
  removing 3-30
  removing a mirror 5-5
  resizing 5-10
  resizing in VEA 5-12
  resizing methods 5-11
  resizing with vxassist 5-14
  resizing with vxresize 5-13
  specifying ordered allocation 4-27
  specifying storage attributes in VEA 4-26
  starting manually 5-20
  viewing layered in CLI 4-24
  viewing layered in VEA 4-24
• 398. volume attributes 3-13
Volume Group Descriptor Area 1-5
Volume Group Reserved Area 1-4
volume layout 1-15
  concatenated 1-16
  displaying in CLI 3-21
  layered 1-16
  mirrored 1-16
  RAID-5 1-16
  selecting 4-3
  striped 1-16
Volume Layout window 4-14
Volume Manager control 1-11
Volume Manager disk 1-13
  naming 1-13
Volume Manager Support Operations 2-16, 2-20
volume name
  default 1-14
volume read policy 5-8
  changing in CLI 5-9
  changing in VEA 5-9
volume recovery 7-13
Volume Replicator 2-8
volume size 3-14
volume states
  after attaching disk media 7-18
  after recovering volumes 7-19
  after running vxreattach 7-12
  after temporary disk failure 7-12
  after volume recovery 7-19
Volume to Disk Mapping window 4-15
Volume View window 4-16
volumes
  allocating storage for 4-25
vrtsadm 2-22
VRTSap 2-7
VRTSddlpr 2-7
VRTSfsdoc 2-7
VRTSfspro 2-7
VRTSmuob 2-7
VRTSob 2-7
VRTSobadmin 2-7
VRTSobgui 2-7
VRTStep 2-7
VRTSvmdoc 2-7
VRTSvmman 2-7
VRTSvmpro 2-7
VRTSvxfs 2-7
vxassist 3-12, 5-11, 5-14
vxassist growby 5-14
vxassist growto 5-14
vxassist shrinkby 5-14
vxassist shrinkto 5-14
vxbench
  options 6-14
vxconfigbackup 7-11
vxconfigbackupd 7-11
vxconfigrestore 7-11
vxdctl enable 3-5, 3-11, 7-6, 7-14
vxdg destroy 3-33
vxdisk list 3-9, 3-24, 3-25, 7-6, 7-14
vxdisk resize 5-15
vxdiskadm 2-16, 2-20, 3-4
  creating a disk group 3-9
  replacing a failed disk 7-15
  starting 2-20
• 399. vxdiskunsetup 3-32
VxFS 6-5
  allocation 6-3, 6-4
  and logging 6-19
  command locations 6-5
  command syntax 6-6
  defragmenting 6-11
  file change log 6-20
  file system switchout mechanisms 6-6
  file system type 6-8
  fragmentation reports 6-10
  fragmentation types 6-9
  identifying free space 6-8
  intent log 6-15
  intent log resizing 6-17
  logging options 6-18
  maintaining consistency 6-16
  resizing 5-14
  resizing in VEA 5-12
  using by default 6-6
vxinstall 2-11
vxmake 4-19
vxprint 3-21, 4-24
vxreattach 7-16
vxrecover 7-16
vxrelocd 7-23
vxresize 5-11, 5-13
vxunreloc 7-26
VxVM
  configuration backup 7-11
  user interfaces 2-16
VxVM and RAID arrays Intro-10
VxVM configuration daemon 3-5
vxvol rdpol prefer 5-9
vxvol rdpol round 5-9
vxvol rdpol select 5-9
vxvol stopall 5-18

X
XOR 1-16, 4-6
