VERITAS Storage Foundation
5.0 for UNIX: Fundamentals
100-002353-A
COURSE DEVELOPERS
Gail Adey
Bilge Gerrits
TECHNICAL
CONTRIBUTORS AND
REVIEWERS
Jade Arrington
Margy Cassidy
Roy Freeman
Joe Gallagher
Bruce Garner
Tomer Gurantz
Bill Havey
Gene Henriksen
Gerald Jackson
Raymond Karns
Bill Lehman
Bob Lucas
Durivunc Manikhung
Christian Rahanus
Dan Rogers
Kleber Saldanha
Albrecht Scriba
Michel Simoni
Ananda Sirisena
Pete Tuemmes
Copyright © 2006 Symantec Corporation. All rights reserved. Symantec,
the Symantec Logo, and VERITAS are trademarks or registered
trademarks of Symantec Corporation or its affiliates in the U.S. and other
countries. Other names may be trademarks of their respective owners.
THIS PUBLICATION IS PROVIDED "AS IS" AND ALL EXPRESS OR
IMPLIED CONDITIONS, REPRESENTATIONS AND WARRANTIES,
INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE OR NON-
INFRINGEMENT, ARE DISCLAIMED, EXCEPT TO THE EXTENT
THAT SUCH DISCLAIMERS ARE HELD TO BE LEGALLY
INVALID. SYMANTEC CORPORATION SHALL NOT BE LIABLE
FOR INCIDENTAL OR CONSEQUENTIAL DAMAGES IN
CONNECTION WITH THE FURNISHING, PERFORMANCE, OR USE
OF THIS PUBLICATION. THE INFORMATION CONTAINED
HEREIN IS SUBJECT TO CHANGE WITHOUT NOTICE.
No part of the contents of this book may be reproduced or transmitted in
any form or by any means without the written permission of the publisher.
VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
Symantec Corporation
20330 Stevens Creek Blvd.
Cupertino, CA 95014
Table of Contents
Course Introduction
What Is Storage Virtualization? ............................... Intro-2
Introducing VERITAS Storage Foundation ........................ Intro-6
VERITAS Storage Foundation Curriculum ......................... Intro-11
Lesson 1: Virtual Objects
Physical Data Storage ......................................... 1-3
Virtual Data Storage .......................................... 1-10
Volume Manager Storage Objects ................................ 1-13
Volume Manager RAID Levels .................................... 1-15
Lesson 2: Installation and Interfaces
Installation Prerequisites .................................... 2-3
Adding License Keys ........................................... 2-5
VERITAS Software Packages ..................................... 2-7
Installing Storage Foundation ................................. 2-10
Storage Foundation User Interfaces ............................ 2-16
Managing the VEA Software ..................................... 2-21
Lesson 3: Creating a Volume and File System
Preparing Disks and Disk Groups for Volume Creation ........... 3-3
Creating a Volume ............................................. 3-12
Adding a File System to a Volume .............................. 3-18
Displaying Volume Configuration Information ................... 3-21
Displaying Disk and Disk Group Information .................... 3-24
Removing Volumes, Disks, and Disk Groups ...................... 3-30
Lesson 4: Selecting Volume Layouts
Comparing Volume Layouts ...................................... 4-3
Creating Volumes with Various Layouts ......................... 4-9
Creating a Layered Volume ..................................... 4-18
Allocating Storage for Volumes ................................ 4-25
Lesson 5: Making Basic Configuration Changes
Administering Mirrored Volumes ................................ 5-3
Resizing a Volume ............................................. 5-10
Moving Data Between Systems ................................... 5-16
Renaming Disks and Disk Groups ................................ 5-21
Managing Old Disk Group Versions .............................. 5-23
Lesson 6: Administering File Systems
Comparing the Allocation Policies of VxFS and Traditional File Systems  6-3
Using VERITAS File System Commands ............................ 6-5
Controlling File System Fragmentation ......................... 6-9
Logging in VxFS ............................................... 6-15
Lesson 7: Resolving Hardware Problems
How Does VxVM Interpret Failures in Hardware? ................. 7-3
Recovering Disabled Disk Groups ............................... 7-8
Resolving Disk Failures ....................................... 7-12
Managing Hot Relocation at the Host Level ..................... 7-22
Appendix A: Lab Exercises
Lab 1: Introducing the Lab Environment ........................ A-3
Lab 2: Installation and Interfaces ............................ A-7
Lab 3: Creating a Volume and File System ...................... A-15
Lab 4: Selecting Volume Layouts ............................... A-21
Lab 5: Making Basic Configuration Changes ..................... A-29
Lab 6: Administering File Systems ............................. A-37
Lab 7: Resolving Hardware Problems ............................ A-47
Appendix B: Lab Solutions
Lab 1 Solutions: Introducing the Lab Environment .............. B-3
Lab 2 Solutions: Installation and Interfaces .................. B-7
Lab 3 Solutions: Creating a Volume and File System ............ B-21
Lab 4 Solutions: Selecting Volume Layouts ..................... B-33
Lab 5 Solutions: Making Basic Configuration Changes ........... B-47
Lab 6 Solutions: Administering File Systems ................... B-67
Lab 7 Solutions: Resolving Hardware Problems .................. B-85
Glossary
Index
Course Introduction
Storage Management Issues
[Slide graphic: Human Resource, E-mail, and Customer Order database servers, at 90%, 10%, and 50% full]
• Multiple-vendor hardware
• Explosive data growth
• Different application needs
• Management pressure to increase efficiency
• Multiple operating systems
• Rapid change
• Budgetary constraints
Problem: Customer order database cannot access unutilized storage.
Common solution: Add more storage.
What Is Storage Virtualization?
Storage Management Issues
Storage management is becoming increasingly complex due to:
Storage hardware from multiple vendors
Unprecedented data growth
Dissimilar applications with different storage resource needs
Management pressure to increase efficiency
Multiple operating systems
Rapidly changing business climates
Budgetary and cost-control constraints
To create a truly efficient environment, administrators must have the tools to
skillfully manage large, complex, and heterogeneous environments. Storage
virtualization helps businesses to simplify the complex IT storage environment
and gain control of capital and operating costs by providing consistent and
automated management of storage.
What Is Storage Virtualization?
[Slide graphic: multiple storage consumers accessing virtualized storage]
Virtualization: The logical representation of physical storage across the entire enterprise
Application requirements from storage:
• Capacity: application requirements, growth potential
• Performance: throughput, responsiveness
• Availability: failure resistance, recovery time
Physical aspects of storage:
• Capacity: disk size, number of disks/path
• Performance: disk seek time, cache hit rate
• Availability: MTBF, path redundancy
Physical Storage Resources
Defining Storage Virtualization
Storage virtualization is the process of taking multiple physical storage devices
and combining them into logical (virtual) storage devices that are presented to the
operating system, applications, and users. Storage virtualization builds a layer of
abstraction above the physical storage so that data is not restricted to specific
hardware devices, creating a flexible storage environment. Storage virtualization
simplifies management of storage and potentially reduces cost through improved
hardware utilization and consolidation.
With storage virtualization, the physical aspects of storage are masked to users.
Administrators can concentrate less on physical aspects of storage and more on
delivering access to necessary data.
Benefits of storage virtualization include:
Greater IT productivity through the automation of manual tasks and simplified
administration of heterogeneous environments
Increased application return on investment through improved throughput and
increased uptime
Lower hardware costs through the optimized use of hardware resources
Storage Virtualization: Types
[Slide graphic: three architectures. Storage-based: one array presented to multiple servers. Host-based: multiple arrays presented to a single server. Network-based: multiple arrays presented to multiple servers.]
Most companies use a combination of these three
types of storage virtualization to support their chosen
architectures and application requirements.
How Is Storage Virtualization Used in Your Environment?
The way in which you use storage virtualization, and the benefits derived from
storage virtualization, depend on the nature of your IT infrastructure and your
specific application requirements. Three main types of storage virtualization used
today are:
Storage-based
Host-based
Network-based
Most companies use a combination of these three types of storage virtualization
solutions to support their chosen architecture and application needs.
The type of storage virtualization that you use depends on factors such as the:
Heterogeneity of deployed enterprise storage arrays
Need for applications to access data contained in multiple storage devices
Importance of uptime when replacing or upgrading storage
Need for multiple hosts to access data within a single storage device
Value of the maturity of technology
Investments in a SAN architecture
Level of security required
Level of scalability needed
Storage-Based Storage Virtualization
Storage-based storage virtualization refers to disks within an individual array that
are presented virtually to multiple servers. Storage is virtualized by the array itself.
For example, RAID arrays virtualize the individual disks (that are contained within
the array) into logical LUNs, which are accessed by host operating systems using
the same method of addressing as a directly attached physical disk.
This type of storage virtualization is useful under these conditions:
You need to have data in an array accessible to servers of different operating
systems.
All of a server's data needs are met by storage contained in the physical box.
You are not concerned about disruption to data access when replacing or
upgrading the storage.
The main limitation to this type of storage virtualization is that data cannot be
shared between arrays, creating islands of storage that must be managed.
Host-Based Storage Virtualization
Host-based storage virtualization refers to disks within multiple arrays and from
multiple vendors that are presented virtually to a single host server. For example,
software-based solutions, such as VERITAS Storage Foundation, provide host-
based storage virtualization. Using VERITAS Storage Foundation to administer
host-based storage virtualization is the focus of this training.
Host-based storage virtualization is useful under these conditions:
A server needs to access data stored in multiple storage devices.
You need the flexibility to access data stored in arrays from different vendors.
Additional servers do not need to access the data assigned to a particular host.
Maturity of technology is a highly important factor to you in making IT
decisions.
Note: By combining VERITAS Storage Foundation with clustering technologies,
such as VERITAS Cluster Volume Manager, storage can be virtualized to multiple
hosts of the same operating system.
Network-Based Storage Virtualization
Network-based storage virtualization refers to disks from multiple arrays and
multiple vendors that are presented virtually to multiple servers. Network-based
storage virtualization is useful under these conditions:
You need to have data accessible across heterogeneous servers and storage
devices.
You require central administration of storage across all Network Attached
Storage (NAS) systems or Storage Area Network (SAN) devices.
You want to ensure that replacing or upgrading storage does not disrupt data
access.
You want to virtualize storage to provide block services to applications.
VERITAS Storage Foundation
VERITAS Storage Foundation provides host-based
storage virtualization for performance, availability,
and manageability benefits for enterprise computing
environments.
[Slide graphic: the Storage Foundation product stack]
Company Business Process
  High Availability: VERITAS Cluster Server/Replication
  Application Solutions: Storage Foundation for Databases
  Data Protection: VERITAS NetBackup/Backup Exec
  Volume Manager and File System: VERITAS Storage Foundation
Hardware and Operating System
Introducing VERITAS Storage Foundation
VERITAS storage management solutions address the increasing costs of managing
mission-critical data and disk resources in Direct Attached Storage (DAS) and
Storage Area Network (SAN) environments.
At the heart of these solutions is VERITAS Storage Foundation, which includes
VERITAS Volume Manager (VxVM). VERITAS File System (VxFS), and other
value-added products. Independently, these components provide key benefits.
When used together as an integrated solution, VxVM and VxFS deliver the highest
possible levels of performance, availability, and manageability for heterogeneous
storage environments.
[Slide graphic: users, applications, and databases accessing virtual storage resources (volumes) through VERITAS Volume Manager (VxVM)]
What Is VERITAS Volume Manager?
VERITAS Volume Manager, the industry leader in storage virtualization, is an
easy-to-use, online storage management solution for organizations that require
uninterrupted, consistent access to mission-critical data. VxVM enables you to
apply business policies to configure, share, and manage storage without worrying
about the physical limitations of disk storage. VxVM reduces the total cost of
ownership by enabling administrators to easily build storage configurations that
improve performance and increase data availability.
Working in conjunction with VERITAS File System, VERITAS Volume Manager
creates a foundation for other value-added technologies, such as SAN
environments, clustering and failover, automated management, backup and HSM,
and remote browser-based management.
What Is VERITAS File System?
A file system is a collection of directories organized into a structure that enables
you to locate and store files. All processed information is eventually stored in a file
system. The main purposes of a file system are to:
Provide shared access to data storage.
Provide structured access to data.
Control access to data.
Provide a common, portable application interface.
Enable the manageability of data storage.
The value of a file system depends on its integrity and performance.
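The command sequence that turns these ideas into a mounted file system (covered in detail in later lessons) can be sketched as a small shell function. The disk group and volume names (datadg, datavol) are hypothetical, the Solaris/HP-UX-style -F vxfs option is assumed (Linux uses -t vxfs), and the commands are echoed rather than executed so the sequence can be reviewed safely:

```shell
# Dry-run sketch: create a VxVM volume, build a VxFS file system on it,
# and mount it. Names are hypothetical; commands are echoed, not run.
make_vxfs_volume() {
  dg=$1 vol=$2 size=$3 mnt=$4
  echo "vxassist -g $dg make $vol $size"           # create the volume
  echo "mkfs -F vxfs /dev/vx/rdsk/$dg/$vol"        # build the file system
  echo "mkdir -p $mnt"
  echo "mount -F vxfs /dev/vx/dsk/$dg/$vol $mnt"   # mount it
}
make_vxfs_volume datadg datavol 1g /data
```

Note the raw device path (rdsk) for mkfs and the block device path (dsk) for mount.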
VERITAS Storage Foundation: Benefits
• Manageability
- Manage storage and file systems from one interface.
- Configure storage online across Solaris, HP-UX, AIX, and
Linux.
- Provide additional benefits for array environments, such as
inter-array mirroring.
• Availability
- Features are implemented to protect against data loss.
- Online operations lessen planned downtime.
• Performance
- I/O throughput can be maximized using volume layouts.
- Performance bottlenecks can be located and eliminated
using analysis tools.
• Scalability
- VxVM and VxFS run on 32-bit and 64-bit operating systems.
- Storage can be deported to larger enterprise platforms.
Benefits of VERITAS Storage Foundation
Commercial system availability now requires continuous uptime in many
implementations. Systems must be available 24 hours a day, 7 days a week, and
365 days a year. VERITAS Storage Foundation reduces the cost of ownership by
providing scalable manageability, availability, and performance enhancements for
these enterprise computing environments.
Manageability
Management of storage and the file system is performed online in real time,
eliminating the need for planned downtime.
Online volume and file system management can be performed through an
intuitive, easy-to-use graphical user interface that is integrated with the
VERITAS Volume Manager (VxVM) product.
VxVM provides consistent management across Solaris, HP-UX, AIX, Linux,
and Windows platforms.
VxFS command operations are consistent across Solaris, HP-UX, AIX, and
Linux platforms.
Storage Foundation provides additional benefits for array environments, such
as inter-array mirroring.
Availability
Through software RAID techniques, storage remains available in the event of
hardware failure.
Hot relocation guarantees the rebuilding of redundancy in the case of a disk
failure.
Recovery time is minimized with logging and background mirror
resynchronization.
Logging of file system changes enables fast file system recovery.
A snapshot of a file system provides an internally consistent, read-only image
for backup, and file system checkpoints provide read-writable snapshots.
Performance
I/O throughput can be maximized by measuring and modifying volume layouts
while storage remains online.
Performance bottlenecks can be located and eliminated using VxVM analysis
tools.
Extent-based allocation of space for files minimizes file-level access time.
Read-ahead buffering dynamically tunes itself to the volume layout.
Aggressive caching of writes greatly reduces the number of disk accesses.
Direct I/O performs file I/O directly into and out of user buffers.
Scalability
VxVM runs on 32-bit and 64-bit operating systems.
Hosts can be replaced without modifying storage.
Hosts with different operating systems can access the same storage.
Storage devices can be spanned.
VxVM is fully integrated with VxFS so that modifying the volume layout
automatically modifies the file system internals.
With VxFS, several add-on products are available for maximizing performance
in a database environment.
Storage Foundation and RAID Arrays: Benefits
With Storage Foundation, you can:
• Reconfigure and resize storage across the logical devices presented by a RAID array.
• Mirror between arrays to improve disaster recovery protection of an array.
• Use arrays and JBODs.
• Use snapshots with mirrors in different locations for disaster recovery and off-host processing.
• Use VERITAS Volume Replicator (VVR) to provide hardware-independent replication services.
Benefits of VxVM and RAID Arrays
RAID arrays virtualize individual disks into logical LUNs, which are accessed by
host operating systems as "physical devices," that is, using the same method of
addressing as a directly attached physical disk.
VxVM virtualizes both the physical disks and the logical LUNs presented by a
RAID array. Modifying the configuration of a RAID array may result in changes in
SCSI addresses of LUNs, requiring modification of application configurations.
VxVM provides an effective method of reconfiguring and resizing storage across
the logical devices presented by a RAID array.
When using VxVM with RAID arrays, you can leverage the strengths of both
technologies:
You can use VxVM to mirror between arrays to improve disaster recovery
protection against the failure of an array, particularly if one array is remote.
Arrays can be of different manufacture; that is, one array can be a RAID array
and the other a JBOD.
VxVM facilitates data reorganization and maximizes available resources.
VxVM improves overall performance by making I/O activity parallel for a
volume through more than one I/O path to and within the array.
You can use snapshots with mirrors in different locations, which is beneficial
for disaster recovery and off-host processing.
If you include VERITAS Volume Replicator (VVR) in your environment,
VVR can be used to provide hardware-independent replication services.
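As a concrete sketch of the mirroring point, adding a mirror on a second array can be expressed with vxassist by naming a disk from that array. The disk group, volume, and disk media name below are hypothetical, and the commands are echoed rather than executed:

```shell
# Dry-run sketch: mirror an existing volume onto a disk in another array,
# then display the resulting plex layout. All names are hypothetical.
mirror_across_arrays() {
  dg=$1 vol=$2 target_disk=$3
  echo "vxassist -g $dg mirror $vol $target_disk"  # add a plex on the second array
  echo "vxprint -g $dg -ht $vol"                   # verify the new layout
}
mirror_across_arrays datadg datavol array2disk01
```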
Storage Foundation Curriculum Path
VERITAS Storage
Foundation for
UNIX:
Fundamentals
VERITAS Storage
Foundation for
UNIX:
Maintenance
VERITAS Storage Foundation Curriculum
VERITAS Storage Foundation for UNIX: Fundamentals training is designed to
provide you with comprehensive instruction on making the most of VERITAS
Storage Foundation.
Storage Foundation Fundamentals: Overview
• Lesson 1: Virtual Objects
• Lesson 2: Installation and Interfaces
• Lesson 3: Creating a Volume and File System
• Lesson 4: Selecting Volume Layouts
• Lesson 5: Making Basic Configuration
Changes
• Lesson 6: Administering File Systems
• Lesson 7: Resolving Hardware
Problems
VERITAS Storage Foundation for UNIX: Fundamentals Overview
This training provides comprehensive instruction on operating the file and disk
management foundation products: VERITAS Volume Manager (VxVM) and
VERITAS File System (VxFS). In this training, you learn how to combine file
system and disk management technology to ensure easy management of all storage
and maximum availability of essential data.
Objectives
After completing this training, you will be able to:
Identify VxVM virtual storage objects and volume layouts.
Install and configure Storage Foundation.
Configure and manage disks and disk groups.
Create concatenated, striped, mirrored, RAID-5, and layered volumes.
Configure volumes by adding mirrors and logs and resizing volumes and file
systems.
Perform file system administration.
Resolve basic hardware problems.
Course Resources
• Lab Exercises (Appendix A)
• Lab Solutions (Appendix B)
• Glossary
Additional Course Resources
Appendix A: Lab Exercises
This section contains hands-on exercises that enable you to practice the concepts
and procedures presented in the lessons.
Appendix B: Lab Solutions
This section contains detailed solutions to the lab exercises for each lesson.
Glossary
For your reference, this course includes a glossary of terms related to VERITAS
Storage Foundation.
Typographic Conventions Used in This Course
The following tables describe the typographic conventions used in this course.
Typographic Conventions in Text and Commands

Convention: Courier New, bold
Element: Command input, both syntax and examples
Examples:
  To display the robot and drive configuration:
    tpconfig -d
  To display disk information:
    vxdisk -o alldgs list

Convention: Courier New, plain
Element: Command output; command names, directory names, file names,
path names, user names, passwords, and URLs when used within regular
text paragraphs
Examples:
  In the output:
    protocol minimum: 40
    protocol maximum: 60
    protocol current: 0
  Locate the altnames directory.
  Go to http://www.symantec.com.
  Enter the value 300.
  Log on as user1.

Convention: Courier New, Italic (bold or plain)
Element: Variables in command syntax and examples; variables in
command input are Italic, plain; variables in command output are
Italic, bold
Examples:
  To install the media server:
    /cdrom_directory/install
  To access a manual page:
    man command_name
  To display detailed information for a disk:
    vxdisk -g disk_group list disk_name

Typographic Conventions in Graphical User Interface Descriptions

Convention: Arrow
Element: Menu navigation paths
Example: Select File --> Save.

Convention: Initial capitalization
Element: Buttons, menus, windows, options, and other interface elements
Examples: Select the Next button. Open the Task Status window. Remove
the checkmark from the Print File check box.

Convention: Quotation marks
Element: Interface elements with long names
Example: Select the "Include subvolumes in object view window" check box.
Lesson 1
Virtual Objects
Lesson Introduction
• Lesson 1: Virtual Objects
• Lesson 2: Installation and Interfaces
• Lesson 3: Creating a Volume and File
System
• Lesson 4: Selecting Volume Layouts
• Lesson 5: Making Basic Configuration
Changes
• Lesson 6: Administering File Systems
• Lesson 7: Resolving Hardware
Problems
Lesson Topics and Objectives
After completing this lesson, you will be able to:
Topic 1: Physical Data Storage
  Identify the structural characteristics of a disk that are affected
  by placing a disk under VxVM control.
Topic 2: Virtual Data Storage
  Describe the structural characteristics of a disk after it is placed
  under VxVM control.
Topic 3: Volume Manager Storage Objects
  Identify the virtual objects that are created by VxVM to manage data
  storage, including disk groups, VxVM disks, subdisks, plexes, and
  volumes.
Topic 4: Volume Manager RAID Levels
  Define VxVM RAID levels and identify virtual storage layout types
  used by VxVM to remap address space.
Physical Disk Structure
Physical storage objects:
• The basic physical storage device that ultimately stores
your data is the hard disk.
• When you install your operating system, hard disks are
formatted as part of the installation program.
• Partitioning is the basic method of organizing a disk to
prepare for files to be written to and retrieved from the
disk.
• A partitioned disk has a prearranged storage pattern that
is designed for the storage and retrieval of data.
Solaris | HP-UX | AIX | Linux
Physical Data Storage
Physical Disk Structure
Solaris
A physical disk under Solaris contains the partition table of the disk and the volume table
of contents (VTOC) in the first sector (512 bytes) of the disk. The VTOC has at least an
entry for the backup partition on the whole disk (partition tag 5, normally partition number
2), so the OS may work correctly with the disk. The VTOC is always a part of the backup
partition and may be part of a standard data partition. You can destroy the VTOC using the
raw device driver on that partition, making the disk immediately unusable.
[Figure: Solaris disk layout. Sector 0 of disk: VTOC. Sectors 1-15 of the root partition:
bootblock. Partition 2 (the backup slice) refers to the entire disk. Partitions are also
called slices.]
If the disk contains the partition for the root file system mounted on / (partition tag 2), for
example on an OS disk, this root partition contains the bootblock for the first boot stage
after the OpenBoot PROM within sectors 1-15. Sector 0 is skipped, so there is no
overlapping between VTOC and bootblock if the root partition starts at the beginning of
the disk.
The first sector of a file system on Solaris cannot start before sector 16 of the partition.
Sector 16 contains the main super block of the file system. Using the block device driver
of the file system prevents the VTOC and boot block from being overwritten by application
data.
Note: On Solaris, VxVM 4.1 and later support EFI disks. EFI disks are an Intel-based
technology that allows disks to retain BIOS code.
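The sector arithmetic above can be made explicit. Assuming the 512-byte sectors described in the text, the reserved region at the start of a root partition works out as follows:

```shell
# Byte offsets implied by the layout above: VTOC in sector 0,
# bootblock in sectors 1-15, main superblock at sector 16.
SECTOR_BYTES=512
echo "VTOC:       bytes 0-$((SECTOR_BYTES - 1))"
echo "bootblock:  bytes $((1 * SECTOR_BYTES))-$((16 * SECTOR_BYTES - 1))"
echo "superblock: starts at byte $((16 * SECTOR_BYTES))"
# prints: VTOC bytes 0-511, bootblock bytes 512-8191, superblock at 8192
```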
HP-UX
On an HP-UX system, the physical disk is traditionally partitioned using either the whole
disk approach or Logical Volume Manager (LVM).
[Figure: a whole-disk HP-UX device (c0t1d4) compared with an LVM disk (c0t1d4)]
The whole disk approach enables you to partition a disk in five ways: the whole disk is
used by a single file system; the whole disk is used as swap area; the whole disk is
used as a raw partition; a portion of the disk contains a file system, and the rest is used
as swap; or the boot disk contains a 2-MB special boot area, the root file system, and a
swap area.
An LVM data disk consists of four areas: Physical Volume Reserved Area (PVRA);
Volume Group Reserved Area (VGRA); user data area; and Bad Block Relocation
Area (BBRA).
AIX
A native AIX disk does not have a partition table of the kind familiar on many other
operating systems, such as Solaris, Linux, and Windows. An application could use the
entire unstructured raw physical device, but the first 512-byte sector normally contains
information, including a physical volume identifier (pvid) to support recognition of the
disk by AIX. An AIX disk is managed by IBM's Logical Volume Manager (LVM) by
default. A disk managed by LVM is called a physical volume (PV). A physical volume
consists of:
PV reserved area: A physical volume begins with a reserved area of 128 sectors
containing PV metadata, including the pvid.
Volume Group Descriptor Area (VGDA): One or two copies of the VGDA follow.
The VGDA contains information describing a volume group (VG), which consists of
one or more physical volumes. Included in the metadata in the VGDA is the definition
of the physical partition (PP) size, normally 4 MB.
Physical partitions: The remainder of the disk is divided into a number of physical
partitions. All of the PVs in a volume group have PPs of the same size, as defined in
the VGDA. In a normal VG, there can be up to 32 PVs in a VG. In a big VG, there can
be up to 128 PVs.
[Figure: layout of an AIX physical volume (raw device hdisk3): PV reserved area
(128 sectors), Volume Group Descriptor Areas, then physical partitions of equal
size as defined in the VGDA]
The term partition is used differently in different operating systems. In many kinds of
UNIX, Linux, and Windows, a partition is a variable-sized portion of contiguous disk
space that can be formatted to contain a file system. In LVM, a PP is mapped to a logical
partition (LP), and one or more LPs from any location throughout the VG can be
combined to define a logical volume (LV). A logical volume is the entity that can be
formatted to contain a file system (by default either JFS or JFS2). So a physical partition
compares in concept more closely to a disk allocation cluster in some other operating
systems, and a logical volume plays the role that a partition does in some other operating
systems.
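To make the PP-to-LV arithmetic concrete: with the default 4 MB physical partition size mentioned above, a logical volume's size is simply its logical partition count times the PP size. The lv_size_mb helper below is hypothetical, for illustration only, not an AIX command:

```shell
# Sketch: LVM capacity arithmetic using the default 4 MB PP size.
PP_MB=4
lv_size_mb() {
  # a logical volume of N logical partitions, each mapped to one PP
  echo $(( $1 * PP_MB ))
}
lv_size_mb 256   # a 256-LP logical volume -> prints 1024 (MB)
```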
Linux
On Linux, a nonboot disk can be divided into one to four primary partitions. One of these
primary partitions can be used to contain logical partitions, and it is called the extended
partition. The extended partition can have up to 11 logical partitions on a SCSI disk and up
to 60 logical partitions on an IDE disk. You can use fdisk to set up partitions on a Linux
disk.
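The numbering rule behind this scheme can be sketched as a tiny shell helper (part_kind is hypothetical, not a Linux command): partition numbers 1-4 are primary, one of which may be the extended partition, and numbers 5 and up are logical partitions inside it:

```shell
# Sketch: classify a Linux partition number, per the rule above.
part_kind() {
  if [ "$1" -le 4 ]; then
    echo primary    # 1-4: primary (one may be the extended partition)
  else
    echo logical    # 5 and up: logical partitions inside the extended one
  fi
}
part_kind 2   # prints: primary
part_kind 7   # prints: logical
```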
Primary Partition 1: /dev/sda1 or /dev/hda1
Primary Partition 2: /dev/sda2 or /dev/hda2
Primary Partition 3: /dev/sda3 or /dev/hda3
Primary Partition 4 (Extended Partition): /dev/sda4 or /dev/hda4
On a Linux boot disk, the boot partition must be a primary partition and is typically
located within the first 1024 cylinders of the drive. On the boot disk, you must also have a
dedicated swap partition. The swap partition can be a primary or a logical partition, and it
can be located anywhere on the disk.
Logical partitions must be contiguous, but they do not need to take up all of the space of
the extended partition. Only one primary partition can be extended. The extended partition
does not take up any space until it is subdivided into logical partitions.
VERITAS Volume Manager 4.0 for Linux currently does not support most hardware
RAID controllers unless they present SCSI device interfaces with names of the
form /dev/sdx.
The following controllers are supported:
PERC, on the Dell 1650
MegaRAID, on the Dell 1650
ServeRAID, on x440 systems
Compaq array controllers that require the Smart2 and CCISS drivers (which present
device paths such as /dev/ida/c#d#p# and /dev/cciss/c#d#p#) are supported
for normal use and for rootability.
Physical Disk Naming
VxVM parses disk names to retrieve connectivity
information for disks. Operating systems have different
conventions:
Operating System: Device Naming Convention Example
Solaris: /dev/[r]dsk/c1t9d0s2
HP-UX: /dev/[r]dsk/c3t2d0 (no slice)
AIX: /dev/hdisk2 (no slice)
Linux:
  SCSI disks:
    /dev/sda[1-4] (primary partitions)
    /dev/sda[5-16] (logical partitions)
    /dev/sdbN (on the second disk)
    /dev/sdcN (on the third disk)
  IDE disks:
    /dev/hdaN, /dev/hdbN, /dev/hdcN
Physical Disk Naming
Solaris
You locate and access the data on a physical disk by using a device name that
specifies the controller, target ID, and disk number. A typical device name uses the
format c#t#d#:
c# is the controller number.
t# is the target ID.
d# is the logical unit number (LUN) of the drive attached to the target.
If a disk is divided into partitions, you also specify the partition number in the
device name:
s# is the partition (slice) number.
For example, the device name c0t0d0s1 is connected to controller number 0 in
the system, with a target ID of 0, physical disk number 0, and partition number 1
on the disk.
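The c#t#d#s# format described above can be taken apart with ordinary text tools. A minimal POSIX shell sketch (parse_ctds is an illustrative helper for this course, not a Solaris or VxVM command):

```shell
# Split a Solaris-style device name into controller, target, disk, and slice.
parse_ctds() {
    name=${1##*/}   # strip any /dev/dsk/ or /dev/rdsk/ prefix
    ctrl=$(echo "$name"  | sed -n 's/^c\([0-9][0-9]*\)t.*/\1/p')
    tgt=$(echo "$name"   | sed -n 's/^c[0-9]*t\([0-9][0-9]*\)d.*/\1/p')
    disk=$(echo "$name"  | sed -n 's/^c[0-9]*t[0-9]*d\([0-9][0-9]*\).*/\1/p')
    slice=$(echo "$name" | sed -n 's/.*s\([0-9][0-9]*\)$/\1/p')
    echo "controller=$ctrl target=$tgt disk=$disk slice=${slice:-none}"
}

parse_ctds /dev/dsk/c0t0d0s1   # controller=0 target=0 disk=0 slice=1
parse_ctds c3t2d0              # controller=3 target=2 disk=0 slice=none
```

When no slice suffix is present, the helper reports the whole disk, matching the "no slice" examples in the naming table.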
HP-UX
You locate and access the data on a physical disk by using a device name that
specifies the controller, target ID, and disk number. A typical device name uses the
format c#t#d#:
c# is the controller number.
t# is the target ID.
d# is the logical unit number (LUN) of the drive attached to the target.
For example, the c0t0d0 device name is connected to controller number 0 in
the system, with a target ID of 0, and physical disk number 0.
AIX
Every device in AIX is assigned a location code that describes its connection to the
system. The general format of this identifier is AB-CD-EF-GH, where the letters
represent decimal digits or uppercase letters. The first two characters represent the
bus, the second pair identify the adapter, the third pair represent the connector, and
the final pair uniquely represent the device. For example, a SCSI disk drive might
have a location identifier of 04-01-00-6,0. In this example, 04 means the PCI
bus, 01 is the slot number on the PCI bus occupied by the SCSI adapter, 00 means
the only or internal connector, and the 6,0 means SCSI ID 6, LUN 0.
However, this data is used internally by AIX to locate a device. The device name
that a system administrator or software uses to identify a device is less hardware
dependent. The system maintains a special database called the Object Data
Manager (ODM) that contains essential definitions for most objects in the system,
including devices. Through the ODM, a device name is mapped to the location
identifier. The device names are referred to by special files found in the /dev
directory. For example, the SCSI disk identified previously might have the device
name hdisk3 (the fourth hard disk identified by the system). The device named
hdisk3 is accessed by the file name /dev/hdisk3.
If a device is moved so that it has a different location identifier, the ODM is
updated so that it retains the same device name. and the move is transparent to
users. This is facilitated by the physical volume identifier stored in the first sector
of a physical volume. This unique 128-bit number is used by the system to
recognize the physical volume wherever it may be attached because it is also
associated with the device name in the ODM.
Linux
On Linux, device names are displayed in the format:
• sdx [N]
• hdx [N]
In the syntax:
sd refers to a SCSI disk, and hd refers to an EIDE disk.
x is a letter that indicates the order of disks detected by the operating system.
For example, sda refers to the first SCSI disk, sdb refers to the second SCSI
disk, and so on.
N is an optional parameter that represents a partition number in the range 1
through 16. For example, sda7 references partition 7 on the first SCSI disk.
Primary partitions on a disk are 1, 2, 3, 4; logical partitions have numbers 5 and up.
If the partition number is omitted, the device name indicates the entire disk.
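The partition-number rules above (1 through 4 primary, 5 and up logical, no number for the whole disk) can be sketched in shell; classify is a hypothetical helper written for illustration:

```shell
# Report what a Linux sd/hd device name refers to.
classify() {
    num=$(echo "$1" | sed -n 's/^[sh]d[a-z][a-z]*\([0-9][0-9]*\)$/\1/p')
    if [ -z "$num" ]; then
        echo "$1: whole disk"
    elif [ "$num" -le 4 ]; then
        echo "$1: primary partition $num"
    else
        echo "$1: logical partition $num"
    fi
}

classify sda7   # sda7: logical partition 7
classify sdb2   # sdb2: primary partition 2
classify hda    # hda: whole disk
```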
Physical Data Storage
Note: Throughout this course, the term disk is used to mean either disk or LUN.
Whatever the OS sees as a storage device, VxVM sees as a disk.
• Reads and writes on unmanaged physical disks can be a slow process.
• Disk arrays and multipathed disk arrays can improve I/O speed and throughput.
Applications / Databases / Users
Physical disks (LUNs)
Disk array: A collection of physical disks used
to balance I/O across multiple disks
Multipathed disk array: Provides multiple ports
to access disks to achieve performance and
availability benefits
Disk Arrays
Reads and writes on unmanaged physical disks can be a relatively slow process,
because disks are physical devices that require time to move the heads to the
correct position on the disk before reading or writing. If all of the read and write
operations are performed to individual disks, one at a time, the read-write time can
become unmanageable.
A disk array is a collection of physical disks. Performing I/O operations on
multiple disks in a disk array can improve I/O speed and throughput.
Hardware arrays present disk storage to the host operating system as LUNs.
Multipathed Disk Arrays
Some disk arrays provide multiple ports to access disk devices. These ports,
coupled with the host bus adaptor (HBA) controller and any data bus or I/O
processor local to the array, compose multiple hardware paths to access the disk
devices. This type of disk array is called a multipathed disk array.
You can connect multipathed disk arrays to host systems in many different
configurations, such as:
Connecting multiple ports to different controllers on a single host
Chaining ports through a single controller on a host
Connecting ports to different hosts simultaneously
Virtual Data Storage
• Volume Manager
creates a virtual
layer of data
storage.
• Volume Manager
volumes appear to
applications
to be physical disk
partitions.
• Volumes have
block and character
device nodes in the
/dev tree:
/dev/vx/[r]dsk/...
Multidisk
configurations:
• Concatenation
• Mirroring
• Striping
• RAID-5
High Availability:
• Disk group
import and deport
• Hot relocation
• Dynamic
multipathing
Disk Spanning / Load Balancing
Virtual Data Storage
Virtual Storage Management
VERITAS Volume Manager creates a virtual level of storage management above
the physical device level by creating virtual storage objects. The virtual storage
object that is visible to users and applications is called a volume.
What Is a Volume?
A volume is a virtual object, created by Volume Manager, that stores data. A
volume consists of space from one or more physical disks on which the data is
physically stored.
How Do You Access a Volume?
Volumes created by VxVM appear to the operating system as physical disks, and
applications that interact with volumes work in the same way as with physical
disks. All users and applications access volumes as contiguous address space using
special device files in a manner similar to accessing a disk partition.
Volumes have block and character device nodes in the /dev tree. You can supply
the name of the path to a volume in your commands and programs, in your file
system and database configuration files, and in any other context where you would
otherwise use the path to a physical disk partition.
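Given the /dev/vx layout noted above, a volume's device paths are just the disk group and volume names joined under the block (dsk) or character (rdsk) tree. A small sketch (vxpaths is an illustrative helper, not a VxVM command; acctdg and payvol are the example names used later in this lesson):

```shell
# Build the block and raw device paths for a Volume Manager volume.
vxpaths() {
    dg=$1; vol=$2
    echo "block: /dev/vx/dsk/$dg/$vol"
    echo "raw:   /dev/vx/rdsk/$dg/$vol"
}

vxpaths acctdg payvol
```

Either path can then be used wherever a physical disk partition path would otherwise appear, as the text describes.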
Volume Manager Control
When you place a disk under VxVM control, a cross-platform data sharing (CDS)
disk layout is used, which ensures that the disk is accessible on different
platforms, regardless of the platform on which the disk was initialized.
OS-reserved areas that contain:
• Platform blocks
• VxVM ID blocks
• AIX and HP-UX coexistence labels
Public Region
Volume Manager-Controlled Disks
With Volume Manager, you enable virtual data storage by bringing a disk under
Volume Manager control. By default in VxVM 4.0 and later, Volume Manager
uses a cross-platform data sharing (CDS) disk layout. A CDS disk is consistently
recognized by all VxVM-supported UNIX platforms and consists of:
OS-reserved area: To accommodate platform-specific disk usage, 128K is
reserved for disk labels, platform blocks, and platform-coexistence labels.
Private region: The private region stores information, such as disk headers,
configuration copies, and kernel logs, in addition to other platform-specific
management areas that VxVM uses to manage virtual objects. The private
region represents a small management overhead:
Operating System | Default Block/Sector Size | Default Private Region Size
Solaris | 512 bytes | 65536 sectors (32MB)
HP-UX | 1024 bytes | 32768 sectors (32MB)
AIX | 512 bytes | 65536 sectors (32MB)
Linux | 512 bytes | 65536 sectors (32MB)
Public region: The public region consists of the remainder of the space on the
disk. The public region represents the available space that Volume Manager
can use to assign to volumes and is where an application stores data. Volume
Manager never overwrites this area unless specifically instructed to do so.
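Multiplying the sector counts in the table by each platform's sector size shows that the default private region reserves the same number of bytes everywhere:

```shell
# 65536 sectors of 512 bytes and 32768 sectors of 1024 bytes are the same size.
echo "Solaris/AIX/Linux: $((65536 * 512)) bytes"
echo "HP-UX:             $((32768 * 1024)) bytes"
```

Both lines print 33554432 bytes, that is, 32MB per disk of management overhead.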
Comparing CDS and Pre-4.x Disks
CDS Disk
(4.x and Later Default)
Private region
(metadata) and
public region (user
data) are created on
a single partition.
Suitable for moving
between different
operating systems
Not suitable for
boot partitions
Sliced Disk
(Pre-4.x Solaris Default)
Private region and
public region are
created on
separate
partitions.
Not suitable for
moving between
different operating
systems
Suitable for boot
partitions
Simple Disk
(Pre-4.x HP-UX Default)
Private region and
public region are
created on the
whole disk with
specific offsets.
Not suitable for
moving between
different operating
systems
Suitable for boot
partitions
Note: This format is
called hpdisk format
as of VxVM 4.1 on the
HP-UX platform.
Comparing CDS Disks and Pre-4.x Disks
The pre-4.x disk layouts are still available in VxVM 4.0 and later. These layouts
are used for bringing the boot disk under VxVM control on operating systems that
support that capability.
On platforms that support bringing the boot disk under VxVM control, CDS disks
cannot be used for boot disks. CDS disks have specific disk layout requirements
that enable a common disk layout across different platforms, and these
requirements are not compatible with the particular platform-specific requirements
of boot disks. Therefore, when placing a boot disk under VxVM control, you must
use a pre-4.x disk layout (sliced on Solaris, hpdisk on HP-UX).
For nonboot disks, you can convert CDS disks to sliced disks and vice versa by
using VxVM utilities.
Other disk types, working with boot disks, and transferring data across platforms
with CDS disks are topics covered in detail in later lessons.
Volume Manager Storage Objects
Disk Group: acctdg
Volumes: expvol, payvol
Plexes: expvol-01, payvol-01, payvol-02
Subdisks: acctdg01-01, acctdg01-02, acctdg02-01, acctdg03-01, acctdg03-02
VxVM Disks: acctdg01, acctdg02, acctdg03
Physical Disks
Volume Manager Storage Objects
Disk Groups
A disk group is a collection of VxVM disks that share a common configuration.
You group disks into disk groups for management purposes, such as to hold the
data for a specific application or set of applications. For example, data for
accounting applications can be organized in a disk group called acctdg. A disk
group configuration is a set of records with detailed information about related
Volume Manager objects in a disk group, their attributes, and their connections.
Volume Manager objects cannot span disk groups. For example, a volume's
subdisks, plexes, and disks must be derived from the same disk group as the
volume. You can create additional disk groups as necessary. Disk groups enable
you to group disks into logical collections. Disk groups and their components can
be moved as a unit from one host machine to another.
Volume Manager Disks
A Volume Manager (VxVM) disk represents the public region of a physical disk
that is under Volume Manager control. Each VxVM disk corresponds to one
physical disk. Each VxVM disk has a unique virtual disk name called a disk media
name. The disk media name is a logical name used for Volume Manager
administrative purposes. Volume Manager uses the disk media name when
assigning space to volumes. A VxVM disk is given a disk media name when it is
added to a disk group.
Default disk media name: diskgroup##
Copyright ~~.2Un6 Svmantec Corporanon All rights reserved
1-13Lesson 1 Virtual Objects
You can supply the disk media name or allow Volume Manager to assign a default
name. The disk media name is stored with a unique disk ID to avoid name
collision. After a VxVM disk is assigned a disk media name, the disk is no longer
referred to by its physical address. The physical address (for example, c#t#d# or
hdisk#) becomes known as the disk access record.
Subdisks
A VxVM disk can be divided into one or more subdisks. A subdisk is a set of
contiguous disk blocks that represent a specific portion of a VxVM disk, which is
mapped to a specific region of a physical disk. A subdisk is a subsection of a disk's
public region. A subdisk is the smallest unit of storage in Volume Manager.
Therefore, subdisks are the building blocks for Volume Manager objects.
A subdisk is defined by an offset and a length in sectors on a VxVM disk.
Default subdisk name: DMname-##
A VxVM disk can contain multiple subdisks, but subdisks cannot overlap or share
the same portions of a VxVM disk. Any VxVM disk space that is not reserved or
that is not part of a subdisk is free space. You can use free space to create new
subdisks.
Conceptually, a subdisk is similar to a partition. Both a subdisk and a partition
divide a disk into pieces defined by an offset address and length. Each of those
pieces represents a reservation of contiguous space on the physical disk. However,
while the maximum number of partitions to a disk is limited by some operating
systems, there is no theoretical limit to the number of subdisks that can be attached
to a single plex. This number has been limited by default to a value of 4096. If
required, this default can be changed, using the vol_subdisk_num tunable
parameter. For more information on tunable parameters, see the VERITAS Volume
Manager Administrator's Guide.
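Because a subdisk is just an offset and a length in sectors, the no-overlap rule reduces to interval arithmetic. A toy check (overlaps is a hypothetical helper, not a VxVM command):

```shell
# Two sector ranges overlap unless one ends at or before the other begins.
overlaps() {
    o1=$1 l1=$2 o2=$3 l2=$4
    if [ $((o1 + l1)) -le "$o2" ] || [ $((o2 + l2)) -le "$o1" ]; then
        echo no
    else
        echo yes
    fi
}

overlaps 0 2048 2048 4096   # no  (adjacent subdisks touch but do not overlap)
overlaps 0 2048 1024 4096   # yes (the second range starts inside the first)
```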
Plexes
Volume Manager uses subdisks to build virtual objects called plexes. A plex is a
structured or ordered collection of subdisks that represents one copy of the data in
a volume. A plex consists of one or more subdisks located on one or more physical
disks. The length of a plex is determined by the last block that can be read or
written on the last subdisk in the plex.
Default plex name: volume_name-##
Volumes
A volume is a virtual storage device that is used by applications in a manner
similar to a physical disk. Due to its virtual nature, a volume is not restricted by the
physical size constraints that apply to a physical disk. A VxVM volume can be as
large as the total of available, unreserved free physical disk space in the disk
group. A volume consists of one or more plexes.
Default volume name: volume_name##
Volume Layouts
Volume layout: The way plexes are configured to remap the
volume address space through which I/O is redirected
Disk Spanning: Concatenated; Striped (RAID-0); Layered
Data Redundancy: Mirrored; RAID-5; Striped and Mirrored (RAID-0+1)
Volume Manager RAID Levels
RAID
RAID is an acronym for Redundant Array of Independent Disks. RAID is a
storage management approach in which an array of disks is created, and part of the
combined storage capacity of the disks is used to store duplicate information about
the data in the array. By maintaining a redundant array of disks, you can regenerate
data in the case of disk failure.
RAID configuration models are classified in terms of RAID levels, which are
defined by the number of disks in the array, the way data is spanned across the
disks, and the method used for redundancy. Each RAID level has specific features
and performance benefits that involve a trade-off between performance and
reliability.
Volume Layouts
RAID levels correspond to volume layouts. A volume's layout refers to the
organization of plexes in a volume. Volume layout is the way plexes are
configured to remap the volume address space through which I/O is redirected at
run time. Volume layouts are based on the concepts of disk spanning, redundancy,
and resilience.
Disk Spanning
Disk spanning is the combining of disk space from multiple physical disks to form
one logical drive. Disk spanning has two forms:
Concatenation: Concatenation is the mapping of data in a linear manner
across two or more disks.
In a concatenated volume, subdisks are arranged both sequentially and
contiguously within a plex. Concatenation allows a volume to be created from
multiple regions of one or more disks if there is not enough space for an entire
volume on a single region of a disk.
Striping: Striping is the mapping of data in equally-sized chunks alternating
across multiple disks. Striping is also called interleaving.
In a striped volume, data is spread evenly across multiple disks. Stripes are
equally-sized fragments that are allocated alternately and evenly to the
subdisks of a single plex. There must be at least two subdisks in a striped plex,
each of which must exist on a different disk. Configured properly, striping not
only helps to balance I/O but also to increase throughput.
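The alternating allocation described above means stripe unit i of a plex with N columns lands on column i mod N. Modeling that round-robin placement for three columns:

```shell
# Round-robin placement of stripe units across a three-column striped plex.
ncols=3
for unit in 0 1 2 3 4 5; do
    echo "stripe unit $unit -> column $((unit % ncols))"
done
```

The columns cycle 0, 1, 2, 0, 1, 2, which is why sequential I/O is spread evenly across all of the disks in the stripe.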
Data Redundancy
To protect data against disk failure, the volume layout must provide some form of
data redundancy. Redundancy is achieved in two ways:
Mirroring: Mirroring is maintaining two or more copies of volume data.
A mirrored volume uses multiple plexes to duplicate the information contained
in a volume. Although a volume can have a single plex, at least two are
required for true mirroring (redundancy of data). Each of these plexes should
contain disk space from different disks for the redundancy to be useful.
Parity: Parity is a calculated value used to reconstruct data after a failure by
doing an exclusive OR (XOR) procedure on the data. Parity information can be
stored on a disk. If part of a volume fails, the data on that portion of the failed
volume can be re-created from the remaining data and parity information.
A RAID-5 volume uses striping to spread data and parity evenly across
multiple disks in an array. Each stripe contains a parity stripe unit and data
stripe units. Parity can be used to reconstruct data if one of the disks fails. In
comparison to the performance of striped volumes, write throughput of RAID-5
volumes decreases, because parity information needs to be updated each time
data is accessed. However, in comparison to mirroring, the use of parity
reduces the amount of space required.
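The XOR relationship can be seen with a toy byte-level model: the parity unit is the XOR of the data units, and any one lost unit is the XOR of the survivors with the parity. (The byte values here are arbitrary illustrations, not anything VxVM-specific.)

```shell
# Three data stripe units and their parity; then rebuild d2 as if its disk failed.
d1=$((0x5A)); d2=$((0xC3)); d3=$((0x0F))
parity=$((d1 ^ d2 ^ d3))
rebuilt=$((d1 ^ d3 ^ parity))
printf 'parity=0x%02X rebuilt_d2=0x%02X\n' "$parity" "$rebuilt"
# parity=0x96 rebuilt_d2=0xC3 -- the rebuilt unit matches the original d2
```

This is also why writes cost more on RAID-5 than on a plain stripe: every write must recompute and rewrite the parity unit as well as the data unit.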
Resilience
A resilient volume, also called a layered volume, is a volume that is built on one or
more other volumes. Resilient volumes enable the mirroring of data at a more
granular level. For example, a resilient volume can be concatenated or striped at
the top level and then mirrored at the bottom level.
A layered volume is a virtual Volume Manager object that nests other virtual
objects inside of itself. Layered volumes provide better fault tolerance by
mirroring data at a more granular level.
Lesson Summary
• Key Points
This lesson described the virtual storage objects
that VERITAS Volume Manager uses to manage
physical disk storage, including disk groups,
VxVM disks, subdisks, plexes, and volumes.
• Reference Materials
VERITAS Volume Manager Administrator's Guide
Labs and solutions for this lesson are located on the following pages:
Appendix A provides complete lab instructions: "Lab 1: Introducing the Lab
Environment."
Appendix B provides complete lab instructions and solutions: "Lab 1 Solutions:
Introducing the Lab Environment."
Lab 1
Lab 1: Introducing the Lab Environment
In this lab, you are introduced to the lab
environment, system, and disks that you will use
throughout this course.
For Lab Exercises, see Appendix A.
For Lab Solutions, see Appendix B.
Lesson 2
Installation and Interfaces
Lesson Introduction
• Lesson 1: Virtual Objects
• Lesson 2: Installation and Interfaces
• Lesson 3: Creating a Volume and File
System
• Lesson 4: Selecting Volume Layouts
• Lesson 5: Making Basic Configuration
Changes
• Lesson 6: Administering File Systems
• Lesson 7: Resolving Hardware
Problems
Lesson Topics and Objectives
Topic After completing this lesson, you will
be able to:
Topic 1: Installation Identify operating system compatibility and
Prerequisites other preinstallation considerations.
Topic 2: Adding License Keys Obtain license keys, add licenses by using
vxlicinst, and view licenses by using
vxlicrep.
Topic 3: VERITAS Software Identify the packages that are included in the
Packages Storage Foundation 5.0 software.
Topic 4: Installing Storage Install Storage Foundation interactively, by
Foundation using the installation utility.
Topic 5: Storage Foundation Describe the three Storage Foundation user
User Interfaces interfaces.
Topic 6: Managing the VEA Install, start, and manage the VEA server.
Server
OS Compatibility
The VERITAS Storage Foundation product line
operates on the following operating systems:
SF Version | Solaris | HP-UX | AIX | Linux
5.0 | 8, 9, 10 | 11i v2 (0904) | 5.2, 5.3 | RHEL 4 Update 3, SLES 9 SP3
4.1 | 8, 9, 10, x86 | 11i v2 (0904) | No release | RHEL 4 Update 1 (2.6), SLES 9 SP1
4.0 | 7, 8, 9 | No release | 5.1, 5.2, 5.3 | RHEL 3 Update 2 (i686)
3.5.x | 2.6, 7, 8 | 11.11i (0902) | No release | No release*
* Note: Version 3.2.2 on Linux has functionality equivalent to 3.5 on Solaris.
Installation Prerequisites
OS Version Compatibility
Before installing Storage Foundation, ensure that the version of Storage
Foundation that you are installing is compatible with the version of the operating
system that you are running. You may need to upgrade your operating system
before you install the latest Storage Foundation version.
VERITAS Storage Foundation 5.0 operates on the following operating systems:
Solaris:
Solaris 8 (SPARC Platform 32-bit and 64-bit)
Solaris 9 (SPARC Platform 32-bit and 64-bit)
Solaris 10 (SPARC Platform 64-bit)
HP-UX:
September 2004 release of HP-UX 11i version 2.0 or later
AIX:
AIX 5.2 ML6 (legacy)
AIX 5.3 TL4 with SP4
Linux:
Red Hat Enterprise Linux 4 (RHEL 4) with Update 3 (2.6.9-34
kernel) on AMD Opteron or Intel Xeon EM64T (x86_64)
SUSE Linux Enterprise Server 9 (SLES 9) with SP3 (2.6.5-7.244,
252 kernels) on AMD Opteron or Intel Xeon EM64T (x86_64)
Check the VERITAS Storage Foundation Release Notes for additional operating
system requirements.
Support Resources
[Screen shot of the VERITAS Support Web site: product selection, Technote
search, support services, and patch downloads]
http://support.veritas.com
Version Release Differences
With each new release of the Storage Foundation software. changes are made that
may affect the installation or operation of Storage Foundation in your
environment. By reading version release notes and installation documentation that
are included with the product, you can stay informed of any changes.
For more information about specific releases of VERITAS Storage Foundation,
visit the VERITAS Support Web site at: http://support.veritas.com
This site contains product and patch information, a searchable knowledge base of
technical notes, access to product-specific news groups and e-mail notification
services, and other information about contacting technical support staff.
Note: If you open a case with VERITAS Support, you can view updates at:
http://support.veritas.com/viewcase
You can access your case by entering the e-mail address associated with your case
and the case number.
Storage Foundation Licensing
• Licensing utilities are contained in the VRTSvlic
package, which is common to all VERITAS products.
• To obtain a license key:
- Create a vLicense account and retrieve license keys online.
vLicense is a Web site that you can use to retrieve and
manage your license keys.
or
- Complete a License Key Request form and fax it to
VERITAS customer support.
• To generate a license key, you must provide your:
- Software serial number
- Customer number
- Order number
Note: You may also need the network and RDBMS platform, system
configuration, and software revision levels.
Adding License Keys
You must have your license key before you begin installation, because you are
prompted for the license key during the installation process. A new license key is
not necessary if you are upgrading Storage Foundation from a previously licensed
version of the product.
If you have an evaluation license key, you must obtain a permanent license key
when you purchase the product. The VERITAS licensing mechanism checks the
system date to verify that it has not been set back. If the system date has been reset,
the evaluation license key becomes invalid.
Obtaining a License Key
License keys are delivered on Software License Certificates to you at the
conclusion of the order fulfillment process. The certificate specifies the product
keys and the number of product licenses purchased. A single key enables you to
install the product on the number and type of systems for which you purchased the
license.
License keys are non-node locked.
In a non-node locked model, one key can unlock a product on different servers
regardless of host ID and architecture type.
In a node locked model, a single license is tied to a single specific server. For
each server, you need a different key.
Generating License Keys
http://vlicense.veritas.com
• Access automatic license key generation and delivery.
• Manage and track license key inventory and usage.
• Locate and reissue lost license keys.
• Report, track, and resolve license key issues online.
• Consolidate and share license key information with other accounts.
• To add a license key:
vxlicinst
• License keys are installed in:
/etc/vx/licenses/lic
• To view installed license key
information:
vxlicrep
Displayed information includes:
- License key number
- Name of the VERITAS
product that the key enables
- Type of license
- Features enabled by the key
Generating License Keys with vLicense
VERITAS vLicense (vlicense.veritas.com) is a self-service online license
management system.
vLicense supports production license keys only. Temporary, evaluation, or
demonstration keys must be obtained through your VERITAS sales representative.
Note: The VRTSvlic package can coexist with previous licensing packages, such
as VRTSlic. If you have old license keys installed in /etc/vx/elm, leave this
directory on your system. The old and new license utilities can coexist.
What Gets Installed?
In version 5.0, the default installation behavior is to
install all packages in Storage Foundation Enterprise HA.
In previous versions, the default behavior was to only
install packages for which you had typed in a license key.
In 5.0, you can choose to install:
• All packages included in Storage Foundation Enterprise HA
or
• All packages included in Storage Foundation Enterprise HA,
minus any optional packages, such as documentation
and software development kits
VERITAS Software Packages
When you install a product suite, the component product packages are installed
automatically. When installing Storage Foundation, be sure to follow the
instructions in the product release notes and installation guides.
Package Space Requirements
Before you install any of the packages, confirm that your system has enough free
disk space to accommodate the installation. Storage Foundation programs and files
are installed in the /, /usr, and /opt file systems. Refer to the product
installation guides for a detailed list of package space requirements.
Solaris Note
VxFS often requires more than the default RK kernel stack size. so entries are
added to the jete/system file. This increases the kernel thread stack size of the
system to 24K. The original / ete/ system file is copied to
/ete/fs/vxfs/system.preinstall.
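The added entries take the following general form (a sketch: 0x6000 bytes is the 24K stack size mentioned above, but the exact parameter names can vary by Solaris and VxFS release, so check your own post-install /etc/system rather than copying these lines):

```
* Added during VxFS installation: raise the kernel thread stack size to 24K
set lwp_default_stksize=0x6000
set rpcmod:svc_default_stksize=0x6000
```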
Optional Features
VERITAS FlashSnap
- Enables point-in-time copies of data with minimal
performance overhead
- Includes disk group split/join, FastResync, and
storage checkpointing (in conjunction with VxFS)
VERITAS Volume Replicator
- Enables replication of data to remote locations
- VRTSvrdoc: VVR documentation
• VERITAS Cluster Volume Manager
Used for high availability environments
Features are
Included In the
VxVM package,
but they require a
separate license.
Features are
Included In the
VxFS package,
but they require a
separate license.
VERITAS Quick I/O for Databases
Enables applications to access preallocated VxFS files
as raw character devices
• VERITAS Cluster File System
Enables multiple hosts to mount and perform file
operations concurrently on the same file
Dynamic Storage Tiering
Enables the support for multivolume file systems by
managing the placement of files through policies that
control both initial file location and the circumstances
under which existing files are relocated
Storage Foundation Optional Features
Several optional features do not require separate packages, only additional
licenses. The following optional features are built-in to Storage Foundation that
you can enable with additional licenses:
VERITAS FlashSnap: FlashSnap facilitates point-in-time copies of data,
while enabling applications to maintain optimal performance, by enabling
features, such as FastResync and disk group split and join functionality.
FlashSnap provides an efficient method to perform offline and off-host
processing tasks, such as backup and decision support.
VERITAS Volume Replicator: Volume Replicator augments Storage
Foundation functionality to enable you to replicate data to remote locations
over any IP network. Replicated copies of data can be used for disaster
recovery, off-host processing, off-host backup, and application migration.
Volume Replicator ensures maximum business continuity by delivering true
disaster recovery and flexible off-host processing.
Cluster Functionality: Storage Foundation includes optional cluster
functionality that enables Storage Foundation to be used in a cluster
environment.
A cluster is a set of hosts that share a set of disks. Each host is referred to as a
node in a cluster. When the cluster functionality is enabled, all of the nodes in
the cluster can share VxVM objects. The main benefits of cluster
configurations are high availability and off-host processing.
VERITAS Cluster Server (VCS): VCS supplies two major components
integral to CFS: the Low Latency Transport (LLT) package and the Group
2-8
VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
Membership and Atomic Broadcast (GAB) package. LLT provides node-
to-node communications and monitors network communications. GAB
provides cluster state, configuration, and membership service, and it
monitors the heartbeat links between systems to ensure that they are active.
VERITAS Cluster File System (CFS): CFS is a shared file system that
enables multiple hosts to mount and perform file operations concurrently
on the same file.
VERITAS Cluster Volume Manager (CVM): CVM creates the cluster
volumes necessary for mounting cluster file systems.
VERITAS Quick I/O for Databases: VERITAS Quick I/O for Databases
(referred to as Quick I/O) enables applications to access preallocated VxFS
files as raw character devices. This provides the administrative benefits of
running databases on file systems without the performance degradation usually
associated with databases created on file systems.
Dynamic Storage Tiering (DST): DST enables the support for multivolume
file systems by managing the placement of files through policies that control
both initial file location and the circumstances under which existing files are
relocated.
Lesson 2 Installation and Interfaces
2-9
Installation Menu
Storage Foundation and High Availability Solutions 5.0
SYMANTEC Product Version Installed Licensed
Veritas Cluster Server
Veritas File System
Veritas Volume Manager
Veritas Volume Replicator
Veritas Storage Foundation
Veritas Storage Foundation for Oracle
Veritas Storage Foundation for DB2
Veritas Storage Foundation for Sybase
Veritas Storage Foundation Cluster File System
Veritas Storage Foundation for Oracle RAC
(Installed: no, Licensed: no, for each product listed)
Task Menu:
I) Install/Upgrade a Product
L) License a Product
U) Uninstall a Product
Q) Quit
C) Configure an Installed Product
P) Perform a Preinstallation Check
D) View a Product Description
?) Help
Enter a Selection: [I,C,L,P,U,D,Q,?]
Installing Storage Foundation
The installer is a menu-based installation utility that you can use to install any
product contained on the VERITAS Storage Solutions CD-ROM. This utility acts
as a wrapper for existing product installation scripts and is most useful when you
are installing multiple VERITAS products or bundles, such as VERITAS Storage
Foundation or VERITAS Storage Foundation for Databases.
Note: The example on the slide is from a Solaris platform. Some of the products
shown on the menu may not be available on other platforms. For example,
VERITAS File System is available only as part of Storage Foundation on HP-UX.
Note: The VERITAS Storage Solutions CD-ROM contains an installation guide
that describes how to use the installer utility. You should also read all product
installation guides and release notes even if you are using the installer utility.
To add the Storage Foundation packages using the installer utility:
1 Log on as superuser.
2 Mount the VERITAS Storage Solutions CD-ROM.
3 Locate and invoke the installer script:
cd /cdrom/CD_name
./installer
4 If the licensing utilities are installed, the product status page is displayed. This
list displays the VERITAS products on the CD-ROM and the installation and
licensing status of each product. If the licensing utilities are not installed, you
receive a message indicating that the installation utility could not determine
product status.
2-10 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
5 Type I to install a product. Follow the instructions to select the product that
you want to install. Installation begins automatically.
When you add Storage Foundation packages by using the installer utility, all
packages are installed. If you want to add a specific package only, for example,
only the VRTSvrdoc package, then you must add the package manually from the
command line.
After installation, the installer creates three text files that can be used for auditing
or debugging. The names and locations of each file are displayed at the end of the
installation and are located in /opt/VRTS/install/logs:
File: Description
Installation log file: Contains all commands executed during installation, their
output, and any errors generated by the commands; used for debugging
installation problems and for analysis by VERITAS Support
Response file: Contains configuration information entered during the procedure;
can be used for future installation procedures when using the installer script
with the -responsefile option
Summary file: Contains the output of VERITAS product installation scripts;
shows products that were installed, locations of log and response files, and
installation messages displayed
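The response file can be fed back to the installer to repeat an installation non-interactively. A minimal sketch (the log directory is from the text; the response file name is hypothetical, and the command is only assembled and printed here rather than run):

```shell
# Sketch: reusing a saved response file with the installer script.
# The file name below is hypothetical; use the name reported at the
# end of your own installation.
LOGDIR=/opt/VRTS/install/logs
RESPONSE="$LOGDIR/installer-0615.response"

# The command you would run on a host with the CD mounted:
CMD="./installer -responsefile $RESPONSE"
echo "$CMD"
```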
Methods for Adding Storage Foundation Packages
A first-time installation of Storage Foundation involves adding the software
packages and configuring Storage Foundation for first-time use. You can add
VERITAS product packages by using one of three methods:
Method: Command: Notes
VERITAS Installation Menu
  Command: installer
  Notes: Installs multiple VERITAS products interactively. Installs packages
  and configures Storage Foundation for first-time use.
Product installation scripts
  Commands: installvm, installfs, installsf
  Notes: Install individual VERITAS products interactively. installsf installs
  packages and configures Storage Foundation for first-time use.
Native operating system package installation commands
  Commands: pkgadd (Solaris), swinstall (HP-UX), installp (AIX), rpm (Linux);
  then, to configure SF:
  vxinstall
  Notes: Install individual packages, for example, when using your own custom
  installation scripts. First-time Storage Foundation configuration must be run
  as a separate step.
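The native-package method can be sketched for Solaris as follows (package names are taken from this course; the commands are only assembled and printed, since adding packages requires superuser access on a real host):

```shell
# Sketch: manual package add on Solaris followed by first-time
# configuration as a separate step. Commands are collected into a
# string and echoed, not executed.
PKGS="VRTSvlic VRTSvxvm VRTSvxfs"
CMDS=""
for p in $PKGS; do
    CMDS="$CMDS pkgadd -d . $p;"
done
# vxinstall performs first-time Storage Foundation configuration
CMDS="$CMDS vxinstall"
echo "$CMDS"
```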
Lesson 2 Installation and Interfaces 2-11
Default Disk Group
• You can set up a system-
wide default disk group to
which Storage Foundation
commands default if you do
not specify a disk group.
• If you choose not to set a
default disk group at
installation, you can set the
default disk group later from
the command line.
Note: In Storage Foundation
4.0 and later, the rootdg
requirement no longer exists.
Configuring Storage Foundation
Enclosure-Based Naming
Host
c1  c2
Disk Enclosures: enc0, enc1, enc2
• Standard device naming is based on
controllers, for example, c1t0d0s2.
• Enclosure-based naming is based on
disk enclosures, for example, enc0.
Configuring Storage Foundation
When you install Storage Foundation, you are asked if you want to configure it
during installation. This includes deciding whether to use enclosure-based naming
and a default disk group.
What Is Enclosure-Based Naming?
An enclosure, or disk enclosure, is an intelligent disk array, which permits hot-
swapping of disks. With Storage Foundation, disk devices can be named for
enclosures rather than for the controllers through which they are accessed as with
standard disk device naming (for example, c0t0d0 or hdisk2).
Enclosure-based naming allows Storage Foundation to access enclosures as
separate physical entities. By configuring redundant copies of your data on
separate enclosures, you can safeguard against failure of one or more enclosures.
This is especially useful in a storage area network (SAN) that uses Fibre Channel
hubs or fabric switches and when managing the dynamic multipathing (DMP)
feature of Storage Foundation. For example, if two paths (c1t99d0 and
c2t99d0) exist to a single disk in an enclosure, VxVM can use a single DMP
metanode, such as enc0_0, to access the disk.
What Is a Default Disk Group?
The main benefit of creating a default disk group is that Storage Foundation
commands default to that disk group if you do not specify a disk group on the
command line. defaultdg specifies the default disk group and is an alias for the
disk group name that should be assumed if a disk group is not specified in a
command.
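As a sketch, setting and then relying on the default disk group might look like this ("datadg" is a hypothetical group name; the commands are printed rather than executed, since they require a live VxVM host):

```shell
# Sketch: setting the system-wide default disk group with vxdctl and
# then omitting -g from subsequent commands. Names are hypothetical.
DG=datadg
echo "vxdctl defaultdg $DG"             # set the default disk group
echo "vxassist make datavol 1g"         # no -g needed; defaults to $DG
echo "vxassist -g otherdg make vol01 1g" # -g still overrides the default
```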
2-12 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
Storage Foundation Management Server
Storage Foundation 5.0 provides central
management capability by introducing a
Storage Foundation Management Server (SFMS).
With SF 5.0, it is possible to configure an SF host as
a managed host or as a standalone host during
installation.
A Management Server and Authentication Broker
must have previously been set up if a managed
host is required during installation.
To configure a server as a standalone host during
installation, you need to answer "n" when asked if
you want to enable SFMS Management.
You can change a standalone host to a managed
host at a later time.
Note: This course does not cover SFMS and managed hosts.
Storage Foundation Management Server
Storage Foundation 5.0 provides central management capability by introducing a
Storage Foundation Management Server (SFMS). For more information, refer to
the Storage Foundation Management Server Administrator's Guide.
Lesson 2 Installation and Interfaces
2-13
Verifying Package Installation
To verify package installation, use OS-specific
commands:
• Solaris:
pkginfo -l VRTSvxvm
• HP-UX:
swlist -l product VRTSvxvm
• AIX:
lslpp -l VRTSvxvm
• Linux:
rpm -qa VRTSvxvm
Verifying Package Installation
If you are not sure whether VERITAS packages are installed, or if you want to
verify which packages are installed on the system, you can view information about
installed packages by using OS-specific commands to list package information.
Solaris
To list all installed packages on the system:
pkginfo
To restrict the list to installed VERITAS packages:
pkginfo | grep VRTS
To display detailed information about a package:
pkginfo -l VRTSvxvm
HP-UX
To list all installed packages on the system:
swlist -l product
To restrict the list to installed VERITAS packages:
swlist -l product | grep VRTS
To display detailed information about a package:
swlist -l product VRTSvxvm
2-14
VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
AIX
To list all installed packages on the system:
lslpp -l
To restrict the list to installed VERITAS packages, type:
lslpp -l 'VRTS*'
To verify that a particular fileset has been installed, use its name, for example:
lslpp -l VRTSvxvm
Linux
To verify package installation on the system:
rpm -qa | grep VRTS
To verify a specific package installation on the system:
rpm -q[i] package_name
For example, to verify that the VRTSvxvm package is installed:
rpm -q VRTSvxvm
The -i option lists detailed information about the package.
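The four platform variants can be collapsed into one script that picks the right verification command from uname(1). A sketch (only the command string is built; on a live system you would execute it):

```shell
# Sketch: choose the package-verification command for the current
# platform. The string is echoed, not executed, so it runs anywhere.
case "$(uname -s)" in
    SunOS) CHECK="pkginfo -l VRTSvxvm" ;;
    HP-UX) CHECK="swlist -l product VRTSvxvm" ;;
    AIX)   CHECK="lslpp -l VRTSvxvm" ;;
    Linux) CHECK="rpm -q VRTSvxvm" ;;
    *)     CHECK="unsupported platform" ;;
esac
echo "$CHECK"
```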
Lesson 2 Installation and Interfaces 2-15
Storage Foundation User Interfaces
Storage Foundation supports three user interfaces:
• VERITAS Enterprise Administrator (VEA):
A GUI that provides access through icons, menus,
wizards, and dialog boxes
Note: This course only covers using VEA on a standalone
host.
• Command-Line Interface (CLI): UNIX utilities that
you invoke from the command line
• Volume Manager Support Operations
(vxdiskadm): A menu-driven, text-based interface
also invoked from the command line
Note: vxdiskadm only provides access to certain disk and
disk group management functions.
Storage Foundation User Interfaces
Storage Foundation supports three user interfaces. Volume Manager objects
created by one interface are compatible with those created by the other interfaces.
VERITAS Enterprise Administrator (VEA): VERITAS Enterprise
Administrator (VEA) is a graphical user interface to Volume Manager and
other VERITAS products. VEA provides access to Storage Foundation
functionality through visual elements, such as icons, menus, wizards, and
dialog boxes. Using VEA, you can manipulate Volume Manager objects and
also perform common file system operations.
Command-Line Interface (CLI): The command-line interface (CLI) consists
of UNIX utilities that you invoke from the command line to perform Storage
Foundation and standard UNIX tasks. You can use the CLI not only to
manipulate Volume Manager objects, but also to perform scripting and
debugging functions. Most of the CLI commands require superuser or other
appropriate privileges. The CLI commands perform functions that range from
the simple to the complex, and some require detailed user input.
Volume Manager Support Operations (vxdiskadm): The Volume
Manager Support Operations interface, commonly called vxdiskadm, is a
menu-driven, text-based interface that you can use for disk and disk group
administration functions. The vxdiskadm interface has a main menu from
which you can select storage management tasks.
A single VEA task may perform multiple command-line tasks.
2-16
VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
Menu Bar
VEA: Main Window
Quick
Access
Bar
; Toolbar
Three ways to
access tasks:
1. Menu bar
2. Toolbar
3. Context menu
(right-click)
Using the VEA Interface
The VERITAS Enterprise Administrator (VEA) is the graphical user interface for
Storage Foundation and other VERITAS products. You can use the Storage
Foundation features of VEA to administer disks, volumes, and file systems on
local or remote machines.
VEA is a Java-based interface that consists of a server and a client. You must
install the VEA server on a UNIX machine that is running VERITAS Volume
Manager. The VEA client can run on any machine that supports the Java 1.4
Runtime Environment, which can be Solaris, HP-UX, AIX, Linux, or Windows.
Some Storage Foundation features of VEA include:
Remote Administration
Security
Multiple Host Support
Multiple Views of Objects
Setting VEA Preferences
You can customize general VEA environment attributes through the Preferences
window (select Tools > Preferences).
Lesson 2 Installation and Interfaces 2-17
VEA: Viewing Tasks and Commands
To view underlying command
lines, double-click a task.
Viewing Commands Through the Task Log
The Task Log displays a history of the tasks performed in the current session. Each
task is listed with properties, such as the target object of the task, the host, the start
time, the task status, and task progress.
Displaying the Task Log window: To display the Task Log, click the Logs
tab at the left of the main window.
Clearing the Task History: Tasks are persistent in the Task History window.
To remove completed tasks from the window, right-click a task and select
Clear All Finished Tasks.
Viewing CLI Commands: To view the command lines executed for a task,
double-click a task. The Task Log Details window is displayed for the task.
The CLI commands issued are displayed in the Commands Executed field of
the Task Details section.
2-18 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
Command-Line Interface
You can administer CLI commands from the UNIX shell
prompt.
Commands can be executed individually or combined
into scripts.
Most commands are located in /usr/sbin. Add this
directory to your PATH environment variable to
access the commands.
Examples of CLI commands include:
vxassist: Creates and manages volumes
vxprint: Lists VxVM configuration records
vxdg: Creates and manages disk groups
vxdisk: Administers disks under VxVM control
Using the Command-Line Interface
The Storage Foundation command-line interface (CLI) provides commands used
for administering Storage Foundation from the shell prompt on a UNIX system.
CLI commands can be executed individually for specific tasks or combined into
scripts.
The Storage Foundation command set ranges from commands requiring minimal
user input to commands requiring detailed user input. Many of the Storage
Foundation commands require an understanding of Storage Foundation concepts.
Most Storage Foundation commands require superuser or other appropriate access
privileges.
CLI commands are detailed in manual pages.
Accessing Manual Pages for CLI Commands
Detailed descriptions of VxVM and VxFS commands, the options for each utility,
and details on how to use them are located in VxVM and VxFS manual pages.
Manual pages are installed by default in /opt/VRTS/man. Add this directory to
the MANPATH environment variable, if it is not already added.
To access a manual page, type man command_name.
Examples:
man vxassist
man mount_vxfs
Linux Note
On Linux, you must also set the MANSECT and MANPATH variables.
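A sketch of the corresponding shell setup (for example, in ~/.profile), using the directories named above:

```shell
# Sketch: make Storage Foundation commands and manual pages reachable
# from a login shell. Directories are the defaults named in the text.
PATH=$PATH:/usr/sbin:/opt/VRTS/bin
# Append /opt/VRTS/man, handling an initially unset MANPATH
MANPATH=${MANPATH:+$MANPATH:}/opt/VRTS/man
export PATH MANPATH
echo "$MANPATH"
```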
Lesson 2 Installation and Interfaces
2-19
The vxdi skadm Interface
vxdiskadm
Volume Manager Support Operations
Menu: VolumeManager/Disk
1 Add or initialize one or more disks
2 Encapsulate one or more disks
3 Remove a disk
4 Remove a disk for replacement
5 Replace a failed or removed disk
list List disk information
? Display help about menu
?? Display help about the menuing system
q Exit from menus
Note: This example is from a Solaris platform. The options may be
slightly different on other platforms.
Using the vxdiskadm Interface
The vxdiskadm command is a CLI command that you can use to launch the
Volume Manager Support Operations menu interface. You can use the Volume
Manager Support Operations interface, commonly referred to as vxdiskadm, to
perform common disk management tasks. The vxdiskadm interface is restricted
to managing disk objects and does not provide a means of handling all other
VxVM objects.
Each option in the vxdiskadm interface invokes a sequence of CLI commands.
The vxdiskadm interface presents disk management tasks to the user as a series
of questions, or prompts.
To start vxdiskadm, you type vxdiskadm at the command line to display the
main menu.
The vxdiskadm main menu contains a selection of main tasks that you can use to
manipulate Volume Manager objects. Each entry in the main menu leads you
through a particular task by providing you with information and prompts. Default
answers are provided for many questions, so you can select common answers.
The menu also contains options for listing disk information, displaying help
information, and quitting the menu interface.
The tasks listed in the main menu are covered throughout this training. Options
available in the menu differ somewhat by platform. See the vxdiskadm(1M)
manual page for more details on how to use vxdiskadm.
Note: vxdiskadm can be run only once per host. A lock file prevents multiple
instances from running: /var/spool/locks/.DISKADD.LOCK.
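A sketch of checking for that lock file before starting another session (path as given above for Solaris; other platforms may use a different lock directory):

```shell
# Sketch: test whether a vxdiskadm session already holds the lock.
LOCK=/var/spool/locks/.DISKADD.LOCK
if [ -f "$LOCK" ]; then
    STATUS="vxdiskadm already running (lock file present)"
else
    STATUS="no vxdiskadm instance running"
fi
echo "$STATUS"
```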
2-20
VERITAS Storage Foundation 5.0 for UNIX. Fundamentals
Installing VEA
Installation
administration
file (Solaris only):
VRTSobadmin
Windows
Client packages:
• VRTSobgui, VRTSat, VRTSpbx,
VRTSicsco (UNIX)
Server packages:
• VRTSob
• VRTSobc33
• VRTSaa
VRTSccg
• VRTSdsa
• VRTSvail
• VRTSvmpro
• VRTSfspro
• VRTSddlpr (UNIX)
Install the VEA server on a UNIX
machine running Storage Foundation.
Install the VEA client on any machine
that supports the Java 1.4 Runtime
Environment (or later).
• windows/VRTSobgui.msi (Windows)
VEA is installed automatically when you run the SF installation
scripts. You can also install VEA by adding packages manually.
Managing the VEA Software
VEA consists of a server and a client. You must install the VEA server on a UNIX
machine that is running VERITAS Volume Manager. You can install the VEA
client on the same machine or on any other UNIX or Windows machine that
supports the Java 1.4 Runtime Environment (or later).
Installing the VEA Server and Client on UNIX
If you install Storage Foundation by using the installer utility, you are prompted to
install both the VEA server and client packages automatically. If you did not install
all of the components by using the installer, you can add the VEA packages
separately.
It is recommended that you upgrade VEA to the latest version released with
Storage Foundation in order to take advantage of new functionality built into VEA.
You can use VEA 4.1 and later to manage 3.5.2 and later releases.
When adding packages manually, you must install the Volume Manager
(VRTSvlic, VRTSvxvm) and the infrastructure packages (VRTSat, VRTSpbx,
VRTSicsco) before installing the VEA server packages. After installation, also
add the VEA startup scripts directory, /opt/VRTSob/bin, to the PATH
environment variable.
Lesson 2 Installation and Interfaces 2-21
Starting the VEA Server and Client
Once installed, the VEA server starts up automatically
at system startup.
To start the VEA server manually:
1. Log on as superuser.
2. Start the VEA server by invoking the server program:
/opt/VRTSob/bin/vxsvc (on Solaris and HP-UX)
/opt/VRTSob/bin/vxsvcctrl (on Linux)
When the VEA server is started:
/var/vx/isis/vxisis.lock ensures that only one instance
of the VEA server is running.
/var/vx/isis/vxisis.log contains server process log
messages.
To start the VEA client:
On UNIX: /opt/VRTSob/bin/vea
On Windows: Select Start > Programs > VERITAS >
VERITAS Enterprise Administrator.
Starting the VEA Server
In order to use VEA, the VEA server must be running on the UNIX machine to be
administered. Only one instance of the VEA server should be running at a time.
Once installed, the VEA server starts up automatically at system startup. You can
start the VEA server manually by invoking vxsvc (on Solaris and HP-UX),
vxsvcctrl (on Linux), or by invoking the startup script itself, for example:
Solaris
/etc/rc2.d/S73isisd start
HP-UX
/sbin/rc2.d/S700isisd start
The VEA client can provide simultaneous access to multiple host machines. Each
host machine must be running the VEA server.
Note: Entries for your user name and password must exist in the password file or
corresponding Network Information Name Service table on the machine to be
administered. Your user name must also be included in the VERITAS
administration group (vrtsadm, by default) in the group file or NIS group table.
If the vrtsadm entry does not exist, only root can run VEA.
You can configure VEA to connect automatically to hosts when you start the VEA
client. In the VEA main window, the Favorite Hosts node can contain a list of
hosts that are reconnected by default at the startup of the VEA client.
2-22
VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
Managing VEA
The VEA server program is:
/opt/VRTSob/bin/vxsvc (Solaris and HP-UX)
/opt/VRTSob/bin/vxsvcctrl (Linux)
To confirm that the VEA server is running:
vxsvc -m (Solaris and HP-UX)
vxsvcctrl status (Linux)
To stop and restart the VEA server:
/etc/init.d/isisd restart (Solaris)
/sbin/init.d/isisd restart (HP-UX)
To kill the VEA server process:
vxsvc -k (Solaris and HP-UX)
vxsvcctrl stop (Linux)
To display the VEA version number:
vxsvc -v (Solaris and HP-UX)
vxsvcctrl version (Linux)
Managing the VEA Server
Monitoring VEA Event and Task Logs
You can monitor VEA server events and tasks from the Event Log and Task Log
nodes in the VEA object tree. You can also view the VEA log file, which is located
at /var/vx/isis/vxisis.log. This file contains trace messages for the VEA
server and VEA service providers.
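The platform-specific commands above can be gathered into one helper that selects the status and restart invocations from uname(1). A sketch (the Linux restart sequence via vxsvcctrl stop/start is an assumption; only command strings are printed here):

```shell
# Sketch: pick VEA server management commands per platform.
# Strings are echoed, not executed.
case "$(uname -s)" in
    SunOS) STATUS="vxsvc -m"; RESTART="/etc/init.d/isisd restart" ;;
    HP-UX) STATUS="vxsvc -m"; RESTART="/sbin/init.d/isisd restart" ;;
    Linux) STATUS="vxsvcctrl status"
           # assumption: stop followed by start restarts the server
           RESTART="vxsvcctrl stop && vxsvcctrl start" ;;
    *)     STATUS="unknown"; RESTART="unknown" ;;
esac
echo "status:  $STATUS"
echo "restart: $RESTART"
```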
Lesson 2 Installation and Interfaces 2-23
Labs and solutions for this lesson are located on the following pages:
Appendix A provides complete lab instructions, "Lab 2: Installation and
Interfaces."
Appendix B provides complete lab instructions and solutions, "Lab 2 Solutions:
Installation and Interfaces."
Lesson Summary
• Key Points
In this lesson, you learned guidelines for a first-
time installation of VERITAS Storage Foundation,
as well as an introduction to the three interfaces
used to manage VERITAS Storage Foundation.
• Reference Materials
- VERITAS Volume Manager Administrator's Guide
- VERITAS Storage Foundation Installation Guide
- VERITAS Storage Foundation Release Notes
- Storage Foundation Management Server
Administrator's Guide
2-24
Lab 2
Lab 2: Installation and Interfaces
In this lab, you install VERITAS Storage
Foundation 5.0 on your lab system. You also
explore the Storage Foundation user interfaces,
including the VERITAS Enterprise Administrator
interface, the vxdiskadm menu interface, and the
command-line interface.
For Lab Exercises, see Appendix A.
For Lab Solutions, see Appendix B.
VERITAS Storage Foundation 5,0 for UNIX: Fundamentals
Lesson 3
Creating a Volume and File System
Lesson Introduction
• Lesson 1: Virtual Objects
• Lesson 2: Installation and Interfaces
• Lesson 3: Creating a Volume and File
System
• Lesson 4: Selecting Volume Layouts
• Lesson 5: Making Basic Configuration
Changes
• Lesson 6: Administering File Systems
• Lesson 7: Resolving Hardware Problems
Lesson Topics and Objectives
Topic: After completing this lesson, you will be able to:
Topic 1: Preparing Disks and Disk Groups for Volume Creation
  Initialize an OS disk as a VxVM disk and create a disk group by using VEA
  and command-line utilities.
Topic 2: Creating a Volume
  Create a concatenated volume by using VEA and from the command line.
Topic 3: Adding a File System to a Volume
  Add a file system to and mount an existing volume.
Topic 4: Displaying Volume Configuration Information
  Display volume layout information by using VEA and by using the vxprint
  command.
Topic 5: Displaying Disk and Disk Group Information
  View disk and disk group information and identify disk status.
Topic 6: Removing Volumes, Disks, and Disk Groups
  Remove a volume, evacuate a disk, remove a disk from a disk group, and
  destroy a disk group.
3-2
VERITAS Storage Foundation 5,0 for UNIX: Fundamentals
Selecting a Disk Naming Scheme
Types of naming schemes:
• Traditional device naming: OS-dependent and based on
physical connectivity information
• Enclosure-based naming: OS-independent, based on the
logical name of the enclosure, and customizable
You can select a naming scheme:
• When you run Storage Foundation installation scripts
• Using vxdiskadm, "Change the disk naming scheme"
Enclosure-based named disks are displayed in three
categories:
Enclosures: enclosurename_#
Disks: Disk_#
Others: Disks that do not return a path-independent identifier
to VxVM are displayed in the traditional OS-based format.
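Besides the installation scripts and vxdiskadm, the naming scheme is also exposed on the command line through vxddladm; treat the exact syntax below as an assumption to verify against your platform's vxddladm(1M) manual page:

```shell
# Sketch: switching the disk naming scheme from the command line.
# The vxddladm syntax is an assumption; the command is only printed.
SCHEME=ebn    # ebn = enclosure-based naming; osn = OS-native naming
echo "vxddladm set namingscheme=$SCHEME"
```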
Preparing Disks and Disk Groups for Volume Creation
Here are some examples of naming schemes:
Naming Scheme: Example
Traditional:
  Solaris: /dev/[r]dsk/c1t9d0s2
  HP-UX: /dev/[r]dsk/c3t2d0 (no slice)
  AIX: /dev/hdisk2
  Linux: /dev/sda, /dev/hda
Enclosure-based: sena0_1, sena0_2, sena0_3
Enclosure-based, customized: englab2, hr1, boston3
Benefits of enclosure-based naming include:
Easier fault isolation: Storage Foundation can more effectively place data and
metadata to ensure data availability.
Device-name independence: Storage Foundation is independent of arbitrary
device names used by third-party drivers.
Improved SAN management: Storage Foundation can create better location
identification information about disks in large disk farms and SANs.
Improved cluster management: In a cluster environment, disk array names
on all hosts in a cluster can be the same.
Improved dynamic multipathing (DMP) management: With multipathed
disks, the name of a disk is independent of the physical communication paths,
avoiding confusion and conflict.
Lesson 3 Creating a Volume and File System 3-3
Stage 1:
Initialize disk.
Uninitialized
Disk
Stage 2:
Assign disk
to disk group.
Before Configuring a Disk for Use by VxVM
In order to use the space of a physical disk to build VxVM volumes, you must
place the disk under Volume Manager control. Before a disk can be placed under
Volume Manager control, the disk media must be formatted outside of VxVM
using standard operating system formatting methods. SCSI disks are usually
preformatted. After a disk is formatted, the disk can be initialized for use by
Volume Manager. In other words, disks must be detected by the operating system
before VxVM can detect them.
Stage One: Initialize a Disk
A formatted physical disk is considered uninitialized until it is initialized for use
by VxVM. When a disk is initialized, the public and private regions are created,
and VM disk header information is written to the private region. Any data or
partitions that may have existed on the disk are removed.
These disks are under Volume Manager control but cannot be used by Volume
Manager until they are added to a disk group.
Note: Encapsulation is another method of placing a disk under VxVM control in
which existing data on the disk is preserved. This method is covered in a later
lesson.
Changing the Disk Layout
To display or change the default values that are used for initializing disks, select
the "Change/display the default disk layouts" option in vxdiskadm:
3-4 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
COPYright ,~, 2006 Svmantec Cornorauon All fights reserved
For disk initialization, you can change the default format and the default length
of the private region. If the attribute settings for initializing disks are stored in
the user-created file, /etc/default/vxdisk, they apply to all disks to be
initialized.
On Solaris, for disk encapsulation, you can additionally change the offset
values for both the private and public regions. To make encapsulation
parameters different from the default VxVM values, create the user-defined
/etc/default/vxencap file and place the parameters in this file.
On HP-UX, when converting LVM disks, you can change the default format
and the default private region length. The attribute settings are stored in the
/etc/default/vxencap file.
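A sketch of such a defaults file (the attribute names format and privlen are assumptions to check against the vxdisk(1M) manual page; the sketch writes to a temporary file rather than to /etc/default/vxdisk):

```shell
# Sketch: a defaults file changing the disk-initialization format and
# private region length. Attribute names are assumptions; written to
# a temp file here so the sketch has no system-wide effect.
TMPFILE=$(mktemp)
cat > "$TMPFILE" <<'EOF'
format=sliced
privlen=2048
EOF
LINES=$(wc -l < "$TMPFILE")
echo "wrote $LINES settings"
rm -f "$TMPFILE"
```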
Stage Two: Assign a Disk to a Disk Group
When you add a disk to a disk group. VxVM assigns a disk media name to the disk
and maps this name to the disk access name.
Disk media name: A disk media name is the logical disk name assigned to a
drive by VxVM. VxVM uses this name to identify the disk for volume
operations, such as volume creation and mirroring.
Disk access name: A disk access name represents all UNIX paths to the
device. A disk access record maps the physical location to the logical name
and represents the link between the disk media name and the disk access name.
Disk access records are dynamic and can be re-created when vxdctl enable
is run.
The disk media name and disk access name, in addition to the host name, are
written to the private region of the disk. Space in the public region is made
available for assignment to volumes. Volume Manager has full control of the disk,
and the disk can be used to allocate space for volumes. Whenever the VxVM
configuration daemon is started (or vxdctl enable is run), the system reads the
private region on every disk and establishes the connections between disk access
names and disk media names.
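The mapping between the two names can be seen from the command line. In this sketch, the disk group name, disk media name, and device name are all examples:

```shell
# Add the initialized disk c1t1d0 to disk group datadg under the
# disk media name datadg01.
vxdg -g datadg adddisk datadg01=c1t1d0

# vxdisk list shows both names: the disk access name in the DEVICE
# column and the disk media name in the DISK column.
vxdisk -g datadg list

# Re-scan devices and rebuild disk access records, for example after
# a hardware change.
vxdctl enable
```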
After disks are placed under Volume Manager control, storage is managed in terms
of the logical configuration. File systems mount to logical volumes, not to physical
partitions. Logical names, such as
/dev/vx/[r]dsk/diskgroup/volume_name, replace physical locations,
such as /dev/[r]dsk/device_name.
The free space in a disk group refers to the space on all disks within the disk group
that has not been allocated as subdisks. When you place a disk into a disk group,
its space becomes part of the free space pool of the disk group.
Stage Three: Assign Disk Space to Volumes
When you create volumes, space in the public region of a disk is assigned to the
volumes. Some operations, such as removal of a disk from a disk group, are
restricted if space on a disk is in use by a volume.
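For example, creating a volume draws on the disk group's free space pool, after which disks carrying its subdisks cannot simply be removed. The volume, group, and disk names below are placeholders:

```shell
# Create a 1 GB volume named datavol in disk group datadg; VxVM
# allocates subdisks for it from the disk group's free space pool.
vxassist -g datadg make datavol 1g

# Removing a disk whose space is in use by a volume is restricted;
# this fails while subdisks of datavol remain on datadg01.
vxdg -g datadg rmdisk datadg01
```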
Lesson 3: Creating a Volume and File System 3-5
 
Storage Area Networks Unit 3 Notes
Storage Area Networks Unit 3 NotesStorage Area Networks Unit 3 Notes
Storage Area Networks Unit 3 Notes
 
Net App Unified Storage Architecture
Net App Unified Storage ArchitectureNet App Unified Storage Architecture
Net App Unified Storage Architecture
 
Net App Unified Storage Architecture
Net App Unified Storage ArchitectureNet App Unified Storage Architecture
Net App Unified Storage Architecture
 
Oracle Exec Summary 7000 Unified Storage
Oracle Exec Summary 7000 Unified StorageOracle Exec Summary 7000 Unified Storage
Oracle Exec Summary 7000 Unified Storage
 
Understanding the Windows Server Administration Fundamentals (Part-2)
Understanding the Windows Server Administration Fundamentals (Part-2)Understanding the Windows Server Administration Fundamentals (Part-2)
Understanding the Windows Server Administration Fundamentals (Part-2)
 
Survey of distributed storage system
Survey of distributed storage systemSurvey of distributed storage system
Survey of distributed storage system
 
Huawei Symantec Oceanspace VIS6000 Overview
Huawei Symantec Oceanspace VIS6000 OverviewHuawei Symantec Oceanspace VIS6000 Overview
Huawei Symantec Oceanspace VIS6000 Overview
 
Sample_Blueprint-Fault_Tolerant_NAS
Sample_Blueprint-Fault_Tolerant_NASSample_Blueprint-Fault_Tolerant_NAS
Sample_Blueprint-Fault_Tolerant_NAS
 
Emc data domain technical deep dive workshop
Emc data domain  technical deep dive workshopEmc data domain  technical deep dive workshop
Emc data domain technical deep dive workshop
 
vFabric Data Director 2.7 customer deck
vFabric Data Director 2.7 customer deckvFabric Data Director 2.7 customer deck
vFabric Data Director 2.7 customer deck
 
Ibm spectrum virtualize 101
Ibm spectrum virtualize 101 Ibm spectrum virtualize 101
Ibm spectrum virtualize 101
 
Log-Structured File System (LSFS) as a weapon to fight “I/O Blender” virtuali...
Log-Structured File System (LSFS) as a weapon to fight “I/O Blender” virtuali...Log-Structured File System (LSFS) as a weapon to fight “I/O Blender” virtuali...
Log-Structured File System (LSFS) as a weapon to fight “I/O Blender” virtuali...
 
Cloud-Ready, Scale-Out Storage
Cloud-Ready, Scale-Out StorageCloud-Ready, Scale-Out Storage
Cloud-Ready, Scale-Out Storage
 
Comp8 unit9b lecture_slides
Comp8 unit9b lecture_slidesComp8 unit9b lecture_slides
Comp8 unit9b lecture_slides
 
Backing Up Mountains of Data to Disk
Backing Up Mountains of Data to DiskBacking Up Mountains of Data to Disk
Backing Up Mountains of Data to Disk
 
Emc vi pr controller customer presentation
Emc vi pr controller customer presentationEmc vi pr controller customer presentation
Emc vi pr controller customer presentation
 
Track 1 Virtualizing Critical Applications with VMWARE VISPHERE by Roshan Shetty
Track 1 Virtualizing Critical Applications with VMWARE VISPHERE by Roshan ShettyTrack 1 Virtualizing Critical Applications with VMWARE VISPHERE by Roshan Shetty
Track 1 Virtualizing Critical Applications with VMWARE VISPHERE by Roshan Shetty
 

Recently uploaded

The cost of acquiring information by natural selection
The cost of acquiring information by natural selectionThe cost of acquiring information by natural selection
The cost of acquiring information by natural selection
Carl Bergstrom
 
Direct Seeded Rice - Climate Smart Agriculture
Direct Seeded Rice - Climate Smart AgricultureDirect Seeded Rice - Climate Smart Agriculture
Direct Seeded Rice - Climate Smart Agriculture
International Food Policy Research Institute- South Asia Office
 
waterlessdyeingtechnolgyusing carbon dioxide chemicalspdf
waterlessdyeingtechnolgyusing carbon dioxide chemicalspdfwaterlessdyeingtechnolgyusing carbon dioxide chemicalspdf
waterlessdyeingtechnolgyusing carbon dioxide chemicalspdf
LengamoLAppostilic
 
SAR of Medicinal Chemistry 1st by dk.pdf
SAR of Medicinal Chemistry 1st by dk.pdfSAR of Medicinal Chemistry 1st by dk.pdf
SAR of Medicinal Chemistry 1st by dk.pdf
KrushnaDarade1
 
Compexometric titration/Chelatorphy titration/chelating titration
Compexometric titration/Chelatorphy titration/chelating titrationCompexometric titration/Chelatorphy titration/chelating titration
Compexometric titration/Chelatorphy titration/chelating titration
Vandana Devesh Sharma
 
Authoring a personal GPT for your research and practice: How we created the Q...
Authoring a personal GPT for your research and practice: How we created the Q...Authoring a personal GPT for your research and practice: How we created the Q...
Authoring a personal GPT for your research and practice: How we created the Q...
Leonel Morgado
 
在线办理(salfor毕业证书)索尔福德大学毕业证毕业完成信一模一样
在线办理(salfor毕业证书)索尔福德大学毕业证毕业完成信一模一样在线办理(salfor毕业证书)索尔福德大学毕业证毕业完成信一模一样
在线办理(salfor毕业证书)索尔福德大学毕业证毕业完成信一模一样
vluwdy49
 
(June 12, 2024) Webinar: Development of PET theranostics targeting the molecu...
(June 12, 2024) Webinar: Development of PET theranostics targeting the molecu...(June 12, 2024) Webinar: Development of PET theranostics targeting the molecu...
(June 12, 2024) Webinar: Development of PET theranostics targeting the molecu...
Scintica Instrumentation
 
23PH301 - Optics - Optical Lenses.pptx
23PH301 - Optics  -  Optical Lenses.pptx23PH301 - Optics  -  Optical Lenses.pptx
23PH301 - Optics - Optical Lenses.pptx
RDhivya6
 
GBSN - Biochemistry (Unit 6) Chemistry of Proteins
GBSN - Biochemistry (Unit 6) Chemistry of ProteinsGBSN - Biochemistry (Unit 6) Chemistry of Proteins
GBSN - Biochemistry (Unit 6) Chemistry of Proteins
Areesha Ahmad
 
THEMATIC APPERCEPTION TEST(TAT) cognitive abilities, creativity, and critic...
THEMATIC  APPERCEPTION  TEST(TAT) cognitive abilities, creativity, and critic...THEMATIC  APPERCEPTION  TEST(TAT) cognitive abilities, creativity, and critic...
THEMATIC APPERCEPTION TEST(TAT) cognitive abilities, creativity, and critic...
Abdul Wali Khan University Mardan,kP,Pakistan
 
molar-distalization in orthodontics-seminar.pptx
molar-distalization in orthodontics-seminar.pptxmolar-distalization in orthodontics-seminar.pptx
molar-distalization in orthodontics-seminar.pptx
Anagha Prasad
 
ESR spectroscopy in liquid food and beverages.pptx
ESR spectroscopy in liquid food and beverages.pptxESR spectroscopy in liquid food and beverages.pptx
ESR spectroscopy in liquid food and beverages.pptx
PRIYANKA PATEL
 
Sciences of Europe journal No 142 (2024)
Sciences of Europe journal No 142 (2024)Sciences of Europe journal No 142 (2024)
Sciences of Europe journal No 142 (2024)
Sciences of Europe
 
Randomised Optimisation Algorithms in DAPHNE
Randomised Optimisation Algorithms in DAPHNERandomised Optimisation Algorithms in DAPHNE
Randomised Optimisation Algorithms in DAPHNE
University of Maribor
 
HOW DO ORGANISMS REPRODUCE?reproduction part 1
HOW DO ORGANISMS REPRODUCE?reproduction part 1HOW DO ORGANISMS REPRODUCE?reproduction part 1
HOW DO ORGANISMS REPRODUCE?reproduction part 1
Shashank Shekhar Pandey
 
Mending Clothing to Support Sustainable Fashion_CIMaR 2024.pdf
Mending Clothing to Support Sustainable Fashion_CIMaR 2024.pdfMending Clothing to Support Sustainable Fashion_CIMaR 2024.pdf
Mending Clothing to Support Sustainable Fashion_CIMaR 2024.pdf
Selcen Ozturkcan
 
8.Isolation of pure cultures and preservation of cultures.pdf
8.Isolation of pure cultures and preservation of cultures.pdf8.Isolation of pure cultures and preservation of cultures.pdf
8.Isolation of pure cultures and preservation of cultures.pdf
by6843629
 
Immersive Learning That Works: Research Grounding and Paths Forward
Immersive Learning That Works: Research Grounding and Paths ForwardImmersive Learning That Works: Research Grounding and Paths Forward
Immersive Learning That Works: Research Grounding and Paths Forward
Leonel Morgado
 
The debris of the ‘last major merger’ is dynamically young
The debris of the ‘last major merger’ is dynamically youngThe debris of the ‘last major merger’ is dynamically young
The debris of the ‘last major merger’ is dynamically young
Sérgio Sacani
 

Recently uploaded (20)

The cost of acquiring information by natural selection
The cost of acquiring information by natural selectionThe cost of acquiring information by natural selection
The cost of acquiring information by natural selection
 
Direct Seeded Rice - Climate Smart Agriculture
Direct Seeded Rice - Climate Smart AgricultureDirect Seeded Rice - Climate Smart Agriculture
Direct Seeded Rice - Climate Smart Agriculture
 
waterlessdyeingtechnolgyusing carbon dioxide chemicalspdf
waterlessdyeingtechnolgyusing carbon dioxide chemicalspdfwaterlessdyeingtechnolgyusing carbon dioxide chemicalspdf
waterlessdyeingtechnolgyusing carbon dioxide chemicalspdf
 
SAR of Medicinal Chemistry 1st by dk.pdf
SAR of Medicinal Chemistry 1st by dk.pdfSAR of Medicinal Chemistry 1st by dk.pdf
SAR of Medicinal Chemistry 1st by dk.pdf
 
Compexometric titration/Chelatorphy titration/chelating titration
Compexometric titration/Chelatorphy titration/chelating titrationCompexometric titration/Chelatorphy titration/chelating titration
Compexometric titration/Chelatorphy titration/chelating titration
 
Authoring a personal GPT for your research and practice: How we created the Q...
Authoring a personal GPT for your research and practice: How we created the Q...Authoring a personal GPT for your research and practice: How we created the Q...
Authoring a personal GPT for your research and practice: How we created the Q...
 
在线办理(salfor毕业证书)索尔福德大学毕业证毕业完成信一模一样
在线办理(salfor毕业证书)索尔福德大学毕业证毕业完成信一模一样在线办理(salfor毕业证书)索尔福德大学毕业证毕业完成信一模一样
在线办理(salfor毕业证书)索尔福德大学毕业证毕业完成信一模一样
 
(June 12, 2024) Webinar: Development of PET theranostics targeting the molecu...
(June 12, 2024) Webinar: Development of PET theranostics targeting the molecu...(June 12, 2024) Webinar: Development of PET theranostics targeting the molecu...
(June 12, 2024) Webinar: Development of PET theranostics targeting the molecu...
 
23PH301 - Optics - Optical Lenses.pptx
23PH301 - Optics  -  Optical Lenses.pptx23PH301 - Optics  -  Optical Lenses.pptx
23PH301 - Optics - Optical Lenses.pptx
 
GBSN - Biochemistry (Unit 6) Chemistry of Proteins
GBSN - Biochemistry (Unit 6) Chemistry of ProteinsGBSN - Biochemistry (Unit 6) Chemistry of Proteins
GBSN - Biochemistry (Unit 6) Chemistry of Proteins
 
THEMATIC APPERCEPTION TEST(TAT) cognitive abilities, creativity, and critic...
THEMATIC  APPERCEPTION  TEST(TAT) cognitive abilities, creativity, and critic...THEMATIC  APPERCEPTION  TEST(TAT) cognitive abilities, creativity, and critic...
THEMATIC APPERCEPTION TEST(TAT) cognitive abilities, creativity, and critic...
 
molar-distalization in orthodontics-seminar.pptx
molar-distalization in orthodontics-seminar.pptxmolar-distalization in orthodontics-seminar.pptx
molar-distalization in orthodontics-seminar.pptx
 
ESR spectroscopy in liquid food and beverages.pptx
ESR spectroscopy in liquid food and beverages.pptxESR spectroscopy in liquid food and beverages.pptx
ESR spectroscopy in liquid food and beverages.pptx
 
Sciences of Europe journal No 142 (2024)
Sciences of Europe journal No 142 (2024)Sciences of Europe journal No 142 (2024)
Sciences of Europe journal No 142 (2024)
 
Randomised Optimisation Algorithms in DAPHNE
Randomised Optimisation Algorithms in DAPHNERandomised Optimisation Algorithms in DAPHNE
Randomised Optimisation Algorithms in DAPHNE
 
HOW DO ORGANISMS REPRODUCE?reproduction part 1
HOW DO ORGANISMS REPRODUCE?reproduction part 1HOW DO ORGANISMS REPRODUCE?reproduction part 1
HOW DO ORGANISMS REPRODUCE?reproduction part 1
 
Mending Clothing to Support Sustainable Fashion_CIMaR 2024.pdf
Mending Clothing to Support Sustainable Fashion_CIMaR 2024.pdfMending Clothing to Support Sustainable Fashion_CIMaR 2024.pdf
Mending Clothing to Support Sustainable Fashion_CIMaR 2024.pdf
 
8.Isolation of pure cultures and preservation of cultures.pdf
8.Isolation of pure cultures and preservation of cultures.pdf8.Isolation of pure cultures and preservation of cultures.pdf
8.Isolation of pure cultures and preservation of cultures.pdf
 
Immersive Learning That Works: Research Grounding and Paths Forward
Immersive Learning That Works: Research Grounding and Paths ForwardImmersive Learning That Works: Research Grounding and Paths Forward
Immersive Learning That Works: Research Grounding and Paths Forward
 
The debris of the ‘last major merger’ is dynamically young
The debris of the ‘last major merger’ is dynamically youngThe debris of the ‘last major merger’ is dynamically young
The debris of the ‘last major merger’ is dynamically young
 

VERITAS Storage Foundation 5.0 for UNIX: Fundamentals

  • 1. Symantec. VERITAS Storage Foundation 5.0 for UNIX: Fundamentals. 100-002353-A
  • 2. COURSE DEVELOPERS
Gail Adey
Bilge Gerrits

TECHNICAL CONTRIBUTORS AND REVIEWERS
Jade Arrington, Margy Cassidy, Roy Freeman, Joe Gallagher, Bruce Garner, Tomer Gurantz, Bill Havey, Gene Henriksen, Gerald Jackson, Raymond Karns, Bill Lehman, Bob Lucas, Durivanc Manikhong, Christian Rahanus, Dan Rogers, Kleber Saldanha, Albrecht Scriba, Michel Simoni, Ananda Sirisena, Pete Tuemmes

Copyright © 2006 Symantec Corporation. All rights reserved. Symantec, the Symantec Logo, and VERITAS are trademarks or registered trademarks of Symantec Corporation or its affiliates in the U.S. and other countries. Other names may be trademarks of their respective owners.

THIS PUBLICATION IS PROVIDED "AS IS" AND ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT, ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO BE LEGALLY INVALID. SYMANTEC CORPORATION SHALL NOT BE LIABLE FOR INCIDENTAL OR CONSEQUENTIAL DAMAGES IN CONNECTION WITH THE FURNISHING, PERFORMANCE, OR USE OF THIS PUBLICATION. THE INFORMATION CONTAINED HEREIN IS SUBJECT TO CHANGE WITHOUT NOTICE.

No part of the contents of this book may be reproduced or transmitted in any form or by any means without the written permission of the publisher.

VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
Symantec Corporation
20330 Stevens Creek Blvd.
Cupertino, CA 95014
  • 3. Table of Contents
Course Introduction
  What Is Storage Virtualization? ............................ Intro-2
  Introducing VERITAS Storage Foundation ..................... Intro-6
  VERITAS Storage Foundation Curriculum ...................... Intro-11
Lesson 1: Virtual Objects
  Physical Data Storage ...................................... 1-3
  Virtual Data Storage ....................................... 1-10
  Volume Manager Storage Objects ............................. 1-13
  Volume Manager RAID Levels ................................. 1-15
Lesson 2: Installation and Interfaces
  Installation Prerequisites ................................. 2-3
  Adding License Keys ........................................ 2-5
  VERITAS Software Packages .................................. 2-7
  Installing Storage Foundation .............................. 2-10
  Storage Foundation User Interfaces ......................... 2-16
  Managing the VEA Software .................................. 2-21
Lesson 3: Creating a Volume and File System
  Preparing Disks and Disk Groups for Volume Creation ........ 3-3
  Creating a Volume .......................................... 3-12
  Adding a File System to a Volume ........................... 3-18
  Displaying Volume Configuration Information ................ 3-21
  Displaying Disk and Disk Group Information ................. 3-24
  Removing Volumes, Disks, and Disk Groups ................... 3-30
Lesson 4: Selecting Volume Layouts
  Comparing Volume Layouts ................................... 4-3
  Creating Volumes with Various Layouts ...................... 4-9
  Creating a Layered Volume .................................. 4-18
  Allocating Storage for Volumes ............................. 4-25
Lesson 5: Making Basic Configuration Changes
  Administering Mirrored Volumes ............................. 5-3
  Resizing a Volume .......................................... 5-10
  Moving Data Between Systems ................................ 5-16
  Renaming Disks and Disk Groups ............................. 5-21
  Managing Old Disk Group Versions ........................... 5-23
Copyright © 2006 Symantec Corporation. All rights reserved.
  • 4. Lesson 6: Administering File Systems
  Comparing the Allocation Policies of VxFS and Traditional File Systems ... 6-3
  Using VERITAS File System Commands ......................... 6-5
  Controlling File System Fragmentation ...................... 6-9
  Logging in VxFS ............................................ 6-15
Lesson 7: Resolving Hardware Problems
  How Does VxVM Interpret Failures in Hardware? .............. 7-3
  Recovering Disabled Disk Groups ............................ 7-8
  Resolving Disk Failures .................................... 7-12
  Managing Hot Relocation at the Host Level .................. 7-22
Appendix A: Lab Exercises
  Lab 1: Introducing the Lab Environment ..................... A-3
  Lab 2: Installation and Interfaces ......................... A-7
  Lab 3: Creating a Volume and File System ................... A-15
  Lab 4: Selecting Volume Layouts ............................ A-21
  Lab 5: Making Basic Configuration Changes .................. A-29
  Lab 6: Administering File Systems .......................... A-37
  Lab 7: Resolving Hardware Problems ......................... A-47
Appendix B: Lab Solutions
  Lab 1 Solutions: Introducing the Lab Environment ........... B-3
  Lab 2 Solutions: Installation and Interfaces ............... B-7
  Lab 3 Solutions: Creating a Volume and File System ......... B-21
  Lab 4 Solutions: Selecting Volume Layouts .................. B-33
  Lab 5 Solutions: Making Basic Configuration Changes ........ B-47
  Lab 6 Solutions: Administering File Systems ................ B-67
  Lab 7 Solutions: Resolving Hardware Problems ............... B-85
Glossary
Index
  • 6. Storage Management Issues
[Slide figure: three servers (a human resources database, an e-mail server, and a customer order database) at varying levels of storage utilization: 90%, 10%, and 50% full.]
  - Multiple-vendor hardware
  - Explosive data growth
  - Different application needs
  - Management pressure to increase efficiency
  - Multiple operating systems
  - Rapid change
  - Budgetary constraints
Problem: The customer order database cannot access unutilized storage.
Common solution: Add more storage.

What Is Storage Virtualization?
Storage Management Issues
Storage management is becoming increasingly complex due to:
  - Storage hardware from multiple vendors
  - Unprecedented data growth
  - Dissimilar applications with different storage resource needs
  - Management pressure to increase efficiency
  - Multiple operating systems
  - Rapidly changing business climates
  - Budgetary and cost-control constraints
To create a truly efficient environment, administrators must have the tools to skillfully manage large, complex, and heterogeneous environments. Storage virtualization helps businesses simplify the complex IT storage environment and gain control of capital and operating costs by providing consistent and automated management of storage.
  • 7. What Is Storage Virtualization?
Virtualization: The logical representation of physical storage across the entire enterprise.
[Slide figure: multiple consumers accessing physical storage resources through a virtualization layer.]

Application requirements from storage, versus the physical aspects of storage:
                     Capacity                  Performance         Availability
  Application        Application requirements; Throughput;         Failure resistance;
  requirements       growth potential          responsiveness      recovery time
  Physical aspects   Disk size;                Disk seek time;     MTBF;
                     number of disks/paths     cache hit rate      path redundancy

Defining Storage Virtualization
Storage virtualization is the process of taking multiple physical storage devices and combining them into logical (virtual) storage devices that are presented to the operating system, applications, and users. Storage virtualization builds a layer of abstraction above the physical storage so that data is not restricted to specific hardware devices, creating a flexible storage environment. Storage virtualization simplifies management of storage and potentially reduces cost through improved hardware utilization and consolidation.
With storage virtualization, the physical aspects of storage are masked to users. Administrators can concentrate less on the physical aspects of storage and more on delivering access to necessary data.
Benefits of storage virtualization include:
  - Greater IT productivity through the automation of manual tasks and simplified administration of heterogeneous environments
  - Increased application return on investment through improved throughput and increased uptime
  - Lower hardware costs through the optimized use of hardware resources
  • 8. Storage Virtualization: Types
[Slide figure: storage-based (one array presented to multiple servers), host-based (multiple arrays presented to one server), and network-based (multiple arrays presented to multiple servers) virtualization.]
Most companies use a combination of these three types of storage virtualization to support their chosen architectures and application requirements.

How Is Storage Virtualization Used in Your Environment?
The way in which you use storage virtualization, and the benefits derived from it, depend on the nature of your IT infrastructure and your specific application requirements. The three main types of storage virtualization used today are:
  - Storage-based
  - Host-based
  - Network-based
Most companies use a combination of these three types of storage virtualization solutions to support their chosen architecture and application needs. The type of storage virtualization that you use depends on factors such as the:
  - Heterogeneity of deployed enterprise storage arrays
  - Need for applications to access data contained in multiple storage devices
  - Importance of uptime when replacing or upgrading storage
  - Need for multiple hosts to access data within a single storage device
  - Value of the maturity of the technology
  - Investments in a SAN architecture
  - Level of security required
  - Level of scalability needed
  • 9. Storage-Based Storage Virtualization
Storage-based storage virtualization refers to disks within an individual array that are presented virtually to multiple servers. Storage is virtualized by the array itself. For example, RAID arrays virtualize the individual disks (contained within the array) into logical LUNs, which are accessed by host operating systems using the same method of addressing as a directly attached physical disk.
This type of storage virtualization is useful under these conditions:
  - You need to have data in an array accessible to servers of different operating systems.
  - All of a server's data needs are met by storage contained in the physical box.
  - You are not concerned about disruption to data access when replacing or upgrading the storage.
The main limitation of this type of storage virtualization is that data cannot be shared between arrays, creating islands of storage that must be managed.

Host-Based Storage Virtualization
Host-based storage virtualization refers to disks within multiple arrays and from multiple vendors that are presented virtually to a single host server. For example, software-based solutions, such as VERITAS Storage Foundation, provide host-based storage virtualization. Using VERITAS Storage Foundation to administer host-based storage virtualization is the focus of this training.
Host-based storage virtualization is useful under these conditions:
  - A server needs to access data stored in multiple storage devices.
  - You need the flexibility to access data stored in arrays from different vendors.
  - Additional servers do not need to access the data assigned to a particular host.
  - Maturity of technology is a highly important factor to you in making IT decisions.
Note: By combining VERITAS Storage Foundation with clustering technologies, such as VERITAS Cluster Volume Manager, storage can be virtualized to multiple hosts of the same operating system.
Network-Based Storage Virtualization
Network-based storage virtualization refers to disks from multiple arrays and multiple vendors that are presented virtually to multiple servers.
Network-based storage virtualization is useful under these conditions:
  - You need to have data accessible across heterogeneous servers and storage devices.
  - You require central administration of storage across all Network Attached Storage (NAS) systems or Storage Area Network (SAN) devices.
  - You want to ensure that replacing or upgrading storage does not disrupt data access.
  - You want to virtualize storage to provide block services to applications.
  • 10. VERITAS Storage Foundation
VERITAS Storage Foundation provides host-based storage virtualization for performance, availability, and manageability benefits in enterprise computing environments.
[Slide figure: the software stack, from top to bottom:
  Company Business Process
  High Availability: VERITAS Cluster Server / Replication
  Application Solutions: Storage Foundation for Databases
  Data Protection: VERITAS NetBackup / Backup Exec
  Volume Manager and File System: VERITAS Storage Foundation
  Hardware and Operating System]

Introducing VERITAS Storage Foundation
VERITAS storage management solutions address the increasing costs of managing mission-critical data and disk resources in Direct Attached Storage (DAS) and Storage Area Network (SAN) environments. At the heart of these solutions is VERITAS Storage Foundation, which includes VERITAS Volume Manager (VxVM), VERITAS File System (VxFS), and other value-added products. Independently, these components provide key benefits. When used together as an integrated solution, VxVM and VxFS deliver the highest possible levels of performance, availability, and manageability for heterogeneous storage environments.
  • 11. [Slide figure: users, applications, and databases accessing virtual storage resources (volumes) managed by VERITAS Volume Manager (VxVM).]

What Is VERITAS Volume Manager?
VERITAS Volume Manager, the industry leader in storage virtualization, is an easy-to-use, online storage management solution for organizations that require uninterrupted, consistent access to mission-critical data. VxVM enables you to apply business policies to configure, share, and manage storage without worrying about the physical limitations of disk storage. VxVM reduces the total cost of ownership by enabling administrators to easily build storage configurations that improve performance and increase data availability. Working in conjunction with VERITAS File System, VERITAS Volume Manager creates a foundation for other value-added technologies, such as SAN environments, clustering and failover, automated management, backup and HSM, and remote browser-based management.

What Is VERITAS File System?
A file system is a collection of directories organized into a structure that enables you to locate and store files. All processed information is eventually stored in a file system. The main purposes of a file system are to:
  - Provide shared access to data storage.
  - Provide structured access to data.
  - Control access to data.
  - Provide a common, portable application interface.
  - Enable the manageability of data storage.
The value of a file system depends on its integrity and performance.
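To make the VxVM/VxFS division of labor concrete, the sketch below walks through a typical provisioning sequence: a disk is placed under VxVM control, grouped, carved into a volume, and given a VxFS file system. The device name (c1t0d0), disk group (datadg), and volume (datavol) are hypothetical examples, and each command is echoed rather than executed so the script runs on any host; on a real Storage Foundation system you would remove the `echo` wrapper.

```shell
#!/bin/sh
# Dry-run sketch of a typical VxVM + VxFS provisioning sequence.
# All object names below are hypothetical illustrations.
run() { echo "+ $*"; }   # swap for direct execution on a VxVM host

run vxdisksetup -i c1t0d0                       # place the disk under VxVM control
run vxdg init datadg datadg01=c1t0d0            # create a disk group from it
run vxassist -g datadg make datavol 1g          # create a 1-GB volume
run mkfs -F vxfs /dev/vx/rdsk/datadg/datavol    # add a VxFS file system
run mkdir -p /data
run mount -F vxfs /dev/vx/dsk/datadg/datavol /data
```

Note that the file system type flag shown is the Solaris/HP-UX form (`-F vxfs`); on Linux the equivalent is `-t vxfs`.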
  • 12. VERITAS Storage Foundation: Benefits
  - Manageability
    - Manage storage and file systems from one interface.
    - Configure storage online across Solaris, HP-UX, AIX, and Linux.
    - Provide additional benefits for array environments, such as inter-array mirroring.
  - Availability
    - Features are implemented to protect against data loss.
    - Online operations lessen planned downtime.
  - Performance
    - I/O throughput can be maximized using volume layouts.
    - Performance bottlenecks can be located and eliminated using analysis tools.
  - Scalability
    - VxVM and VxFS run on 32-bit and 64-bit operating systems.
    - Storage can be deported to larger enterprise platforms.

Benefits of VERITAS Storage Foundation
Commercial system availability now requires continuous uptime in many implementations. Systems must be available 24 hours a day, 7 days a week, 365 days a year. VERITAS Storage Foundation reduces the cost of ownership by providing scalable manageability, availability, and performance enhancements for these enterprise computing environments.
Manageability
Management of storage and the file system is performed online in real time, eliminating the need for planned downtime. Online volume and file system management can be performed through an intuitive, easy-to-use graphical user interface that is integrated with the VERITAS Volume Manager (VxVM) product. VxVM provides consistent management across Solaris, HP-UX, AIX, Linux, and Windows platforms. VxFS command operations are consistent across Solaris, HP-UX, AIX, and Linux platforms. Storage Foundation provides additional benefits for array environments, such as inter-array mirroring.
Availability
Through software RAID techniques, storage remains available in the event of hardware failure.
  • 13. Hot relocation guarantees the rebuilding of redundancy in the case of a disk failure. Recovery time is minimized with logging and background mirror resynchronization. Logging of file system changes enables fast file system recovery. A snapshot of a file system provides an internally consistent, read-only image for backup, and file system checkpoints provide read-writable snapshots.
Performance
I/O throughput can be maximized by measuring and modifying volume layouts while storage remains online. Performance bottlenecks can be located and eliminated using VxVM analysis tools. Extent-based allocation of space for files minimizes file-level access time. Read-ahead buffering dynamically tunes itself to the volume layout. Aggressive caching of writes greatly reduces the number of disk accesses. Direct I/O performs file I/O directly into and out of user buffers.
Scalability
VxVM runs on 32-bit and 64-bit operating systems. Hosts can be replaced without modifying storage. Hosts with different operating systems can access the same storage. Storage devices can be spanned. VxVM is fully integrated with VxFS so that modifying the volume layout automatically modifies the file system internals. With VxFS, several add-on products are available for maximizing performance in a database environment.
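One concrete payoff of the VxVM/VxFS integration mentioned above is online growth: `vxresize` resizes the volume and the mounted VxFS file system in a single step, with no downtime. The sketch below uses hypothetical names (disk group datadg, volume datavol, mount point /data) and echoes the commands instead of running them, so it is a pattern illustration rather than a transcript from a real host.

```shell
#!/bin/sh
# Dry-run sketch: growing a VxVM volume and its VxFS file system online.
# Object names are hypothetical; remove the 'echo' wrapper on a real host.
run() { echo "+ $*"; }

run vxprint -g datadg -v datavol   # inspect the current volume length
run df -k /data                    # note the current file system size
# vxresize grows (or shrinks) the volume and the file system together:
run vxresize -g datadg datavol 2g
run df -k /data                    # confirm the new size while still mounted
```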
  • 14. • Reconfigure and resize storage across the logical devices presented by a RAID array. • Mirror between arrays to improve disaster recovery protection of an array. Use arrays and JBODs. • Use snapshots with mirrors in different locations for disaster recovery and off-host processing. Use VERITAS Volume Replicator (VVR) to provide hardware-independent replication services. symantec Storage Foundation and RAID Arrays: Benefits With Storage Foundation, you can: Benefits of VxVM and RAID Arrays RAID arrays virtualize individual disks into logical LUNs, which are accessed by host operating systems as "physical devices," that is, using the same method of addressing as a directly attached physical disk. VxVM virtualizes both the physical disks and the logical LUNs presented by a RAID array. Modifying the configuration of a RAID array may result in changes in SCSI addresses of LUNs, requiring modification of application configurations. VxVM provides an effective method of reconfiguring and resizing storage across the logical devices presented by a RAID array. When using VxVM with RAID arrays, you can leverage the strengths of both technologies: You can use VxVM to mirror between arrays to improve disaster recovery protection against the failure of an array, particularly if one array is remote. Arrays can be of different manufacture; that is, one array can be a RAID array and the other a JBOD. VxVM facilitates data reorganization and maximizes available resources. VxVM improves overall performance by making I/O activity parallel for a volume through more than one I/O path to and within the array. You can use snapshots with mirrors in different locations, which is beneficial for disaster recovery and off-host processing. If you include VERITAS Volume Replicator (VVR) in your environment, VVR can be used to provide hardware-independent replication services. Intro-10 VERITAS Storage Foundation 5.0 for UNIX:
Fundamentals
  • 15. Storage Foundation Curriculum Path symantec VERITAS Storage Foundation for UNIX: Fundamentals VERITAS Storage Foundation for UNIX: Maintenance VERITAS Storage Foundation Curriculum VERITAS Storage Foundation for UNIX: Fundamentals training is designed to provide you with comprehensive instruction on making the most of VERITAS Storage Foundation. VERITAS Storage Foundation for UNIX Course Introduction Intro-11
  • 16. Storage Foundation Fundamentals: Overview • Lesson 1: Virtual Objects • Lesson 2: Installation and Interfaces • Lesson 3: Creating a Volume and File System • Lesson 4: Selecting Volume Layouts • Lesson 5: Making Basic Configuration Changes • Lesson 6: Administering File Systems • Lesson 7: Resolving Hardware Problems symantec VERITAS Storage Foundation for UNIX: Fundamentals Overview This training provides comprehensive instruction on operating the file and disk management foundation products: VERITAS Volume Manager (VxVM) and VERITAS File System (VxFS). In this training, you learn how to combine file system and disk management technology to ensure easy management of all storage and maximum availability of essential data. Objectives After completing this training, you will be able to: Identify VxVM virtual storage objects and volume layouts. Install and configure Storage Foundation. Configure and manage disks and disk groups. Create concatenated, striped, mirrored, RAID-5, and layered volumes. Configure volumes by adding mirrors and logs and resizing volumes and file systems. Perform file system administration. Resolve basic hardware problems. Intro-12 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
  • 17. symantec Course Resources • Lab Exercises (Appendix A) • Lab Solutions (Appendix B) • Glossary Additional Course Resources Appendix A: Lab Exercises This section contains hands-on exercises that enable you to practice the concepts and procedures presented in the lessons. Appendix B: Lab Solutions This section contains detailed solutions to the lab exercises for each lesson. Glossary For your reference, this course includes a glossary of terms related to VERITAS Storage Foundation. Course Introduction Intro-13
  • 18. Typographic Conventions Used in This Course The following tables describe the typographic conventions used in this course. Typographic Conventions in Text and Commands Courier New, bold: Command input, both syntax and examples. Examples: To display the robot and drive configuration: tpconfig -d To display disk information: vxdisk -o alldgs list Courier New, plain: Command output; command names, directory names, file names, path names, user names, passwords, and URLs when used within regular text paragraphs. Examples: In the output: protocol minimum: 40 protocol maximum: 60 protocol current: 0 Locate the altnames directory. Go to http://www.symantec.com. Enter the value 300. Log on as user1. Courier New, italic (bold or plain): Variables in command syntax and examples. Variables in command input are italic, plain; variables in command output are italic, bold. Examples: To install the media server: /cdrom_directory/install To access a manual page: man command_name To display detailed information for a disk: vxdisk -g disk_group list disk_name Typographic Conventions in Graphical User Interface Descriptions Arrow: Menu navigation paths. Example: Select File --> Save. Initial capitalization: Buttons, menus, windows, options, and other interface elements. Examples: Select the Next button. Open the Task Status window. Remove the checkmark from the Print File check box. Quotation marks: Interface elements with long names. Example: Select the "Include subvolumes in object view window" check box. Intro-14 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
  • 20. symantec Lesson Introduction • Lesson 1: Virtual Objects • Lesson 2: Installation and Interfaces • Lesson 3: Creating a Volume and File System • Lesson 4: Selecting Volume Layouts • Lesson 5: Making Basic Configuration Changes • Lesson 6: Administering File Systems • Lesson 7: Resolving Hardware Problems symantec Lesson Topics and Objectives Topic After completing this lesson, you will be able to: Topic 1: Physical Data Storage Identify the structural characteristics of a disk that are affected by placing a disk under VxVM control. Topic 2: Virtual Data Storage Describe the structural characteristics of a disk after it is placed under VxVM control. Topic 3: Volume Manager Storage Objects Identify the virtual objects that are created by VxVM to manage data storage, including disk groups, VxVM disks, subdisks, plexes, and volumes. Topic 4: Volume Manager RAID Levels Define VxVM RAID levels and identify virtual storage layout types used by VxVM to remap address space. 1-2 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
  • 21. symantec Physical Disk Structure Physical storage objects: • The basic physical storage device that ultimately stores your data is the hard disk. • When you install your operating system, hard disks are formatted as part of the installation program. • Partitioning is the basic method of organizing a disk to prepare for files to be written to and retrieved from the disk. • A partitioned disk has a prearranged storage pattern that is designed for the storage and retrieval of data. Solaris | HP-UX | AIX | Linux Physical Data Storage Physical Disk Structure Solaris A physical disk under Solaris contains the partition table of the disk and the Volume Table of Contents (VTOC) in the first sector (512 bytes) of the disk. The VTOC has at least an entry for the backup partition on the whole disk (partition tag 5, normally partition number 2), so the OS may work correctly with the disk. The VTOC is always a part of the backup partition and may be part of a standard data partition. You can destroy the VTOC by using the raw device driver on that partition, making the disk immediately unusable. Sector 0 of disk: VTOC Sectors 1-15 of / partition: bootblock Partition 2 (backup slice) refers to the entire disk. Partitions (Slices) Lesson 1 Virtual Objects
  • 22. If the disk contains the partition for the root file system mounted on / (partition tag 2), for example on an OS disk, this root partition contains the bootblock for the first boot stage after the OpenBoot PROM within sectors 1-15. Sector 0 is skipped, so there is no overlap between the VTOC and the bootblock if the root partition starts at the beginning of the disk. The first sector of a file system on Solaris cannot start before sector 16 of the partition. Sector 16 contains the main superblock of the file system. Using the block device driver of the file system prevents the VTOC and bootblock from being overwritten by application data. Note: On Solaris, VxVM 4.1 and later support EFI disks. EFI is an Intel-based firmware technology that replaces BIOS code. HP-UX On an HP-UX system, the physical disk is traditionally partitioned using either the whole disk approach or Logical Volume Manager (LVM). HP-UX Disk c0t1d4 LVM Disk c0t1d4 The whole disk approach enables you to partition a disk in five ways: the whole disk is used by a single file system; the whole disk is used as swap area; the whole disk is used as a raw partition; a portion of the disk contains a file system, and the rest is used as swap; or the boot disk contains a 2-MB special boot area, the root file system, and a swap area. An LVM data disk consists of four areas: Physical Volume Reserved Area (PVRA), Volume Group Reserved Area (VGRA), user data area, and Bad Block Relocation Area (BBRA). AIX A native AIX disk does not have a partition table of the kind familiar on many other operating systems, such as Solaris, Linux, and Windows. An application could use the entire unstructured raw physical device, but the first 512-byte sector normally contains information, including a physical volume identifier (pvid), to support recognition of the disk by AIX. An AIX disk is managed by IBM's Logical Volume Manager (LVM) by default. A disk managed by LVM is called a physical volume (PV).
A physical volume consists of: PV reserved area: A physical volume begins with a reserved area of 128 sectors containing PV metadata, including the pvid. 1-4 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
  • 23. Volume Group Descriptor Area (VGDA): One or two copies of the VGDA follow. The VGDA contains information describing a volume group (VG), which consists of one or more physical volumes. Included in the metadata in the VGDA is the definition of the physical partition (PP) size, normally 4 MB. Physical partitions: The remainder of the disk is divided into a number of physical partitions. All of the PVs in a volume group have PPs of the same size, as defined in the VGDA. In a normal VG, there can be up to 32 PVs. In a big VG, there can be up to 128 PVs. Raw device: hdisk3 Physical volume reserved area (128 sectors) Volume Group Descriptor Areas Physical partitions (equal size, defined in VGDA) The term partition is used differently in different operating systems. In many kinds of UNIX, Linux, and Windows, a partition is a variable-sized portion of contiguous disk space that can be formatted to contain a file system. In LVM, a PP is mapped to a logical partition (LP), and one or more LPs from any location throughout the VG can be combined to define a logical volume (LV). A logical volume is the entity that can be formatted to contain a file system (by default either JFS or JFS2). So a physical partition compares in concept more closely to a disk allocation cluster in some other operating systems, and a logical volume plays the role that a partition does in some other operating systems. Linux On Linux, a nonboot disk can be divided into one to four primary partitions. One of these primary partitions can be used to contain logical partitions, and it is called the extended partition. The extended partition can have up to 12 logical partitions on a SCSI disk and up to 60 logical partitions on an IDE disk. You can use fdisk to set up partitions on a Linux disk. Lesson 1 Virtual Objects 1-5
  • 24. Primary Partition 1: /dev/sda1 or /dev/hda1 Primary Partition 2: /dev/sda2 or /dev/hda2 Primary Partition 3: /dev/sda3 or /dev/hda3 Primary Partition 4 (Extended Partition): /dev/sda4 or /dev/hda4 On a Linux boot disk, the boot partition must be a primary partition and is typically located within the first 1024 cylinders of the drive. On the boot disk, you must also have a dedicated swap partition. The swap partition can be a primary or a logical partition, and it can be located anywhere on the disk. Logical partitions must be contiguous, but they do not need to take up all of the space of the extended partition. Only one primary partition can be extended. The extended partition does not take up any space until it is subdivided into logical partitions. VERITAS Volume Manager 4.0 for Linux currently does not support most hardware RAID controllers unless they present SCSI device interfaces with names of the form /dev/sdx. The following controllers are supported: PERC, on the Dell 1650 MegaRAID, on the Dell 1650 ServeRAID, on x440 systems Compaq array controllers that require the SMART2 and CCISS drivers (which present device paths such as /dev/ida/c#d#p# and /dev/cciss/c#d#p#) are supported for normal use and for rootability. 1-6 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
  • 25. symantec Physical Disk Naming VxVM parses disk names to retrieve connectivity information for disks. Operating systems have different conventions: Solaris: /dev/[r]dsk/c1t9d0s2 HP-UX: /dev/[r]dsk/c3t2d0 (no slice) AIX: /dev/hdisk2 (no slice) Linux: SCSI disks: /dev/sda[1-4] (primary partitions), /dev/sda[5-16] (logical partitions), /dev/sdbN (on the second disk), /dev/sdcN (on the third disk); IDE disks: /dev/hdaN, /dev/hdbN, /dev/hdcN Physical Disk Naming Solaris You locate and access the data on a physical disk by using a device name that specifies the controller, target ID, and disk number. A typical device name uses the format c#t#d#: c# is the controller number, t# is the target ID, and d# is the logical unit number (LUN) of the drive attached to the target. If a disk is divided into partitions, you also specify the partition number in the device name: s# is the partition (slice) number. For example, the device name c0t0d0s1 is connected to controller number 0 in the system, with a target ID of 0, physical disk number 0, and partition number 1 on the disk. HP-UX You locate and access the data on a physical disk by using a device name that specifies the controller, target ID, and disk number. A typical device name uses the format c#t#d#: c# is the controller number, t# is the target ID, and d# is the logical unit number (LUN) of the drive attached to the target. Lesson 1 Virtual Objects 1-7
  • 26. For example, the c0t0d0 device name is connected to controller number 0 in the system, with a target ID of 0, and physical disk number 0. AIX Every device in AIX is assigned a location code that describes its connection to the system. The general format of this identifier is AB-CD-EF-GH, where the letters represent decimal digits or uppercase letters. The first two characters represent the bus, the second pair identify the adapter, the third pair represent the connector, and the final pair uniquely represent the device. For example, a SCSI disk drive might have a location identifier of 04-01-00-6,0. In this example, 04 means the PCI bus, 01 is the slot number on the PCI bus occupied by the SCSI adapter, 00 means the only or internal connector, and the 6,0 means SCSI ID 6, LUN 0. However, this data is used internally by AIX to locate a device. The device name that a system administrator or software uses to identify a device is less hardware dependent. The system maintains a special database called the Object Data Manager (ODM) that contains essential definitions for most objects in the system, including devices. Through the ODM, a device name is mapped to the location identifier. The device names are referred to by special files found in the /dev directory. For example, the SCSI disk identified previously might have the device name hdisk3 (the fourth hard disk identified by the system). The device named hdisk3 is accessed by the file name /dev/hdisk3. If a device is moved so that it has a different location identifier, the ODM is updated so that it retains the same device name, and the move is transparent to users. This is facilitated by the physical volume identifier stored in the first sector of a physical volume. This unique 128-bit number is used by the system to recognize the physical volume wherever it may be attached because it is also associated with the device name in the ODM.
Linux On Linux, device names are displayed in the format: • sdx[N] • hdx[N] In the syntax: sd refers to a SCSI disk, and hd refers to an EIDE disk. x is a letter that indicates the order of disks detected by the operating system. For example, sda refers to the first SCSI disk, sdb refers to the second SCSI disk, and so on. N is an optional parameter that represents a partition number in the range 1 through 16. For example, sda7 references partition 7 on the first SCSI disk. Primary partitions on a disk are 1, 2, 3, 4; logical partitions have numbers 5 and up. If the partition number is omitted, the device name indicates the entire disk. 1-8 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
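The naming conventions above are regular enough to parse mechanically. The following Python sketch illustrates the Solaris c#t#d#[s#] and Linux sd/hd conventions described in this section; the function names are our own, not part of any VxVM utility.

```python
import re

def parse_solaris_name(name):
    """Split a Solaris-style device name such as c0t0d0s1 into its
    controller, target, disk (LUN), and optional slice numbers."""
    m = re.fullmatch(r"c(\d+)t(\d+)d(\d+)(?:s(\d+))?", name)
    if not m:
        raise ValueError("not a c#t#d#[s#] device name: %s" % name)
    controller, target, disk, slice_ = m.groups()
    return {
        "controller": int(controller),
        "target": int(target),
        "disk": int(disk),
        "slice": int(slice_) if slice_ is not None else None,  # None = whole disk
    }

def parse_linux_name(name):
    """Split a Linux device name such as sda7 into bus type (SCSI or IDE),
    disk ordinal (a = first, b = second, ...), and optional partition number."""
    m = re.fullmatch(r"(sd|hd)([a-z])(\d+)?", name)
    if not m:
        raise ValueError("not an sdxN/hdxN device name: %s" % name)
    bus, letter, part = m.groups()
    return {
        "bus": "SCSI" if bus == "sd" else "IDE",
        "disk_index": ord(letter) - ord("a"),        # 0 = first disk
        "partition": int(part) if part else None,    # None = entire disk
        "partition_type": (None if not part
                           else "primary" if int(part) <= 4 else "logical"),
    }
```

For example, parse_solaris_name("c1t9d0s2") reports controller 1, target 9, disk 0, slice 2, and parse_linux_name("sda7") reports a logical partition on the first SCSI disk, matching the rules above.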
  • 27. Physical Data Storage Note: Throughout this course, the term disk is used to mean either disk or LUN. Whatever the OS sees as a storage device, VxVM sees as a disk. • Reads and writes on unmanaged physical disks can be a slow process. • Disk arrays and multipathed disk arrays can improve I/O speed and throughput. Applications / Databases / Users Physical disks / LUNs Disk array: A collection of physical disks used to balance I/O across multiple disks Multipathed disk array: Provides multiple ports to access disks to achieve performance and availability benefits Disk Arrays Reads and writes on unmanaged physical disks can be a relatively slow process, because disks are physical devices that require time to move the heads to the correct position on the disk before reading or writing. If all of the read and write operations are performed to individual disks, one at a time, the read-write time can become unmanageable. A disk array is a collection of physical disks. Performing I/O operations on multiple disks in a disk array can improve I/O speed and throughput. Hardware arrays present disk storage to the host operating system as LUNs. Multipathed Disk Arrays Some disk arrays provide multiple ports to access disk devices. These ports, coupled with the host bus adapter (HBA) controller and any data bus or I/O processor local to the array, compose multiple hardware paths to access the disk devices. This type of disk array is called a multipathed disk array. You can connect multipathed disk arrays to host systems in many different configurations, such as: Connecting multiple ports to different controllers on a single host Chaining ports through a single controller on a host Connecting ports to different hosts simultaneously Lesson 1 Virtual Objects 1-9
  • 28. symantec Virtual Data Storage • Volume Manager creates a virtual layer of data storage. • Volume Manager volumes appear to applications to be physical disk partitions. • Volumes have block and character device nodes in the /dev tree: /dev/vx/[r]dsk/... Multidisk configurations: • Concatenation • Mirroring • Striping • RAID-5 High Availability: • Disk group import and deport • Hot relocation • Dynamic multipathing Disk Spanning Load Balancing Virtual Data Storage Virtual Storage Management VERITAS Volume Manager creates a virtual level of storage management above the physical device level by creating virtual storage objects. The virtual storage object that is visible to users and applications is called a volume. What Is a Volume? A volume is a virtual object, created by Volume Manager, that stores data. A volume consists of space from one or more physical disks on which the data is physically stored. How Do You Access a Volume? Volumes created by VxVM appear to the operating system as physical disks, and applications that interact with volumes work in the same way as with physical disks. All users and applications access volumes as contiguous address space using special device files in a manner similar to accessing a disk partition. Volumes have block and character device nodes in the /dev tree. You can supply the name of the path to a volume in your commands and programs, in your file system and database configuration files, and in any other context where you would otherwise use the path to a physical disk partition. 1-10 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
  • 29. Volume Manager Control When you place a disk under VxVM control, a cross-platform data sharing (CDS) disk layout is used, which ensures that the disk is accessible on different platforms, regardless of the platform on which the disk was initialized. OS-reserved areas that contain: - Platform blocks - VxVM ID blocks - AIX and HP-UX coexistence labels Private Region | Public Region Volume Manager-Controlled Disks With Volume Manager, you enable virtual data storage by bringing a disk under Volume Manager control. By default in VxVM 4.0 and later, Volume Manager uses a cross-platform data sharing (CDS) disk layout. A CDS disk is consistently recognized by all VxVM-supported UNIX platforms and consists of: OS-reserved area: To accommodate platform-specific disk usage, 128K is reserved for disk labels, platform blocks, and platform-coexistence labels. Private region: The private region stores information, such as disk headers, configuration copies, and kernel logs, in addition to other platform-specific management areas that VxVM uses to manage virtual objects. The private region represents a small management overhead: Solaris: default block/sector size 512 bytes, default private region size 65536 sectors (32MB) HP-UX: default block/sector size 1024 bytes, default private region size 32768 sectors (32MB) AIX: default block/sector size 512 bytes, default private region size 65536 sectors (32MB) Linux: default block/sector size 512 bytes, default private region size 65536 sectors (32MB) Public region: The public region consists of the remainder of the space on the disk. The public region represents the available space that Volume Manager can use to assign to volumes and is where an application stores data. Volume Manager never overwrites this area unless specifically instructed to do so. Lesson 1 Virtual Objects 1-11
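A quick arithmetic check on the table above: the sector counts differ per platform, but once each platform's sector size is taken into account, every default private region works out to the same 32MB.

```python
# Default private region sizes from the table above, as
# (sectors, bytes per sector) per platform.
defaults = {
    "Solaris": (65536, 512),
    "HP-UX":   (32768, 1024),
    "AIX":     (65536, 512),
    "Linux":   (65536, 512),
}

for platform, (sectors, sector_size) in defaults.items():
    size_bytes = sectors * sector_size
    # Every platform's default private region is 32MB.
    assert size_bytes == 32 * 1024 * 1024, platform
```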
  • 30. symantec Comparing CDS and Pre-4.x Disks CDS Disk (4.x and Later Default): Private region (metadata) and public region (user data) are created on a single partition. Suitable for moving between different operating systems. Not suitable for boot partitions. Sliced Disk (Pre-4.x Solaris Default): Private region and public region are created on separate partitions. Not suitable for moving between different operating systems. Suitable for boot partitions. Simple Disk (Pre-4.x HP-UX Default): Private region and public region are created on the whole disk with specific offsets. Not suitable for moving between different operating systems. Suitable for boot partitions. Note: This format is called hpdisk format as of VxVM 4.1 on the HP-UX platform. Comparing CDS Disks and Pre-4.x Disks The pre-4.x disk layouts are still available in VxVM 4.0 and later. These layouts are used for bringing the boot disk under VxVM control on operating systems that support that capability. On platforms that support bringing the boot disk under VxVM control, CDS disks cannot be used for boot disks. CDS disks have specific disk layout requirements that enable a common disk layout across different platforms, and these requirements are not compatible with the particular platform-specific requirements of boot disks. Therefore, when placing a boot disk under VxVM control, you must use a pre-4.x disk layout (sliced on Solaris, hpdisk on HP-UX). For nonboot disks, you can convert CDS disks to sliced disks and vice versa by using VxVM utilities. Other disk types, working with boot disks, and transferring data across platforms with CDS disks are topics covered in detail in later lessons. 1-12 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
  • 31. Volume Manager Storage Objects [Diagram: disk group acctdg containing volumes expvol and payvol; plexes expvol-01, payvol-01, and payvol-02 built from subdisks acctdg01-01, acctdg01-02, acctdg02-01, acctdg03-01, and acctdg03-02 on VxVM disks acctdg01, acctdg02, and acctdg03, which map to physical disks.] Volume Manager Storage Objects Disk Groups A disk group is a collection of VxVM disks that share a common configuration. You group disks into disk groups for management purposes, such as to hold the data for a specific application or set of applications. For example, data for accounting applications can be organized in a disk group called acctdg. A disk group configuration is a set of records with detailed information about related Volume Manager objects in a disk group, their attributes, and their connections. Volume Manager objects cannot span disk groups. For example, a volume's subdisks, plexes, and disks must be derived from the same disk group as the volume. You can create additional disk groups as necessary. Disk groups enable you to group disks into logical collections. Disk groups and their components can be moved as a unit from one host machine to another. Volume Manager Disks A Volume Manager (VxVM) disk represents the public region of a physical disk that is under Volume Manager control. Each VxVM disk corresponds to one physical disk. Each VxVM disk has a unique virtual disk name called a disk media name. The disk media name is a logical name used for Volume Manager administrative purposes. Volume Manager uses the disk media name when assigning space to volumes. A VxVM disk is given a disk media name when it is added to a disk group. Default disk media name: diskgroup## Lesson 1 Virtual Objects 1-13
  • 32. You can supply the disk media name or allow Volume Manager to assign a default name. The disk media name is stored with a unique disk ID to avoid name collision. After a VxVM disk is assigned a disk media name, the disk is no longer referred to by its physical address. The physical address (for example, c#t#d# or hdisk#) becomes known as the disk access record. Subdisks A VxVM disk can be divided into one or more subdisks. A subdisk is a set of contiguous disk blocks that represent a specific portion of a VxVM disk, which is mapped to a specific region of a physical disk. A subdisk is a subsection of a disk's public region. A subdisk is the smallest unit of storage in Volume Manager. Therefore, subdisks are the building blocks for Volume Manager objects. A subdisk is defined by an offset and a length in sectors on a VxVM disk. Default subdisk name: DMname-## A VxVM disk can contain multiple subdisks, but subdisks cannot overlap or share the same portions of a VxVM disk. Any VxVM disk space that is not reserved or that is not part of a subdisk is free space. You can use free space to create new subdisks. Conceptually, a subdisk is similar to a partition. Both a subdisk and a partition divide a disk into pieces defined by an offset address and length. Each of those pieces represents a reservation of contiguous space on the physical disk. However, while the maximum number of partitions on a disk is limited by some operating systems, there is no theoretical limit to the number of subdisks that can be attached to a single plex. This number has been limited by default to a value of 4096. If required, this default can be changed using the vol_subdisk_num tunable parameter. For more information on tunable parameters, see the VERITAS Volume Manager Administrator's Guide. Plexes Volume Manager uses subdisks to build virtual objects called plexes. A plex is a structured or ordered collection of subdisks that represents one copy of the data in a volume.
A plex consists of one or more subdisks located on one or more physical disks. The length of a plex is determined by the last block that can be read or written on the last subdisk in the plex. Default plex name: volume_name-## Volumes A volume is a virtual storage device that is used by applications in a manner similar to a physical disk. Due to its virtual nature, a volume is not restricted by the physical size constraints that apply to a physical disk. A VxVM volume can be as large as the total of available, unreserved free physical disk space in the disk group. A volume consists of one or more plexes. Default volume name: volume_name## 1-14 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
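The subdisk rules above — each subdisk is an (offset, length) extent in the disk's public region, subdisks may not overlap, and whatever they do not cover is free space — can be sketched as a small checker. This is a hypothetical helper for illustration, not a VxVM utility; the subdisk names follow the DMname-## convention.

```python
def check_subdisks(public_region_len, subdisks):
    """Given a public region length and a list of (name, offset, length)
    subdisks (all in sectors), verify that no subdisks overlap or run past
    the public region, and return the free-space gaps as (offset, length)."""
    placed = sorted(subdisks, key=lambda sd: sd[1])  # sort by offset
    gaps, cursor = [], 0
    for name, offset, length in placed:
        if offset < cursor:
            raise ValueError("subdisk %s overlaps the previous subdisk" % name)
        if offset + length > public_region_len:
            raise ValueError("subdisk %s runs past the public region" % name)
        if offset > cursor:
            gaps.append((cursor, offset - cursor))  # free space before this subdisk
        cursor = offset + length
    if cursor < public_region_len:
        gaps.append((cursor, public_region_len - cursor))  # trailing free space
    return gaps
```

For example, on a 1000-sector public region holding acctdg01-01 at offset 0 (length 400) and acctdg01-02 at offset 600 (length 300), the checker reports free space at (400, 200) and (900, 100); overlapping extents raise an error.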
  • 33. symantec Volume Layouts Volume layout: The way plexes are configured to remap the volume address space through which I/O is redirected [Diagram: Disk Spanning: Concatenated, Striped (RAID-0); Data Redundancy: Mirrored (RAID-1), RAID-5, Striped and Mirrored (RAID-0+1); Resilience: Layered] Volume Manager RAID Levels RAID RAID is an acronym for Redundant Array of Independent Disks. RAID is a storage management approach in which an array of disks is created, and part of the combined storage capacity of the disks is used to store duplicate information about the data in the array. By maintaining a redundant array of disks, you can regenerate data in the case of disk failure. RAID configuration models are classified in terms of RAID levels, which are defined by the number of disks in the array, the way data is spanned across the disks, and the method used for redundancy. Each RAID level has specific features and performance benefits that involve a trade-off between performance and reliability. Volume Layouts RAID levels correspond to volume layouts. A volume's layout refers to the organization of plexes in a volume. Volume layout is the way plexes are configured to remap the volume address space through which I/O is redirected at run-time. Volume layouts are based on the concepts of disk spanning, redundancy, and resilience. Disk Spanning Disk spanning is the combining of disk space from multiple physical disks to form one logical drive. Disk spanning has two forms: Lesson 1 Virtual Objects 1-15
  • 34. Concatenation: Concatenation is the mapping of data in a linear manner across two or more disks. In a concatenated volume, subdisks are arranged both sequentially and contiguously within a plex. Concatenation allows a volume to be created from multiple regions of one or more disks if there is not enough space for an entire volume on a single region of a disk. Striping: Striping is the mapping of data in equally sized chunks alternating across multiple disks. Striping is also called interleaving. In a striped volume, data is spread evenly across multiple disks. Stripes are equally sized fragments that are allocated alternately and evenly to the subdisks of a single plex. There must be at least two subdisks in a striped plex, each of which must exist on a different disk. Configured properly, striping not only helps to balance I/O but also to increase throughput. Data Redundancy To protect data against disk failure, the volume layout must provide some form of data redundancy. Redundancy is achieved in two ways: Mirroring: Mirroring is maintaining two or more copies of volume data. A mirrored volume uses multiple plexes to duplicate the information contained in a volume. Although a volume can have a single plex, at least two are required for true mirroring (redundancy of data). Each of these plexes should contain disk space from different disks for the redundancy to be useful. Parity: Parity is a calculated value used to reconstruct data after a failure by doing an exclusive OR (XOR) procedure on the data. Parity information can be stored on a disk. If part of a volume fails, the data on that portion of the failed volume can be re-created from the remaining data and parity information. A RAID-5 volume uses striping to spread data and parity evenly across multiple disks in an array. Each stripe contains a parity stripe unit and data stripe units. Parity can be used to reconstruct data if one of the disks fails.
In comparison to the performance of striped volumes, write throughput of RAID-5 volumes decreases, because parity information needs to be updated each time data is written. However, in comparison to mirroring, the use of parity reduces the amount of space required.

Resilience

A resilient volume, also called a layered volume, is a volume that is built on one or more other volumes. Resilient volumes enable the mirroring of data at a more granular level. For example, a resilient volume can be concatenated or striped at the top level and then mirrored at the bottom level. A layered volume is a virtual Volume Manager object that nests other virtual objects inside itself. Layered volumes provide better fault tolerance by mirroring data at a more granular level.

1-16 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
Lesson Summary

Key Points: This lesson described the virtual storage objects that VERITAS Volume Manager uses to manage physical disk storage, including disk groups, VxVM disks, subdisks, plexes, and volumes.

Reference Materials: VERITAS Volume Manager Administrator's Guide

Labs and solutions for this lesson are located on the following pages: Appendix A provides complete lab instructions ("Lab 1: Introducing the Lab Environment"). Appendix B provides complete lab instructions and solutions ("Lab 1 Solutions: Introducing the Lab Environment").

Lab 1: Introducing the Lab Environment

In this lab, you are introduced to the lab environment, system, and disks that you will use throughout this course. For lab exercises, see Appendix A. For lab solutions, see Appendix B.

Lesson 1 Virtual Objects 1-17
Lesson Introduction

- Lesson 1: Virtual Objects
- Lesson 2: Installation and Interfaces
- Lesson 3: Creating a Volume and File System
- Lesson 4: Selecting Volume Layouts
- Lesson 5: Making Basic Configuration Changes
- Lesson 6: Administering File Systems
- Lesson 7: Resolving Hardware Problems

Lesson Topics and Objectives

After completing this lesson, you will be able to:
- Topic 1, Installation Prerequisites: Identify operating system compatibility and other preinstallation considerations.
- Topic 2, Adding License Keys: Obtain license keys, add licenses by using vxlicinst, and view licenses by using vxlicrep.
- Topic 3, VERITAS Software Packages: Identify the packages that are included in the Storage Foundation 5.0 software.
- Topic 4, Installing Storage Foundation: Install Storage Foundation interactively, by using the installation utility.
- Topic 5, Storage Foundation User Interfaces: Describe the three Storage Foundation user interfaces.
- Topic 6, Managing the VEA Server: Install, start, and manage the VEA server.

2-2 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
OS Compatibility

The VERITAS Storage Foundation product line operates on the following operating systems:

SF Version  Solaris         HP-UX          AIX            Linux
5.0         8, 9, 10        11i v2 (0904)  5.2, 5.3       RHEL 4 Update 3; SLES 9 SP3
4.1         8, 9, 10, x86   11i v2 (0904)  No release     RHEL 4 Update 1 (2.6); SLES 9 SP1
4.0         7, 8, 9         No release     5.1, 5.2, 5.3  RHEL 3 Update 2 (i686)
3.5.x       2.6, 7, 8       11.11i (0902)  No release     No release*

* Note: Version 3.2.2 on Linux has functionality equivalent to 3.5 on Solaris.

Installation Prerequisites

OS Version Compatibility

Before installing Storage Foundation, ensure that the version of Storage Foundation that you are installing is compatible with the version of the operating system that you are running. You may need to upgrade your operating system before you install the latest Storage Foundation version. VERITAS Storage Foundation 5.0 operates on the following operating systems:
- Solaris 8 (SPARC Platform, 32-bit and 64-bit)
- Solaris 9 (SPARC Platform, 32-bit and 64-bit)
- Solaris 10 (SPARC Platform, 64-bit)
- September 2004 release of HP-UX 11i version 2 or later
- AIX 5.2 ML6 (legacy)
- AIX 5.3 TL4 with SP4
- Red Hat Enterprise Linux 4 (RHEL 4) with Update 3 (2.6.9-34 kernel) on AMD Opteron or Intel Xeon EM64T (x86_64)
- SUSE Linux Enterprise Server 9 (SLES 9) with SP3 (2.6.5-7.244, 252 kernels) on AMD Opteron or Intel Xeon EM64T (x86_64)

Check the VERITAS Storage Foundation Release Notes for additional operating system requirements.

Lesson 2 Installation and Interfaces 2-3
Support Resources

Storage Foundation for UNIX product pages, patches, support services, and a TechNote search are available at http://support.veritas.com

Version Release Differences

With each new release of the Storage Foundation software, changes are made that may affect the installation or operation of Storage Foundation in your environment. By reading the version release notes and installation documentation that are included with the product, you can stay informed of any changes. For more information about specific releases of VERITAS Storage Foundation, visit the VERITAS Support Web site at: http://support.veritas.com

This site contains product and patch information, a searchable knowledge base of technical notes, access to product-specific news groups and e-mail notification services, and other information about contacting technical support staff.

Note: If you open a case with VERITAS Support, you can view updates at: http://support.veritas.com/viewcase. You can access your case by entering the e-mail address associated with your case and the case number.

2-4 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
Storage Foundation Licensing

- Licensing utilities are contained in the VRTSvlic package, which is common to all VERITAS products.
- To obtain a license key, either create a vLicense account and retrieve license keys online (vLicense is a Web site that you can use to retrieve and manage your license keys), or complete a License Key Request form and fax it to VERITAS customer support.
- To generate a license key, you must provide your software serial number, customer number, and order number. Note: You may also need the network and RDBMS platform, system configuration, and software revision levels.

Adding License Keys

You must have your license key before you begin installation, because you are prompted for the license key during the installation process. A new license key is not necessary if you are upgrading Storage Foundation from a previously licensed version of the product. If you have an evaluation license key, you must obtain a permanent license key when you purchase the product. The VERITAS licensing mechanism checks the system date to verify that it has not been set back. If the system date has been reset, the evaluation license key becomes invalid.

Obtaining a License Key

License keys are delivered to you on Software License Certificates at the conclusion of the order fulfillment process. The certificate specifies the product keys and the number of product licenses purchased. A single key enables you to install the product on the number and type of systems for which you purchased the license. License keys are non-node-locked: in a non-node-locked model, one key can unlock a product on different servers regardless of host ID and architecture type. In a node-locked model, a single license is tied to a single specific server, and you need a different key for each server.

Lesson 2 Installation and Interfaces 2-5
Generating License Keys

At http://vlicense.veritas.com you can:
- Access automatic license key generation and delivery.
- Manage and track license key inventory and usage.
- Locate and reissue lost license keys.
- Report, track, and resolve license key issues online.
- Consolidate and share license key information with other accounts.

- To add a license key: vxlicinst
- License keys are installed in: /etc/vx/licenses/lic
- To view installed license key information: vxlicrep
  Displayed information includes the license key number, the name of the VERITAS product that the key enables, the type of license, and the features enabled by the key.

Generating License Keys with vLicense

VERITAS vLicense (vlicense.veritas.com) is a self-service online license management system. vLicense supports production license keys only. Temporary, evaluation, or demonstration keys must be obtained through your VERITAS sales representative.

Note: The VRTSvlic package can coexist with previous licensing packages, such as VRTSlic. If you have old license keys installed in /etc/vx/elm, leave this directory on your system. The old and new license utilities can coexist.

2-6 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
What Gets Installed?

In version 5.0, the default installation behavior is to install all packages in Storage Foundation Enterprise HA. In previous versions, the default behavior was to install only packages for which you had typed in a license key. In 5.0, you can choose to install:
- All packages included in Storage Foundation Enterprise HA, or
- All packages included in Storage Foundation Enterprise HA, minus any optional packages, such as documentation and software development kits

VERITAS Software Packages

When you install a product suite, the component product packages are installed automatically. When installing Storage Foundation, be sure to follow the instructions in the product release notes and installation guides.

Package Space Requirements

Before you install any of the packages, confirm that your system has enough free disk space to accommodate the installation. Storage Foundation programs and files are installed in the /, /usr, and /opt file systems. Refer to the product installation guides for a detailed list of package space requirements.

Solaris Note

VxFS often requires more than the default 8K kernel stack size, so entries are added to the /etc/system file. This increases the kernel thread stack size of the system to 24K. The original /etc/system file is copied to /etc/fs/vxfs/system.preinstall.

Lesson 2 Installation and Interfaces 2-7
Optional Features

Included in the VxVM package, but requiring a separate license:
- VERITAS FlashSnap: Enables point-in-time copies of data with minimal performance overhead; includes disk group split/join, FastResync, and storage checkpointing (in conjunction with VxFS)
- VERITAS Volume Replicator: Enables replication of data to remote locations; VRTSvrdoc contains the VVR documentation
- VERITAS Cluster Volume Manager: Used for high availability environments

Included in the VxFS package, but requiring a separate license:
- VERITAS Quick I/O for Databases: Enables applications to access preallocated VxFS files as raw character devices
- VERITAS Cluster File System: Enables multiple hosts to mount and perform file operations concurrently on the same file
- Dynamic Storage Tiering: Enables support for multivolume file systems by managing the placement of files through policies that control both initial file location and the circumstances under which existing files are relocated

Storage Foundation Optional Features

Several optional features do not require separate packages, only additional licenses. The following optional features are built into Storage Foundation and can be enabled with additional licenses:

VERITAS FlashSnap: FlashSnap facilitates point-in-time copies of data, while enabling applications to maintain optimal performance, through features such as FastResync and disk group split and join functionality. FlashSnap provides an efficient method to perform offline and off-host processing tasks, such as backup and decision support.

VERITAS Volume Replicator: Volume Replicator augments Storage Foundation functionality to enable you to replicate data to remote locations over any IP network. Replicated copies of data can be used for disaster recovery, off-host processing, off-host backup, and application migration.
Volume Replicator ensures maximum business continuity by delivering true disaster recovery and flexible off-host processing.

Cluster Functionality: Storage Foundation includes optional cluster functionality that enables Storage Foundation to be used in a cluster environment. A cluster is a set of hosts that share a set of disks. Each host is referred to as a node in a cluster. When the cluster functionality is enabled, all of the nodes in the cluster can share VxVM objects. The main benefits of cluster configurations are high availability and off-host processing.

VERITAS Cluster Server (VCS): VCS supplies two major components integral to CFS: the Low Latency Transport (LLT) package and the Group
Membership and Atomic Broadcast (GAB) package. LLT provides node-to-node communications and monitors network communications. GAB provides cluster state, configuration, and membership services, and it monitors the heartbeat links between systems to ensure that they are active.

VERITAS Cluster File System (CFS): CFS is a shared file system that enables multiple hosts to mount and perform file operations concurrently on the same file.

VERITAS Cluster Volume Manager (CVM): CVM creates the cluster volumes necessary for mounting cluster file systems.

VERITAS Quick I/O for Databases: VERITAS Quick I/O for Databases (referred to as Quick I/O) enables applications to access preallocated VxFS files as raw character devices. This provides the administrative benefits of running databases on file systems without the performance degradation usually associated with databases created on file systems.

Dynamic Storage Tiering (DST): DST enables support for multivolume file systems by managing the placement of files through policies that control both initial file location and the circumstances under which existing files are relocated.

Lesson 2 Installation and Interfaces 2-9
Installation Menu

Storage Foundation and High Availability Solutions 5.0

Product                                          Installed  Licensed
Veritas Cluster Server                           no         no
Veritas File System                              no         no
Veritas Volume Manager                           no         no
Veritas Volume Replicator                        no         no
Veritas Storage Foundation                       no         no
Veritas Storage Foundation for Oracle            no         no
Veritas Storage Foundation for DB2               no         no
Veritas Storage Foundation for Sybase            no         no
Veritas Storage Foundation Cluster File System   no         no
Veritas Storage Foundation for Oracle RAC        no         no

Task Menu:
I) Install/Upgrade a Product      C) Configure an Installed Product
L) License a Product              P) Perform a Preinstallation Check
U) Uninstall a Product            D) View a Product Description
Q) Quit                           ?) Help

Enter a Selection: [I,C,L,P,U,D,Q,?]

Installing Storage Foundation

The installer is a menu-based installation utility that you can use to install any product contained on the VERITAS Storage Solutions CD-ROM. This utility acts as a wrapper for existing product installation scripts and is most useful when you are installing multiple VERITAS products or bundles, such as VERITAS Storage Foundation or VERITAS Storage Foundation for Databases.

Note: The example on the slide is from a Solaris platform. Some of the products shown on the menu may not be available on other platforms. For example, VERITAS File System is available only as part of Storage Foundation on HP-UX.

Note: The VERITAS Storage Solutions CD-ROM contains an installation guide that describes how to use the installer utility. You should also read all product installation guides and release notes even if you are using the installer utility.

To add the Storage Foundation packages using the installer utility:
1 Log on as superuser.
2 Mount the VERITAS Storage Solutions CD-ROM.
3 Locate and invoke the installer script:
  cd /cdrom/CD_name
  ./installer
4 If the licensing utilities are installed, the product status page is displayed.
This list displays the VERITAS products on the CD-ROM and the installation and licensing status of each product. If the licensing utilities are not installed, you receive a message indicating that the installation utility could not determine product status.

2-10 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
5 Type I to install a product. Follow the instructions to select the product that you want to install. Installation begins automatically.

When you add Storage Foundation packages by using the installer utility, all packages are installed. If you want to add only a specific package, for example, only the VRTSvmdoc package, then you must add the package manually from the command line.

After installation, the installer creates three text files that can be used for auditing or debugging. The names and locations of the files are displayed at the end of the installation, and the files are located in /opt/VRTS/install/logs:

- Installation log file: Contains all commands executed during installation, their output, and any errors generated by the commands; used for debugging installation problems and for analysis by VERITAS Support.
- Response file: Contains configuration information entered during the procedure; can be used for future installation procedures when using the installer script with the -responsefile option.
- Summary file: Contains the output of VERITAS product installation scripts; shows products that were installed, locations of log and response files, and installation messages displayed.

Methods for Adding Storage Foundation Packages

A first-time installation of Storage Foundation involves adding the software packages and configuring Storage Foundation for first-time use. You can add VERITAS product packages by using one of three methods:

- VERITAS Installation Menu (installer): Installs multiple VERITAS products interactively; installs packages and configures Storage Foundation for first-time use.
- Product installation scripts (installvm, installfs, installsf): Install individual VERITAS products interactively; install packages and configure Storage Foundation for first-time use.
- Native operating system package commands (pkgadd on Solaris, swinstall on HP-UX, installp on AIX, rpm on Linux; then, to configure SF: vxinstall): Install individual packages, for example, when using your own custom installation scripts. First-time Storage Foundation configuration must be run as a separate step.

Lesson 2 Installation and Interfaces 2-11
Default Disk Group

- You can set up a system-wide default disk group to which Storage Foundation commands default if you do not specify a disk group.
- If you choose not to set a default disk group at installation, you can set the default disk group later from the command line.

Note: In Storage Foundation 4.0 and later, the rootdg requirement no longer exists.

Enclosure-Based Naming

- Standard device naming is based on controllers, for example, c1t0d0s2.
- Enclosure-based naming is based on disk enclosures, for example, enc0.

Configuring Storage Foundation

When you install Storage Foundation, you are asked if you want to configure it during installation. This includes deciding whether to use enclosure-based naming and a default disk group.

What Is Enclosure-Based Naming?

An enclosure, or disk enclosure, is an intelligent disk array, which permits hot-swapping of disks. With Storage Foundation, disk devices can be named for enclosures rather than for the controllers through which they are accessed, as with standard disk device naming (for example, c0t0d0 or hdisk2). Enclosure-based naming allows Storage Foundation to access enclosures as separate physical entities. By configuring redundant copies of your data on separate enclosures, you can safeguard against failure of one or more enclosures. This is especially useful in a storage area network (SAN) that uses Fibre Channel hubs or fabric switches and when managing the dynamic multipathing (DMP) feature of Storage Foundation. For example, if two paths (c1t99d0 and c2t99d0) exist to a single disk in an enclosure, VxVM can use a single DMP metanode, such as enc0_0, to access the disk.

What Is a Default Disk Group?
The main benefit of creating a default disk group is that Storage Foundation commands default to that disk group if you do not specify a disk group on the command line. defaultdg specifies the default disk group and is an alias for the disk group name that should be assumed if a disk group is not specified in a command.

2-12 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
Storage Foundation Management Server

Storage Foundation 5.0 provides central management capability by introducing a Storage Foundation Management Server (SFMS). With SF 5.0, it is possible to configure an SF host as a managed host or as a standalone host during installation. A Management Server and Authentication Broker must have previously been set up if a managed host is required during installation. To configure a server as a standalone host during installation, answer "n" when asked if you want to enable SFMS management. You can change a standalone host to a managed host at a later time.

Note: This course does not cover SFMS and managed hosts. For more information, refer to the Storage Foundation Management Server Administrator's Guide.

Lesson 2 Installation and Interfaces 2-13
Verifying Package Installation

To verify package installation, use OS-specific commands:
- Solaris: pkginfo -l VRTSvxvm
- HP-UX: swlist -l product VRTSvxvm
- AIX: lslpp -l VRTSvxvm
- Linux: rpm -qa VRTSvxvm

If you are not sure whether VERITAS packages are installed, or if you want to verify which packages are installed on the system, you can view information about installed packages by using OS-specific commands to list package information.

Solaris

To list all installed packages on the system: pkginfo
To restrict the list to installed VERITAS packages: pkginfo | grep VRTS
To display detailed information about a package: pkginfo -l VRTSvxvm

HP-UX

To list all installed packages on the system: swlist -l product
To restrict the list to installed VERITAS packages: swlist -l product | grep VRTS
To display detailed information about a package: swlist -l product VRTSvxvm

2-14 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
AIX

To list all installed packages on the system: lslpp
To restrict the list to installed VERITAS packages: lslpp -l 'VRTS*'
To verify that a particular fileset has been installed, use its name, for example: lslpp -l VRTSvxvm

Linux

To verify package installation on the system: rpm -qa | grep VRTS
To verify a specific package installation on the system: rpm -q[i] package_name
For example, to verify that the VRTSvxvm package is installed: rpm -q VRTSvxvm
The -i option lists detailed information about the package.

Lesson 2 Installation and Interfaces 2-15
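The four platform-specific queries above can be wrapped in a small dispatcher. This is a sketch only; the helper name pkg_query_cmd is invented for illustration, and it prints the command appropriate to the platform rather than running it:

```shell
# Print the platform-appropriate command for querying the VRTSvxvm
# package; the argument is the output of `uname -s`.
pkg_query_cmd() {
    case "$1" in
        SunOS) echo "pkginfo -l VRTSvxvm" ;;
        HP-UX) echo "swlist -l product VRTSvxvm" ;;
        AIX)   echo "lslpp -l VRTSvxvm" ;;
        Linux) echo "rpm -q VRTSvxvm" ;;
        *)     echo "unknown platform: $1" >&2; return 1 ;;
    esac
}

pkg_query_cmd "$(uname -s)"
```

A wrapper like this is convenient in heterogeneous environments where the same verification script runs on all four supported platforms.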
Storage Foundation User Interfaces

Storage Foundation supports three user interfaces:
- VERITAS Enterprise Administrator (VEA): A GUI that provides access through icons, menus, wizards, and dialog boxes. Note: This course only covers using VEA on a standalone host.
- Command-Line Interface (CLI): UNIX utilities that you invoke from the command line.
- Volume Manager Support Operations (vxdiskadm): A menu-driven, text-based interface, also invoked from the command line. Note: vxdiskadm only provides access to certain disk and disk group management functions.

Storage Foundation supports three user interfaces. Volume Manager objects created by one interface are compatible with those created by the other interfaces.

VERITAS Enterprise Administrator (VEA): VEA is a graphical user interface to Volume Manager and other VERITAS products. VEA provides access to Storage Foundation functionality through visual elements, such as icons, menus, wizards, and dialog boxes. Using VEA, you can manipulate Volume Manager objects and also perform common file system operations.

Command-Line Interface (CLI): The command-line interface (CLI) consists of UNIX utilities that you invoke from the command line to perform Storage Foundation and standard UNIX tasks. You can use the CLI not only to manipulate Volume Manager objects, but also to perform scripting and debugging functions. Most of the CLI commands require superuser or other appropriate privileges. The CLI commands perform functions that range from the simple to the complex, and some require detailed user input.

Volume Manager Support Operations (vxdiskadm): The Volume Manager Support Operations interface, commonly called vxdiskadm, is a menu-driven, text-based interface that you can use for disk and disk group administration functions.
The vxdiskadm interface has a main menu from which you can select storage management tasks. A single VEA task may perform multiple command-line tasks.

2-16 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
VEA: Main Window

The main window includes a menu bar, a quick access bar, and a toolbar. Three ways to access tasks:
1. Menu bar
2. Toolbar
3. Context menu (right-click)

Using the VEA Interface

The VERITAS Enterprise Administrator (VEA) is the graphical user interface for Storage Foundation and other VERITAS products. You can use the Storage Foundation features of VEA to administer disks, volumes, and file systems on local or remote machines. VEA is a Java-based interface that consists of a server and a client. You must install the VEA server on a UNIX machine that is running VERITAS Volume Manager. The VEA client can run on any machine that supports the Java 1.4 Runtime Environment, which can be Solaris, HP-UX, AIX, Linux, or Windows.

Some Storage Foundation features of VEA include:
- Remote Administration
- Security
- Multiple Host Support
- Multiple Views of Objects

Setting VEA Preferences

You can customize general VEA environment attributes through the Preferences window (select Tools > Preferences).

Lesson 2 Installation and Interfaces 2-17
VEA: Viewing Tasks and Commands

To view underlying command lines, double-click a task.

Viewing Commands Through the Task Log

The Task Log displays a history of the tasks performed in the current session. Each task is listed with properties, such as the target object of the task, the host, the start time, the task status, and task progress.

Displaying the Task Log window: To display the Task Log, click the Logs tab at the left of the main window.

Clearing the Task History: Tasks are persistent in the Task History window. To remove completed tasks from the window, right-click a task and select Clear All Finished Tasks.

Viewing CLI Commands: To view the command lines executed for a task, double-click the task. The Task Log Details window is displayed for the task. The CLI commands issued are displayed in the Commands Executed field of the Task Details section.

2-18 VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
Command-Line Interface

You can administer CLI commands from the UNIX shell prompt. Commands can be executed individually or combined into scripts. Most commands are located in /usr/sbin. Add this directory to your PATH environment variable to access the commands. Examples of CLI commands include:
- vxassist: Creates and manages volumes
- vxprint: Lists VxVM configuration records
- vxdg: Creates and manages disk groups
- vxdisk: Administers disks under VxVM control

Using the Command-Line Interface

The Storage Foundation command-line interface (CLI) provides commands used for administering Storage Foundation from the shell prompt on a UNIX system. CLI commands can be executed individually for specific tasks or combined into scripts. The Storage Foundation command set ranges from commands requiring minimal user input to commands requiring detailed user input. Many of the Storage Foundation commands require an understanding of Storage Foundation concepts. Most Storage Foundation commands require superuser or other appropriate access privileges. CLI commands are detailed in manual pages.

Accessing Manual Pages for CLI Commands

Detailed descriptions of VxVM and VxFS commands, the options for each utility, and details on how to use them are located in the VxVM and VxFS manual pages. Manual pages are installed by default in /opt/VRTS/man. Add this directory to the MANPATH environment variable, if it is not already added. To access a manual page, type man command_name. Examples:
man vxassist
man mount_vxfs

Linux Note: On Linux, you must also set the MANSECT and MANPATH variables.
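The PATH and MANPATH additions described above can be made persistent with lines such as the following in a Bourne-style shell startup file. This is a sketch of the settings this lesson names, not vendor-supplied configuration:

```shell
# Make the Storage Foundation commands and manual pages findable
PATH=$PATH:/usr/sbin                        # most SF commands live here
MANPATH=${MANPATH:+$MANPATH:}/opt/VRTS/man  # VxVM and VxFS manual pages
export PATH MANPATH

echo "$PATH"
echo "$MANPATH"
```

The `${MANPATH:+...}` expansion simply avoids a leading colon when MANPATH was previously unset.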
The vxdiskadm Interface

# vxdiskadm

Volume Manager Support Operations
Menu: VolumeManager/Disk

1      Add or initialize one or more disks
2      Encapsulate one or more disks
3      Remove a disk
4      Remove a disk for replacement
5      Replace a failed or removed disk
list   List disk information
?      Display help about menu
??     Display help about the menuing system
q      Exit from menus

Note: This example is from a Solaris platform. The options may be slightly different on other platforms.

Using the vxdiskadm Interface

The vxdiskadm command is a CLI command that you can use to launch the Volume Manager Support Operations menu interface. You can use the Volume Manager Support Operations interface, commonly referred to as vxdiskadm, to perform common disk management tasks. The vxdiskadm interface is restricted to managing disk objects and does not provide a means of handling all other VxVM objects. Each option in the vxdiskadm interface invokes a sequence of CLI commands. The vxdiskadm interface presents disk management tasks to the user as a series of questions, or prompts. To start vxdiskadm, type vxdiskadm at the command line to display the main menu. The vxdiskadm main menu contains a selection of main tasks that you can use to manipulate Volume Manager objects. Each entry in the main menu leads you through a particular task by providing you with information and prompts. Default answers are provided for many questions, so you can select common answers. The menu also contains options for listing disk information, displaying help information, and quitting the menu interface. The tasks listed in the main menu are covered throughout this training. Options available in the menu differ somewhat by platform. See the vxdiskadm(1M) manual page for more details on how to use vxdiskadm.

Note: vxdiskadm can be run only once per host. A lock file prevents multiple instances from running: /var/spool/locks/.DISKADD.LOCK.
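A script can check for that lock file before trying to launch the menu interface. This is a hypothetical illustration of the lock-file behavior described above, not part of vxdiskadm itself:

```shell
# The lock file that vxdiskadm uses to prevent a second instance
# (path as given in the lesson text)
lock=/var/spool/locks/.DISKADD.LOCK

if [ -e "$lock" ]; then
    echo "vxdiskadm appears to be running already (found $lock)"
else
    echo "no vxdiskadm lock present; safe to start vxdiskadm"
fi
```

If vxdiskadm exits abnormally and leaves the lock behind, the file must be removed before the menu can be started again.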
  • 57. symantec Installing VEA
Installation administration file (Solaris only): VRTSobadmin
Client packages: VRTSobgui (UNIX); windows/VRTSobgui.msi (Windows)
Server packages: VRTSob, VRTSobc33, VRTSaa, VRTSccg, VRTSdsa, VRTSvail, VRTSvmpro, VRTSfspro, VRTSddlpr, plus the infrastructure packages VRTSat, VRTSpbx, VRTSicsco
Install the VEA server on a UNIX machine running Storage Foundation. Install the VEA client on any machine that supports the Java 1.4 Runtime Environment (or later). VEA is installed automatically when you run the SF installation scripts. You can also install VEA by adding packages manually.

Managing the VEA Software
VEA consists of a server and a client. You must install the VEA server on a UNIX machine that is running VERITAS Volume Manager. You can install the VEA client on the same machine or on any other UNIX or Windows machine that supports the Java 1.4 Runtime Environment (or later).

Installing the VEA Server and Client on UNIX
If you install Storage Foundation by using the installer utility, you are prompted to install both the VEA server and client packages automatically. If you did not install all of the components by using the installer, you can add the VEA packages separately. It is recommended that you upgrade VEA to the latest version released with Storage Foundation in order to take advantage of new functionality built into VEA. You can use VEA 4.1 and later to manage 3.5.2 and later releases. When adding packages manually, you must install the Volume Manager packages (VRTSvlic, VRTSvxvm) and the infrastructure packages (VRTSat, VRTSpbx, VRTSicsco) before installing the VEA server packages. After installation, also add the VEA startup scripts directory, /opt/VRTSob/bin, to the PATH environment variable.
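On Solaris, manual package addition follows the ordering rule above: infrastructure packages before the VEA server packages. A sketch, assuming the packages are available in a local directory; the function name is illustrative, and only the packages listed on the slide are included:

```shell
# Illustrative Solaris sketch: add the infrastructure packages first,
# then the VEA server packages, as the text requires.
install_vea_server() {
    pkgdir=${1:-.}    # directory containing the packages
    for pkg in VRTSat VRTSpbx VRTSicsco \
               VRTSob VRTSobc33 VRTSaa VRTSccg VRTSdsa \
               VRTSvail VRTSvmpro VRTSfspro VRTSddlpr; do
        pkgadd -d "$pkgdir" "$pkg" || return 1
    done
}
```

In practice, the VRTSobadmin administration file mentioned on the slide can be supplied to pkgadd with its -a option to reduce interactive prompting during installation.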
  • 58. symantec Starting the VEA Server and Client
Once installed, the VEA server starts up automatically at system startup. To start the VEA server manually:
1. Log on as superuser.
2. Start the VEA server by invoking the server program: /opt/VRTSob/bin/vxsvc (on Solaris and HP-UX) or /opt/VRTSob/bin/vxsvcctrl (on Linux).
When the VEA server is started:
/var/vx/isis/vxisis.lock ensures that only one instance of the VEA server is running.
/var/vx/isis/vxisis.log contains server process log messages.
To start the VEA client:
On UNIX: /opt/VRTSob/bin/vea
On Windows: Select Start->Programs->VERITAS->VERITAS Enterprise Administrator.

Starting the VEA Server
In order to use VEA, the VEA server must be running on the UNIX machine to be administered. Only one instance of the VEA server should be running at a time. Once installed, the VEA server starts up automatically at system startup. You can start the VEA server manually by invoking vxsvc (on Solaris and HP-UX) or vxsvcctrl (on Linux), or by invoking the startup script itself, for example:
Solaris: /etc/rc2.d/S73isisd start
HP-UX: /sbin/rc2.d/S700isisd start
The VEA client can provide simultaneous access to multiple host machines. Each host machine must be running the VEA server.
Note: Entries for your user name and password must exist in the password file or corresponding Network Information Name Service table on the machine to be administered. Your user name must also be included in the VERITAS administration group (vrtsadm, by default) in the group file or NIS group table. If the vrtsadm entry does not exist, only root can run VEA.
You can configure VEA to connect automatically to hosts when you start the VEA client. In the VEA main window, the Favorite Hosts node can contain a list of hosts that are reconnected by default at the startup of the VEA client.
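The per-platform startup methods can be wrapped in a single helper. A hedged sketch; the function name is illustrative, and the "vxsvcctrl start" subcommand on Linux is an assumption extrapolated from vxsvcctrl's other subcommands rather than something the slide states:

```shell
# Hedged sketch: start the VEA server using the per-platform startup
# scripts and control program named in the text above.
start_vea_server() {
    case "$(uname -s)" in
        SunOS)  /etc/rc2.d/S73isisd start ;;
        HP-UX)  /sbin/rc2.d/S700isisd start ;;
        Linux)  /opt/VRTSob/bin/vxsvcctrl start ;;   # assumed subcommand
        *)      echo "unsupported platform" >&2; return 1 ;;
    esac
}
```

Because /var/vx/isis/vxisis.lock enforces a single instance, running the helper on a machine where the server is already up should be harmless.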
  • 59. symantec Managing VEA
The VEA server program is: /opt/VRTSob/bin/vxsvc (Solaris and HP-UX); /opt/VRTSob/bin/vxsvcctrl (Linux).
To confirm that the VEA server is running: vxsvc -m (Solaris and HP-UX); vxsvcctrl status (Linux).
To stop and restart the VEA server: /etc/init.d/isisd restart (Solaris); /sbin/init.d/isisd restart (HP-UX).
To kill the VEA server process: vxsvc -k (Solaris and HP-UX); vxsvcctrl stop (Linux).
To display the VEA version number: vxsvc -v (Solaris and HP-UX); vxsvcctrl version (Linux).

Managing the VEA Server
Monitoring VEA Event and Task Logs
You can monitor VEA server events and tasks from the Event Log and Task Log nodes in the VEA object tree. You can also view the VEA log file, which is located at /var/vx/isis/vxisis.log. This file contains trace messages for the VEA server and VEA service providers.
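The per-platform command pairs above can be combined into one helper that dispatches on the operating system. A sketch; the function name is illustrative:

```shell
# Illustrative wrapper around the per-platform status commands above.
vea_server_status() {
    case "$(uname -s)" in
        SunOS|HP-UX) /opt/VRTSob/bin/vxsvc -m ;;
        Linux)       /opt/VRTSob/bin/vxsvcctrl status ;;
        *)           echo "unsupported platform" >&2; return 1 ;;
    esac
}
```

The same dispatch pattern applies to the kill, restart, and version operations listed on the slide.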
  • 60. symantec Labs and solutions for this lesson are located on the following pages:
Appendix A provides complete lab instructions: "Lab 2: Installation and Interfaces."
Appendix B provides complete lab instructions and solutions: "Lab 2 Solutions: Installation and Interfaces."

Lesson Summary
• Key Points
In this lesson, you learned guidelines for a first-time installation of VERITAS Storage Foundation, as well as an introduction to the three interfaces used to manage VERITAS Storage Foundation.
• Reference Materials
- VERITAS Volume Manager Administrator's Guide
- VERITAS Storage Foundation Installation Guide
- VERITAS Storage Foundation Release Notes
- Storage Foundation Management Server Administrator's Guide

Lab 2: Installation and Interfaces
In this lab, you install VERITAS Storage Foundation 5.0 on your lab system. You also explore the Storage Foundation user interfaces, including the VERITAS Enterprise Administrator interface, the vxdiskadm menu interface, and the command-line interface.
For Lab Exercises, see Appendix A. For Lab Solutions, see Appendix B.
  • 61. Lesson 3 Creating a Volume and File System
  • 62. symantec Lesson Introduction
• Lesson 1: Virtual Objects
• Lesson 2: Installation and Interfaces
• Lesson 3: Creating a Volume and File System
• Lesson 4: Selecting Volume Layouts
• Lesson 5: Making Basic Configuration Changes
• Lesson 6: Administering File Systems
• Lesson 7: Resolving Hardware Problems

Lesson Topics and Objectives
After completing this lesson, you will be able to:
Topic 1: Preparing Disks and Disk Groups for Volume Creation - Initialize an OS disk as a VxVM disk and create a disk group by using VEA and command-line utilities.
Topic 2: Creating a Volume - Create a concatenated volume by using VEA and from the command line.
Topic 3: Adding a File System to a Volume - Add a file system to and mount an existing volume.
Topic 4: Displaying Volume Configuration Information - Display volume layout information by using VEA and by using the vxprint command.
Topic 5: Displaying Disk and Disk Group Information - View disk and disk group information and identify disk status.
Topic 6: Removing Volumes, Disks, and Disk Groups - Remove a volume, evacuate a disk, remove a disk from a disk group, and destroy a disk group.
  • 63. symantec Selecting a Disk Naming Scheme
Types of naming schemes:
• Traditional device naming: OS-dependent and based on physical connectivity information
• Enclosure-based naming: OS-independent, based on the logical name of the enclosure, and customizable
You can select a naming scheme:
• When you run Storage Foundation installation scripts
• Using vxdiskadm, "Change the disk naming scheme"
Enclosure-based named disks are displayed in three categories:
Enclosures: enclosurename_#
Disks: Disk_#
Others: Disks that do not return a path-independent identifier to VxVM are displayed in the traditional OS-based format.

Preparing Disks and Disk Groups for Volume Creation
Here are some examples of naming schemes:
Traditional: Solaris: /dev/[r]dsk/c1t9d0s2; HP-UX: /dev/[r]dsk/c3t2d0 (no slice); AIX: /dev/hdisk2; Linux: /dev/sda, /dev/hda
Enclosure-based: sena0_1, sena0_2, sena0_3, ...
Enclosure-based, customized: englab2, hr1, boston3
Benefits of enclosure-based naming include:
Easier fault isolation: Storage Foundation can more effectively place data and metadata to ensure data availability.
Device-name independence: Storage Foundation is independent of arbitrary device names used by third-party drivers.
Improved SAN management: Storage Foundation can create better location identification information about disks in large disk farms and SANs.
Improved cluster management: In a cluster environment, disk array names on all hosts in a cluster can be the same.
Improved dynamic multipathing (DMP) management: With multipathed disks, the name of a disk is independent of the physical communication paths, avoiding confusion and conflict.
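Traditional Solaris names encode the physical connectivity directly in the name. A small, purely illustrative sketch that splits a cCtTdDsS name such as the example above into its controller, target, disk, and slice fields:

```shell
# Parse a traditional Solaris device name (cCtTdDsS) into its physical
# connectivity components; purely illustrative, standard sed only.
parse_ctds() {
    name=$1
    echo "$name" | sed -n \
        's/^c\([0-9]*\)t\([0-9]*\)d\([0-9]*\)s\([0-9]*\)$/controller=\1 target=\2 disk=\3 slice=\4/p'
}
```

For example, parse_ctds c1t9d0s2 reports controller 1, target 9, disk 0, slice 2. An enclosure-based name such as sena0_1 carries no such physical-path information, which is exactly why it stays stable when paths change.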
  • 64. symantec
Stage 1: Initialize disk (an uninitialized disk becomes a VM disk).
Stage 2: Assign disk to disk group.

Before Configuring a Disk for Use by VxVM
In order to use the space of a physical disk to build VxVM volumes, you must place the disk under Volume Manager control. Before a disk can be placed under Volume Manager control, the disk media must be formatted outside of VxVM using standard operating system formatting methods. SCSI disks are usually preformatted. After a disk is formatted, the disk can be initialized for use by Volume Manager. In other words, disks must be detected by the operating system before VxVM can detect the disks.

Stage One: Initialize a Disk
A formatted physical disk is considered uninitialized until it is initialized for use by VxVM. When a disk is initialized, the public and private regions are created, and VM disk header information is written to the private region. Any data or partitions that may have existed on the disk are removed. These disks are under Volume Manager control but cannot be used by Volume Manager until they are added to a disk group.
Note: Encapsulation is another method of placing a disk under VxVM control in which existing data on the disk is preserved. This method is covered in a later lesson.

Changing the Disk Layout
To display or change the default values that are used for initializing disks, select the "Change/display the default disk layouts" option in vxdiskadm:
For disk initialization, you can change the default format and the default length of the private region. If the attribute settings for initializing disks are stored in the user-created file /etc/default/vxdisk, they apply to all disks to be initialized. On Solaris, for disk encapsulation, you can additionally change the offset values for both the private and public regions. To make encapsulation parameters different from the default VxVM values, create the user-defined /etc/default/vxencap file and place the parameters in this file. On HP-UX, when converting LVM disks, you can change the default format and the default private region length. The attribute settings are stored in the /etc/default/vxencap file.

Stage Two: Assign a Disk to a Disk Group
When you add a disk to a disk group, VxVM assigns a disk media name to the disk and maps this name to the disk access name.
Disk media name: A disk media name is the logical disk name assigned to a drive by VxVM. VxVM uses this name to identify the disk for volume operations, such as volume creation and mirroring.
Disk access name: A disk access name represents all UNIX paths to the device. A disk access record maps the physical location to the logical name and represents the link between the disk media name and the disk access name. Disk access records are dynamic and can be re-created when vxdctl enable is run.
The disk media name and disk access name, in addition to the host name, are written to the private region of the disk. Space in the public region is made available for assignment to volumes. Volume Manager has full control of the disk, and the disk can be used to allocate space for volumes. Whenever the VxVM configuration daemon is started (or vxdctl enable is run), the system reads the private region on every disk and establishes the connections between disk access names and disk media names. After disks are placed under Volume Manager control,
storage is managed in terms of the logical configuration. File systems mount to logical volumes, not to physical partitions. Logical names, such as /dev/vx/[r]dsk/diskgroup/volume_name, replace physical locations, such as /dev/[r]dsk/device_name. The free space in a disk group refers to the space on all disks within the disk group that has not been allocated as subdisks. When you place a disk into a disk group, its space becomes part of the free space pool of the disk group.

Stage Three: Assign Disk Space to Volumes
When you create volumes, space in the public region of a disk is assigned to the volumes. Some operations, such as removal of a disk from a disk group, are restricted if space on a disk is in use by a volume.
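The first two stages can be driven from the command line with the standard vxdisksetup and vxdg utilities. A sketch, with illustrative device, disk group, and disk media names, and Solaris-style paths (adjust per platform); the function name is hypothetical:

```shell
# Illustrative sketch of stages one and two; the device, disk group,
# and disk media names below are placeholders.
prepare_disk() {
    device=$1    # e.g. c1t1d0  (disk access name)
    dg=$2        # e.g. datadg  (disk group)
    dm=$3        # e.g. datadg01 (disk media name to assign)

    # Stage 1: initialize the disk, creating its private and public
    # regions (destroys any existing data on the disk).
    /etc/vx/bin/vxdisksetup -i "$device" || return 1

    # Stage 2: create the disk group containing this disk, mapping the
    # disk media name to the disk access name. For an existing group,
    # "vxdg -g $dg adddisk $dm=$device" would be used instead.
    vxdg init "$dg" "$dm"="$device"
}
```

After this, the disk's public region belongs to the disk group's free space pool and is available for stage three, volume creation.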