This document provides an overview of VERITAS Storage Foundation, which includes VERITAS Volume Manager (VxVM) and VERITAS File System (VxFS). VxVM provides storage virtualization across multiple physical storage devices, while VxFS provides a file system to organize stored data. Together they provide benefits of manageability, availability, performance, and scalability for enterprise storage environments. Storage Foundation masks physical storage details and allows online administration to reduce downtime.
An overview of December 2009 enhancements to Veritas Storage Foundation, Veritas Cluster File System and Veritas Cluster Server, Symantec’s storage management and high availability solutions.
This release enables organizations to capitalize on new storage technology – such as solid state drives (SSDs) and thin provisioning – and improve performance and scalability. In addition, near-instantaneous recovery of applications is now possible with Veritas Cluster File System, allowing for fast failover of structured information and near-linear scalability.
This white paper discusses optimizing backup and recovery for VMware Infrastructure using EMC Avamar. It provides an overview of VMware Infrastructure and its components. It then discusses three solutions for backing up virtual machines using Avamar: backing up via the VMware Consolidated Backup proxy server, installing Avamar agents inside each virtual machine, or installing an agent on the ESX server service console. Avamar reduces backup sizes and times through global data deduplication.
EMC Data Domain advanced features and functions
This document provides an overview of advanced features and functions of Data Domain systems. It covers topics such as virtual tape libraries (VTL), snapshots, replication, DD Boost integration, capacity and throughput planning, and system monitoring tools. The document consists of multiple lessons that describe these topics in detail and includes configuration examples.
Avamar is backup software from EMC that uses global, source-based data deduplication to reduce the size of backup data. It delivers fast daily full backups using existing infrastructure by reducing network bandwidth usage for backup by up to 500 times and reducing total backup storage needs by up to 50 times compared to traditional backup methods. Avamar supports various operating systems, applications, and virtual environments. It provides flexible deployment options including an integrated hardware/software appliance and a virtual edition for VMware.
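The bandwidth savings come from client-side chunking and fingerprinting: only chunks the backup target has never seen are transmitted, so a second daily full of unchanged data costs almost nothing on the wire. A minimal Python sketch of that idea (the toy chunk size and `BackupTarget` class are illustrative, not Avamar's actual protocol or data structures):

```python
import hashlib

class BackupTarget:
    """Toy dedup target: stores chunks keyed by their fingerprint."""
    def __init__(self):
        self.store = {}

def backup(data, target, chunk_size=4):
    """Client-side dedup: hash each chunk locally and send only chunks
    the target has not seen. Returns bytes actually transmitted."""
    sent = 0
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        fp = hashlib.sha1(chunk).hexdigest()
        if fp not in target.store:      # only unique chunks cross the wire
            target.store[fp] = chunk
            sent += len(chunk)
    return sent

target = BackupTarget()
day1 = backup(b"AAAABBBBCCCCDDDD", target)  # first full: everything is new
day2 = backup(b"AAAABBBBCCCCDDDD", target)  # next day's full: nothing changed
day3 = backup(b"AAAABBBBCCCCEEEE", target)  # one chunk changed
print(day1, day2, day3)  # 16 0 4
```

Day two transmits zero bytes even though it is logically a full backup, which is the mechanism behind the "fast daily fulls on existing infrastructure" claim.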
This document provides details about Avamar backup configurations and procedures for production and campus environments. It includes information on cluster details, utilization and capacities, backup policies, groups, schedules, and retention policies. It also describes how to perform on-demand backups and restores in Avamar, and covers the Avamar Enterprise Manager and replication.
Presentation: deduplication backup software and system
The document provides information on EMC's Avamar deduplication backup software and system. It discusses how Avamar reduces backup time and storage requirements through client-side deduplication. Avamar provides daily full backups, one-step recovery, and supports both physical and virtual environments. It integrates with EMC Data Domain systems and is optimized for backing up virtual machines, remote offices, desktops/laptops, and enterprise applications.
Historically, backups have been defined and referenced by the hostname of the physical system being protected. This worked well when the relationship between the physical host and the operating system was a direct, one-to-one relationship. Backup processing impact was limited to each physical client, and the biggest concern was saturating the network with backup traffic. This was easily managed by limiting the number of simultaneous client backups via a simple setting within the NetBackup policy.
Virtual machine technologies have changed this physical hardware dynamic. Dozens of operating systems (virtual machines) can now reside on a single physical (ESX) host connected to a single storage LUN with network access through a single NIC. When using traditional policy configurations, backup processing randomly occurs with no regard to the physical location of each virtual machine. As backups progress, a subset of ESX servers can be heavily impacted with active backups while other ESX systems sit idly waiting for their virtual machines to be protected. The effect of this is that backups tend to be slower than they need to be and backup processing impact on the ESX servers tends to be random and lopsided. Standard backup policy definitions simply do not translate well into virtual environments.
The NetBackup Virtual Machine Intelligent Policy (VIP) feature is designed to solve this problem and more. With VIP, backup processing can be automatically load balanced across the entire virtual machine environment. No ESX server is unfairly taxed with excessive backup processing, and backups can be significantly faster. Once configured, the load balancing automatically detects changes in the virtual machine environment and adjusts backup processing accordingly. VIP puts virtual machine backups on autopilot.
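The load-balancing idea can be sketched as a round-robin over ESX hosts, so consecutive backup jobs never pile onto one server the way hostname-ordered policies can. This toy Python scheduler illustrates the concept only; it is not how the NetBackup feature is actually implemented:

```python
from collections import defaultdict

def balanced_order(vms):
    """vms: list of (vm_name, esx_host) pairs. Return a backup order that
    round-robins across hosts so no single ESX server is saturated."""
    by_host = defaultdict(list)
    for vm, host in vms:
        by_host[host].append(vm)
    order = []
    queues = list(by_host.values())
    while any(queues):
        for q in queues:            # take one VM from each host per pass
            if q:
                order.append(q.pop(0))
    return order

# Three VMs on esx1, one each on esx2 and esx3:
vms = [("vm1", "esx1"), ("vm2", "esx1"), ("vm3", "esx1"),
       ("vm4", "esx2"), ("vm5", "esx3")]
print(balanced_order(vms))  # ['vm1', 'vm4', 'vm5', 'vm2', 'vm3']
```

A naive hostname-ordered policy would run vm1, vm2, and vm3 first, hammering esx1 while esx2 and esx3 sit idle; interleaving hosts spreads the I/O load from the first job onward.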
EMC Data Domain® Boost integration guide
The document provides an integration guide for using EMC NetWorker Version 9.0.x with EMC Data Domain Boost (DD Boost) technology. It covers planning, practices, and configuration information for using DD Boost devices within a NetWorker backup and storage management environment. Key points include:
- DD Boost allows deduplication of backup data on Data Domain storage systems for reduced storage requirements.
- The guide provides roadmaps and procedures for configuring DD Boost devices, policies for backups and cloning, software requirements, restoring data, monitoring and reporting, and upgrading existing DD Boost configurations.
- Details are given on network and hardware requirements, performance considerations, licensing, and best practices for backup retention, data types
Transforming Backup and Recovery in VMware environments with EMC Avamar and D...
This document discusses the transition from tape-based backup systems to backup appliances and deduplication backup software. It notes that backup appliances are disrupting the market, with tape being marginalized and storage and software functionality converging. Purpose-built backup appliances and deduplication backup software are experiencing much faster growth than tape automation. Deduplication technology is accelerating this transition by making backup storage more efficient and reducing bandwidth needs.
Better Backup For All Symantec Appliances NetBackup 5220 Backup Exec 3600 May...
Symantec’s latest backup appliances, the NetBackup 5220 and Backup Exec 3600, now include the latest NetBackup 7.5 and Backup Exec 2012 software, announced earlier this year. The new appliances deliver on Symantec’s Better Backup for All initiative to address what Gartner has called “The Broken State of Backup.”
Les solutions EMC de sauvegarde des données avec déduplication dans les envir...
The document discusses EMC's backup and recovery solutions, with a focus on deduplication-based products. It provides an overview of EMC's portfolio including Avamar, Data Domain, and NetWorker. It then discusses key concepts like deduplication fundamentals and how the technology has evolved backup solutions from tape-based to disk-based. Specific product features and benefits are highlighted, such as Avamar's guest-level VMware backup and Data Domain's inline deduplication approach.
The document discusses EMC Data Domain, a data protection storage system that provides deduplication to reduce storage requirements by 10-30x. It protects up to 55 PB of logical capacity in a single system and completes backups faster at up to 31 TB per hour. Data Domain seamlessly integrates with leading backup and archiving applications. It provides reliable access and recovery through data verification and self-healing capabilities.
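The relationship between the quoted figures is simple arithmetic: logical (protected) capacity is physical capacity multiplied by the deduplication ratio. A quick sketch with hypothetical numbers (the 100 TB physical figure is illustrative, not any particular Data Domain model's specification):

```python
def logical_capacity(physical_tb, dedup_ratio):
    """Logical (protected) capacity = physical capacity * dedup ratio."""
    return physical_tb * dedup_ratio

# Hypothetical 100 TB of usable physical storage at the quoted 10-30x range:
for ratio in (10, 20, 30):
    print(f"{ratio}x -> {logical_capacity(100, ratio)} TB logical")
```

At 10x, 100 TB physical protects 1 PB of logical backups; at 30x, 3 PB. This is why dedup ratio, not raw disk count, dominates sizing conversations for these systems.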
How to achieve better backup with Symantec
Symantec provides holistic data protection solutions to address common customer challenges with backup and recovery, including:
1) Disparate backup solutions that add complexity and cost as data grows in volume and organizations virtualize.
2) Struggling to meet backup windows and service level agreements as data increases in size.
3) Looking for ways to reduce cost, complexity, and risk across their backup and recovery environment.
Symantec's portfolio includes NetBackup for large enterprises and Backup Exec for small and medium businesses, both utilizing shared deduplication and virtualization technologies. Symantec also offers appliances and cloud options for simplified backup and disaster recovery.
This document discusses best practices for deploying VMware vSphere 5 on IBM SONAS scale-out network attached storage. It provides an overview of new features in vSphere 5 including Storage vMotion, Storage DRS, and centralized logging. It then covers planning the creation of NFS shares on SONAS, installing and configuring vSphere, and adding NFS data stores. Recommendations are provided such as using large SONAS storage pools and fewer larger NFS data stores. The document is intended to help customers implement effective storage solutions for enterprise virtual environments requiring extreme scalability.
Veerendra Patil has over 10 years of experience designing, implementing, and administering high availability NetBackup master and media servers, as well as Enterprise Storage Area Networks and Network Attached Storage. He has experience migrating over 50 NetBackup catalogs across various platforms and complex cluster migrations. Additionally, he has expertise installing and configuring Veritas NetBackup, CommVault, VMware, Windows servers, Linux distributions, and various storage solutions like NetApp, EMC, and HP.
Symantec Corp. (Nasdaq: SYMC) today announced it will deliver a new approach for modernizing backup and recovery, a process that has become unnecessarily complicated and expensive as organizations’ data stores grow exponentially. Compared to traditional backup, Symantec’s approach enables 100 times faster backup, eases management and simplifies recovery if a disaster occurs, helping customers realize significant cost savings while better protecting their business information.
Presentation: Data Domain advanced features and functions
This document provides an overview of Data Domain advanced features and functions for Velocity Partner Accreditation. It covers topics such as virtual tape library (VTL) planning, snapshots, replication, recovery, DD Boost integration, capacity and throughput planning, and system monitoring tools. The document contains lessons and explanations on these topics to help partners learn about and describe Data Domain's data protection solutions.
IBM Tivoli Storage Manager Data Protection for VMware - PCTY 2011
The document discusses Tivoli Storage Manager (TSM) data protection capabilities for VMware environments, including an overview of virtual machine backup methods like in-guest, on-host, and off-host backup. It also covers how VMware Consolidated Backup works compared to using vStorage APIs for Data Protection, and provides a high-level overview of IBM's TSM solution for backing up and recovering VMware virtual machines. Potential future enhancements like integrating with VMware vSphere client and FlashCopy Manager are also outlined.
TECHNICAL BRIEF: NetBackup Appliance AutoSupport for NetBackup 5330
Symantec AutoSupport is a set of infrastructure, processes, and systems that enhance the support experience through proactive monitoring of Symantec Appliance hardware and software, as well as automated error reporting and support case creation.
Through automation, internet access, and case management integration, Symantec can vastly improve the support process and give our support engineers the tools to solve problems faster. The AutoSupport infrastructure within Symantec analyzes the Call Home data from each appliance to provide proactive customer support and incident response for hardware failures thus reducing the need for an administrator to initiate support cases. It also enables Symantec to better understand how customers configure and use appliances, and where improvements would be most beneficial.
AutoSupport can also correlate the Call Home data with other site configuration data held by Symantec, for technical support and error analysis. With AutoSupport, Symantec greatly improves the customer support experience.
TECHNICAL WHITE PAPER: NetBackup 5330 Resiliency/High Availability Attributes
NetBackup Appliances Family
The NetBackup Appliance family offers complementary solutions to meet the data protection needs of modern enterprises, and includes solutions such as the NetBackup 5230 and NetBackup 5330 Appliances.
NetBackup 5230 Appliance
The NetBackup 5230 Backup Appliance includes master and media server capabilities, alongside storage and deduplication features that meet the needs of small, mid-size, and even some large enterprises.
NetBackup 5330 Appliance
The NetBackup 5330 appliance offers media server capabilities, and is designed to meet the needs of large enterprise customers with demanding performance and scalability requirements across virtual and physical infrastructures.
The NetBackup 5330 Appliance is designed to supplement the NetBackup Appliance family by offering key enterprise customers a large-scale and performant offering. This includes sustainable performance over time and scale, predictable job success rates under heavy loads, and powerful deduplication capabilities.
Data protection architectures are, by necessity, complex, because they involve so many interacting factors. There cannot be a “one size fits all” approach to data protection, because the operational requirements of each organization dictate how data is used, and the local risk assessment process dictates to some extent how it will be protected.
The document discusses various virtualization and storage features in VMware's vSphere solution that can help optimize datacenter costs and efficiency. Key points include:
1) Thin provisioning allows virtual disks to only use allocated storage as needed, improving utilization over traditional thick provisioning.
2) Enhancements to software iSCSI and new storage management capabilities in vCenter help improve performance and flexibility.
3) Features like hot extend and storage vMotion allow live expanding of virtual disks and migrating VMs between storage systems with minimal disruption.
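Point 1 above can be made concrete with a toy model of a thin-provisioned disk: capacity is promised to the guest up front, but backing storage is consumed only when a block is first written. This is a conceptual sketch, not VMware's on-disk VMDK format:

```python
class ThinDisk:
    """Toy thin-provisioned virtual disk: the full capacity is promised,
    but real storage is allocated only on first write to a block."""
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.allocated = set()          # blocks that consume real storage

    def write(self, block):
        if block >= self.capacity:
            raise IndexError("write past provisioned capacity")
        self.allocated.add(block)       # allocate-on-first-write

    def used(self):
        return len(self.allocated)

disk = ThinDisk(capacity_blocks=1000)   # 1000 blocks promised to the guest
for b in (0, 1, 2, 500):
    disk.write(b)
print(disk.used(), "of", disk.capacity, "blocks consume real storage")
```

The guest sees a 1000-block disk, but the datastore has only paid for 4 blocks; a thick-provisioned disk would have reserved all 1000 at creation time, which is the utilization gap thin provisioning closes.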
The document discusses Avamar, a backup and recovery software from EMC. It provides client-side data deduplication to reduce backup sizes and speeds. Avamar protects virtual and physical environments with scalable management. It discusses how Avamar improves backup performance for various environments like virtual machines, applications, remote offices, and desktops. Avamar provides reliable protection with features like daily integrity checks and fault tolerance.
EMC for Network Attached Storage (NAS) Backup and Recovery Using NDMP
This white paper discusses EMC's backup and recovery solutions for NAS systems using NDMP. It describes how EMC's Avamar and NetWorker solutions can provide optimized data protection for NAS using data deduplication and integration with Data Domain storage. The paper recommends using backups, snapshots, and offsite replication as best practices to meet recovery objectives while improving efficiency.
Symantec delivers on its deduplication everywhere strategy - designed to reduce data everywhere, reduce complexity, and reduce data infrastructure – by announcing Backup Exec 2010 and NetBackup 7.0.
These products both integrate deduplication technology closer to the information source at the client and at the media server to help organizations achieve significant storage and cost savings and simplify their backup and recovery operations through a unified platform.
In addition to deduplication, NetBackup 7 helps enterprise-level organizations protect, store and recover information and adds improved virtual machine protection and faster disaster recovery. Backup Exec 2010 also adds integrated archiving and improved virtual machine protection, helping mid-sized businesses protect more data and utilize less storage - overall saving them time and money.
The switching method you choose for your SBC environment can help determine performance and the experience that end-users have. We found that unifying switching with Cisco VM-FEX resulted in up to 29 percent lower latency than a solution using a traditional vSwitch when running a Citrix XenApp hosted shared desktop farm. Furthermore, the Cisco VM-FEX solution used up to 53 percent less CPU than the vSwitch solution did under extreme network conditions. In addition to these performance advantages, Cisco UCS Manager provides a central point of management and a simplified method to add vSphere hosts to the VM-FEX-enabled vSwitch, which can reduce management time and costs.
As our results show, switching to Cisco VM-FEX can provide your users with a more responsive environment.
This Blueprint is designed to help customers who are utilising OST technology with Backup Exec’s Deduplication Option to improve back-end storage capabilities within a complex backup environment.
Relentless Information Growth
The data deduplication technology within Backup Exec 2014 breaks down streams of backup data into “blocks.” Each data block is identified as either unique or non-unique, and a tracking database is used to ensure that only a single copy of a data block is saved to storage by that Backup Exec server. For subsequent backups, the tracking database identifies which blocks have been protected and only stores the blocks that are new or unique. For example, if five different client systems are sending backup data to a Backup Exec server and a data block is found in backup streams from all five of those client systems, only a single copy of the data block is actually stored by the Backup Exec server. This process of reducing redundant data blocks that are saved to backup storage leads to significant reduction in storage space needed for backups.
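The tracking-database behaviour described above can be sketched in a few lines of Python; the block size and hash choice here are illustrative, not Backup Exec's actual chunking parameters:

```python
import hashlib

tracking_db = {}        # fingerprint -> stored block (one copy per server)

def ingest(stream, block_size=4):
    """Split a backup stream into blocks; store only blocks not already
    in the tracking database. Returns blocks stored for this stream."""
    stored = 0
    for i in range(0, len(stream), block_size):
        block = stream[i:i + block_size]
        fp = hashlib.sha256(block).hexdigest()
        if fp not in tracking_db:       # unique block: keep one copy
            tracking_db[fp] = block
            stored += 1
    return stored

# Five clients back up streams that all share the blocks b"SAME", b"data":
streams = [b"SAMEdata" + bytes([i]) * 4 for i in range(5)]
per_client = [ingest(s) for s in streams]
print(per_client, "-> unique blocks on disk:", len(tracking_db))
```

The first client stores three blocks; each later client stores only its one unique block, because the shared blocks already have fingerprints in the tracking database — exactly the "single copy across five clients" behaviour the paragraph describes.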
Data growth has forced greater investment in IT infrastructure, and data protection processes such as backup compound that growth by creating multiple copies of primary data for operational and disaster recovery. This has also made the backup infrastructure far more complex. While disk-based systems inherently offer faster restores, they can also make backup environments harder to manage, and many backup solutions struggle to manage advanced storage device capabilities such as data deduplication, replication, and the ability to write directly to tape.
Power of OpenStorage Technology (OST)
Symantec Backup Exec software and OpenStorage technology (OST) are designed to provide centrally managed, edge-to-core data protection that spans multiple sites, delivers disk-to-disk-to-tape (D2D2T) functionality, and automates data movement. The OpenStorage API, introduced in Backup Exec 2010, automates movement of data between sites and storage tiers and acts as a single point of management and catalog for backup data, regardless of where it resides (remote office or corporate data center), what type of media it is stored on (disk or tape), or its age (recent backup or long-term archive), providing better control of advanced storage devices.
The OpenStorage initiative lets customers make better use of advanced, disk-based storage solutions from qualified partners. It ensures tighter integration between the backup software and the storage, and delivers greater efficiency and performance through an easy-to-deploy, purpose-built appliance that does not carry the limitations of tape emulation devices: backups to deduplication appliances run faster via a third-party OST plug-in enabled by Backup Exec.
What is NetBackup appliance? Is it just NetBackup pre-installed on hardware?
The answer is both yes and no.
Yes, NetBackup appliance is simply backup in a box if you are looking for a solution for your data protection and disaster recovery readiness. That is the business problem you are solving with this turnkey appliance, which installs in minutes and reduces your operational costs.
No, NetBackup appliance is more than backup in a box if you are comparing it with rolling your own hardware for NetBackup, or with third-party deduplication appliances. Here is why I say this…
NetBackup appliance comes with redundant storage in RAID6 for storing your backups
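RAID 6 dedicates two disks' worth of capacity per group to parity, which is what buys tolerance of two simultaneous drive failures. A quick sketch of the usable-capacity arithmetic (the disk counts and sizes are hypothetical, not the appliance's actual shelf configuration):

```python
def raid6_usable(disks, disk_tb):
    """RAID 6 reserves two disks' worth of capacity for parity per group,
    so usable capacity is (N - 2) * disk size."""
    if disks < 4:
        raise ValueError("RAID 6 needs at least 4 disks")
    return (disks - 2) * disk_tb

# Hypothetical shelf: 12 x 3 TB drives
print(raid6_usable(12, 3), "TB usable")   # 30 TB usable, 6 TB to parity
```

The trade-off is deliberate for a backup target: roughly 17% of raw capacity in this example is spent on surviving any two concurrent disk failures without losing backups.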
Symantec worked with Intel to design the hardware for running NetBackup optimally, delivering predictable and consistent performance and eliminating the guesswork when designing the solution.
Many vendors will talk about various processes running on their devices to perform integrity checks; some solutions even need blackout windows for those operations. NetBackup appliances include Storage Foundation at no additional cost. The storage is managed by Veritas Volume Manager (VxVM) and presented to the operating system through Veritas File System. Why is this important? Storage Foundation is the industry-leading storage management infrastructure that powers the most mission-critical applications in the enterprise space. It is built for high performance and resiliency. The NetBackup appliance provides 24/7 protection with data integrity on storage backed by this industry-leading technology.
The Linux-based operating system, optimized for NetBackup and hardened by Symantec, eliminates the cost of deploying and maintaining a general-purpose operating system and associated IT applications.
NetBackup appliances include a built-in WAN optimization driver. Replicate to appliances at remote sites or to the cloud up to 10 times faster across high-latency links.
Your backups need to be protected. Symantec Critical System Protection provides non-signature-based host intrusion prevention. It protects against zero-day attacks using granular OS hardening policies along with application, user, and device controls, all pre-defined for you in the NetBackup appliance so that you don’t need to worry about configuring it.
Best of all, reduce your operational expenditure and eliminate complexity! One patch updates everything in this stack! The most holistic data protection solution with the least number of knobs to operate.
The document provides instructions and guidelines for installing and managing Citrix XenServer Dell Edition. It includes sections on installing and configuring XenServer, using XenCenter management software, configuring storage options like local disks and Dell storage arrays, backup and recovery procedures, best practices, and troubleshooting. The document aims to help users optimize the virtualization platform on Dell servers and storage.
Software architecture of a SAN storage control system
The document describes the software architecture of a storage control system that uses a cluster of Linux servers to provide storage virtualization and management in a heterogeneous storage area network (SAN) environment. The storage control system, also called the "virtualization engine", aggregates storage resources into a common pool and allocates storage to hosts. It enables advanced functions like fast-write caching, point-in-time copying, remote copying, and transparent data migration. The system is built using commodity hardware and open source software to reduce costs compared to traditional proprietary storage controllers.
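The pooling behaviour of such a virtualization engine can be sketched as follows; the class and method names are invented for illustration and do not correspond to any real product's API:

```python
class VirtualizationEngine:
    """Toy storage pool: aggregates backing LUNs into one common pool and
    carves virtual volumes out of it for hosts, as the 'virtualization
    engine' above does (minus caching, copying, and migration)."""
    def __init__(self):
        self.pool_gb = 0
        self.volumes = {}

    def add_lun(self, size_gb):
        """Absorb a backing LUN's capacity into the common pool."""
        self.pool_gb += size_gb

    def create_volume(self, name, size_gb):
        """Allocate a virtual volume to a host from pooled capacity."""
        if size_gb > self.pool_gb:
            raise ValueError("pool exhausted")
        self.pool_gb -= size_gb
        self.volumes[name] = size_gb

engine = VirtualizationEngine()
for lun in (500, 500, 250):          # heterogeneous backing LUNs
    engine.add_lun(lun)
engine.create_volume("host-a", 800)  # spans physical LUN boundaries
print(engine.volumes, engine.pool_gb, "GB free")
```

The point of the abstraction is visible in the example: the 800 GB volume is larger than any single backing LUN, something a host attached directly to the physical arrays could not get without this layer in between.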
This document describes IBM System Storage N series with MultiStore and SnapMover technology. It allows companies to better manage, consolidate, migrate, and replicate critical data with minimal effort. MultiStore software enables one physical storage system to host up to 33 virtual storage systems (Vfilers). Vfilers appear as independent file servers and administrators can quickly create, modify, and migrate Vfilers. SnapMover allows Vfilers to be migrated between storage systems in seconds without disrupting users.
Transforming Backup and Recovery in VMware environments with EMC Avamar and D...CTI Group
This document discusses the transition from tape-based backup systems to backup appliances and deduplication backup software. It notes that backup appliances are disrupting the market, with tape being marginalized and storage and software functionality converging. Purpose-built backup appliances and deduplication backup software are experiencing much faster growth than tape automation. Deduplication technology is accelerating this transition by making backup storage more efficient and reducing bandwidth needs.
Better Backup For All Symantec Appliances NetBackup 5220 Backup Exec 3600 May...Symantec
Symantec’s latest backup appliances, NetBackup 5220 and Backup Exec 3600, which now include the latest NetBackup 7.5 and Backup Exec 2012 software from Symantec announced earlier this year. The new appliances deliver on Symantec’s Better Backup for All initiative to advance what Gartner has called “The Broken State of Backup.”
Les solutions EMC de sauvegarde des données avec déduplication dans les envir...ljaquet
The document discusses EMC's backup and recovery solutions, with a focus on deduplication-based products. It provides an overview of EMC's portfolio including Avamar, Data Domain, and NetWorker. It then discusses key concepts like deduplication fundamentals and how the technology has evolved backup solutions from tape-based to disk-based. Specific product features and benefits are highlighted, such as Avamar's guest-level VMware backup and Data Domain's inline deduplication approach.
The document discusses EMC Data Domain, a data protection storage system that provides deduplication to reduce storage requirements by 10-30x. It protects up to 55 PB of logical capacity in a single system and completes backups faster at up to 31 TB per hour. Data Domain seamlessly integrates with leading backup and archiving applications. It provides reliable access and recovery through data verification and self-healing capabilities.
How to achieve better backup with Symantec (Arrow ECS UK)
Symantec provides holistic data protection solutions to address common customer challenges with backup and recovery, including:
1) Disparate backup solutions that add complexity and cost as data grows in volume and organizations virtualize.
2) Struggling to meet backup windows and service level agreements as data increases in size.
3) Looking for ways to reduce cost, complexity, and risk across their backup and recovery environment.
Symantec's portfolio includes NetBackup for large enterprises and Backup Exec for small and medium businesses, both utilizing shared deduplication and virtualization technologies. Symantec also offers appliances and cloud options for simplified backup and disaster recovery.
This document discusses best practices for deploying VMware vSphere 5 on IBM SONAS scale-out network attached storage. It provides an overview of new features in vSphere 5 including Storage vMotion, Storage DRS, and centralized logging. It then covers planning the creation of NFS shares on SONAS, installing and configuring vSphere, and adding NFS data stores. Recommendations are provided such as using large SONAS storage pools and fewer larger NFS data stores. The document is intended to help customers implement effective storage solutions for enterprise virtual environments requiring extreme scalability.
Veerendra Patil has over 10 years of experience designing, implementing, and administering high availability NetBackup master and media servers, as well as Enterprise Storage Area Networks and Network Attached Storage. He has experience migrating over 50 NetBackup catalogs across various platforms and complex cluster migrations. Additionally, he has expertise installing and configuring Veritas NetBackup, CommVault, VMware, Windows servers, Linux distributions, and various storage solutions like NetApp, EMC, and HP.
Symantec Corp. (Nasdaq: SYMC) today announced it will deliver a new approach for modernizing backup and recovery, a process that has become unnecessarily complicated and expensive as organizations’ data stores grow exponentially. Compared to traditional backup, Symantec’s approach enables 100 times faster backup, eases management and simplifies recovery if a disaster occurs, helping customers realize significant cost savings while better protecting their business information.
Presentation: Data Domain advanced features and functions (xKinAnx)
This document provides an overview of Data Domain advanced features and functions for Velocity Partner Accreditation. It covers topics such as virtual tape library (VTL) planning, snapshots, replication, recovery, DD Boost integration, capacity and throughput planning, and system monitoring tools. The document contains lessons and explanations on these topics to help partners learn about and describe Data Domain's data protection solutions.
IBM Tivoli Storage Manager Data Protection for VMware - PCTY 2011 (IBM Sverige)
The document discusses Tivoli Storage Manager (TSM) data protection capabilities for VMware environments, including an overview of virtual machine backup methods like in-guest, on-host, and off-host backup. It also covers how VMware Consolidated Backup works compared to using vStorage APIs for Data Protection, and provides a high-level overview of IBM's TSM solution for backing up and recovering VMware virtual machines. Potential future enhancements like integrating with VMware vSphere client and FlashCopy Manager are also outlined.
TECHNICAL BRIEF▶NetBackup Appliance AutoSupport for NetBackup 5330 (Symantec)
Symantec AutoSupport is a set of infrastructure, processes, and systems that enhances the support experience through proactive monitoring of Symantec Appliance hardware and software, as well as automated error reporting and support case creation.
Through automation, internet access, and case management integration, Symantec can vastly improve the support process and give our support engineers the tools to solve problems faster. The AutoSupport infrastructure within Symantec analyzes the Call Home data from each appliance to provide proactive customer support and incident response for hardware failures thus reducing the need for an administrator to initiate support cases. It also enables Symantec to better understand how customers configure and use appliances, and where improvements would be most beneficial.
AutoSupport can also correlate the Call Home data with other site configuration data held by Symantec, for technical support and error analysis. With AutoSupport, Symantec greatly improves the customer support experience.
TECHNICAL WHITE PAPER▶NetBackup 5330 Resiliency/High Availability Attributes (Symantec)
NetBackup Appliances Family
The NetBackup Appliance family offers complementary solutions to meet the data protection needs of modern enterprises, and includes solutions such as the NetBackup 5230 and NetBackup 5330 Appliances.
NetBackup 5230 Appliance
The NetBackup 5230 Backup Appliance includes master and media server capabilities, alongside storage and deduplication features that meet the needs of small, mid-size, and even some large enterprises.
NetBackup 5330 Appliance
The NetBackup 5330 appliance offers media server capabilities, and is designed to meet the needs of large enterprise customers with demanding performance and scalability requirements across virtual and physical infrastructures.
The NetBackup 5330 Appliance is designed to supplement the NetBackup Appliance family by offering key enterprise customers a large-scale and performant offering. This includes sustainable performance over time and scale, predictable job success rates under heavy loads, and powerful deduplication capabilities.
Data protection architectures are, by necessity, complex, because they involve so many interacting factors. There cannot be a “one size fits all” approach to data protection: the operational requirements of each organization dictate how data is used, and the local risk assessment process dictates to some extent how it will be protected.
The document discusses various virtualization and storage features in VMware's vSphere solution that can help optimize datacenter costs and efficiency. Key points include:
1) Thin provisioning allows virtual disks to only use allocated storage as needed, improving utilization over traditional thick provisioning.
2) Enhancements to software iSCSI and new storage management capabilities in vCenter help improve performance and flexibility.
3) Features like hot extend and storage vMotion allow live expanding of virtual disks and migrating VMs between storage systems with minimal disruption.
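Thin provisioning, as described in point 1 above, can be illustrated with a toy virtual disk that reports a large logical size but allocates backing blocks only when they are first written. The class name, 4 KiB block size, and dictionary-based allocation map below are illustrative assumptions, not VMFS internals:

```python
class ThinDisk:
    """Toy thin-provisioned disk: logical size is fixed at creation,
    but physical blocks are allocated lazily on first write."""

    def __init__(self, logical_size: int, block_size: int = 4096):
        self.logical_size = logical_size
        self.block_size = block_size
        self.allocated = {}  # block index -> bytearray, only for written blocks

    def write(self, offset: int, data: bytes) -> None:
        for i, b in enumerate(data):
            idx, pos = divmod(offset + i, self.block_size)
            # Allocate the backing block only on the first write to it.
            block = self.allocated.setdefault(idx, bytearray(self.block_size))
            block[pos] = b

    def read(self, offset: int, length: int) -> bytes:
        out = bytearray()
        for i in range(length):
            idx, pos = divmod(offset + i, self.block_size)
            block = self.allocated.get(idx)  # unwritten blocks read as zeros
            out.append(block[pos] if block else 0)
        return bytes(out)

    def physical_usage(self) -> int:
        return len(self.allocated) * self.block_size

disk = ThinDisk(logical_size=100 * 2**30)  # a "100 GB" virtual disk
disk.write(0, b"boot sector")
# Only one 4 KiB block is actually backed, despite the 100 GB logical size.
```

vSphere's actual disk formats (thin, lazy-zeroed thick, eager-zeroed thick) differ in detail, but this allocate-on-first-write behavior is the essence of thin provisioning.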
The document discusses Avamar, a backup and recovery software from EMC. It provides client-side data deduplication to reduce backup sizes and speeds. Avamar protects virtual and physical environments with scalable management. It discusses how Avamar improves backup performance for various environments like virtual machines, applications, remote offices, and desktops. Avamar provides reliable protection with features like daily integrity checks and fault tolerance.
EMC for Network Attached Storage (NAS) Backup and Recovery Using NDMP (EMC)
This white paper discusses EMC's backup and recovery solutions for NAS systems using NDMP. It describes how EMC's Avamar and NetWorker solutions can provide optimized data protection for NAS using data deduplication and integration with Data Domain storage. The paper recommends using backups, snapshots, and offsite replication as best practices to meet recovery objectives while improving efficiency.
Symantec delivers on its deduplication everywhere strategy - designed to reduce data everywhere, reduce complexity, and reduce data infrastructure – by announcing Backup Exec 2010 and NetBackup 7.0.
These products both integrate deduplication technology closer to the information source at the client and at the media server to help organizations achieve significant storage and cost savings and simplify their backup and recovery operations through a unified platform.
In addition to deduplication, NetBackup 7 helps enterprise-level organizations protect, store and recover information and adds improved virtual machine protection and faster disaster recovery. Backup Exec 2010 also adds integrated archiving and improved virtual machine protection, helping mid-sized businesses protect more data and utilize less storage - overall saving them time and money.
The switching method you choose for your SBC environment can help determine performance and the experience that end-users have. We found that unifying switching with Cisco VM-FEX resulted in up to 29 percent lower latency than a solution using a traditional vSwitch when running a Citrix XenApp hosted shared desktop farm. Furthermore, the Cisco VM-FEX solution used up to 53 percent less CPU than the vSwitch solution did under extreme network conditions. In addition to these performance advantages, Cisco UCS Manager provides a central point of management and a simplified method to add vSphere hosts to the VM-FEX-enabled vSwitch, which can reduce management time and costs.
As our results show, switching to Cisco VM-FEX can provide your users with a more responsive environment.
This Blueprint is designed to help customers who are utilising OST technology with Backup Exec’s Deduplication Option to improve back-end storage capabilities within a complex backup environment.
Relentless Information Growth
The data deduplication technology within Backup Exec 2014 breaks down streams of backup data into “blocks.” Each data block is identified as either unique or non-unique, and a tracking database is used to ensure that only a single copy of a data block is saved to storage by that Backup Exec server. For subsequent backups, the tracking database identifies which blocks have been protected and only stores the blocks that are new or unique. For example, if five different client systems are sending backup data to a Backup Exec server and a data block is found in backup streams from all five of those client systems, only a single copy of the data block is actually stored by the Backup Exec server. This process of reducing redundant data blocks that are saved to backup storage leads to significant reduction in storage space needed for backups.
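The block-tracking scheme described above can be sketched as a tiny content-addressed store. The fixed 64 KB block size, SHA-256 fingerprints, and in-memory dictionary standing in for the tracking database are illustrative assumptions, not Backup Exec internals:

```python
import hashlib

BLOCK_SIZE = 64 * 1024  # illustrative fixed block size; real products vary

class DedupStore:
    """Toy deduplicating backup target: each unique block is kept once."""

    def __init__(self):
        self.blocks = {}   # fingerprint -> block bytes (the actual storage)
        self.stored = 0    # bytes physically written
        self.ingested = 0  # bytes received from clients

    def backup(self, stream: bytes) -> list:
        """Split a backup stream into blocks, store only unseen ones, and
        return the recipe (list of fingerprints) needed to restore it."""
        recipe = []
        for i in range(0, len(stream), BLOCK_SIZE):
            block = stream[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            if digest not in self.blocks:    # "tracking database" lookup
                self.blocks[digest] = block  # unique block: store it once
                self.stored += len(block)
            self.ingested += len(block)
            recipe.append(digest)
        return recipe

    def restore(self, recipe: list) -> bytes:
        return b"".join(self.blocks[d] for d in recipe)

# Five clients send streams that all share one common block:
store = DedupStore()
common = b"A" * BLOCK_SIZE
recipes = [store.backup(common + bytes([n]) * BLOCK_SIZE) for n in range(5)]
# The shared block is stored once; each restore still round-trips exactly.
```

Restores simply walk the recipe, so deduplication is transparent to clients; the ratio `store.ingested / store.stored` corresponds to the deduplication ratio products advertise.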
Data growth has made greater investment in IT infrastructure a necessity, and data protection processes such as backup compound that growth by creating multiple copies of primary data for operational and disaster recovery. This has also made backup infrastructure far more complex. While disk-based systems inherently offer faster restores, they can also make backup environments harder to manage, and many backup solutions struggle to manage advanced storage device capabilities such as data deduplication, replication, and the ability to write directly to tape.
Power of OpenStorage Technology (OST)
Symantec Backup Exec software and OpenStorage technology (OST) are designed to provide centrally managed, edge-to-core data protection that spans multiple sites, delivers disk-to-disk-to-tape (D2D2T) functionality, and automates data movement. The OpenStorage API, introduced in Backup Exec 2010, automates movement of data between sites and storage tiers and acts as a single point of management and catalog for backup data, regardless of where it resides (remote office or corporate data center), what type of media it is stored on (disk or tape), or its age (recent backup or long-term archive), providing better control of advanced storage devices.
The OpenStorage initiative allows customers to make better use of advanced, disk-based storage solutions from qualified partners. It ensures tighter integration between the backup software and storage, and delivers greater efficiency and performance through an easy-to-deploy, purpose-built appliance free of the limitations of tape emulation devices. Backups to deduplication appliances run faster via a third-party OST plug-in enabled by Backup Exec.
What is NetBackup appliance? Is it just NetBackup pre-installed on hardware?
The answer is both yes and no.
Yes, NetBackup appliance is simply backup in a box if you are looking for a solution for your data protection and disaster recovery readiness. That is the business problem you are solving with this turnkey appliance that installs in minutes and reduces your operational costs.
No, NetBackup appliance is more than backup in a box if you are comparing it with rolling your own hardware for NetBackup, or with third-party deduplication appliances. Here is why I say this…
NetBackup appliance comes with redundant storage in RAID 6 for storing your backups.
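The value of parity-based redundancy mentioned above can be seen in a few lines. RAID 6 keeps two independent parity syndromes (P and Q) so that any two failed disks can be rebuilt; the sketch below shows only the XOR-based P parity recovering a single lost chunk, as a simplified illustration rather than a real RAID 6 implementation:

```python
from functools import reduce

def p_parity(chunks):
    """XOR 'P' parity across equal-length data chunks (RAID 6 adds a
    second, Reed-Solomon-based 'Q' parity to survive two failures)."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*chunks))

def recover(surviving_chunks, parity):
    """Rebuild a single lost chunk: XOR of parity with all survivors."""
    return p_parity(surviving_chunks + [parity])

data = [b"disk0data", b"disk1data", b"disk2data"]  # one stripe, three disks
parity = p_parity(data)
rebuilt = recover([data[0], data[2]], parity)      # disk 1 has failed
```

Real RAID 6 computes the second (Q) syndrome over a Galois field so that two simultaneous disk failures remain recoverable.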
Symantec worked with Intel to design hardware that runs NetBackup optimally, for predictable and consistent performance. This eliminates the guesswork in designing the solution.
Many vendors will talk about various processes running on their devices to perform integrity checks; some solutions even need blackout windows to do those operations. NetBackup appliances include Storage Foundation at no additional cost. The storage is managed by Veritas Volume Manager (VxVM) and presented to the operating system through Veritas File System. Why is this important? Storage Foundation is an industry-leading storage management infrastructure that powers the most mission-critical applications in the enterprise, built for high performance and resiliency. NetBackup appliance provides 24/7 protection with data integrity on storage provided by this industry-leading technology.
The Linux-based operating system, optimized for NetBackup and hardened by Symantec, eliminates the cost of deploying and maintaining a general-purpose operating system and associated IT applications.
NetBackup appliances include a built-in WAN optimization driver: replicate to appliances at remote sites or to the cloud up to 10 times faster across high-latency links.
Your backups need to be protected. Symantec Critical System Protection provides non-signature-based host intrusion prevention. It protects against zero-day attacks using granular OS hardening policies along with application, user, and device controls, all pre-defined for you in NetBackup appliance so that you don’t need to worry about configuring it.
Best of all, reduce your operational expenditure and eliminate complexity! One patch updates everything in this stack! The most holistic data protection solution with the least number of knobs to operate.
The document provides instructions and guidelines for installing and managing Citrix XenServer Dell Edition. It includes sections on installing and configuring XenServer, using XenCenter management software, configuring storage options like local disks and Dell storage arrays, backup and recovery procedures, best practices, and troubleshooting. The document aims to help users optimize the virtualization platform on Dell servers and storage.
Software architecture of a SAN storage control system (Grupo VirreySoft)
The document describes the software architecture of a storage control system that uses a cluster of Linux servers to provide storage virtualization and management in a heterogeneous storage area network (SAN) environment. The storage control system, also called the "virtualization engine", aggregates storage resources into a common pool and allocates storage to hosts. It enables advanced functions like fast-write caching, point-in-time copying, remote copying, and transparent data migration. The system is built using commodity hardware and open source software to reduce costs compared to traditional proprietary storage controllers.
This document describes IBM System Storage N series with MultiStore and SnapMover technology. It allows companies to better manage, consolidate, migrate, and replicate critical data with minimal effort. MultiStore software enables one physical storage system to host up to 33 virtual storage systems (Vfilers). Vfilers appear as independent file servers and administrators can quickly create, modify, and migrate Vfilers. SnapMover allows Vfilers to be migrated between storage systems in seconds without disrupting users.
Why is Virtualization Creating Storage Sprawl? By Storage Switzerland (INFINIDAT)
Desktop and server virtualization have brought many benefits to the data center. These two initiatives have allowed IT to respond quickly to the needs of the organization while driving down IT costs, physical footprint requirements, and energy demands. But there is one area of the data center that has actually increased in cost since virtualization started to make its way into production: storage. Because of virtualization, more data centers need flash to meet the random I/O nature of the virtualized environment, which of course is more expensive, on a dollar-per-GB basis, than hard disk drives. The single biggest problem, however, is the significant increase in the number of discrete storage systems that service the environment. This “storage sprawl” threatens the return on investment (ROI) of virtualization projects and makes storage more complex to manage.
Learn more at www.infinidat.com.
FILE SYSTEM AND NAS: Local file systems; network file systems and file servers; shared-disk file systems; comparison of Fibre Channel and NAS.
STORAGE VIRTUALIZATION: Definition of storage virtualization; implementation considerations; storage virtualization on block or file level; storage virtualization on various levels of the storage network; symmetric and asymmetric storage virtualization in the network.
This document discusses NetApp's unified storage architecture, which aims to address challenges in enterprise data centers by consolidating different storage functions onto a single platform. It describes how traditional storage requires separate systems for primary storage, SANs, NAS, and other functions. NetApp's unified storage architecture integrates multiprotocol support, single management, data protection, multiple storage tiers, quality of service, and legacy system front-ending onto one system. This allows consolidation of storage resources for better performance, manageability and cost savings compared to other vendors' so-called unified solutions.
Understanding the Windows Server Administration Fundamentals (Part-2) (Tuan Yang)
Windows Server Administration is an advanced computer networking topic that includes server installation and configuration, server roles, storage, Active Directory and Group Policy, file, print, and web services, remote access, virtualization, application servers, troubleshooting, performance, and reliability. With these slides, explore the key fundamentals of the Windows Server Administration.
Learn more about:
» Storage technologies.
» File Systems.
» HDD managements.
» Troubleshooting methodology.
» Server boot process.
» System configuration.
» System monitoring.
» High Availability & fault tolerance.
» Back up.
Huawei Symantec Oceanspace VIS6000 Overview (Utopia Media)
Huawei Symantec provides a consolidated and flexible storage network solution called VIS6000 that simplifies storage management, improves security and availability, and enables data migration and disaster recovery across heterogeneous storage platforms. Key features include virtualization, centralized management, snapshot and replication technologies, and support for consolidating different storage arrays into a single resource pool.
This document provides a blueprint for a fault tolerant NAS configuration using Symantec File Store with VMware and NEC hardware. It discusses the growth of unstructured data and need for scalable, reliable storage. The configuration outlined uses Symantec File Store software running on VMware virtual machines to manage file systems stored on NEC servers and storage arrays. NetBackup is used to backup the file systems to a separate backup site for disaster recovery purposes. The blueprint defines the typical hardware and software components, use cases, and operational procedures for this file storage system architecture.
EMC Data Domain technical deep dive workshop (solarisyougood)
The document provides an overview of EMC Data Domain products and services. It discusses Data Domain systems which provide scalable and high performance protection storage for backup and archive data. The systems integrate with leading backup and archiving applications. The document also summarizes Data Domain software options such as Boost, Encryption, Replicator and Extended Retention which provide additional functionality.
vFabric Data Director 2.7 customer deck (Junchi Zhang)
The document discusses vFabric Data Director, a platform that provides database-as-a-service capabilities. It enables database-aware virtualization, automates database lifecycle management, and provides self-service database provisioning. This reduces costs while improving agility, automation, and service quality for database management.
This document discusses IBM Spectrum Virtualize 101 and IBM Spectrum Storage solutions. It provides an overview of software defined storage and IBM Spectrum Virtualize, describing how it achieves storage virtualization and mobility. It also provides details on the new IBM Spectrum Virtualize DH8 hardware platform, including its performance improvements over previous platforms and support for compression acceleration.
Log-Structured File System (LSFS) as a weapon to fight “I/O Blender” virtuali... (StarWind Software)
A side effect of the massive server workload consolidation enabled by hypervisor-based virtualization has been the addition of 7 to 16 I/O ports per machine to handle the I/O requirements of the hosted apps. But, before the data can be sent to and from storage devices, virtualization administrators often confront a more challenging problem that some have taken to calling the I/O Blender Effect. The I/O Blender effect is seen when multiple virtual machines send their I/O streams at the same time to a hypervisor for processing, increasing random accesses and increasing latency. Some software-defined storage (SDS) architectures are actually making the problem worse by caching raw small logical block writes using flash memory devices, leading to accelerated wear of the device and contributing virtually no improvement in storage I/O performance.
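The remedy named in the title, a log-structured layout, can be sketched in miniature: every write from any VM is appended to a single sequential log, and an index remembers the newest record for each logical block. The class and structures below are assumed for illustration (not StarWind's actual design); they show how blended random writes become sequential at the device:

```python
class LogStructuredStore:
    """Toy log-structured store: interleaved random writes from many VMs
    become one sequential append stream; an index tracks latest versions."""

    def __init__(self):
        self.log = []    # sequential log of (vm, block, data) records
        self.index = {}  # (vm, block) -> position of the newest record

    def write(self, vm: str, block: int, data: bytes) -> None:
        self.index[(vm, block)] = len(self.log)  # supersede older versions
        self.log.append((vm, block, data))       # always an append: sequential I/O

    def read(self, vm: str, block: int) -> bytes:
        return self.log[self.index[(vm, block)]][2]

store = LogStructuredStore()
# Three VMs blend their I/O streams; the device still sees only appends.
for vm in ("vm1", "vm2", "vm3"):
    store.write(vm, 0, f"{vm} superblock".encode())
store.write("vm2", 0, b"vm2 superblock v2")  # an overwrite is just a new append
```

The trade-off, as in any log-structured design, is that superseded records accumulate in the log and must eventually be reclaimed by a garbage-collection pass.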
This lecture discusses best practices for implementing fault tolerance in computer hardware, data storage, virtualization, remote hosting, and networks. It recommends redundancies like RAID configurations for hard drives, distributed file systems for data storage, server virtualization, and hosting servers remotely. Ensuring availability involves redundancy in network infrastructure, uninterruptible power supplies, and multiple internet connections. Software as a Service can offer high fault tolerance but network access is still required.
EMC ViPR Controller customer presentation (solarisyougood)
The document discusses EMC's ViPR software-defined storage platform. ViPR abstracts physical storage into virtual pools, automates provisioning, and provides self-service access. It can manage storage from EMC and third parties in a single platform. ViPR simplifies management and empowers users with a public cloud-like experience on-premises.
Track 1: Virtualizing Critical Applications with VMware vSphere, by Roshan Shetty (EMC Forum India)
Virtualizing Critical Applications with vSphere 5 provides concise summaries of the key enhancements in vSphere 5 that enable virtualizing even the most critical applications. These include support for larger virtual machines with up to 32 vCPUs, 1 TB of RAM, and 4x larger sizes. It also improves availability, storage, and network services with features like Storage DRS, Profile-Driven Storage, and Network I/O Control that provide performance guarantees and help prevent resource starvation issues. The document also highlights how vSphere 5 simplifies infrastructure deployment and management with capabilities such as Auto Deploy, vCenter Server Appliance, and the new Web Client.
Similar to Veritas storage foundation_5.0_for_unix_-_fundamentals (20)
The cost of acquiring information by natural selection (Carl Bergstrom)
This is a short talk that I gave at the Banff International Research Station workshop on Modeling and Theory in Population Biology. The idea is to try to understand how the burden of natural selection relates to the amount of information that selection puts into the genome.
It's based on the first part of this research paper:
The cost of information acquisition by natural selection
Ryan Seamus McGee, Olivia Kosterlitz, Artem Kaznatcheev, Benjamin Kerr, Carl T. Bergstrom
bioRxiv 2022.07.02.498577; doi: https://doi.org/10.1101/2022.07.02.498577
PPT on Direct Seeded Rice presented at the three-day 'Training and Validation Workshop on Modules of Climate Smart Agriculture (CSA) Technologies in South Asia' workshop on April 22, 2024.
The technology uses reclaimed CO₂ as the dyeing medium in a closed loop process. When pressurized, CO₂ becomes supercritical (SC-CO₂). In this state CO₂ has a very high solvent power, allowing the dye to dissolve easily.
Authoring a personal GPT for your research and practice: How we created the Q... (Leonel Morgado)
Thematic analysis in qualitative research is a time-consuming and systematic task, typically done using teams. Team members must ground their activities on common understandings of the major concepts underlying the thematic analysis, and define criteria for its development. However, conceptual misunderstandings, equivocations, and lack of adherence to criteria are challenges to the quality and speed of this process. Given the distributed and uncertain nature of this process, we wondered if the tasks in thematic analysis could be supported by readily available artificial intelligence chatbots. Our early efforts point to potential benefits: not just saving time in the coding process but better adherence to criteria and grounding, by increasing triangulation between humans and artificial intelligence. This tutorial will provide a description and demonstration of the process we followed, as two academic researchers, to develop a custom ChatGPT to assist with qualitative coding in the thematic data analysis process of immersive learning accounts in a survey of the academic literature: QUAL-E Immersive Learning Thematic Analysis Helper. In the hands-on time, participants will try out QUAL-E and develop their ideas for their own qualitative coding ChatGPT. Participants that have the paid ChatGPT Plus subscription can create a draft of their assistants. The organizers will provide course materials and slide deck that participants will be able to utilize to continue development of their custom GPT. The paid subscription to ChatGPT Plus is not required to participate in this workshop, just for trying out personal GPTs during it.
(June 12, 2024) Webinar: Development of PET theranostics targeting the molecu... (Scintica Instrumentation)
Targeting Hsp90 and its pathogen Orthologs with Tethered Inhibitors as a Diagnostic and Therapeutic Strategy for cancer and infectious diseases with Dr. Timothy Haystead.
The Thematic Appreciation Test is a psychological assessment tool used to measure an individual's appreciation and understanding of specific themes or topics. This test helps to evaluate an individual's ability to connect different ideas and concepts within a given theme, as well as their overall comprehension and interpretation skills. The results of the test can provide valuable insights into an individual's cognitive abilities, creativity, and critical thinking skills.
ESR spectroscopy in liquid food and beverages.pptx (PRIYANKA PATEL)
With an increasing population, people need to rely on packaged foodstuffs. Packaging of food materials requires the preservation of food. There are various methods of treating food to preserve it, and irradiation is one of them. It is the most common and most harmless method of food preservation, as it does not alter the necessary micronutrients of food materials. Although irradiated food does not harm human health, quality assessment of food is still required to provide consumers with the necessary information about it. ESR spectroscopy is the most sophisticated way to investigate the quality of food and the free radicals induced during its processing. The ESR spin trapping technique is useful for detecting highly unstable radicals in food and for measuring the antioxidant capability of liquid food and beverages.
Mending Clothing to Support Sustainable Fashion_CIMaR 2024.pdf (Selcen Ozturkcan)
Ozturkcan, S., Berndt, A., & Angelakis, A. (2024). Mending clothing to support sustainable fashion. Presented at the 31st Annual Conference by the Consortium for International Marketing Research (CIMaR), 10-13 Jun 2024, University of Gävle, Sweden.
Immersive Learning That Works: Research Grounding and Paths Forward (Leonel Morgado)
We will metaverse into the essence of immersive learning, into its three dimensions and conceptual models. This approach encompasses elements from teaching methodologies to social involvement, through organizational concerns and technologies. Challenging the perception of learning as knowledge transfer, we introduce a 'Uses, Practices & Strategies' model operationalized by the 'Immersive Learning Brain' and ‘Immersion Cube’ frameworks. This approach offers a comprehensive guide through the intricacies of immersive educational experiences and spotlighting research frontiers, along the immersion dimensions of system, narrative, and agency. Our discourse extends to stakeholders beyond the academic sphere, addressing the interests of technologists, instructional designers, and policymakers. We span various contexts, from formal education to organizational transformation to the new horizon of an AI-pervasive society. This keynote aims to unite the iLRN community in a collaborative journey towards a future where immersive learning research and practice coalesce, paving the way for innovative educational research and practice landscapes.
The debris of the ‘last major merger’ is dynamically young (Sérgio Sacani)
The Milky Way’s (MW) inner stellar halo contains an [Fe/H]-rich component with highly eccentric orbits, often referred to as the ‘last major merger.’ Hypotheses for the origin of this component include Gaia-Sausage/Enceladus (GSE), where the progenitor collided with the MW proto-disc 8–11 Gyr ago, and the Virgo Radial Merger (VRM), where the progenitor collided with the MW disc within the last 3 Gyr. These two scenarios make different predictions about observable structure in local phase space, because the morphology of debris depends on how long it has had to phase mix. The recently identified phase-space folds in Gaia DR3 have positive caustic velocities, making them fundamentally different than the phase-mixed chevrons found in simulations at late times. Roughly 20 per cent of the stars in the prograde local stellar halo are associated with the observed caustics. Based on a simple phase-mixing model, the observed number of caustics is consistent with a merger that occurred 1–2 Gyr ago. We also compare the observed phase-space distribution to FIRE-2 Latte simulations of GSE-like mergers, using a quantitative measurement of phase mixing (2D causticality). The observed local phase-space distribution best matches the simulated data 1–2 Gyr after collision, and certainly not later than 3 Gyr. This is further evidence that the progenitor of the ‘last major merger’ did not collide with the MW proto-disc at early times, as is thought for the GSE, but instead collided with the MW disc within the last few Gyr, consistent with the body of work surrounding the VRM.
COURSE DEVELOPERS
Gail Ade
Billie Gerrits

TECHNICAL CONTRIBUTORS AND REVIEWERS
Jade Arrington
Margy Cassidy
Roy Freeman
Joe Gallagher
Bruce Garner
Tomer Gurantz
Bill Havey
Gene Henriksen
Gerald Jackson
Raymond Karns
Bill Lehman
Bob Lucas
Durivunc Manikhung
Christian Rabanus
Dan Rogers
Kleber Saldanha
Albrecht Scriba
Michel Simoni
Ananda Sirisena
Pete Tuemmes
Copyright © 2006 Symantec Corporation. All rights reserved. Symantec, the Symantec Logo, and VERITAS are trademarks or registered trademarks of Symantec Corporation or its affiliates in the U.S. and other countries. Other names may be trademarks of their respective owners.

THIS PUBLICATION IS PROVIDED "AS IS" AND ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT, ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO BE LEGALLY INVALID. SYMANTEC CORPORATION SHALL NOT BE LIABLE FOR INCIDENTAL OR CONSEQUENTIAL DAMAGES IN CONNECTION WITH THE FURNISHING, PERFORMANCE, OR USE OF THIS PUBLICATION. THE INFORMATION CONTAINED HEREIN IS SUBJECT TO CHANGE WITHOUT NOTICE.

No part of the contents of this book may be reproduced or transmitted in any form or by any means without the written permission of the publisher.

VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
Symantec Corporation
20330 Stevens Creek Blvd.
Cupertino, CA 95014
Table of Contents

Course Introduction
  What Is Storage Virtualization? ......................... Intro-2
  Introducing VERITAS Storage Foundation .................. Intro-6
  VERITAS Storage Foundation Curriculum ................... Intro-11

Lesson 1: Virtual Objects
  Physical Data Storage ................................... 1-3
  Virtual Data Storage .................................... 1-10
  Volume Manager Storage Objects .......................... 1-13
  Volume Manager RAID Levels .............................. 1-15

Lesson 2: Installation and Interfaces
  Installation Prerequisites .............................. 2-3
  Adding License Keys ..................................... 2-5
  VERITAS Software Packages ............................... 2-7
  Installing Storage Foundation ........................... 2-10
  Storage Foundation User Interfaces ...................... 2-16
  Managing the VEA Software ............................... 2-21

Lesson 3: Creating a Volume and File System
  Preparing Disks and Disk Groups for Volume Creation ..... 3-3
  Creating a Volume ....................................... 3-12
  Adding a File System to a Volume ........................ 3-18
  Displaying Volume Configuration Information ............. 3-21
  Displaying Disk and Disk Group Information .............. 3-24
  Removing Volumes, Disks, and Disk Groups ................ 3-30

Lesson 4: Selecting Volume Layouts
  Comparing Volume Layouts ................................ 4-3
  Creating Volumes with Various Layouts ................... 4-9
  Creating a Layered Volume ............................... 4-18
  Allocating Storage for Volumes .......................... 4-25

Lesson 5: Making Basic Configuration Changes
  Administering Mirrored Volumes .......................... 5-3
  Resizing a Volume ....................................... 5-10
  Moving Data Between Systems ............................. 5-16
  Renaming Disks and Disk Groups .......................... 5-21
  Managing Old Disk Group Versions ........................ 5-23
Lesson 6: Administering File Systems
  Comparing the Allocation Policies of VxFS and Traditional File Systems .. 6-3
  Using VERITAS File System Commands ...................... 6-5
  Controlling File System Fragmentation ................... 6-9
  Logging in VxFS ......................................... 6-15

Lesson 7: Resolving Hardware Problems
  How Does VxVM Interpret Failures in Hardware? ........... 7-3
  Recovering Disabled Disk Groups ......................... 7-8
  Resolving Disk Failures ................................. 7-12
  Managing Hot Relocation at the Host Level ............... 7-22

Appendix A: Lab Exercises
  Lab 1: Introducing the Lab Environment .................. A-3
  Lab 2: Installation and Interfaces ...................... A-7
  Lab 3: Creating a Volume and File System ................ A-15
  Lab 4: Selecting Volume Layouts ......................... A-21
  Lab 5: Making Basic Configuration Changes ............... A-29
  Lab 6: Administering File Systems ....................... A-37
  Lab 7: Resolving Hardware Problems ...................... A-47

Appendix B: Lab Solutions
  Lab 1 Solutions: Introducing the Lab Environment ........ B-3
  Lab 2 Solutions: Installation and Interfaces ............ B-7
  Lab 3 Solutions: Creating a Volume and File System ...... B-21
  Lab 4 Solutions: Selecting Volume Layouts ............... B-33
  Lab 5 Solutions: Making Basic Configuration Changes ..... B-47
  Lab 6 Solutions: Administering File Systems ............. B-67
  Lab 7 Solutions: Resolving Hardware Problems ............ B-85

Glossary
Index
Storage Management Issues

[Slide: three servers — a human resource database, an e-mail server, and a customer order database — at 90%, 10%, and 50% full. Problem: the customer order database cannot access the unutilized storage on the other servers. Common solution: add more storage.]
What Is Storage Virtualization?
Storage Management Issues
Storage management is becoming increasingly complex due to:
Storage hardware from multiple vendors
Unprecedented data growth
Dissimilar applications with different storage resource needs
Management pressure to increase efficiency
Multiple operating systems
Rapidly changing business climates
Budgetary and cost-control constraints
To create a truly efficient environment, administrators must have the tools to
skillfully manage large, complex, and heterogeneous environments. Storage
virtualization helps businesses to simplify the complex IT storage environment
and gain control of capital and operating costs by providing consistent and
automated management of storage.
What Is Storage Virtualization?

Virtualization: the logical representation of physical storage across the entire enterprise.

[Slide: multiple consumers access virtual storage resources, which are mapped onto physical storage resources.]

Application requirements from storage:
  Capacity:     application requirements, growth potential
  Performance:  throughput, responsiveness
  Availability: failure resistance, recovery time
Physical aspects of storage:
  Capacity:     disk size, number of disks/path
  Performance:  disk seek time, cache hit rate
  Availability: MTBF, path redundancy
Defining Storage Virtualization
Storage virtualization is the process of taking multiple physical storage devices
and combining them into logical (virtual) storage devices that are presented to the
operating system, applications, and users. Storage virtualization builds a layer of
abstraction above the physical storage so that data is not restricted to specific
hardware devices, creating a flexible storage environment. Storage virtualization
simplifies management of storage and potentially reduces cost through improved
hardware utilization and consolidation.
With storage virtualization, the physical aspects of storage are masked to users.
Administrators can concentrate less on the physical aspects of storage and more on
delivering access to necessary data.
Benefits of storage virtualization include:
Greater IT productivity through the automation of manual tasks and simplified
administration of heterogeneous environments
Increased application return on investment through improved throughput and
increased uptime
Lower hardware costs through the optimized use of hardware resources
Storage Virtualization: Types

[Slide: three architectures — storage-based (one array presented to multiple servers), host-based (multiple arrays presented to a single server), and network-based (multiple arrays presented to multiple servers).]

Most companies use a combination of these three
types of storage virtualization to support their chosen
architectures and application requirements.
How Is Storage Virtualization Used in Your Environment?
The way in which you use storage virtualization, and the benefits derived from
storage virtualization, depend on the nature of your IT infrastructure and your
specific application requirements. Three main types of storage virtualization used
today are:
Storage-based
Host-based
Network-based
Most companies use a combination of these three types of storage virtualization
solutions to support their chosen architecture and application needs.
The type of storage virtualization that you use depends on factors such as the:
Heterogeneity of deployed enterprise storage arrays
Need for applications to access data contained in multiple storage devices
Importance of uptime when replacing or upgrading storage
Need for multiple hosts to access data within a single storage device
Value of the maturity of technology
Investments in a SAN architecture
Level of security required
Level of scalability needed
Storage-Based Storage Virtualization
Storage-based storage virtualization refers to disks within an individual array that
are presented virtually to multiple servers. Storage is virtualized by the array itself.
For example, RAID arrays virtualize the individual disks (that are contained within
the array) into logical LUNs, which are accessed by host operating systems using
the same method of addressing as a directly-attached physical disk.
This type of storage virtualization is useful under these conditions:
You need to have data in an array accessible to servers of different operating
systems.
All of a server's data needs are met by storage contained in the physical box.
You are not concerned about disruption to data access when replacing or
upgrading the storage.
The main limitation of this type of storage virtualization is that data cannot be
shared between arrays, creating islands of storage that must be managed.
Host-Based Storage Virtualization
Host-based storage virtualization refers to disks within multiple arrays and from
multiple vendors that are presented virtually to a single host server. For example,
software-based solutions, such as VERITAS Storage Foundation, provide host-
based storage virtualization. Using VERITAS Storage Foundation to administer
host-based storage virtualization is the focus of this training.
Host-based storage virtualization is useful under these conditions:
A server needs to access data stored in multiple storage devices.
You need the flexibility to access data stored in arrays from different vendors.
Additional servers do not need to access the data assigned to a particular host.
Maturity of technology is a highly important factor to you in making IT
decisions.
Note: By combining VERITAS Storage Foundation with clustering technologies,
such as VERITAS Cluster Volume Manager, storage can be virtualized to multiple
hosts of the same operating system.
Network-Based Storage Virtualization
Network-based storage virtualization refers to disks from multiple arrays and
multiple vendors that are presented virtually to multiple servers. Network-based
storage virtualization is useful under these conditions:
You need to have data accessible across heterogeneous servers and storage
devices.
You require central administration of storage across all Network Attached
Storage (NAS) systems or Storage Area Network (SAN) devices.
You want to ensure that replacing or upgrading storage does not disrupt data
access.
You want to virtualize storage to provide block services to applications.
VERITAS Storage Foundation

VERITAS Storage Foundation provides host-based
storage virtualization for performance, availability,
and manageability benefits for enterprise computing
environments.

[Slide: the software stack, from the company business process down to the hardware and operating system — High Availability: VERITAS Cluster Server/Replication; Application Solutions: Storage Foundation for Databases; Data Protection: VERITAS NetBackup/Backup Exec; Volume Manager and File System: VERITAS Storage Foundation.]
Introducing VERITAS Storage Foundation
VERITAS storage management solutions address the increasing costs of managing
mission-critical data and disk resources in Direct Attached Storage (DAS) and
Storage Area Network (SAN) environments.
At the heart of these solutions is VERITAS Storage Foundation, which includes
VERITAS Volume Manager (VxVM), VERITAS File System (VxFS), and other
value-added products. Independently, these components provide key benefits.
When used together as an integrated solution, VxVM and VxFS deliver the highest
possible levels of performance, availability, and manageability for heterogeneous
storage environments.
[Slide: users, applications, and databases access virtual storage resources, which VERITAS Volume Manager (VxVM) maps onto the underlying physical storage.]
What Is VERITAS Volume Manager?
VERITAS Volume Manager, the industry leader in storage virtualization, is an
easy-to-use, online storage management solution for organizations that require
uninterrupted, consistent access to mission-critical data. VxVM enables you to
apply business policies to configure, share, and manage storage without worrying
about the physical limitations of disk storage. VxVM reduces the total cost of
ownership by enabling administrators to easily build storage configurations that
improve performance and increase data availability.
Working in conjunction with VERITAS File System, VERITAS Volume Manager
creates a foundation for other value-added technologies, such as SAN
environments, clustering and failover, automated management, backup and HSM,
and remote browser-based management.
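To make this concrete, the sketch below strings together the basic VxVM provisioning commands that later lessons cover in detail. The device, disk group, and volume names (c1t1d0, datadg, datavol) are placeholder assumptions, and the guard lets the script degrade to a reference listing on hosts without VxVM installed:

```shell
# Illustrative sketch only: disk, disk group, and volume names are placeholders.
provision_datadg() {
    disk=$1
    /etc/vx/bin/vxdisksetup -i "$disk"       # initialize the disk for VxVM use
    vxdg init datadg datadg01="$disk"        # create a disk group containing it
    vxassist -g datadg make datavol 1g       # create a 1-GB volume in the group
    vxprint -g datadg -ht datavol            # display the resulting layout
}

# Guard: run only where VxVM is actually installed.
if command -v vxassist >/dev/null 2>&1; then
    provision_datadg c1t1d0
else
    echo "VxVM not installed; commands shown for reference only"
fi
```

Here vxassist chooses the underlying disks and builds the subdisks and plexes automatically; Lesson 3 walks through these steps individually.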
What Is VERITAS File System?
A file system is a collection of directories organized into a structure that enables
you to locate and store files. All processed information is eventually stored in a file
system. The main purposes of a file system are to:
Provide shared access to data storage.
Provide structured access to data.
Control access to data.
Provide a common, portable application interface.
Enable the manageability of data storage.
The value of a file system depends on its integrity and performance.
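Anticipating later lessons, here is a hedged sketch of placing a VxFS file system on an existing VxVM volume; the disk group, volume, and mount point names (datadg, datavol, /data) are assumptions for illustration:

```shell
# Illustrative sketch only: datadg, datavol, and /data are placeholders.
make_and_mount_vxfs() {
    dg=$1; vol=$2; mnt=$3
    # Build the file system on the raw (character) device node.
    # (-F selects the fs type on Solaris/HP-UX; Linux uses -t instead.)
    mkfs -F vxfs "/dev/vx/rdsk/$dg/$vol"
    mkdir -p "$mnt"
    # Mount the corresponding block device node.
    mount -F vxfs "/dev/vx/dsk/$dg/$vol" "$mnt"
}

# Guard: only attempt this where VxVM device nodes exist.
if [ -d /dev/vx ]; then
    make_and_mount_vxfs datadg datavol /data
else
    echo "VxVM/VxFS not present; commands shown for reference only"
fi
```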
VERITAS Storage Foundation: Benefits
• Manageability
  - Manage storage and file systems from one interface.
  - Configure storage online across Solaris, HP-UX, AIX, and Linux.
  - Provide additional benefits for array environments, such as inter-array mirroring.
• Availability
  - Features are implemented to protect against data loss.
  - Online operations lessen planned downtime.
• Performance
  - I/O throughput can be maximized using volume layouts.
  - Performance bottlenecks can be located and eliminated using analysis tools.
• Scalability
  - VxVM and VxFS run on 32-bit and 64-bit operating systems.
  - Storage can be deported to larger enterprise platforms.
Benefits of VERITAS Storage Foundation
Commercial system availability now requires continuous uptime in many
implementations. Systems must be available 24 hours a day, 7 days a week, and
365 days a year. VERITAS Storage Foundation reduces the cost of ownership by
providing scalable manageability, availability, and performance enhancements for
these enterprise computing environments.
Manageability
Management of storage and the file system is performed online in real time,
eliminating the need for planned downtime.
Online volume and file system management can be performed through an
intuitive, easy-to-use graphical user interface that is integrated with the
VERITAS Volume Manager (VxVM) product.
VxVM provides consistent management across Solaris, HP-UX, AIX, Linux,
and Windows platforms.
VxFS command operations are consistent across Solaris, HP-UX, AIX, and
Linux platforms.
Storage Foundation provides additional benefits for array environments, such
as inter-array mirroring.
Availability
Through software RAID techniques, storage remains available in the event of
hardware failure.
Hot relocation guarantees the rebuilding of redundancy in the case of a disk
failure.
Recovery time is minimized with logging and background mirror
resynchronization.
Logging of file system changes enables fast file system recovery.
A snapshot of a file system provides an internally consistent, read-only image
for backup, and file system checkpoints provide read-writable snapshots.
Performance
I/O throughput can be maximized by measuring and modifying volume layouts
while storage remains online.
Performance bottlenecks can be located and eliminated using VxVM analysis
tools.
Extent-based allocation of space for files minimizes file-level access time.
Read-ahead buffering dynamically tunes itself to the volume layout.
Aggressive caching of writes greatly reduces the number of disk accesses.
Direct I/O performs file I/O directly into and out of user buffers.
Scalability
VxVM runs on 32-bit and 64-bit operating systems.
Hosts can be replaced without modifying storage.
Hosts with different operating systems can access the same storage.
Storage devices can be spanned.
VxVM is fully integrated with VxFS so that modifying the volume layout
automatically modifies the file system internals.
With VxFS, several add-on products are available for maximizing performance
in a database environment.
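The volume/file-system integration described above is easiest to see with vxresize, which grows (or shrinks) a volume and the VxFS file system on it in one online operation. A minimal sketch, with placeholder disk group and volume names:

```shell
# Illustrative sketch only: disk group and volume names are placeholders.
grow_volume_and_fs() {
    dg=$1; vol=$2; delta=$3
    # vxresize resizes the volume and the VxFS file system on it together,
    # while the file system remains mounted.
    /etc/vx/bin/vxresize -g "$dg" "$vol" "$delta"
}

# Guard: run only where the vxresize utility is present.
if [ -x /etc/vx/bin/vxresize ]; then
    grow_volume_and_fs datadg datavol +1g    # grow by 1 GB
else
    echo "vxresize not present; command shown for reference only"
fi
```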
Storage Foundation and RAID Arrays: Benefits
With Storage Foundation, you can:
• Reconfigure and resize storage across the logical devices presented by a RAID array.
• Mirror between arrays to improve disaster recovery protection of an array.
• Use arrays and JBODs.
• Use snapshots with mirrors in different locations for disaster recovery and off-host processing.
• Use VERITAS Volume Replicator (VVR) to provide hardware-independent replication services.
Benefits of VxVM and RAID Arrays
RAID arrays virtualize individual disks into logical LUNs, which are accessed by
host operating systems as "physical devices," that is, using the same method of
addressing as a directly-attached physical disk.
VxVM virtualizes both the physical disks and the logical LUNs presented by a
RAID array. Modifying the configuration of a RAID array may result in changes in
SCSI addresses of LUNs, requiring modification of application configurations.
VxVM provides an effective method of reconfiguring and resizing storage across
the logical devices presented by a RAID array.
When using VxVM with RAID arrays, you can leverage the strengths of both
technologies:
You can use VxVM to mirror between arrays to improve disaster recovery
protection against the failure of an array, particularly if one array is remote.
Arrays can be of different manufacture; that is, one array can be a RAID array
and the other a JBOD.
VxVM facilitates data reorganization and maximizes available resources.
VxVM improves overall performance by making I/O activity parallel for a
volume through more than one I/O path to and within the array.
You can use snapshots with mirrors in different locations, which is beneficial
for disaster recovery and off-host processing.
If you include VERITAS Volume Replicator (VVR) in your environment,
VVR can be used to provide hardware-independent replication services.
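As a sketch of the mirroring and snapshot techniques listed above (disk and volume names are placeholder assumptions; snapstart is the traditional vxassist step that prepares a snapshot plex):

```shell
# Illustrative sketch only: datadg, datavol, and datadg02 are placeholders.
mirror_and_snap() {
    dg=$1; vol=$2; disk=$3
    vxassist -g "$dg" mirror "$vol" "$disk"   # add a plex on the named disk,
                                              # e.g. one in a second array
    vxassist -g "$dg" snapstart "$vol"        # begin building a snapshot plex
}

# Guard: run only where VxVM is installed.
if command -v vxassist >/dev/null 2>&1; then
    mirror_and_snap datadg datavol datadg02
else
    echo "VxVM not present; commands shown for reference only"
fi
```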
Storage Foundation Fundamentals: Overview
• Lesson 1: Virtual Objects
• Lesson 2: Installation and Interfaces
• Lesson 3: Creating a Volume and File System
• Lesson 4: Selecting Volume Layouts
• Lesson 5: Making Basic Configuration Changes
• Lesson 6: Administering File Systems
• Lesson 7: Resolving Hardware Problems
VERITAS Storage Foundation for UNIX: Fundamentals Overview
This training provides comprehensive instruction on operating the file and disk
management foundation products: VERITAS Volume Manager (VxVM) and
VERITAS File System (VxFS). In this training, you learn how to combine file
system and disk management technology to ensure easy management of all storage
and maximum availability of essential data.
Objectives
After completing this training, you will be able to:
Identify VxVM virtual storage objects and volume layouts.
Install and configure Storage Foundation.
Configure and manage disks and disk groups.
Create concatenated, striped, mirrored, RAID-5, and layered volumes.
Configure volumes by adding mirrors and logs and resizing volumes and file
systems.
Perform file system administration.
Resolve basic hardware problems.
Course Resources
• Lab Exercises (Appendix A)
• Lab Solutions (Appendix B)
• Glossary
Additional Course Resources
Appendix A: Lab Exercises
This section contains hands-on exercises that enable you to practice the concepts
and procedures presented in the lessons.
Appendix B: Lab Solutions
This section contains detailed solutions to the lab exercises for each lesson.
Glossary
For your reference, this course includes a glossary of terms related to VERITAS
Storage Foundation.
Typographic Conventions Used in This Course
The following tables describe the typographic conventions used in this course.

Typographic Conventions in Text and Commands

Convention: Courier New, bold
Element: Command input, both syntax and examples
Examples:
  To display the robot and drive configuration: tpconfig -d
  To display disk information: vxdisk -o alldgs list

Convention: Courier New, plain
Element: Command output; command names, directory names, file names, path
names, user names, passwords, and URLs when used within regular text paragraphs
Examples:
  In the output:
    protocol minimum: 40
    protocol maximum: 60
    protocol current: 0
  Locate the altnames directory.
  Go to http://www.symantec.com.
  Enter the value 300.
  Log on as user1.

Convention: Courier New, italic (bold or plain)
Element: Variables in command syntax and examples; variables in command
input are italic, plain; variables in command output are italic, bold
Examples:
  To install the media server: /cdrom_directory/install
  To access a manual page: man command_name
  To display detailed information for a disk: vxdisk -g disk_group list disk_name

Typographic Conventions in Graphical User Interface Descriptions

Convention: Arrow
Element: Menu navigation paths
Example: Select File --> Save.

Convention: Initial capitalization
Element: Buttons, menus, windows, options, and other interface elements
Examples: Select the Next button. Open the Task Status window. Remove the
checkmark from the Print File check box.

Convention: Quotation marks
Element: Interface elements with long names
Example: Select the "Include subvolumes in object view window" check box.
Lesson Introduction
• Lesson 1: Virtual Objects
• Lesson 2: Installation and Interfaces
• Lesson 3: Creating a Volume and File System
• Lesson 4: Selecting Volume Layouts
• Lesson 5: Making Basic Configuration Changes
• Lesson 6: Administering File Systems
• Lesson 7: Resolving Hardware Problems
Lesson Topics and Objectives

Topic 1: Physical Data Storage — Identify the structural characteristics of a disk
that are affected by placing a disk under VxVM control.
Topic 2: Virtual Data Storage — Describe the structural characteristics of a disk
after it is placed under VxVM control.
Topic 3: Volume Manager Storage Objects — Identify the virtual objects that are
created by VxVM to manage data storage, including disk groups, VxVM disks,
subdisks, plexes, and volumes.
Topic 4: Volume Manager RAID Levels — Define VxVM RAID levels and
identify virtual storage layout types used by VxVM to remap address space.
Physical Disk Structure
Physical storage objects:
• The basic physical storage device that ultimately stores your data is the hard disk.
• When you install your operating system, hard disks are formatted as part of the installation program.
• Partitioning is the basic method of organizing a disk to prepare for files to be written to and retrieved from the disk.
• A partitioned disk has a prearranged storage pattern that is designed for the storage and retrieval of data.
Solaris | HP-UX | AIX | Linux
Physical Data Storage
Physical Disk Structure
Solaris
A physical disk under Solaris contains the partition table of the disk, the volume
table of contents (VTOC), in the first sector (512 bytes) of the disk. The VTOC has
at least an entry for the backup partition on the whole disk (partition tag 5, normally
partition number 2), so the OS may work correctly with the disk. The VTOC is
always a part of the backup partition and may be part of a standard data partition.
You can destroy the VTOC using the raw device driver on that partition, making
the disk immediately unusable.

[Diagram: sector 0 of the disk holds the VTOC; sectors 1-15 of the root partition hold the bootblock; partition 2 (the backup slice) refers to the entire disk; the remaining partitions (slices) divide the disk.]
If the disk contains the partition for the root file system mounted on / (partition tag 2), for
example on an OS disk, this root partition contains the bootblock for the first boot stage
after the OpenBoot PROM within sectors 1-15. Sector 0 is skipped, so there is no
overlapping between VTOC and bootblock if the root partition starts at the beginning of
the disk.
The first sector of a file system on Solaris cannot start before sector 16 of the partition.
Sector 16 contains the main superblock of the file system. Using the block device driver
of the file system prevents the VTOC and bootblock from being overwritten by application
data.
Note: On Solaris, VxVM 4.1 and later support EFI disks. EFI disks are an Intel-based
technology that replaces BIOS code.
HP-UX
On an HP-UX system, the physical disk is traditionally partitioned using either the whole
disk approach or Logical Volume Manager (LVM).

[Diagram: an HP-UX whole disk (c0t1d4) compared with an LVM disk (c0t1d4).]

The whole disk approach enables you to partition a disk in five ways: the whole disk is
used by a single file system; the whole disk is used as swap area; the whole disk is
used as a raw partition; a portion of the disk contains a file system, and the rest is used
as swap; or the boot disk contains a 2-MB special boot area, the root file system, and a
swap area.
An LVM data disk consists of four areas: Physical Volume Reserved Area (PVRA);
Volume Group Reserved Area (VGRA); user data area; and Bad Block Relocation
Area (BBRA).
AIX
A native AIX disk does not have a partition table of the kind familiar on many other
operating systems, such as Solaris, Linux, and Windows. An application could use the
entire unstructured raw physical device, but the first 512-byte sector normally contains
information, including a physical volume identifier (pvid), to support recognition of the
disk by AIX. An AIX disk is managed by IBM's Logical Volume Manager (LVM) by
default. A disk managed by LVM is called a physical volume (PV). A physical volume
consists of:
PV reserved area: A physical volume begins with a reserved area of 128 sectors
containing PV metadata, including the pvid.
Volume Group Descriptor Area (VGDA): One or two copies of the VGDA follow.
The VGDA contains information describing a volume group (VG), which consists of
one or more physical volumes. Included in the metadata in the VGDA is the definition
of the physical partition (PP) size, normally 4 MB.
Physical partitions: The remainder of the disk is divided into a number of physical
partitions. All of the PVs in a volume group have PPs of the same size, as defined in
the VGDA. In a normal VG, there can be up to 32 PVs. In a big VG, there can
be up to 128 PVs.

[Diagram: a raw device (hdisk3) laid out as the physical volume reserved area (128 sectors), the Volume Group Descriptor Areas, and the physical partitions (equal size, defined in the VGDA).]
The term partition is used differently in different operating systems. In many kinds of
UNIX, Linux, and Windows, a partition is a variable-sized portion of contiguous disk
space that can be formatted to contain a file system. In LVM, a PP is mapped to a logical
partition (LP), and one or more LPs from any location throughout the VG can be
combined to define a logical volume (LV). A logical volume is the entity that can be
formatted to contain a file system (by default, either JFS or JFS2). So a physical partition
compares in concept more closely to a disk allocation cluster in some other operating
systems, and a logical volume plays the role that a partition does in some other operating
systems.
Linux
On Linux, a nonboot disk can be divided into one to four primary partitions. One of these
primary partitions can be used to contain logical partitions, and it is called the extended
partition. The extended partition can have up to 11 logical partitions on a SCSI disk and up
to 60 logical partitions on an IDE disk. You can use fdisk to set up partitions on a Linux
disk.
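On a live Linux system, the primary/extended/logical layout described above can be inspected without root privileges through /proc/partitions (fdisk -l shows the same information but requires root); the helper below falls back gracefully on other platforms:

```shell
# Print the kernel's view of the partition table: on a SCSI disk, sda1-sda4
# are primary partitions and sda5 and up are logical partitions inside the
# extended partition.
list_partitions() {
    if [ -r /proc/partitions ]; then
        cat /proc/partitions
    else
        echo "/proc/partitions not available (not a Linux system?)"
    fi
}

list_partitions
```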
Primary Partition 1: /dev/sda1 or /dev/hda1
Primary Partition 2: /dev/sda2 or /dev/hda2
Primary Partition 3: /dev/sda3 or /dev/hda3
Primary Partition 4 (Extended Partition): /dev/sda4 or /dev/hda4
On a Linux boot disk, the boot partition must be a primary partition and is typically
located within the first 1024 cylinders of the drive. On the boot disk, you must also have a
dedicated swap partition. The swap partition can be a primary or a logical partition, and it
can be located anywhere on the disk.
Logical partitions must be contiguous, but they do not need to take up all of the space of
the extended partition. Only one primary partition can be extended. The extended partition
does not take up any space until it is subdivided into logical partitions.
VERITAS Volume Manager 4.0 for Linux does not support most hardware RAID
controllers currently unless they present SCSI device interfaces with names of the
form /dev/sdX.
The following controllers are supported:
PERC, on the Dell 1650
MegaRAID, on the Dell 1650
ServeRAID, on x440 systems
Compaq array controllers that require the Smart2 and CCISS drivers (which present
device paths such as /dev/ida/c#d#p# and /dev/cciss/c#d#p#) are supported
for normal use and for rootability.
Physical Disk Naming
VxVM parses disk names to retrieve connectivity information for disks. Operating
systems have different conventions:

Solaris: /dev/[r]dsk/c1t9d0s2
HP-UX:  /dev/[r]dsk/c3t2d0 (no slice)
AIX:    /dev/hdisk2 (no slice)
Linux:  SCSI disks: /dev/sda[1-4] (primary partitions),
        /dev/sda[5-16] (logical partitions),
        /dev/sdbN (on the second disk), /dev/sdcN (on the third disk)
        IDE disks: /dev/hdaN, /dev/hdbN, /dev/hdcN
Physical Disk Naming
Solaris
You locate and access the data on a physical disk by using a device name that
specifies the controller, target ID, and disk number. A typical device name uses the
format: c#t#d#.
c# is the controller number.
t# is the target ID.
d# is the logical unit number (LUN) of the drive attached to the target.
If a disk is divided into partitions, you also specify the partition number in the
device name:
s# is the partition (slice) number.
For example, the device name c0t0d0s1 is connected to controller number 0 in
the system, with a target ID of 0, physical disk number 0, and partition number 1
on the disk.
HP-UX
You locate and access the data on a physical disk by using a device name that
specifies the controller, target ID, and disk number. A typical device name uses the
format: c#t#d#.
c# is the controller number.
t# is the target ID.
d# is the logical unit number (LUN) of the drive attached to the target.
For example, the c0t0d0 device name is connected to controller number 0 in
the system, with a target ID of 0, and the physical disk number 0.
AIX
Every device in AIX is assigned a location code that describes its connection to the
system. The general format of this identifier is AB-CD-EF-GH, where the letters
represent decimal digits or uppercase letters. The first two characters represent the
bus, the second pair identify the adapter, the third pair represent the connector, and
the final pair uniquely represent the device. For example, a SCSI disk drive might
have a location identifier of 04-01-00-6,0. In this example, 04 means the PCI
bus, 01 is the slot number on the PCI bus occupied by the SCSI adapter, 00 means
the only or internal connector, and the 6,0 means SCSI ID 6, LUN 0.
However, this data is used internally by AIX to locate a device. The device name
that a system administrator or software uses to identify a device is less hardware
dependent. The system maintains a special database called the Object Data
Manager (ODM) that contains essential definitions for most objects in the system,
including devices. Through the ODM, a device name is mapped to the location
identifier. The device names are referred to by special files found in the /dev
directory. For example, the SCSI disk identified previously might have the device
name hdisk3 (the fourth hard disk identified by the system). The device named
hdisk3 is accessed by the file name /dev/hdisk3.
If a device is moved so that it has a different location identifier, the ODM is
updated so that it retains the same device name, and the move is transparent to
users. This is facilitated by the physical volume identifier stored in the first sector
of a physical volume. This unique 128-bit number is used by the system to
recognize the physical volume wherever it may be attached because it is also
associated with the device name in the ODM.
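The AB-CD-EF-GH location code described above can be decomposed programmatically. The following Python sketch is purely illustrative (it is not an AIX or VxVM utility) and assumes the four-field format shown in the example:

```python
def parse_location_code(code):
    """Split an AIX location code of the form AB-CD-EF-GH into its fields.

    For a SCSI disk such as 04-01-00-6,0: bus 04, adapter slot 01,
    connector 00, and device "6,0" (SCSI ID 6, LUN 0).
    """
    bus, adapter, connector, device = code.split("-")
    return {"bus": bus, "adapter": adapter,
            "connector": connector, "device": device}

loc = parse_location_code("04-01-00-6,0")
print(loc["device"])  # "6,0" -> SCSI ID 6, LUN 0
```

As the text notes, administrators rarely work with these codes directly; the ODM maps them to stable names such as hdisk3.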
Linux
On Linux, device names are displayed in the format:
• sdx[N]
• hdx[N]
In the syntax:
sd refers to a SCSI disk, and hd refers to an EIDE disk.
x is a letter that indicates the order of disks detected by the operating system.
For example, sda refers to the first SCSI disk, sdb refers to the second SCSI
disk, and so on.
N is an optional parameter that represents a partition number in the range 1
through 16. For example, sda7 references partition 7 on the first SCSI disk.
Primary partitions on a disk are 1, 2, 3, 4; logical partitions have numbers 5 and up.
If the partition number is omitted, the device name indicates the entire disk.
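The naming conventions above are regular enough to parse mechanically, which is essentially what VxVM does when it retrieves connectivity information from a disk name. The following Python sketch is illustrative only (it is not a VxVM tool) and handles the Solaris c#t#d#[s#] and Linux sdX[N]/hdX[N] forms described above:

```python
import re

def parse_solaris(name):
    """Split a Solaris c#t#d#[s#] device name into its fields."""
    m = re.fullmatch(r"c(\d+)t(\d+)d(\d+)(?:s(\d+))?", name)
    if not m:
        raise ValueError("not a c#t#d#[s#] name: " + name)
    c, t, d, s = m.groups()
    return {"controller": int(c), "target": int(t), "lun": int(d),
            "slice": int(s) if s is not None else None}

def parse_linux(name):
    """Split a Linux sdX[N]/hdX[N] name into bus type, disk order, partition."""
    m = re.fullmatch(r"(sd|hd)([a-z])(\d{1,2})?", name)
    if not m:
        raise ValueError("not an sdX[N]/hdX[N] name: " + name)
    bus, letter, part = m.groups()
    return {"bus": "SCSI" if bus == "sd" else "EIDE",
            "disk_index": ord(letter) - ord("a"),      # sda -> 0, sdb -> 1, ...
            "partition": int(part) if part else None}  # None means whole disk

print(parse_solaris("c0t0d0s1"))  # controller 0, target 0, LUN 0, slice 1
print(parse_linux("sda7"))        # first SCSI disk, partition 7
```

An omitted slice or partition number comes back as None, matching the "entire disk" convention in the text.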
Physical Data Storage
Note: Throughout this course, the term disk is used to mean either disk or LUN.
Whatever the OS sees as a storage device, VxVM sees as a disk.
• Reads and writes on unmanaged physical disks can be a slow process.
• Disk arrays and multipathed disk arrays can improve I/O speed and throughput.
Disk array: A collection of physical disks used to balance I/O across multiple disks
Multipathed disk array: Provides multiple ports to access disks to achieve
performance and availability benefits
Disk Arrays
Reads and writes on unmanaged physical disks can be a relatively slow process,
because disks are physical devices that require time to move the heads to the
correct position on the disk before reading or writing. If all of the read and write
operations are performed to individual disks, one at a time, the read-write time can
become unmanageable.
A disk array is a collection of physical disks. Performing I/O operations on
multiple disks in a disk array can improve I/O speed and throughput.
Hardware arrays present disk storage to the host operating system as LUNs.
Multipathed Disk Arrays
Some disk arrays provide multiple ports to access disk devices. These ports,
coupled with the host bus adapter (HBA) controller and any data bus or I/O
processor local to the array, compose multiple hardware paths to access the disk
devices. This type of disk array is called a multipathed disk array.
You can connect multipathed disk arrays to host systems in many different
configurations, such as:
Connecting multiple ports to different controllers on a single host
Chaining ports through a single controller on a host
Connecting ports to different hosts simultaneously
Virtual Data Storage
• Volume Manager creates a virtual layer of data storage.
• Volume Manager volumes appear to applications to be physical disk partitions.
• Volumes have block and character device nodes in the /dev tree:
  /dev/vx/[r]dsk/...
Multidisk configurations:
• Concatenation
• Mirroring
• Striping
• RAID-5
High availability:
• Disk group import and deport
• Hot relocation
• Dynamic multipathing
Virtual Data Storage
Virtual Storage Management
VERITAS Volume Manager creates a virtual level of storage management above
the physical device level by creating virtual storage objects. The virtual storage
object that is visible to users and applications is called a volume.
What Is a Volume?
A volume is a virtual object, created by Volume Manager, that stores data. A
volume consists of space from one or more physical disks on which the data is
physically stored.
How Do You Access a Volume?
Volumes created by VxVM appear to the operating system as physical disks, and
applications that interact with volumes work in the same way as with physical
disks. All users and applications access volumes as contiguous address space using
special device files in a manner similar to accessing a disk partition.
Volumes have block and character device nodes in the /dev tree. You can supply
the name of the path to a volume in your commands and programs, in your file
system and database configuration files, and in any other context where you would
otherwise use the path to a physical disk partition.
Volume Manager Control
When you place a disk under VxVM control, a cross-platform data sharing (CDS)
disk layout is used, which ensures that the disk is accessible on different
platforms, regardless of the platform on which the disk was initialized.
OS-reserved areas that contain:
• Platform blocks
• VxVM ID blocks
• AIX and HP-UX coexistence labels
Private Region
Public Region
Volume Manager-Controlled Disks
With Volume Manager, you enable virtual data storage by bringing a disk under
Volume Manager control. By default in VxVM 4.0 and later, Volume Manager
uses a cross-platform data sharing (CDS) disk layout. A CDS disk is consistently
recognized by all VxVM-supported UNIX platforms and consists of:
OS-reserved area: To accommodate platform-specific disk usage, 128K is
reserved for disk labels, platform blocks, and platform-coexistence labels.
Private region: The private region stores information, such as disk headers,
configuration copies, and kernel logs, in addition to other platform-specific
management areas that VxVM uses to manage virtual objects. The private
region represents a small management overhead:
Operating System    Default Block/Sector Size    Default Private Region Size
Solaris             512 bytes                    65536 sectors (32MB)
HP-UX               1024 bytes                   32768 sectors (32MB)
AIX                 512 bytes                    65536 sectors (32MB)
Linux               512 bytes                    65536 sectors (32MB)
Public region: The public region consists of the remainder of the space on the
disk. The public region represents the available space that Volume Manager
can use to assign to volumes and is where an application stores data. Volume
Manager never overwrites this area unless specifically instructed to do so.
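The default private-region sector counts differ by platform only because the sector sizes differ; multiplying them out shows the defaults are byte-equivalent across platforms. A quick illustrative check in Python:

```python
# Default private region sizes: platform -> (sector size in bytes, sectors)
defaults = {
    "Solaris": (512, 65536),
    "HP-UX":   (1024, 32768),
    "AIX":     (512, 65536),
    "Linux":   (512, 65536),
}

for os_name, (sector_bytes, sectors) in defaults.items():
    size_mb = sector_bytes * sectors // (1024 * 1024)
    print(f"{os_name}: {size_mb} MB")  # the same byte size on every platform
```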
Comparing CDS and Pre-4.x Disks
CDS Disk (4.x Default):
• Private region (metadata) and public region (user data) are created on a single partition.
• Suitable for moving between different operating systems
• Not suitable for boot partitions
Sliced Disk (Pre-4.x Solaris Default):
• Private region and public region are created on separate partitions.
• Not suitable for moving between different operating systems
• Suitable for boot partitions
Simple Disk (Pre-4.x HP-UX Default):
• Private region and public region are created on the whole disk with specific offsets.
• Not suitable for moving between different operating systems
• Suitable for boot partitions
• Note: This format is called hpdisk format as of VxVM 4.1 on the HP-UX platform.
Comparing CDS Disks and Pre-4.x Disks
The pre-4.x disk layouts are still available in VxVM 4.0 and later. These layouts
are used for bringing the boot disk under VxVM control on operating systems that
support that capability.
On platforms that support bringing the boot disk under VxVM control, CDS disks
cannot be used for boot disks. CDS disks have specific disk layout requirements
that enable a common disk layout across different platforms, and these
requirements are not compatible with the particular platform-specific requirements
of boot disks. Therefore, when placing a boot disk under VxVM control, you must
use a pre-4.x disk layout (sliced on Solaris, hpdisk on HP-UX).
For non-boot disks, you can convert CDS disks to sliced disks and vice versa by
using VxVM utilities.
Other disk types, working with boot disks, and transferring data across platforms
with CDS disks are topics covered in detail in later lessons.
Volume Manager Storage Objects
[Diagram: the acctdg disk group contains volumes expvol and payvol; plexes
expvol-01, payvol-01, and payvol-02; subdisks such as acctdg01-01,
acctdg01-02, acctdg02-01, acctdg03-01, and acctdg03-02; VxVM disks
acctdg01, acctdg02, and acctdg03; and the underlying physical disks.]
Volume Manager Storage Objects
Disk Groups
A disk group is a collection of VxVM disks that share a common configuration.
You group disks into disk groups for management purposes, such as to hold the
data for a specific application or set of applications. For example, data for
accounting applications can be organized in a disk group called acctdg. A disk
group configuration is a set of records with detailed information about related
Volume Manager objects in a disk group, their attributes, and their connections.
Volume Manager objects cannot span disk groups. For example, a volume's
subdisks, plexes, and disks must be derived from the same disk group as the
volume. You can create additional disk groups as necessary. Disk groups enable
you to group disks into logical collections. Disk groups and their components can
be moved as a unit from one host machine to another.
Volume Manager Disks
A Volume Manager (VxVM) disk represents the public region of a physical disk
that is under Volume Manager control. Each VxVM disk corresponds to one
physical disk. Each VxVM disk has a unique virtual disk name called a disk media
name. The disk media name is a logical name used for Volume Manager
administrative purposes. Volume Manager uses the disk media name when
assigning space to volumes. A VxVM disk is given a disk media name when it is
added to a disk group.
Default disk media name: diskgroup##
You can supply the disk media name or allow Volume Manager to assign a default
name. The disk media name is stored with a unique disk ID to avoid name
collision. After a VxVM disk is assigned a disk media name, the disk is no longer
referred to by its physical address. The physical address (for example, c#t#d# or
hdisk#) becomes known as the disk access record.
Subdisks
A VxVM disk can be divided into one or more subdisks. A subdisk is a set of
contiguous disk blocks that represent a specific portion of a VxVM disk, which is
mapped to a specific region of a physical disk. A subdisk is a subsection of a disk's
public region. A subdisk is the smallest unit of storage in Volume Manager.
Therefore, subdisks are the building blocks for Volume Manager objects.
A subdisk is defined by an offset and a length in sectors on a VxVM disk.
Default subdisk name: DMname-##
A VxVM disk can contain multiple subdisks, but subdisks cannot overlap or share
the same portions of a VxVM disk. Any VxVM disk space that is not reserved or
that is not part of a subdisk is free space. You can use free space to create new
subdisks.
Conceptually, a subdisk is similar to a partition. Both a subdisk and a partition
divide a disk into pieces defined by an offset address and length. Each of those
pieces represents a reservation of contiguous space on the physical disk. However,
while the maximum number of partitions to a disk is limited by some operating
systems, there is no theoretical limit to the number of subdisks that can be attached
to a single plex. This number has been limited by default to a value of 4096. If
required, this default can be changed, using the vol_subdisk_num tunable
parameter. For more information on tunable parameters, see the VERITAS Volume
Manager Administrator's Guide.
Plexes
Volume Manager uses subdisks to build virtual objects called plexes. A plex is a
structured or ordered collection of subdisks that represents one copy of the data in
a volume. A plex consists of one or more subdisks located on one or more physical
disks. The length of a plex is determined by the last block that can be read or
written on the last subdisk in the plex.
Default plex name: volume_name-##
Volumes
A volume is a virtual storage device that is used by applications in a manner
similar to a physical disk. Due to its virtual nature, a volume is not restricted by the
physical size constraints that apply to a physical disk. A VxVM volume can be as
large as the total of available, unreserved free physical disk space in the disk
group. A volume consists of one or more plexes.
Default volume name: volume_name##
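The default naming conventions above follow a simple pattern: disk media names take the disk group name plus a two-digit index, subdisks take the disk media name plus a suffix, and plexes take the volume name plus a suffix. The following Python sketch is purely illustrative (the acctdg and payvol names echo the earlier example; it is not how VxVM itself assigns names):

```python
def default_names(dg, disk_count, volume, plex_count):
    """Illustrate VxVM-style default naming: disk media names dg##,
    subdisks DMname-##, plexes volume_name-##."""
    disks = [f"{dg}{i:02d}" for i in range(1, disk_count + 1)]
    subdisks = [f"{d}-01" for d in disks]          # first subdisk on each disk
    plexes = [f"{volume}-{i:02d}" for i in range(1, plex_count + 1)]
    return disks, subdisks, plexes

disks, subdisks, plexes = default_names("acctdg", 3, "payvol", 2)
print(disks)     # ['acctdg01', 'acctdg02', 'acctdg03']
print(subdisks)  # ['acctdg01-01', 'acctdg02-01', 'acctdg03-01']
print(plexes)    # ['payvol-01', 'payvol-02']
```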
Volume Layouts
Volume layout: The way plexes are configured to remap the
volume address space through which I/O is redirected
Disk Spanning: Concatenated, Striped (RAID-0), Layered
Data Redundancy: Mirrored, RAID-5, Striped and Mirrored (RAID-0+1)
Volume Manager RAID Levels
RAID
RAID is an acronym for Redundant Array of Independent Disks. RAID is a
storage management approach in which an array of disks is created, and part of the
combined storage capacity of the disks is used to store duplicate information about
the data in the array. By maintaining a redundant array of disks, you can regenerate
data in the case of disk failure.
RAID configuration models are classified in terms of RAID levels, which are
defined by the number of disks in the array, the way data is spanned across the
disks, and the method used for redundancy. Each RAID level has specific features
and performance benefits that involve a trade-off between performance and
reliability.
Volume Layouts
RAID levels correspond to volume layouts. A volume's layout refers to the
organization of plexes in a volume. Volume layout is the way plexes are
configured to remap the volume address space through which I/O is redirected at
run-time. Volume layouts are based on the concepts of disk spanning, redundancy,
and resilience.
Disk Spanning
Disk spanning is the combining of disk space from multiple physical disks to form
one logical drive. Disk spanning has two forms:
Concatenation: Concatenation is the mapping of data in a linear manner
across two or more disks.
In a concatenated volume, subdisks are arranged both sequentially and
contiguously within a plex. Concatenation allows a volume to be created from
multiple regions of one or more disks if there is not enough space for an entire
volume on a single region of a disk.
Striping: Striping is the mapping of data in equally sized chunks alternating
across multiple disks. Striping is also called interleaving.
In a striped volume, data is spread evenly across multiple disks. Stripes are
equally sized fragments that are allocated alternately and evenly to the
subdisks of a single plex. There must be at least two subdisks in a striped plex,
each of which must exist on a different disk. Configured properly, striping not
only helps to balance I/O but also to increase throughput.
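The difference between concatenation and striping is just the address mapping from volume offsets to disk regions. The following Python sketch is illustrative only (the stripe-unit size and column count are arbitrary example values, not VxVM defaults); it maps a volume block offset to a column (disk) and an offset within that column for both layouts:

```python
def concat_map(offset, column_len):
    """Concatenation: fill column 0 end to end, then column 1, and so on."""
    return offset // column_len, offset % column_len

def stripe_map(offset, stripe_unit, ncols):
    """Striping: allocate stripe-unit-sized chunks round-robin across columns."""
    chunk = offset // stripe_unit
    return chunk % ncols, (chunk // ncols) * stripe_unit + offset % stripe_unit

# With a 64-block stripe unit over 3 columns, consecutive chunks land on
# alternating disks, spreading I/O evenly across the array.
for off in (0, 64, 128, 192):
    print(off, stripe_map(off, 64, 3))
# 0 -> column 0, 64 -> column 1, 128 -> column 2, 192 -> column 0 again
```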
Data Redundancy
To protect data against disk failure, the volume layout must provide some form of
data redundancy. Redundancy is achieved in two ways:
Mirroring: Mirroring is maintaining two or more copies of volume data.
A mirrored volume uses multiple plexes to duplicate the information contained
in a volume. Although a volume can have a single plex, at least two are
required for true mirroring (redundancy of data). Each of these plexes should
contain disk space from different disks for the redundancy to be useful.
Parity: Parity is a calculated value used to reconstruct data after a failure by
doing an exclusive OR (XOR) procedure on the data. Parity information can be
stored on a disk. If part of a volume fails, the data on that portion of the failed
volume can be re-created from the remaining data and parity information.
A RAID-5 volume uses striping to spread data and parity evenly across
multiple disks in an array. Each stripe contains a parity stripe unit and data
stripe units. Parity can be used to reconstruct data if one of the disks fails. In
comparison to the performance of striped volumes, write throughput of RAID-5
volumes decreases, because parity information needs to be updated each time
data is accessed. However, in comparison to mirroring, the use of parity
reduces the amount of space required.
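The XOR property the parity description relies on is easy to demonstrate: the parity unit of a stripe is the XOR of its data units, and XOR-ing the parity with the surviving units regenerates a lost unit. A small illustrative Python sketch (toy byte strings stand in for stripe units):

```python
def xor_blocks(blocks):
    """XOR equal-length byte strings together, as RAID-5 parity does per stripe."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

data = [b"unit-one", b"unit-two", b"unitthre"]  # equal-length data stripe units
parity = xor_blocks(data)

# The disk holding data[1] fails; rebuild it from the parity and the survivors.
rebuilt = xor_blocks([parity, data[0], data[2]])
print(rebuilt == data[1])  # True
```

This also makes the write penalty mentioned above concrete: every data write must recompute and rewrite the parity unit for its stripe.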
Resilience
A resilient volume, also called a layered volume, is a volume that is built on one or
more other volumes. Resilient volumes enable the mirroring of data at a more
granular level. For example, a resilient volume can be concatenated or striped at
the top level and then mirrored at the bottom level.
A layered volume is a virtual Volume Manager object that nests other virtual
objects inside of itself. Layered volumes provide better fault tolerance by
mirroring data at a more granular level.
Lesson Summary
• Key Points
This lesson described the virtual storage objects
that VERITAS Volume Manager uses to manage
physical disk storage, including disk groups,
VxVM disks, subdisks, plexes, and volumes.
• Reference Materials
VERITAS Volume Manager Administrator's Guide
Labs and solutions for this lesson are located on the following pages:
Appendix A provides complete lab instructions, "Lab 1: Introducing the Lab
Environment."
Appendix B provides complete lab instructions and solutions, "Lab 1 Solutions:
Introducing the Lab Environment."
Lab 1
Lab 1: Introducing the Lab Environment
In this lab, you are introduced to the lab
environment, system, and disks that you will use
throughout this course.
For Lab Exercises, see Appendix A.
For Lab Solutions, see Appendix B.
Lesson Introduction
• Lesson 1: Virtual Objects
• Lesson 2: Installation and Interfaces
• Lesson 3: Creating a Volume and File System
• Lesson 4: Selecting Volume Layouts
• Lesson 5: Making Basic Configuration Changes
• Lesson 6: Administering File Systems
• Lesson 7: Resolving Hardware Problems
Lesson Topics and Objectives
Topic                                        After completing this lesson, you will be able to:
Topic 1: Installation Prerequisites          Identify operating system compatibility and other
                                             preinstallation considerations.
Topic 2: Adding License Keys                 Obtain license keys, add licenses by using
                                             vxlicinst, and view licenses by using vxlicrep.
Topic 3: VERITAS Software Packages           Identify the packages that are included in the
                                             Storage Foundation 5.0 software.
Topic 4: Installing Storage Foundation       Install Storage Foundation interactively, by using
                                             the installation utility.
Topic 5: Storage Foundation User Interfaces  Describe the three Storage Foundation user
                                             interfaces.
Topic 6: Managing the VEA Server             Install, start, and manage the VEA server.
OS Compatibility
The VERITAS Storage Foundation product line
operates on the following operating systems:
SF Version  Solaris Version  HP-UX Version    AIX Version    Linux Version
5.0         8, 9, 10         11i v2 (0904)    5.2, 5.3       RHEL 4 Update 3, SLES 9 SP3
4.1         8, 9, 10, x86    11i v2 (0904)    No release     RHEL 4 Update 1 (2.6), SLES 9 SP1
4.0         7, 8, 9          No release       5.1, 5.2, 5.3  RHEL 3 Update 2 (i686)
3.5.x       2.6, 7, 8        11.11i (0902)    No release     No release*
* Note: Version 3.2.2 on Linux has functionality equivalent to 3.5 on Solaris.
Installation Prerequisites
OS Version Compatibility
Before installing Storage Foundation, ensure that the version of Storage
Foundation that you are installing is compatible with the version of the operating
system that you are running. You may need to upgrade your operating system
before you install the latest Storage Foundation version.
VERITAS Storage Foundation 5.0 operates on the following operating systems:
Solaris: Solaris 8 (SPARC Platform 32-bit and 64-bit), Solaris 9 (SPARC
Platform 32-bit and 64-bit), and Solaris 10 (SPARC Platform 64-bit)
HP-UX: September 2004 release of HP-UX 11i version 2.0 or later
AIX: AIX 5.2 ML6 (legacy) and AIX 5.3 TL4 with SP4
Linux: Red Hat Enterprise Linux 4 (RHEL 4) with Update 3 (2.6.9-34
kernel) on AMD Opteron or Intel Xeon EM64T (x86_64); SUSE Linux
Enterprise Server 9 (SLES 9) with SP3 (2.6.5-7.244, 252 kernels) on AMD
Opteron or Intel Xeon EM64T (x86_64)
Check the VERITAS Storage Foundation Release Notes for additional operating
system requirements.
Support Resources
[Slide: screenshot of the VERITAS Support Web site at
http://support.veritas.com, showing product selection, a searchable
technote knowledge base, patches, and e-mail notification options.]
Version Release Differences
With each new release of the Storage Foundation software, changes are made that
may affect the installation or operation of Storage Foundation in your
environment. By reading version release notes and installation documentation that
are included with the product, you can stay informed of any changes.
For more information about specific releases of VERITAS Storage Foundation,
visit the VERITAS Support Web site at: http://support.veritas.com
This site contains product and patch information, a searchable knowledge base of
technical notes, access to product-specific news groups and e-mail notification
services, and other information about contacting technical support staff.
Note: If you open a case with VERITAS Support, you can view updates at:
http://support.veritas.com/viewcase
You can access your case by entering the e-mail address associated with your case
and the case number.
Storage Foundation Licensing
• Licensing utilities are contained in the VRTSvlic
package, which is common to all VERITAS products.
• To obtain a license key:
- Create a vLicense account and retrieve license keys online.
vLicense is a Web site that you can use to retrieve and
manage your license keys.
or
- Complete a License Key Request form and fax it to
VERITAS customer support.
• To generate a license key, you must provide your:
- Software serial number
- Customer number
- Order number
Note: You may also need the network and RDBMS platform, system
configuration, and software revision levels.
Adding License Keys
You must have your license key before you begin installation, because you are
prompted for the license key during the installation process. A new license key is
not necessary if you are upgrading Storage Foundation from a previously licensed
version of the product.
If you have an evaluation license key, you must obtain a permanent license key
when you purchase the product. The VERITAS licensing mechanism checks the
system date to verify that it has not been set back. If the system date has been reset,
the evaluation license key becomes invalid.
Obtaining a License Key
License keys are delivered on Software License Certificates to you at the
conclusion of the order fulfillment process. The certificate specifies the product
keys and the number of product licenses purchased. A single key enables you to
install the product on the number and type of systems for which you purchased the
license.
License keys are non-node locked.
In a non-node locked model, one key can unlock a product on different servers
regardless of host ID and architecture type.
In a node-locked model, a single license is tied to a single specific server. For
each server, you need a different key.
Generating License Keys
http://vlicense.veritas.com
• Access automatic license key generation and delivery.
• Manage and track license key inventory and usage.
• Locate and reissue lost license keys.
• Report, track, and resolve license key issues online.
• Consolidate and share license key information with other accounts.
• To add a license key:
  vxlicinst
• License keys are installed in:
  /etc/vx/licenses/lic
• To view installed license key information:
  vxlicrep
  Displayed information includes:
  - License key number
  - Name of the VERITAS product that the key enables
  - Type of license
  - Features enabled by the key
Generating License Keys with vLicense
VERITAS vLicense (vlicense.veritas.com) is a self-service online license
management system.
vLicense supports production license keys only. Temporary, evaluation, or
demonstration keys must be obtained through your VERITAS sales representative.
Note: The VRTSvlic package can coexist with previous licensing packages, such
as VRTSlic. If you have old license keys installed in /etc/vx/elm, leave this
directory on your system. The old and new license utilities can coexist.
What Gets Installed?
In version 5.0, the default installation behavior is to
install all packages in Storage Foundation Enterprise HA.
In previous versions, the default behavior was to only
install packages for which you had typed in a license key.
In 5.0, you can choose to install:
• All packages included in Storage Foundation Enterprise HA
or
• All packages included in Storage Foundation Enterprise HA,
minus any optional packages, such as documentation
and software development kits
VERITAS Software Packages
When you install a product suite, the component product packages are installed
automatically. When installing Storage Foundation, be sure to follow the
instructions in the product release notes and installation guides.
Package Space Requirements
Before you install any of the packages, confirm that your system has enough free
disk space to accommodate the installation. Storage Foundation programs and files
are installed in the /, /usr, and /opt file systems. Refer to the product
installation guides for a detailed list of package space requirements.
Solaris Note
VxFS often requires more than the default 8K kernel stack size, so entries are
added to the /etc/system file. This increases the kernel thread stack size of the
system to 24K. The original /etc/system file is copied to
/etc/fs/vxfs/system.preinstall.
Optional Features
Features included in the VxVM package, but requiring a separate license:
• VERITAS FlashSnap
  - Enables point-in-time copies of data with minimal performance overhead
  - Includes disk group split/join, FastResync, and storage checkpointing
    (in conjunction with VxFS)
• VERITAS Volume Replicator
  - Enables replication of data to remote locations
  - VRTSvrdoc: VVR documentation
• VERITAS Cluster Volume Manager
  - Used for high availability environments
Features included in the VxFS package, but requiring a separate license:
• VERITAS Quick I/O for Databases
  - Enables applications to access preallocated VxFS files as raw character devices
• VERITAS Cluster File System
  - Enables multiple hosts to mount and perform file operations concurrently
    on the same file
• Dynamic Storage Tiering
  - Enables the support for multivolume file systems by managing the placement
    of files through policies that control both initial file location and the
    circumstances under which existing files are relocated
Storage Foundation Optional Features
Several optional features do not require separate packages, only additional
licenses. The following optional features are built in to Storage Foundation and
can be enabled with additional licenses:
VERITAS FlashSnap: FlashSnap facilitates point-in-time copies of data,
while enabling applications to maintain optimal performance, by enabling
features, such as FastResync and disk group split and join functionality.
FlashSnap provides an efficient method to perform offline and off-host
processing tasks, such as backup and decision support.
VERITAS Volume Replicator: Volume Replicator augments Storage
Foundation functionality to enable you to replicate data to remote locations
over any IP network. Replicated copies of data can be used for disaster
recovery, off-host processing, off-host backup, and application migration.
Volume Replicator ensures maximum business continuity by delivering true
disaster recovery and flexible off-host processing.
Cluster Functionality: Storage Foundation includes optional cluster
functionality that enables Storage Foundation to be used in a cluster
environment.
A cluster is a set of hosts that share a set of disks. Each host is referred to as a
node in a cluster. When the cluster functionality is enabled, all of the nodes in
the cluster can share VxVM objects. The main benefits of cluster
configurations are high availability and off-host processing.
VERITAS Cluster Server (VCS): VCS supplies two major components integral to CFS: the Low Latency Transport (LLT) package and the Group Membership and Atomic Broadcast (GAB) package. LLT provides node-to-node communications and monitors network communications. GAB provides cluster state, configuration, and membership services, and it monitors the heartbeat links between systems to ensure that they are active.
Copyright © 2006 Symantec Corporation. All rights reserved.
VERITAS Storage Foundation 5.0 for UNIX: Fundamentals
VERITAS Cluster File System (CFS): CFS is a shared file system that enables multiple hosts to mount and perform file operations concurrently on the same file.
VERITAS Cluster Volume Manager (CVM): CVM creates the cluster
volumes necessary for mounting cluster file systems.
VERITAS Quick I/O for Databases: VERITAS Quick I/O for Databases (referred to as Quick I/O) enables applications to access preallocated VxFS files as raw character devices. This provides the administrative benefits of running databases on file systems without the performance degradation usually associated with databases created on file systems.
Dynamic Storage Tiering (DST): DST enables the support for multivolume file systems by managing the placement of files through policies that control both initial file location and the circumstances under which existing files are relocated.
Lesson 2 Installation and Interfaces
Installation Menu
Storage Foundation and High Availability Solutions 5.0

Symantec Product                                   Version  Installed  Licensed
Veritas Cluster Server                                         no         no
Veritas File System                                            no         no
Veritas Volume Manager                                         no         no
Veritas Volume Replicator                                      no         no
Veritas Storage Foundation                                     no         no
Veritas Storage Foundation for Oracle                          no         no
Veritas Storage Foundation for DB2                             no         no
Veritas Storage Foundation for Sybase                          no         no
Veritas Storage Foundation Cluster File System                 no         no
Veritas Storage Foundation for Oracle RAC                      no         no

Task Menu:
I) Install/Upgrade a Product       C) Configure an Installed Product
L) License a Product               P) Perform a Preinstallation Check
U) Uninstall a Product             D) View a Product Description
Q) Quit                            ?) Help

Enter a Selection: [I,C,L,P,U,D,Q,?]
Installing Storage Foundation
The installer is a menu-based installation utility that you can use to install any product contained on the VERITAS Storage Solutions CD-ROM. This utility acts as a wrapper for existing product installation scripts and is most useful when you are installing multiple VERITAS products or bundles, such as VERITAS Storage Foundation or VERITAS Storage Foundation for Databases.
Note: The example on the slide is from a Solaris platform. Some of the products shown on the menu may not be available on other platforms. For example, VERITAS File System is available only as part of Storage Foundation on HP-UX.
Note: The VERITAS Storage Solutions CD-ROM contains an installation guide that describes how to use the installer utility. You should also read all product installation guides and release notes even if you are using the installer utility.
To add the Storage Foundation packages using the installer utility:
1 Log on as superuser.
2 Mount the VERITAS Storage Solutions CD-ROM.
3 Locate and invoke the installer script:
cd /cdrom/CD_name
./installer
4 If the licensing utilities are installed, the product status page is displayed. This list displays the VERITAS products on the CD-ROM and the installation and licensing status of each product. If the licensing utilities are not installed, you receive a message indicating that the installation utility could not determine product status.
5 Type I to install a product. Follow the instructions to select the product that you want to install. Installation begins automatically.
When you add Storage Foundation packages by using the installer utility, all packages are installed. If you want to add a specific package only, for example, only the VRTSvmdoc package, then you must add the package manually from the command line.
After installation, the installer creates three text files that can be used for auditing or debugging. The names and locations of each file are displayed at the end of the installation, and the files are located in /opt/VRTS/install/logs:

Installation log file: Contains all commands executed during installation, their output, and any errors generated by the commands; used for debugging installation problems and for analysis by VERITAS Support.
Response file: Contains configuration information entered during the procedure; can be used for future installation procedures when using the installer script with the -responsefile option.
Summary file: Contains the output of VERITAS product installation scripts; shows products that were installed, locations of log and response files, and installation messages displayed.
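As a sketch of how a saved response file can be reused for a later, unattended run (the file name shown is a hypothetical example; the installer prints the actual name at the end of installation):

```shell
# Hypothetical example: re-run the installer unattended using a response
# file saved by a previous run. The file name below is illustrative;
# use the name reported at the end of your earlier installation.
cd /cdrom/CD_name
./installer -responsefile /opt/VRTS/install/logs/installer-0101.response
```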
Methods for Adding Storage Foundation Packages
A first-time installation of Storage Foundation involves adding the software packages and configuring Storage Foundation for first-time use. You can add VERITAS product packages by using one of three methods:

Method: VERITAS Installation Menu
Command: installer
Notes: Installs multiple VERITAS products interactively. Installs packages and configures Storage Foundation for first-time use.

Method: Product installation scripts
Command: installvm, installfs, installsf
Notes: Install individual VERITAS products interactively. installsf installs packages and configures Storage Foundation for first-time use.

Method: Native operating system package installation commands
Command: pkgadd (Solaris), swinstall (HP-UX), installp (AIX), rpm (Linux); then, to configure SF: vxinstall
Notes: Install individual packages, for example, when using your own custom installation scripts. First-time Storage Foundation configuration must be run as a separate step.
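The third method might look like the following sketch on Solaris (the pkgs directory is an assumption about the CD-ROM layout, not a documented path; check your media):

```shell
# Hypothetical Solaris example: add individual packages with the native
# tool, then run first-time configuration as a separate step.
# The pkgs subdirectory is an assumed CD-ROM layout.
cd /cdrom/CD_name
pkgadd -d pkgs VRTSvlic VRTSvxvm VRTSvxfs   # licensing, VxVM, VxFS packages
vxinstall                                   # first-time SF configuration
```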
Default Disk Group
o You can set up a system-wide default disk group to which Storage Foundation commands default if you do not specify a disk group.
o If you choose not to set a default disk group at installation, you can set the default disk group later from the command line.
Note: In Storage Foundation 4.0 and later, the rootdg requirement no longer exists.
Configuring Storage Foundation
Enclosure-Based Naming
[Diagram: Host connected to disk enclosures enc0, enc1, and enc2]
o Standard device naming is based on controllers, for example, c1t0d0s2.
o Enclosure-based naming is based on disk enclosures, for example, enc0.
Configuring Storage Foundation
When you install Storage Foundation, you are asked if you want to configure it
during installation. This includes deciding whether to use enclosure-based naming
and a default disk group.
What Is Enclosure-Based Naming?
An enclosure, or disk enclosure, is an intelligent disk array, which permits hot-swapping of disks. With Storage Foundation, disk devices can be named for enclosures rather than for the controllers through which they are accessed as with standard disk device naming (for example, c0t0d0 or hdisk2).
Enclosure-based naming allows Storage Foundation to access enclosures as separate physical entities. By configuring redundant copies of your data on separate enclosures, you can safeguard against failure of one or more enclosures. This is especially useful in a storage area network (SAN) that uses Fibre Channel hubs or fabric switches and when managing the dynamic multipathing (DMP) feature of Storage Foundation. For example, if two paths (c1t99d0 and c2t99d0) exist to a single disk in an enclosure, VxVM can use a single DMP metanode, such as enc0_0, to access the disk.
What Is a Default Disk Group?
The main benefit of creating a default disk group is that Storage Foundation commands default to that disk group if you do not specify a disk group on the command line. defaultdg specifies the default disk group and is an alias for the disk group name that should be assumed if a disk group is not specified in a command.
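A minimal command-line sketch, assuming a hypothetical disk group named datadg already exists:

```shell
# Minimal sketch; datadg and datavol are hypothetical names.
vxdctl defaultdg datadg      # set the system-wide default disk group
vxdg defaultdg               # display the current default disk group

# With a default set, commands such as vxassist can omit the -g option:
vxassist make datavol 100m   # acts on the default disk group
```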
Storage Foundation Management Server
Storage Foundation 5.0 provides central
management capability by introducing a
Storage Foundation Management Server (SFMS).
With SF 5.0, it is possible to configure an SF host as a managed host or as a standalone host during installation.
A Management Server and Authentication Broker
must have previously been set up if a managed
host is required during installation.
To configure a server as a standalone host during
installation, you need to answer "n" when asked if
you want to enable SFMS Management.
You can change a standalone host to a managed
host at a later time.
Note: This course does not cover SFMS and managed hosts.
Storage Foundation Management Server
Storage Foundation 5.0 provides central management capability by introducing a Storage Foundation Management Server (SFMS). For more information, refer to the Storage Foundation Management Server Administrator's Guide.
Verifying Package Installation
To verify package installation, use OS-specific commands:
• Solaris:
pkginfo -l VRTSvxvm
• HP-UX:
swlist -l product VRTSvxvm
• AIX:
lslpp -l VRTSvxvm
• Linux:
rpm -qa VRTSvxvm
Verifying Package Installation
If you are not sure whether VERITAS packages are installed, or if you want to verify which packages are installed on the system, you can view information about installed packages by using OS-specific commands to list package information.
Solaris
To list all installed packages on the system:
pkginfo
To restrict the list to installed VERITAS packages:
pkginfo | grep VRTS
To display detailed information about a package:
pkginfo -l VRTSvxvm
HP-UX
To list all installed packages on the system:
swlist -l product
To restrict the list to installed VERITAS packages:
swlist -l product | grep VRTS
To display detailed information about a package:
swlist -l product VRTSvxvm
AIX
To list all installed packages on the system:
lslpp -l
To restrict the list to installed VERITAS packages, type:
lslpp -l 'VRTS*'
To verify that a particular fileset has been installed, use its name, for example:
lslpp -l VRTSvxvm
Linux
To verify package installation on the system:
rpm -qa I grep VRTS
To verify a specific package installation on the system:
rpm -q[i] package_name
For example, to verify that the VRTSvxvm package is installed:
rpm -q VRTSvxvm
The -i option lists detailed information about the package.
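The per-platform commands above can be wrapped in a small script that picks whichever native package tool is present; this is an illustrative convenience, not part of the product:

```shell
#!/bin/sh
# Illustrative helper: list installed VERITAS (VRTS*) packages using
# whichever native package tool this platform provides.
if command -v pkginfo >/dev/null 2>&1; then      # Solaris
    pkginfo | grep VRTS
elif command -v swlist >/dev/null 2>&1; then     # HP-UX
    swlist -l product | grep VRTS
elif command -v lslpp >/dev/null 2>&1; then      # AIX
    lslpp -l 'VRTS*'
elif command -v rpm >/dev/null 2>&1; then        # Linux
    rpm -qa | grep VRTS
else
    echo "no supported package tool found" >&2
fi
```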
Storage Foundation User Interfaces
Storage Foundation supports three user interfaces:
• VERITAS Enterprise Administrator (VEA):
A GUI that provides access through icons, menus,
wizards, and dialog boxes
Note: This course only covers using VEA on a standalone
host.
• Command-Line Interface (CLI): UNIX utilities that
you invoke from the command line
• Volume Manager Support Operations
(vxdiskadm): A menu-driven, text-based interface
also invoked from the command line
Note: vxdiskadm only provides access to certain disk and
disk group management functions.
Storage Foundation User Interfaces
Storage Foundation supports three user interfaces. Volume Manager objects created by one interface are compatible with those created by the other interfaces.
VERITAS Enterprise Administrator (VEA): VERITAS Enterprise Administrator (VEA) is a graphical user interface to Volume Manager and other VERITAS products. VEA provides access to Storage Foundation functionality through visual elements, such as icons, menus, wizards, and dialog boxes. Using VEA, you can manipulate Volume Manager objects and also perform common file system operations.
Command-Line Interface (CLI): The command-line interface (CLI) consists of UNIX utilities that you invoke from the command line to perform Storage Foundation and standard UNIX tasks. You can use the CLI not only to manipulate Volume Manager objects, but also to perform scripting and debugging functions. Most of the CLI commands require superuser or other appropriate privileges. The CLI commands perform functions that range from the simple to the complex, and some require detailed user input.
Volume Manager Support Operations (vxdiskadm): The Volume Manager Support Operations interface, commonly called vxdiskadm, is a menu-driven, text-based interface that you can use for disk and disk group administration functions. The vxdiskadm interface has a main menu from which you can select storage management tasks.
A single VEA task may perform multiple command-line tasks.
VEA: Main Window
[Screenshot: VEA main window showing the menu bar, quick access bar, and toolbar]
Three ways to access tasks:
1. Menu bar
2. Toolbar
3. Context menu (right-click)
Using the VEA Interface
The VERITAS Enterprise Administrator (VEA) is the graphical user interface for Storage Foundation and other VERITAS products. You can use the Storage Foundation features of VEA to administer disks, volumes, and file systems on local or remote machines.
VEA is a Java-based interface that consists of a server and a client. You must install the VEA server on a UNIX machine that is running VERITAS Volume Manager. The VEA client can run on any machine that supports the Java 1.4 Runtime Environment, which can be Solaris, HP-UX, AIX, Linux, or Windows.
Some Storage Foundation features of VEA include:
Remote Administration
Security
Multiple Host Support
Multiple Views of Objects
Setting VEA Preferences
You can customize general VEA environment attributes through the Preferences window (select Tools->Preferences).
VEA: Viewing Tasks and Commands
To view underlying command
lines, double-click a task.
[Screenshot: Task Log window; double-clicking a task opens the Task Log Details window showing the executed command lines]
Viewing Commands Through the Task Log
The Task Log displays a history of the tasks performed in the current session. Each task is listed with properties, such as the target object of the task, the host, the start time, the task status, and task progress.
Displaying the Task Log window: To display the Task Log, click the Logs tab at the left of the main window.
Clearing the Task History: Tasks are persistent in the Task History window. To remove completed tasks from the window, right-click a task and select Clear All Finished Tasks.
Viewing CLI Commands: To view the command lines executed for a task, double-click a task. The Task Log Details window is displayed for the task. The CLI commands issued are displayed in the Commands Executed field of the Task Details section.
Command-Line Interface
You can administer CLI commands from the UNIX shell
prompt.
Commands can be executed individually or combined
into scripts.
Most commands are located in /usr/sbin. Add this
directory to your PATH environment variable to
access the commands.
Examples of CLI commands include:
vxassist: Creates and manages volumes
vxprint: Lists VxVM configuration records
vxdg: Creates and manages disk groups
vxdisk: Administers disks under VxVM control
Using the Command-Line Interface
The Storage Foundation command-line interface (CLI) provides commands used for administering Storage Foundation from the shell prompt on a UNIX system. CLI commands can be executed individually for specific tasks or combined into scripts.
The Storage Foundation command set ranges from commands requiring minimal user input to commands requiring detailed user input. Many of the Storage Foundation commands require an understanding of Storage Foundation concepts. Most Storage Foundation commands require superuser or other appropriate access privileges.
CLI commands are detailed in manual pages.
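As a brief sketch of the CLI in use (the disk group and volume names are hypothetical):

```shell
# Hypothetical names throughout (datadg, datavol).
vxdisk list                          # show disks known to VxVM
vxdg list                            # show imported disk groups
vxassist -g datadg make datavol 1g   # create a 1-GB volume in datadg
vxprint -g datadg -ht                # display the resulting configuration
```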
Accessing Manual Pages for CLI Commands
Detailed descriptions of VxVM and VxFS commands, the options for each utility, and details on how to use them are located in the VxVM and VxFS manual pages. Manual pages are installed by default in /opt/VRTS/man. Add this directory to the MANPATH environment variable, if it is not already added.
To access a manual page, type man command_name.
Examples:
man vxassist
man mount_vxfs
Linux Note
On Linux, you must also set the MANSECT and MANPATH variables.
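A minimal sketch of the environment setup (the MANSECT value appended below is an illustrative assumption; consult the release notes for the exact section list on your platform):

```shell
# Append the VERITAS manual page directory to MANPATH if not present.
case ":${MANPATH}:" in
    *:/opt/VRTS/man:*) ;;   # already present, nothing to do
    *) export MANPATH="${MANPATH:+$MANPATH:}/opt/VRTS/man" ;;
esac
# On Linux, also make sure section 1m is searched (illustrative value).
export MANSECT="${MANSECT:+$MANSECT:}1m"
```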
The vxdi skadm Interface
vxdiskadm
Volume Manager Support Operations
Menu: VolumeManager/Disk
1 Add or initialize one or more disks
2 Encapsulate one or more disks
3 Remove a disk
4 Remove a disk for replacement
5 Replace a failed or removed disk
list List disk information
? Display help about menu
?? Display help about the menuing system
q Exit from menus
Note: This example is from a Solaris platform. The options may be
slightly different on other platforms.
Using the vxdiskadm Interface
The vxdiskadm command is a CLI command that you can use to launch the Volume Manager Support Operations menu interface. You can use the Volume Manager Support Operations interface, commonly referred to as vxdiskadm, to perform common disk management tasks. The vxdiskadm interface is restricted to managing disk objects and does not provide a means of handling all other VxVM objects.
Each option in the vxdiskadm interface invokes a sequence of CLI commands. The vxdiskadm interface presents disk management tasks to the user as a series of questions, or prompts.
To start vxdiskadm, you type vxdiskadm at the command line to display the main menu.
The vxdiskadm main menu contains a selection of main tasks that you can use to manipulate Volume Manager objects. Each entry in the main menu leads you through a particular task by providing you with information and prompts. Default answers are provided for many questions, so you can select common answers. The menu also contains options for listing disk information, displaying help information, and quitting the menu interface.
The tasks listed in the main menu are covered throughout this training. Options available in the menu differ somewhat by platform. See the vxdiskadm(1M) manual page for more details on how to use vxdiskadm.
Note: vxdiskadm can be run only once per host. A lock file prevents multiple instances from running: /var/spool/locks/.DISKADD.LOCK.
Installing VEA
Server packages (UNIX):
• VRTSob
• VRTSobc33
• VRTSaa
• VRTSccg
• VRTSdsa
• VRTSvail
• VRTSvmpro
• VRTSfspro
• VRTSddlpr
Client packages:
• VRTSobgui, VRTSat, VRTSpbx, VRTSicsco (UNIX)
• windows/VRTSobgui.msi (Windows)
Installation administration file (Solaris only): VRTSobadmin
Install the VEA server on a UNIX machine running Storage Foundation. Install the VEA client on any machine that supports the Java 1.4 Runtime Environment (or later).
VEA is installed automatically when you run the SF installation scripts. You can also install VEA by adding packages manually.
Managing the VEA Software
VEA consists of a server and a client. You must install the VEA server on a UNIX machine that is running VERITAS Volume Manager. You can install the VEA client on the same machine or on any other UNIX or Windows machine that supports the Java 1.4 Runtime Environment (or later).
Installing the VEA Server and Client on UNIX
If you install Storage Foundation by using the installer utility, you are prompted to install both the VEA server and client packages automatically. If you did not install all of the components by using the installer, you can add the VEA packages separately.
It is recommended that you upgrade VEA to the latest version released with Storage Foundation in order to take advantage of new functionality built into VEA. You can use VEA 4.1 and later to manage 3.5.2 and later releases.
When adding packages manually, you must install the Volume Manager packages (VRTSvlic, VRTSvxvm) and the infrastructure packages (VRTSat, VRTSpbx, VRTSicsco) before installing the VEA server packages. After installation, also add the VEA startup scripts directory, /opt/VRTSob/bin, to the PATH environment variable.
Starting the VEA Server and Client
Once installed, the VEA server starts up automatically at system startup.
To start the VEA server manually:
1. Log on as superuser.
2. Start the VEA server by invoking the server program:
/opt/VRTSob/bin/vxsvc (on Solaris and HP-UX)
/opt/VRTSob/bin/vxsvcctrl (on Linux)
When the VEA server is started:
/var/vx/isis/vxisis.lock ensures that only one instance of the VEA server is running.
/var/vx/isis/vxisis.log contains server process log messages.
To start the VEA client:
On UNIX: /opt/VRTSob/bin/vea
On Windows: Select Start->Programs->VERITAS->VERITAS Enterprise Administrator.
Starting the VEA Server
In order to use VEA, the VEA server must be running on the UNIX machine to be administered. Only one instance of the VEA server should be running at a time. Once installed, the VEA server starts up automatically at system startup. You can start the VEA server manually by invoking vxsvc (on Solaris and HP-UX), vxsvcctrl (on Linux), or by invoking the startup script itself, for example:
Solaris
/etc/rc2.d/S73isisd start
HP-UX
/sbin/rc2.d/S700isisd start
The VEA client can provide simultaneous access to multiple host machines. Each host machine must be running the VEA server.
Note: Entries for your user name and password must exist in the password file or corresponding Network Information Name Service table on the machine to be administered. Your user name must also be included in the VERITAS administration group (vrtsadm, by default) in the group file or NIS group table. If the vrtsadm entry does not exist, only root can run VEA.
You can configure VEA to connect automatically to hosts when you start the VEA client. In the VEA main window, the Favorite Hosts node can contain a list of hosts that are reconnected by default at the startup of the VEA client.
Managing VEA
The VEA server program is:
/opt/VRTSob/bin/vxsvc (Solaris and HP-UX)
/opt/VRTSob/bin/vxsvcctrl (Linux)
To confirm that the VEA server is running:
vxsvc -m (Solaris and HP-UX)
vxsvcctrl status (Linux)
To stop and restart the VEA server:
/etc/init.d/isisd restart (Solaris)
/sbin/init.d/isisd restart (HP-UX)
To kill the VEA server process:
vxsvc -k (Solaris and HP-UX)
vxsvcctrl stop (Linux)
To display the VEA version number:
vxsvc -v (Solaris and HP-UX)
vxsvcctrl version (Linux)
Managing the VEA Server
Monitoring VEA Event and Task Logs
You can monitor VEA server events and tasks from the Event Log and Task Log nodes in the VEA object tree. You can also view the VEA log file, which is located at /var/vx/isis/vxisis.log. This file contains trace messages for the VEA server and VEA service providers.
Labs and solutions for this lesson are located in the following appendixes:
Appendix A provides complete lab instructions, "Lab 2: Installation and Interfaces."
Appendix B provides complete lab instructions and solutions, "Lab 2 Solutions: Installation and Interfaces."
Lesson Summary
• Key Points
In this lesson, you learned guidelines for a first-
time installation of VERITAS Storage Foundation,
as well as an introduction to the three interfaces
used to manage VERITAS Storage Foundation.
• Reference Materials
- VERITAS Volume Manager Administrator's Guide
- VERITAS Storage Foundation Installation Guide
- VERITAS Storage Foundation Release Notes
- Storage Foundation Management Server
Administrator's Guide
Lab 2
Lab 2: Installation and Interfaces
In this lab, you install VERITAS Storage
Foundation 5.0 on your lab system. You also
explore the Storage Foundation user interfaces,
including the VERITAS Enterprise Administrator
interface, the vxdiskadm menu interface, and the
command-line interface.
For Lab Exercises, see Appendix A.
For Lab Solutions, see Appendix B.
Lesson Introduction
• Lesson 1: Virtual Objects
• Lesson 2: Installation and Interfaces
• Lesson 3: Creating a Volume and File System
• Lesson 4: Selecting Volume Layouts
• Lesson 5: Making Basic Configuration
Changes
• Lesson 6: Administering File Systems
• Lesson 7: Resolving Hardware Problems
~ ~~'" ,~,"';'~;i!1I.. , symantcc
Lesson Topics and Objectives
After completing this lesson, you will be able to:
Topic 1: Preparing Disks and Disk Groups for Volume Creation
  Initialize an OS disk as a VxVM disk and create a disk group by using VEA and command-line utilities.
Topic 2: Creating a Volume
  Create a concatenated volume by using VEA and from the command line.
Topic 3: Adding a File System to a Volume
  Add a file system to and mount an existing volume.
Topic 4: Displaying Volume Configuration Information
  Display volume layout information by using VEA and by using the vxprint command.
Topic 5: Displaying Disk and Disk Group Information
  View disk and disk group information and identify disk status.
Topic 6: Removing Volumes, Disks, and Disk Groups
  Remove a volume, evacuate a disk, remove a disk from a disk group, and destroy a disk group.
Selecting a Disk Naming Scheme
Types of naming schemes:
• Traditional device naming: OS-dependent and based on
physical connectivity information
• Enclosure-based naming: OS-independent, based on the
logical name of the enclosure, and customizable
You can select a naming scheme:
• When you run Storage Foundation installation scripts
• Using vxdiskadm, "Change the disk naming scheme"
Enclosure-based named disks are displayed in three categories:
Enclosures: enclosurename_#
Disks: Disk_#
Others: Disks that do not return a path-independent identifier to VxVM are displayed in the traditional OS-based format.
Preparing Disks and Disk Groups for Volume Creation
Here are some examples of naming schemes:
Traditional: Solaris: /dev/[r]dsk/c1t9d0s2; HP-UX: /dev/[r]dsk/c3t2d0 (no slice); AIX: /dev/hdisk2; Linux: /dev/sda, /dev/hda
Enclosure-based: sena0_1, sena0_2, sena0_3
Enclosure-based, customized: englab2, hrl, boston3
Benefits of enclosure-based naming include:
Easier fault isolation: Storage Foundation can more effectively place data and metadata to ensure data availability.
Device-name independence: Storage Foundation is independent of arbitrary device names used by third-party drivers.
Improved SAN management: Storage Foundation can create better location identification information about disks in large disk farms and SANs.
Improved cluster management: In a cluster environment, disk array names on all hosts in a cluster can be the same.
Improved dynamic multipathing (DMP) management: With multipathed disks, the name of a disk is independent of the physical communication paths, avoiding confusion and conflict.
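Beyond the vxdiskadm menu option, some VxVM releases also expose the naming scheme through the vxddladm command; treat the following as a sketch and verify that your version supports these operations before relying on them:

```shell
# Sketch: switch between naming schemes from the command line.
# Verify vxddladm namingscheme support on your VxVM version first.
vxddladm set namingscheme=ebn   # enclosure-based naming
vxdisk list                     # device names now show as enclosure_#
vxddladm set namingscheme=osn   # revert to OS-native (traditional) names
```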
Lesson 3 Creating a Volume and File System
[Diagram: Uninitialized disk; Stage 1: Initialize disk; Stage 2: Assign disk to disk group]
Before Configuring a Disk for Use by VxVM
In order to use the space of a physical disk to build VxVM volumes, you must place the disk under Volume Manager control. Before a disk can be placed under Volume Manager control, the disk media must be formatted outside of VxVM using standard operating system formatting methods. SCSI disks are usually preformatted. After a disk is formatted, the disk can be initialized for use by Volume Manager. In other words, disks must be detected by the operating system before VxVM can detect the disks.
Stage One: Initialize a Disk
A formatted physical disk is considered uninitialized until it is initialized for use by VxVM. When a disk is initialized, the public and private regions are created, and VM disk header information is written to the private region. Any data or partitions that may have existed on the disk are removed.
These disks are under Volume Manager control but cannot be used by Volume Manager until they are added to a disk group.
Note: Encapsulation is another method of placing a disk under VxVM control in which existing data on the disk is preserved. This method is covered in a later lesson.
Changing the Disk Layout
To display or change the default values that are used for initializing disks, select the "Change/display the default disk layouts" option in vxdiskadm:
For disk initialization, you can change the default format and the default length of the private region. If the attribute settings for initializing disks are stored in the user-created file, /etc/default/vxdisk, they apply to all disks to be initialized.
On Solaris, for disk encapsulation, you can additionally change the offset values for both the private and public regions. To make encapsulation parameters different from the default VxVM values, create the user-defined /etc/default/vxencap file and place the parameters in this file.
On HP-UX, when converting LVM disks, you can change the default format and the default private region length. The attribute settings are stored in the /etc/default/vxencap file.
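A sketch of what such a defaults file might contain (the attribute names and values here are illustrative assumptions; check the vxdisk manual page for the exact keywords your release accepts):

```
# /etc/default/vxdisk -- illustrative example only
format=cdsdisk      # default disk format for initialization
privlen=2048        # default private region length
```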
Stage Two: Assign a Disk to a Disk Group
When you add a disk to a disk group, VxVM assigns a disk media name to the disk and maps this name to the disk access name.
Disk media name: A disk media name is the logical disk name assigned to a drive by VxVM. VxVM uses this name to identify the disk for volume operations, such as volume creation and mirroring.
Disk access name: A disk access name represents all UNIX paths to the device. A disk access record maps the physical location to the logical name and represents the link between the disk media name and the disk access name. Disk access records are dynamic and can be re-created when vxdctl enable is run.
The disk media name and disk access name, in addition to the host name, are written to the private region of the disk. Space in the public region is made available for assignment to volumes. Volume Manager has full control of the disk, and the disk can be used to allocate space for volumes. Whenever the VxVM configuration daemon is started (or vxdctl enable is run), the system reads the private region on every disk and establishes the connections between disk access names and disk media names.
After disks are placed under Volume Manager control, storage is managed in terms of the logical configuration. File systems mount to logical volumes, not to physical partitions. Logical names, such as /dev/vx/[r]dsk/diskgroup/volume_name, replace physical locations, such as /dev/[r]dsk/device_name.
The free space in a disk group refers to the space on all disks within the disk group that has not been allocated as subdisks. When you place a disk into a disk group, its space becomes part of the free space pool of the disk group.
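The two stages above can be sketched from the command line as follows (device names, the disk group name, and the media names are hypothetical; vxdisksetup typically lives in /etc/vx/bin):

```shell
# Hypothetical device and names: c1t1d0, c1t2d0, datadg, datadg01/02.
/etc/vx/bin/vxdisksetup -i c1t1d0        # Stage 1: initialize the disk
vxdg init datadg datadg01=c1t1d0         # Stage 2: create the disk group,
                                         # assigning a disk media name
/etc/vx/bin/vxdisksetup -i c1t2d0
vxdg -g datadg adddisk datadg02=c1t2d0   # add a second disk to the group
vxdisk list                              # media names map to access names
```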
Stage Three: Assign Disk Space to Volumes
When you create volumes, space in the public region of a disk is assigned to the volumes. Some operations, such as removal of a disk from a disk group, are restricted if space on a disk is in use by a volume.