This document discusses layered volumes in Veritas Volume Manager (VxVM). Layered volumes tolerate disk failure better than standard volumes because they mirror at a finer granularity. For example, a striped-mirror layered volume recovers more quickly than a standard mirrored volume, since each mirror covers a smaller region of storage and only the affected subvolume needs resynchronising. The document also compares layouts such as mirrored-stripe and striped-mirror, the redundancy and performance advantages they offer over standard volumes, and some of their limitations.
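As an illustrative sketch of the layouts discussed (the disk group name mydg and volume names are placeholders; the vxassist layout keywords are standard VxVM syntax), the two variants can be created like this. These commands require a VxVM installation and real disks, so they are shown as a fragment rather than a runnable example:

```shell
# Layered striped-mirror volume: data is striped across columns and each
# column is mirrored at the subvolume level, so a disk failure only
# requires resynchronising that column's mirror.
vxassist -g mydg make vol01 10g layout=stripe-mirror ncol=4 nmirror=2

# For comparison, a non-layered mirrored-stripe volume mirrors the entire
# striped plex, so recovery must resynchronise the whole volume.
vxassist -g mydg make vol02 10g layout=mirror-stripe ncol=4 nmirror=2
```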
1. Introduction
2. OS Structures
3. Processes
4. Threads
5. CPU Scheduling
6. Process Synchronization
7. Deadlocks
8. Memory Management
9. Virtual Memory
10. File System Interface
11. File System Implementation
12. Mass Storage Systems
13. I/O Systems
14. Protection
15. Security
16. Distributed System Structures
17. Distributed File Systems
18. Distributed Coordination
19. Real-Time Systems
20. Multimedia Systems
21. Linux
22. Windows
This document discusses DB2 backup and recovery. It covers logging, different backup types including full, incremental, and delta backups. It also discusses performing backups offline and online. The document describes how to check backup history and image consistency. Recovery types like crash, version, and roll-forward recovery are explained. Commands for restarting, restoring, and recovering databases are provided. The appendix includes links for more information on backup, restore, and roll-forward commands.
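As a hedged sketch of the backup types and recovery flow described above (assuming a DB2 LUW database named SAMPLE with archive logging enabled; these are standard DB2 CLP commands but need a live instance to run):

```shell
# Full online backup
db2 backup database sample online to /backup

# Incremental (cumulative since the last full) and delta (since the last
# backup of any type)
db2 backup database sample online incremental to /backup
db2 backup database sample online incremental delta to /backup

# Check the backup history
db2 list history backup all for sample

# Restore, then roll forward through the archived logs
db2 restore database sample from /backup
db2 rollforward database sample to end of logs and stop
```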
ZFS is a filesystem and logical volume manager that provides advanced features like snapshots, compression, and checksumming. It was originally developed at Sun and is now open source under the OpenZFS project. This document discusses how to use and manage ZFS on FreeNAS and PC-BSD operating systems, including creating ZFS pools and datasets, using snapshots and scrubs, and adding features like SLOG and L2ARC devices. Management utilities for these tasks are provided in the GUI and CLI of each OS.
This document covers DSNZPARM, the module that holds the DB2 for z/OS configuration parameters. It describes the different types of zparms and a way to update them dynamically.
This document provides instructions for configuring Distributed Replicated Block Device (DRBD) to create a high availability cluster between two servers. It discusses mirroring a block device via the network to provide network-based RAID 1 functionality. The document outlines the steps to install and configure DRBD, including installing packages, configuring resources, initializing metadata storage, starting the DRBD service, and creating a filesystem on the mirrored block device. It also provides requirements for DRBD and a sample installation script.
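A minimal sketch of the configure-initialise-format sequence the summary describes (hostnames, devices, and addresses are placeholders; the resource file syntax and drbdadm subcommands follow standard DRBD usage, but this needs two real nodes and root access):

```shell
# /etc/drbd.d/r0.res -- sample resource definition (placeholder values):
#   resource r0 {
#     device    /dev/drbd0;
#     disk      /dev/sdb1;
#     meta-disk internal;
#     on node1 { address 192.168.1.1:7789; }
#     on node2 { address 192.168.1.2:7789; }
#   }

# Initialise metadata storage and bring the resource up (on both nodes)
drbdadm create-md r0
drbdadm up r0

# Promote one node and create a filesystem on the mirrored block device
drbdadm primary --force r0   # only on the node that holds the initial data
mkfs.ext4 /dev/drbd0
```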
ZFS is a modern filesystem designed to add features not found in traditional filesystems. It provides massive storage pools, data integrity checks, snapshots and clones for backup/restore, and more. This document discusses how to use ZFS features in FreeNAS and PC-BSD operating systems through their management utilities, including creating ZFS storage pools and datasets, adding SLOG/L2ARC devices, taking snapshots, and restoring from snapshots. It also covers scrubbing for disk errors and boot environments for rolling back upgrades. Additional resources are provided for learning more about administering ZFS.
DB2 10 & 11 for z/OS System Performance Monitoring and Optimisation, by John Campbell
This is a "One Day Seminar" (ODS). The objectives of this ODS are to focus on key areas:
• System address space CPU, EDM pools, data set activity, logging, lock/latch contention, DBM1 virtual and real storage, buffer pools and GBP, …
• Identify the key performance indicators to be monitored
• Provide rules-of-thumb to be applied
• Typically expressed as a range, e.g. X-Y
• If below X or above Y, further investigation and tuning are needed (RED)
• Boundary condition if in between (AMBER)
• Investigate with more detailed tracing and analysis when time available
• Provide tuning advice for common problems
The document discusses various DB2 recovery options including backup and restore, the recovery process model, important recovery-related system files, advanced copy services, and transportable schemas. It provides examples of the backup and restore process models and describes key DB2 recovery-related files. It also outlines the scripted interface for advanced copy services backup and differences between DB2 versions 9.7 and 10.1 related to advanced copy services.
This document provides an overview of NVM compression, a hybrid flash-aware application level compression solution. It discusses the drawbacks of existing row-level compression in MySQL and outlines an architecture for NVM compression that avoids these drawbacks. Key aspects of the NVM compression approach include performing compression only during flush, using sparse addressing to avoid over-provisioning flash space, and adding a new multi-threaded flush framework. Evaluation results and building blocks of the solution are also briefly mentioned.
ZFS is a filesystem that provides end-to-end data integrity and reliability through the use of checksums, copy-on-write transactions, and pooled storage. Key features include detecting and correcting silent data corruption, eliminating volumes in favor of pooled storage, and providing a transactional design with consistent data. Administration is simplified with only two commands needed to manage the entire storage configuration.
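The two commands referred to are zpool and zfs. A hedged sketch of how the features above surface through them (disk and dataset names are placeholders; these need a ZFS-capable system and real disks to run):

```shell
# Pooled storage: create a mirrored pool from two whole disks
zpool create tank mirror da0 da1

# Datasets draw space from the pool (no volumes to size); enable compression
zfs create -o compression=lz4 tank/data

# Copy-on-write makes snapshots instant and cheap
zfs snapshot tank/data@before-upgrade
zfs rollback tank/data@before-upgrade

# A scrub verifies every block checksum and repairs silent corruption
# from the redundant copy on the mirror
zpool scrub tank
zpool status tank
```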
Cassandra EU 2012 - Storage Internals, by Nicolas Favre-Felix (Acunu)
The document discusses Cassandra's storage internals. It describes how Cassandra writes data to memtables and commit logs in memory before flushing to immutable SSTables on disk. It also explains how compaction merges SSTables to reclaim space and improve performance. For reads, Cassandra uses memtables, bloom filters on SSTables, key caches, and row caches to minimize disk I/O. Counters are implemented by coordinating writes across replicas.
DB2 is a database manager that runs on Linux, Unix, and Windows operating systems. It allows users to catalog databases, start and stop instances, and configure parameters. Key commands for managing DB2 include db2icrt for creating instances, db2idrop for dropping instances, db2ilist for listing instances, and db2set for setting configuration parameters at the global, instance, and node level. The db2set command provides centralized control over environmental variables.
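The commands named in the summary fit together roughly as follows (user and instance names are placeholders; db2icrt and db2idrop require root, and everything needs a DB2 LUW installation, so this is a fragment rather than a runnable example):

```shell
# Create, list, and drop instances
db2icrt -u db2fenc1 db2inst1   # db2fenc1 is the fenced user ID
db2ilist
db2idrop db2inst1

# Registry variables at global (-g) and instance (-i) level
db2set -g DB2COMM=TCPIP
db2set -i db2inst1 DB2_PARALLEL_IO=*
db2set -all                    # show variables at every level

# Catalog a remote database, then start and stop the instance
db2 catalog tcpip node mynode remote host1 server 50000
db2 catalog database sample at node mynode
db2start
db2stop
```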
The document discusses the evolution of DB2 HADR tool from version 8.2 to 10. It provides an overview of HADR and how it works, describes the key features introduced in each version, provides an example of how to set up HADR, and discusses techniques for optimizing HADR performance and using HADR beyond high availability for database migration.
Automatic Storage Management (ASM) metrics are a goldmine: let's use them! (Bertrand Drouvot)
This document introduces the asm_metrics utility for monitoring Automatic Storage Management (ASM) metrics. The utility provides real-time ASM metrics like reads/writes per second and I/O times. It is customizable, allowing users to view metrics by ASM instance, database instance, diskgroup, or failgroup. The document provides several use cases for how admins can use asm_metrics to monitor I/O performance and balance across various ASM components.
Presentation: DB2 best practices for optimal performance (solarisyougood)
This document summarizes best practices for optimizing DB2 performance on various platforms. It discusses sizing workloads based on factors like concurrent users and response time objectives. Guidelines are provided for selecting CPUs, memory, disks and platforms. The document reviews physical database design best practices like choosing a page size and tablespace design. It also discusses index design, compression techniques, and benchmark results showing DB2's high performance.
This document provides an overview of the features and management utilities of the ZFS filesystem for FreeNAS and PC-BSD operating systems. It describes key ZFS concepts like pools, RAIDZ levels, datasets, snapshots, scrubs, deduplication, and boot environments. It also outlines how to perform tasks like creating pools and datasets, adding disks, taking and restoring snapshots, and scheduling scrubs on both FreeNAS and PC-BSD. Additional resources for learning more about best practices and advanced ZFS topics are also referenced.
FreeNAS 9.1.0 is an update to the open source Network Attached Storage system. Key updates include support for the latest ZFS features like LZ4 compression, an improved volume manager, and easier installation of plugins and software using a new AppCafe browser. The update also features alerts that can be dismissed, enhanced shell functionality, and other administrative improvements. FreeNAS is based on FreeBSD and provides enterprise-grade file sharing and storage capabilities for home or business use.
Slides from the S8 File Systems Tutorial at USENIX LISA'13 conference in Washington, DC. The topic covers ext4, btrfs, and ZFS with an emphasis on Linux implementations.
The document provides an overview and demonstration of Docker and CoreOS. It discusses how Docker allows for standardized packaging and isolation of applications and their dependencies into containers. CoreOS is introduced as a minimal Linux OS optimized for running Docker containers in highly available clusters, with automatic updates and tools for service management (Fleet) and distributed key-value storage (etcd). Examples of architectures using Docker and CoreOS are presented, along with potential benefits including more efficient application development, deployment and resource utilization.
ZFS is a file system developed by Sun Microsystems that provides advanced storage capabilities such as data integrity checking, snapshots and cloning. Some key features of ZFS include using copy-on-write storage, end-to-end checksumming of data to prevent silent data corruption, transactional semantics for consistency, and pooled storage that allows for thin provisioning and easy management of storage resources. ZFS aims to eliminate many of the issues with traditional file systems through its novel approach to data storage and management.
The document provides a summary of the hardware, licenses, and features of a Data Domain system. It includes:
- Hardware information such as memory, disks, network cards, and enclosure details.
- License keys for shelf capacities in the active and archive tiers, as well as feature licenses for encryption, expanded storage, and secure multi-tenancy.
- Descriptions of the different licenses and what features they enable, such as encryption of the filesystem or sharing the system among multiple tenants.
TSA provides automatic monitoring and availability management of resources configured for high availability in a cluster domain. It monitors DB2 HADR resources and DB2 instance resources, and can start, stop, and fail over these resources between nodes when failures occur. The document provides examples of how DB2 HADR and instance resources are defined and monitored by TSA using the IBM.Application resource type.
Windows 2000 is a 32-bit operating system designed for compatibility, reliability, and performance. It includes several key components like the kernel, executive services, and environmental subsystems. The kernel schedules threads and handles exceptions/interrupts. Executive services include the object manager, virtual memory manager, process manager, and I/O manager. Environmental subsystems allow running applications from other operating systems. The document also discusses disk structure, file systems, networking, and other OS concepts.
Hadoop World 2011: HDFS Federation - Suresh Srinivas, Hortonworks (Cloudera, Inc.)
Scalability of the NameNode has been a key issue for HDFS clusters. Because the entire file system metadata is stored in memory on a single NameNode, and all metadata operations are processed on this single system, the NameNode both limits the growth in size of the cluster and makes the NameService a bottleneck for the MapReduce framework as demand increases. This presentation will describe the features and implementation of HDFS Federation scheduled for release with Hadoop-0.23.
Online configuration allows administrators to reload configuration changes on RegionServers and the HMaster without restarting the services. The utility reloads the configuration and notifies interested classes, which implement an interface to receive change notifications and update their local state. This allows operational changes to thread counts, cache settings, and other parameters without downtime.
The document discusses new features in Windows Server 2016 related to cluster rolling upgrades. It describes the process for performing an in-place upgrade of a Hyper-V cluster from Windows Server 2012 R2 to Windows Server 2016 without downtime. The process involves pausing each node, upgrading its OS, then rejoining it to the cluster. Once all nodes are upgraded, the cluster functional level can be upgraded to enable new Windows Server 2016 features. The document also covers new storage replication capabilities in Windows Server 2016 technical preview called Storage Replica.
This document provides information about accessing VMware technical documentation and submitting feedback. It lists the most up-to-date documentation location on the VMware website, which also provides product updates. It includes instructions for submitting documentation feedback to VMware. The document also contains copyright information and a glossary of VMware terms.
This document provides a cheat sheet on common Logical Volume Manager (LVM) commands for displaying, creating, modifying, and troubleshooting physical volumes (PVs), volume groups (VGs), and logical volumes (LVs) in Linux. It lists directory locations and files related to LVM, describes tools for diagnostics and debugging, and provides examples of commands for scanning and managing PVs, VGs, and LVs, including displaying information, creating, extending, reducing, removing, and changing attributes of volumes. It also discusses snapshots, mirroring, and procedures for repairing corrupted LVM metadata with and without replacing faulty disks.
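The cheat-sheet workflow can be sketched with the standard LVM tools (device and volume names are placeholders; all of these need root and real block devices):

```shell
# Display: pvs/vgs/lvs are terse, the *display commands are verbose
pvs; vgs; lvs
pvdisplay; vgdisplay; lvdisplay

# Create: physical volume -> volume group -> logical volume
pvcreate /dev/sdb1
vgcreate vg_data /dev/sdb1
lvcreate -n lv_home -L 10G vg_data

# Grow a volume and resize its filesystem in one step (-r)
lvextend -r -L +5G /dev/vg_data/lv_home

# Snapshot for an online backup, then drop it when done
lvcreate -s -n lv_home_snap -L 1G /dev/vg_data/lv_home
lvremove /dev/vg_data/lv_home_snap
```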
Volume manager software provides logical volume management and virtualization of storage disks. It optimizes storage usage, increases filesystem limits, and provides flexibility, capacity, speed and resilience through features like mirroring and striping. Virtualization is performed by either the storage device itself or a software layer on the host system. Hot spares and snapshotting provide fault tolerance and online backups.
Inspection and maintenance tools (Linux / OpenStack), by Gerard Braad
This handout is part of the training at UnitedStack and will introduce you to several inspection and maintenance tools.
It is generated from the slides at: http://gbraad.gitlab.io/tools-training/
Source: https://gitlab.com/gbraad/tools-training
XPDS13: Virtual Disk Integrity in Real Time - JP Blake, Assured Information Se... (The Linux Foundation)
This paper introduces the Virtual Disk Integrity in Real Time (vDIRT) monitor, a mechanism to measure virtual hard disks in real time from the Dom0 trusted computing base. vDIRT is an improvement over traditional methods for auditing file integrity which rely on a service in a potentially compromised host. It also overcomes the limitations of existing methods for assuring disk integrity that are coarse grained and do not scale to large disks. vDIRT is a capability to measure disk reads and writes in real time, allowing for fine grained tracking of sectors within files, as well as the overall disk. The vDIRT implementation and its impact on performance is discussed to show that disk operation monitoring from Dom0 is practical.
This document discusses design considerations for building stretched clusters across multiple sites. Stretched clusters provide high availability across sites but introduce additional complexity. There are two main storage configurations - stretched SAN and distributed virtual storage. Distributed virtual storage using EMC VPLEX provides read/write access at both sites simultaneously but special behaviors like preferred sites must be considered. Key design challenges include controlling VM placement, dealing with single points of failure, and addressing network issues like horseshoe routing. The document recommends separate clusters at each site connected via vMotion instead of a single stretched cluster.
The Unofficial VCAP / VCP VMware Study Guide, by Veeam Software
Veeam® is happy to provide the VMware community with new, unofficial study guides prepared by VMware certified professionals Jason Langer and Josh Coen.
Free VCP5-DCV Study Guide
In this 136-page study guide Jason and Josh cover all seven of the exam blueprint sections to help prepare you for the VCP exam.
Free VCAP5-DCA Study Guide
For those currently holding their VCP certification who want to take it up a notch, Jason and Josh have you covered with the 248-page VCAP5-DCA study guide. Using this study guide along with hands-on lab time will help you in the three-and-a-half-hour, lab-based VCAP5-DCA exam.
Storage, SAN and Business Continuity Overview, by Alan McSweeney
The document provides an overview of storage systems and business continuity options. It discusses various types of storage including DAS, NAS and SAN. It then covers business continuity and disaster recovery strategies like replication, snapshots and mirroring. It also discusses how server virtualization can help improve disaster recovery.
VMworld Europe 2014: Virtual SAN Architecture Deep DiveVMworld
This document provides an overview of Virtual SAN (VSAN) including:
- VSAN aggregates local flash and HDDs across ESXi hosts into a shared datastore for VMs. It provides software-defined storage that is integrated with VMware's stack.
- VSAN's goals are to provide compelling TCO through reduced CAPEX/OPEX and be the software-defined storage for all VMware products through strong integration.
- The document discusses VSAN architecture, deployment, scaling, performance, resiliency, and management.
The document discusses virtualization and configuring Hyper-V for high availability. It defines key concepts like virtualization, high availability, and fault tolerance. It then explains how to configure Hyper-V for high availability by enabling failover clustering, creating a cluster with multiple Hyper-V servers, and creating highly available virtual machines on the cluster that can migrate between nodes in the event of failure. The document also discusses RAID configurations and the components of a Microsoft iSCSI environment.
Updated study material available for 1Z0-027 Exam-Oracle Exadata Database Machine Administration, Software Release visit@ https://www.troytec.com/1Z0-027-exams.html
This document discusses setting up a highly available SAP system on Linux using two clusters - an Oracle9i RAC cluster for the database and a Red Hat cluster for SAP services. It describes configuring the Red Hat cluster to make the $ORACLE_HOME directory and SAP central instance services highly available, including setting up the network, shared storage, and clustered NFS service.
Topics: Brief concept about LVM
To know more about
Offer- http://mazenet-chennai.in/mazenet-offers.html
Syllabus- http://www.mazenet-chennai.in/redhat-training-in-chennai.html
Slide share- http://www.slideshare.net/mazenet_solution/presentations
For more events- http://mazenet-chennai.in/mazenet-events.html
All videos- https://www.youtube.com/c/Mazenetsolution
Facebook- https://www.facebook.com/Mazenet.IT.Solution/
Twitter- https://twitter.com/Maze_net
Mail us - marketing@mazenetsolution.com
Contact- 9629728714
Windows Server 2016 supports two types of disk configurations: basic and dynamic disks. Basic disks are divided into partitions and work with older Windows versions, while dynamic disks are divided into volumes and work with Windows 2000 and newer. Dynamic disks provide features like fault tolerance and the ability to modify disks without rebooting. Storage spaces allow flexible, scalable storage by creating virtual disks from storage pools of physical disks. RAID (Redundant Array of Independent Disks) is also supported, with RAID 0 providing striping for performance, RAID 1 providing mirroring for redundancy, and RAID 5 providing striping with parity for fault tolerance using 3 or more disks.
This document provides an overview of administering a storage area network (SAN) using IBM and Cisco equipment. It discusses SAN concepts like zoning, fabrics, multi-pathing, and installing operating systems from the SAN. The specific setup includes an IBM DS4400 storage subsystem with expansion units, a Cisco 9216 fabric switch, and two IBM blade centers with internal QLogic switches. The document aims to educate IT staff on maintaining this SAN configuration using IBM Storage Manager and Cisco Fabric Manager software.
This document provides an overview of AWK scripting and various Unix commands covered on Day 4 of a Unix training session. The agenda includes AWK scripting, advanced commands like compression and archiving, an introduction to the File Transfer Protocol (FTP), and Unix process control. The document then goes into detailed explanations of AWK scripting concepts like patterns and actions, operators, control structures, built-in variables, arrays, functions and more. It also covers commands for file compression, changing file ownership, the tar archiving utility, and using FTP to transfer files between systems.
This document provides an overview of a UNIX training session that covers shell scripting and sed commands. The session objectives are to understand regular expressions, grep commands, shell features and environment, and writing basic shell scripts. Topics covered include looping statements, conditional statements, here documents, signals, traps, arrays, functions, and an introduction to the sed stream editor for performing text editing operations. Examples are provided for various shell scripting constructs and common sed commands.
The document discusses topics related to UNIX including regular expressions, grep commands, UNIX shells, shell environment variables, and shell scripting. The agenda covers regular expressions and grep, UNIX shells, shell environment, and shell scripting. It provides examples and explanations of regular expressions, grep family commands, popular UNIX shells like Bourne shell, Korn shell, C shell, and Bourne-Again shell. It also discusses shell environment variables and how to set them as well as an introduction to shell scripting.
The document provides information about the person's role as a Linux System Engineer including responsibilities like installing hardware, networking, building servers, patching systems, and troubleshooting issues for developers, DBAs and other teams. It also answers questions about supporting different environments, recent challenges, scripting experience, and Linux fundamentals.
There are three options for reporting faults from an asynchronous web service: returning the fault in the response message, invoking a fault callback operation, or handling the fault in an error hospital. The document focuses on returning faults in the response message or invoking a fault callback. It provides instructions for implementing these options in Oracle BPEL, including defining fault response schemas, setting correlation properties, and using pick activities to handle normal and fault responses.
The document provides instructions for creating a custom Java action to handle faults in SOA and attaching it to fault policies. It involves: 1) creating a Java project in JDeveloper, 2) coding a class that implements the required interface, 3) deploying the JAR file to the server, and 4) configuring fault policies to use the new action.
This document describes the life cycle management of Oracle JCA adapters, including installing, starting, stopping, defining interfaces, configuring properties, describing data structures, physically deploying, and other aspects of Oracle JCA adapters. It includes sections on specific tasks like installing adapters, starting and stopping adapters, defining interfaces by importing WSDLs, configuring message header properties, and physically deploying adapters packaged in RAR files.
The document provides instructions for creating a custom Java action to handle faults in SOA and attaching it to fault policies. It involves: 1) creating a Java project in JDeveloper, 2) coding a class that implements the required interface, 3) deploying the JAR file to the server, and 4) configuring fault policies to use the new action.
- Oracle Business Rules is a lightweight business rules product that is part of Oracle Fusion Middleware and can be used in SOA and BPM suites. It allows business processes to be more agile and align with changing business demands by acting as a central rules repository.
- The document demonstrates how to create a rule in Oracle Business Rules using JDeveloper to calculate student grades based on average marks and test it using various methods like a debugging function, the Enterprise Manager console, and SOAP UI web services calls.
- A decision table rule is created to return a grade based on comparing average marks to ranges in a bucketset. The rule can then be tested by passing sample data and evaluating the output.
SOAP is a protocol for exchanging XML-based messages over networks, normally using HTTP/HTTPS. It allows applications to communicate in a decentralized and distributed environment. A SOAP message contains an envelope element with a header and body. The body contains the call and response information while the header contains optional metadata. SOAP uses XML schemas to define the structure and content of messages to ensure interoperability.
The document provides an overview of Oracle SOA Suite, which integrates capabilities like messaging, service discovery, orchestration, web services management and security, business rules, events framework, and business activity monitoring. It leverages standards like SCA, SDO, and BPEL and brings together components like BPEL, ESB, and OWSM into a single environment using SCA composites. The key benefits of SOA Suite include interoperability, increased reuse, more agile business processes, improved visibility, and reduced maintenance costs.
Web services allow applications to communicate over the web through open standards like XML, SOAP, WSDL and UDDI. A WSDL file describes the operations and messages a web service exposes. SOAP is the messaging protocol used to exchange information between web services using XML. UDDI is a registry where businesses can publish and discover web services.
The document discusses implementing a while loop activity in BPEL to increment an input variable by repeatedly invoking a partner service.
It describes creating two BPEL processes - a "Called" process that increments an input by 1, and a "Caller" process that contains a while loop. The while loop invokes the Called process, assigns the output back to the input, and continues looping while the input is less than 5. This allows the input to be incremented from 3 to 4 to 5 by repeatedly calling the partner service.
XML (eXtensible Markup Language) is a markup language that is designed to store and transport data. It allows data to be shared across different systems, software, and hardware. XML documents contain elements that can have child elements, attributes, and text. XML has simple, strict syntax rules for tags, nesting, and formatting. Elements can be extended without breaking existing applications. This makes XML very flexible and extensible for sharing structured data.
XPath is a language for navigating and selecting nodes in an XML document using path expressions. It selects nodes by following a path through the XML tree structure. Some useful path expressions include nodename to select child nodes, / to select from the root, and // to select nodes anywhere in the document that match the selection. XPath uses wildcards like * to match any element node and @* to match any attribute node.
XQuery is to XML what SQL is to database tables. It was designed to query XML data, not just XML files but anything that can appear as XML, including databases. The document provides an example of an XQuery expression to retrieve book titles from an XML document where the price is greater than 30, ordered by title. It then shows the XML document that will be used in examples, containing book data. Examples are given to select nodes from the XML document using XQuery expressions.
The document discusses XML Schema Definition (XSD) and its purpose in validating XML documents. It compares XSD to Document Type Definition (DTD) and provides sample code for each. The document also covers XSD basics like data types, nested complex types, and occurrence constraints. Finally, it outlines the steps for installing Oracle SOA Suite 11.1.1.3, including database installation, middleware home creation, and domain configuration.
An XML schema describes the structure and elements of an XML document. It defines elements, attributes, data types, properties like required/optional, and relationships between elements. XML schema is more powerful than older DTD schemas as it allows defining data types and namespaces. Schemas are written in XML syntax, making them easy to read, write and process using standard XML tools. This document provides examples of simple and complex element definitions in an XML schema.
XSLT is a language for transforming XML documents into other formats like XHTML. It works by applying templates defined in an XSL stylesheet to an XML source document. Key components of XSLT include:
- The <xsl:template> element defines templates that are applied to parts of the XML document matched by an XPath expression
- The <xsl:value-of> element extracts the value of an XML element to include in the output
- The <xsl:for-each> element loops through matching elements to repeatedly apply templates
Oracle JCA Adapters are deployed as JCA 1.5 resource adapters in an Oracle WebLogic Server container. This document discusses the life cycle of Oracle JCA Adapters, including installing, starting, stopping, defining interfaces, configuring properties, physically deploying, and creating application server connections for Oracle JCA Adapters. It also covers deploying adapter applications from JDeveloper and manually deploying adapter files.
Atelier - Innover avec l’IA Générative et les graphes de connaissancesNeo4j
Atelier - Innover avec l’IA Générative et les graphes de connaissances
Allez au-delà du battage médiatique autour de l’IA et découvrez des techniques pratiques pour utiliser l’IA de manière responsable à travers les données de votre organisation. Explorez comment utiliser les graphes de connaissances pour augmenter la précision, la transparence et la capacité d’explication dans les systèmes d’IA générative. Vous partirez avec une expérience pratique combinant les relations entre les données et les LLM pour apporter du contexte spécifique à votre domaine et améliorer votre raisonnement.
Amenez votre ordinateur portable et nous vous guiderons sur la mise en place de votre propre pile d’IA générative, en vous fournissant des exemples pratiques et codés pour démarrer en quelques minutes.
Odoo ERP software
Odoo ERP software, a leading open-source software for Enterprise Resource Planning (ERP) and business management, has recently launched its latest version, Odoo 17 Community Edition. This update introduces a range of new features and enhancements designed to streamline business operations and support growth.
The Odoo Community serves as a cost-free edition within the Odoo suite of ERP systems. Tailored to accommodate the standard needs of business operations, it provides a robust platform suitable for organisations of different sizes and business sectors. Within the Odoo Community Edition, users can access a variety of essential features and services essential for managing day-to-day tasks efficiently.
This blog presents a detailed overview of the features available within the Odoo 17 Community edition, and the differences between Odoo 17 community and enterprise editions, aiming to equip you with the necessary information to make an informed decision about its suitability for your business.
What is Master Data Management by PiLog Groupaymanquadri279
PiLog Group's Master Data Record Manager (MDRM) is a sophisticated enterprise solution designed to ensure data accuracy, consistency, and governance across various business functions. MDRM integrates advanced data management technologies to cleanse, classify, and standardize master data, thereby enhancing data quality and operational efficiency.
OpenMetadata Community Meeting - 5th June 2024OpenMetadata
The OpenMetadata Community Meeting was held on June 5th, 2024. In this meeting, we discussed about the data quality capabilities that are integrated with the Incident Manager, providing a complete solution to handle your data observability needs. Watch the end-to-end demo of the data quality features.
* How to run your own data quality framework
* What is the performance impact of running data quality frameworks
* How to run the test cases in your own ETL pipelines
* How the Incident Manager is integrated
* Get notified with alerts when test cases fail
Watch the meeting recording here - https://www.youtube.com/watch?v=UbNOje0kf6E
UI5con 2024 - Keynote: Latest News about UI5 and it’s EcosystemPeter Muessig
Learn about the latest innovations in and around OpenUI5/SAPUI5: UI5 Tooling, UI5 linter, UI5 Web Components, Web Components Integration, UI5 2.x, UI5 GenAI.
Recording:
https://www.youtube.com/live/MSdGLG2zLy8?si=INxBHTqkwHhxV5Ta&t=0
E-commerce Application Development Company.pdfHornet Dynamics
Your business can reach new heights with our assistance as we design solutions that are specifically appropriate for your goals and vision. Our eCommerce application solutions can digitally coordinate all retail operations processes to meet the demands of the marketplace while maintaining business continuity.
A Study of Variable-Role-based Feature Enrichment in Neural Models of CodeAftab Hussain
Understanding variable roles in code has been found to be helpful by students
in learning programming -- could variable roles help deep neural models in
performing coding tasks? We do an exploratory study.
- These are slides of the talk given at InteNSE'23: The 1st International Workshop on Interpretability and Robustness in Neural Software Engineering, co-located with the 45th International Conference on Software Engineering, ICSE 2023, Melbourne Australia
Takashi Kobayashi and Hironori Washizaki, "SWEBOK Guide and Future of SE Education," First International Symposium on the Future of Software Engineering (FUSE), June 3-6, 2024, Okinawa, Japan
AI Fusion Buddy Review: Brand New, Groundbreaking Gemini-Powered AI AppGoogle
AI Fusion Buddy Review: Brand New, Groundbreaking Gemini-Powered AI App
👉👉 Click Here To Get More Info 👇👇
https://sumonreview.com/ai-fusion-buddy-review
AI Fusion Buddy Review: Key Features
✅Create Stunning AI App Suite Fully Powered By Google's Latest AI technology, Gemini
✅Use Gemini to Build high-converting Converting Sales Video Scripts, ad copies, Trending Articles, blogs, etc.100% unique!
✅Create Ultra-HD graphics with a single keyword or phrase that commands 10x eyeballs!
✅Fully automated AI articles bulk generation!
✅Auto-post or schedule stunning AI content across all your accounts at once—WordPress, Facebook, LinkedIn, Blogger, and more.
✅With one keyword or URL, generate complete websites, landing pages, and more…
✅Automatically create & sell AI content, graphics, websites, landing pages, & all that gets you paid non-stop 24*7.
✅Pre-built High-Converting 100+ website Templates and 2000+ graphic templates logos, banners, and thumbnail images in Trending Niches.
✅Say goodbye to wasting time logging into multiple Chat GPT & AI Apps once & for all!
✅Save over $5000 per year and kick out dependency on third parties completely!
✅Brand New App: Not available anywhere else!
✅ Beginner-friendly!
✅ZERO upfront cost or any extra expenses
✅Risk-Free: 30-Day Money-Back Guarantee!
✅Commercial License included!
See My Other Reviews Article:
(1) AI Genie Review: https://sumonreview.com/ai-genie-review
(2) SocioWave Review: https://sumonreview.com/sociowave-review
(3) AI Partner & Profit Review: https://sumonreview.com/ai-partner-profit-review
(4) AI Ebook Suite Review: https://sumonreview.com/ai-ebook-suite-review
#AIFusionBuddyReview,
#AIFusionBuddyFeatures,
#AIFusionBuddyPricing,
#AIFusionBuddyProsandCons,
#AIFusionBuddyTutorial,
#AIFusionBuddyUserExperience
#AIFusionBuddyforBeginners,
#AIFusionBuddyBenefits,
#AIFusionBuddyComparison,
#AIFusionBuddyInstallation,
#AIFusionBuddyRefundPolicy,
#AIFusionBuddyDemo,
#AIFusionBuddyMaintenanceFees,
#AIFusionBuddyNewbieFriendly,
#WhatIsAIFusionBuddy?,
#HowDoesAIFusionBuddyWorks
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Graspan: A Big Data System for Big Code AnalysisAftab Hussain
We built a disk-based parallel graph system, Graspan, that uses a novel edge-pair centric computation model to compute dynamic transitive closures on very large program graphs.
We implement context-sensitive pointer/alias and dataflow analyses on Graspan. An evaluation of these analyses on large codebases such as Linux shows that their Graspan implementations scale to millions of lines of code and are much simpler than their original implementations.
These analyses were used to augment the existing checkers; these augmented checkers found 132 new NULL pointer bugs and 1308 unnecessary NULL tests in Linux 4.4.0-rc5, PostgreSQL 8.3.9, and Apache httpd 2.2.18.
- Accepted in ASPLOS ‘17, Xi’an, China.
- Featured in the tutorial, Systemized Program Analyses: A Big Data Perspective on Static Analysis Scalability, ASPLOS ‘17.
- Invited for presentation at SoCal PLS ‘16.
- Invited for poster presentation at PLDI SRC ‘16.
Revolutionizing Visual Effects Mastering AI Face Swaps.pdfUndress Baby
The quest for the best AI face swap solution is marked by an amalgamation of technological prowess and artistic finesse, where cutting-edge algorithms seamlessly replace faces in images or videos with striking realism. Leveraging advanced deep learning techniques, the best AI face swap tools meticulously analyze facial features, lighting conditions, and expressions to execute flawless transformations, ensuring natural-looking results that blur the line between reality and illusion, captivating users with their ingenuity and sophistication.
Web:- https://undressbaby.com/
What is Augmented Reality Image Trackingpavan998932
Augmented Reality (AR) Image Tracking is a technology that enables AR applications to recognize and track images in the real world, overlaying digital content onto them. This enhances the user's interaction with their environment by providing additional information and interactive elements directly tied to physical images.
Neo4j - Product Vision and Knowledge Graphs - GraphSummit ParisNeo4j
Dr. Jesús Barrasa, Head of Solutions Architecture for EMEA, Neo4j
Découvrez les dernières innovations de Neo4j, et notamment les dernières intégrations cloud et les améliorations produits qui font de Neo4j un choix essentiel pour les développeurs qui créent des applications avec des données interconnectées et de l’IA générative.
Neo4j - Product Vision and Knowledge Graphs - GraphSummit ParisNeo4j
Dr. Jesús Barrasa, Head of Solutions Architecture for EMEA, Neo4j
Découvrez les dernières innovations de Neo4j, et notamment les dernières intégrations cloud et les améliorations produits qui font de Neo4j un choix essentiel pour les développeurs qui créent des applications avec des données interconnectées et de l’IA générative.
2. A layered volume is a virtual VERITAS Volume Manager
object that is built on top of other volumes. The layered
volume structure tolerates failure better and has greater
redundancy than the standard volume structure. For
example, in a striped-mirror layered volume, each
mirror (plex) covers a smaller area of storage space, so
recovery is quicker than with a standard mirrored
volume.
5. The logical objects layered volume and layered plex are used
for more efficient I/O.
The primary reason for using a mirrored-stripe volume is to
gain the performance offered by striping and the availability
offered by mirroring.
Here, striping happens between subdisks and mirroring happens
between plexes. If one subdisk goes down in one plex, the data
remains available from the other plex.
Limitations:
Mirrored-stripe volumes suffer the high cost of mirroring and
require twice the disk drive space of non-redundant volumes.
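The space cost of mirroring noted above can be illustrated with a quick calculation (a minimal sketch, not VxVM code; the sizes are hypothetical):

```python
def raw_space_needed(volume_size_gb, nmirrors):
    """Raw disk space consumed by a mirrored volume.

    Each mirror (plex) holds a full copy of the data, so raw usage
    scales linearly with the number of mirrors.
    """
    return volume_size_gb * nmirrors

# A 100 GB mirrored-stripe volume with 2 plexes consumes 200 GB of raw
# disk, twice the footprint of a non-redundant striped volume.
print(raw_space_needed(100, 2))  # -> 200
print(raw_space_needed(100, 1))  # -> 100 (non-redundant)
```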
7. At the subdisk level, mirroring happens.
Layered volumes and layered plexes are used for more efficient
access.
Striped-mirror volumes have the performance and reliability
advantages of mirrored-stripe volumes, but can tolerate a
higher percentage of disk drive failures without data loss.
Striped-mirror volumes also have a quick recovery time after a
disk drive failure, because only a single stripe must be
resynchronized instead of an entire mirror.
Limitations:
Striped-mirror volumes suffer the high cost of mirroring and
require twice the disk drive space of non-redundant volumes.
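The quicker recovery claimed above can be put in numbers (an informal sketch; VxVM's actual resynchronization is tracked per plex region, and the figures below are hypothetical):

```python
def resync_size_gb(volume_size_gb, ncols, layered):
    """Approximate data to resynchronize after one disk failure.

    In a standard mirrored-stripe volume the whole plex must be
    rebuilt; in a layered striped-mirror volume each column is
    mirrored on its own, so only the failed column's mirror is
    resynchronized.
    """
    return volume_size_gb / ncols if layered else volume_size_gb

# 120 GB volume striped across 4 columns:
print(resync_size_gb(120, 4, layered=False))  # mirrored-stripe: entire plex
print(resync_size_gb(120, 4, layered=True))   # striped-mirror: one column only
```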
8. VxVM Daemons
vxconfigd – the VxVM configuration daemon
vxsvc – the Veritas Enterprise Administrator (VEA) service
vxconfigbackupd – backs up the disk group configuration to /etc/vx/cbr/bk
vxrelocd – hot relocation
vxnotify – reports disk configuration changes managed by vxconfigd
vxcached – manages cache volumes associated with space-optimized
snapshots
9. Online relayout
Online relayout allows you to convert between storage layouts in VxVM, with
uninterrupted data access. Typically, you would do this to change the
redundancy or performance characteristics of a volume. VxVM adds
redundancy to storage either by duplicating the data (mirroring) or by adding
parity (RAID-5). Performance characteristics of storage in VxVM can be
changed by changing the striping parameters, which are the number of columns
and the stripe width.
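As a rough illustration of those striping parameters, the column that serves a given byte offset can be derived from the stripe width and the number of columns (a simplified model, not VxVM's internal mapping code):

```python
def column_for_offset(offset, stripe_width, ncols):
    """Return the stripe column that holds a given byte offset.

    Data is laid out in stripe-width chunks dealt round-robin
    across the columns.
    """
    return (offset // stripe_width) % ncols

# 64 KiB stripe width across 3 columns: successive stripe units
# walk the columns in turn.
sw = 64 * 1024
print([column_for_offset(i * sw, sw, 3) for i in range(6)])  # [0, 1, 2, 0, 1, 2]
```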
10. Limitations of online relayout
Log plexes cannot be transformed.
Volume snapshots cannot be taken when there is an online relayout operation running
on the volume.
Online relayout cannot create a non-layered mirrored volume in a single step.
It always creates a layered mirrored volume even if you specify a non-layered mirrored
layout, such as mirror-stripe or mirror-concat. Use the vxassist convert command to
turn the layered mirrored volume that results from a relayout into a non-layered
volume.
The usual restrictions apply for the minimum number of physical disks that are required
to create the destination layout. For example, mirrored volumes require at least as
many disks as mirrors, striped and RAID-5 volumes require at least as many disks as
columns, and striped-mirror volumes require at least as many disks as columns
multiplied by mirrors.
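The minimum-disk rules above can be captured in a small helper (an illustrative sketch with hypothetical names, not a VxVM utility):

```python
def min_disks(layout, ncols=1, nmirrors=2):
    """Minimum physical disks for a destination layout, per the rules above."""
    if layout == "mirror":              # at least as many disks as mirrors
        return nmirrors
    if layout in ("stripe", "raid5"):   # at least as many disks as columns
        return ncols
    if layout == "stripe-mirror":       # columns multiplied by mirrors
        return ncols * nmirrors
    raise ValueError(f"unknown layout: {layout}")

print(min_disks("mirror", nmirrors=2))                   # 2
print(min_disks("raid5", ncols=4))                       # 4
print(min_disks("stripe-mirror", ncols=4, nmirrors=2))   # 8
```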
11. To be eligible for layout transformation, the plexes in a mirrored volume must have
identical stripe widths and numbers of columns. Relayout is not possible unless you
make the layouts of the individual plexes identical.
Online relayout involving RAID-5 volumes is not supported for shareable disk groups in
a cluster environment.
Online relayout cannot transform sparse plexes, nor can it make any plex sparse. (A
sparse plex is a plex that is not the same size as the volume, or that has regions that are
not mapped to any subdisk.)
The number of mirrors in a mirrored volume cannot be changed using relayout.
Only one relayout may be applied to a volume at a time.
12. Performing online relayout
# vxassist [-b] [-g diskgroup] relayout volume [layout=layout] [relayout_options]
If specified, the -b option makes relayout of the volume a background task.
The following destination layout configurations are supported:
concat-mirror   concatenated-mirror
concat          concatenated
nomirror        concatenated
nostripe        concatenated
raid5           RAID-5 (not supported for shared disk groups)
span            concatenated
stripe          striped
13. For example, the following command changes a
concatenated volume, vol02, in disk group, mydg, to a
striped volume with the default number of columns, 2, and
default stripe unit size, 64 kilobytes:
# vxassist -g mydg relayout vol02 layout=stripe
14. Hot-relocation
Hot-relocation is a feature that allows a system to react automatically to I/O
failures on redundant objects (mirrored or RAID-5 volumes) in VxVM and
restore redundancy and access to those objects. VxVM detects I/O failures on
objects and relocates the affected subdisks. The subdisks are relocated to disks
designated as spare disks or to free space within the disk group. VxVM then
reconstructs the objects that existed before the failure and makes them
accessible again. When a partial disk failure occurs (that is, a failure affecting
only some subdisks on a disk), redundant data on the failed portion of the disk
is relocated. Existing volumes on the unaffected portions of the disk remain
accessible.
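Hot-relocation's choice of target space can be sketched as follows (a simplified model with hypothetical disk records, not the vxrelocd implementation):

```python
def pick_relocation_target(subdisk_size, disks):
    """Choose where to relocate a failed subdisk.

    Prefer disks designated as spares; otherwise fall back to any
    disk in the group with enough free space, mirroring the policy
    described above.
    """
    spares = [d for d in disks if d["spare"] and d["free"] >= subdisk_size]
    if spares:
        return spares[0]["name"]
    others = [d for d in disks if not d["spare"] and d["free"] >= subdisk_size]
    return others[0]["name"] if others else None

dg = [
    {"name": "disk04", "spare": False, "free": 500},
    {"name": "disk05", "spare": True,  "free": 200},
]
print(pick_relocation_target(150, dg))  # disk05 (the spare is preferred)
print(pick_relocation_target(400, dg))  # disk04 (spare too small, use free space)
```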
15. Discovering and configuring newly
added disk devices
The vxdiskconfig utility scans and configures new disk devices attached to the host, disk
devices that become online, or Fibre Channel devices that are zoned to host bus adapters
connected to this host. The command calls platform-specific interfaces to configure new
disk devices and bring them under control of the operating system. It scans for disks
that were added since VxVM's configuration daemon was last started. These disks are
then dynamically configured and recognized by VxVM.
# vxdctl -f enable
# vxdisk -f scandisks
However, a complete scan is initiated if the system configuration has been modified
by changes to:
Installed array support libraries.
The devices that are listed as being excluded from use by VxVM.
DISKS (JBOD), SCSI3, or foreign device definitions.
17. To list the devices configured from
a Host Bus Adapter
18. To add an unsupported disk array
to the DISKS category
19. To verify that the DMP paths are recognized, use the vxdmpadm
getdmpnode command as shown in the following sample output for
the example array:
20. To change the disk-naming scheme
Select Change the disk naming scheme from the vxdiskadm main menu to change the
disk-naming scheme that you want VxVM to use. When prompted, enter y to change the
naming scheme. This restarts the vxconfigd daemon to bring the new disk-naming
scheme into effect. Alternatively, you can change the naming scheme from the
command line.
Use the following command to select enclosure-based naming:
# vxddladm set namingscheme=ebn [persistence={yes|no}]
[use_avid=yes|no] [lowercase=yes|no]
Use the following command to select operating system-based naming:
# vxddladm set namingscheme=osn [persistence={yes|no}]
[lowercase=yes|no]
21. The optional persistence argument allows you to select whether
the names of disk devices that are displayed by VxVM remain
unchanged after disk hardware has been reconfigured and the
system rebooted. By default, enclosure-based naming is
persistent. Operating system-based naming is not persistent by
default.
22. To remove the error state for simple or
nopriv disks in the boot disk group
23. Removing and replacing disks
A replacement disk should have the same disk geometry as the disk that failed.
That is, the replacement disk should have the same bytes per sector, sectors per
track, tracks per cylinder and sectors per cylinder, same number of cylinders,
and the same number of accessible cylinders.
You can use the prtvtoc command to obtain disk information.
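A geometry comparison like the one described can be sketched as a simple check (hypothetical field names; on a live system the values would come from prtvtoc output):

```python
GEOMETRY_FIELDS = ("bytes_per_sector", "sectors_per_track",
                   "tracks_per_cylinder", "cylinders", "accessible_cylinders")

def geometry_matches(failed, replacement):
    """True if the replacement disk matches the failed disk's geometry."""
    return all(failed[f] == replacement[f] for f in GEOMETRY_FIELDS)

old = {"bytes_per_sector": 512, "sectors_per_track": 63,
       "tracks_per_cylinder": 255, "cylinders": 8894,
       "accessible_cylinders": 8892}
new = dict(old)
print(geometry_matches(old, new))   # identical geometry: suitable replacement
new["cylinders"] = 4000
print(geometry_matches(old, new))   # geometry differs: not suitable
```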
30. Dynamic new LUN addition to a
new target ID
In this case, a new group of LUNs is mapped to the host by
multiple HBA ports.
An OS device scan is issued for the LUNs to be recognized and
added to DMP control.
The high-level procedure and the VxVM commands are generic.
However, the OS commands may vary for Solaris versions.
32. To clean up the device tree after
you remove LUNs
34. Dynamic Multipathing
How DMP works
The Dynamic Multipathing (DMP) feature of Veritas Volume Manager (VxVM)
provides greater availability, reliability and performance by using path failover and load
balancing. This feature is available for multiported disk arrays from various vendors.
Multiported disk arrays can be connected to host systems through multiple paths. To
detect the various paths to a disk, DMP uses a mechanism that is specific to each
supported array type. DMP can also differentiate between different enclosures of a
supported array type that are connected to the same host system.
The multipathing policy used by DMP depends on the characteristics of the disk array.
35. DMP supports the following standard
array types:
Active/Active (A/A) :
Allows several paths to be used concurrently for I/O. Such arrays allow DMP to
provide greater I/O throughput by balancing the I/O load uniformly across the
multiple paths to the LUNs. In the event that one path fails, DMP automatically
routes I/O over the other available paths.
Asymmetric Active/Active (A/A-A):
A/A-A arrays can be accessed through secondary storage paths with little
performance degradation. Usually an A/A-A array behaves like an A/P array
rather than an A/A array. However, during failover, an A/A-A array behaves
like an A/A array.
Active/Passive (A/P):
Allows access to its LUNs (logical units; real disks or virtual disks created using
hardware) via the primary (active) path on a single controller (also known as an
access port or a storage processor) during normal operation.
36. Active/Passive in explicit failover mode or non-autotrespass
mode (A/P-F):
The appropriate command must be issued to the array to make the LUNs fail
over to the secondary path.
Active/Passive with LUN group failover (A/P-G):
For Active/Passive arrays with LUN group failover (A/P-G arrays), a group of LUNs
that are connected through a controller is treated as a single failover entity. Unlike
A/P arrays, failover occurs at the controller level, and not for individual LUNs. The
primary and secondary controller are each connected to a separate group of LUNs. If
a single LUN in the primary controller’s LUN group fails, all LUNs in that group fail
over to the secondary controller.
Concurrent Active/Passive (A/P-C)
Concurrent Active/Passive in explicit failover mode or non-autotrespass mode (A/PF-C)
Concurrent Active/Passive with LUN group failover (A/PG-C)
Variants of the A/P, A/P-F, and A/P-G array types that support concurrent I/O and load
balancing by having multiple primary paths into a controller. This functionality is provided by a
controller with multiple ports, or by the insertion of a SAN hub or switch between an array and
a controller. Failover to the secondary (passive) path occurs only if all the active primary paths
fail.
39. Displaying the paths to a disk
The vxdisk command is used to display the multipathing information for a
particular metadevice. The metadevice is a device representation of a particular
physical disk having multiple physical paths from one of the system’s HBA
controllers. In VxVM, all the physical disks in the system are represented as
metadevices with one or more physical paths.
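For example, the per-path states reported by vxdisk list for a metadevice can be summarized from captured output. This sketch assumes the standard state=enabled / state=disabled fields; the device names below are illustrative samples, not from a live system.

```shell
# Sketch: count enabled and disabled paths in captured
# `vxdisk list <device>` multipathing output.
vxdisk_paths=$(cat <<'EOF'
numpaths:   2
c1t0d0s2        state=enabled   type=primary
c4t1d0s2        state=disabled  type=secondary
EOF
)
enabled=$(printf '%s\n' "$vxdisk_paths" | grep -c 'state=enabled')
disabled=$(printf '%s\n' "$vxdisk_paths" | grep -c 'state=disabled')
echo "enabled=$enabled disabled=$disabled"
```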
48. Displaying HBA details
The vxdmpadm getctlr command displays HBA vendor details and the
Controller ID. For iSCSI devices, the Controller ID is the IQN or IEEE-format
based name. For FC devices, the Controller ID is the WWN. Because the
WWN is obtained from ESD, this field is blank if ESD is not running. ESD is a
daemon process used to notify DDL about the occurrence of events. The WWN
shown as ‘Controller ID’ maps to the WWN of the HBA port associated with
the host controller.
52. Displaying plex information
Listing plexes helps identify free plexes for building volumes.
Use the plex (–p) option to the vxprint command to list
information about all plexes. To display detailed information
about all plexes in the system, use the following command:
# vxprint -lp
To display detailed information about a specific plex, use the
following command:
# vxprint [-g diskgroup] -l plex
The -t option prints a single line of information about the
plex. To list free plexes, use the following command:
# vxprint -pt
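Free plexes appear in the vxprint -pt listing with a "-" in the volume column. The following sketch filters them out of captured sample output; the plex names are illustrative, not from a live system.

```shell
# Sketch: extract free plexes (no associated volume, "-" in the
# volume column) from captured `vxprint -pt` output.
# Record layout: pl NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE
plex_list=$(cat <<'EOF'
pl vol01-01 vol01 ENABLED ACTIVE 204800 CONCAT - RW
pl sparepl - DISABLED - 204800 CONCAT - RW
EOF
)
free_plexes=$(printf '%s\n' "$plex_list" |
    awk '$1 == "pl" && $3 == "-" { print $2 }' | xargs)
echo "free plexes: $free_plexes"
```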
60. Plex kernel states
The plex kernel state indicates the accessibility of the plex to
the volume driver which monitors it.
No user intervention is required to set these states; they are
maintained internally. On a system that is operating
properly, all plexes are enabled.
61. Attaching and associating plexes
A plex becomes a participating plex for a volume by
attaching it to a volume. (Attaching a plex associates it with
the volume and enables the plex for use.) To attach a plex to
an existing volume, use the following command:
# vxplex [-g diskgroup] att volume plex
Example:
# vxplex -g mydg att vol01 vol01-02
62. If the volume does not already exist, a plex (or multiple
plexes) can be associated with the volume when it is created
using the following command:
# vxmake [-g diskgroup] -U usetype vol volume plex=plex1[,plex2...]
For example, to create a mirrored, fsgen-type volume
named home, and to associate two existing plexes named
home-1 and home-2 with home, use the following
command:
# vxmake -g mydg -U fsgen vol home plex=home-1,home-2
63. Taking plexes offline
To take a plex OFFLINE so that repair or maintenance can be
performed on the physical disk containing subdisks of that plex, use the
following command:
# vxmend [-g diskgroup] off plex
If a disk has a head crash, put all plexes that have associated subdisks on
the affected disk OFFLINE. For example, if plexes vol01-02 and vol02-
02 in the disk group, mydg, had subdisks on a drive to be repaired, use
the following command to take these plexes offline:
# vxmend -g mydg off vol01-02 vol02-02
This command places vol01-02 and vol02-02 in the OFFLINE state, and
they remain in that state until it is changed. The plexes are not
automatically recovered on rebooting the system.
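To find every plex affected by a failed drive, the sd records in vxprint -ht output can be scanned for subdisks on that disk. A minimal sketch over illustrative sample records (the names follow the examples above, but the data is invented):

```shell
# Sketch: from captured `vxprint -ht` subdisk records, list each plex
# that has a subdisk on a given disk, so all of them can be taken
# offline with a single vxmend invocation.
# Record layout: sd NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
sd_records=$(cat <<'EOF'
sd mydg01-01 vol01-02 mydg01 0 8920560 0 c1t4d0 ENA
sd mydg02-01 vol02-02 mydg02 0 8920560 0 c1t6d0 ENA
sd mydg01-02 vol02-02 mydg01 0 8920560 8920560 c1t4d0 ENA
EOF
)
disk=mydg01
plexes=$(printf '%s\n' "$sd_records" |
    awk -v d="$disk" '$1 == "sd" && $4 == d { print $3 }' | sort -u | xargs)
echo "plexes with subdisks on $disk: $plexes"
# On a live system, these would then be taken offline with:
#   vxmend -g mydg off $plexes
```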
64. Detaching plexes
To temporarily detach one data plex in a mirrored volume,
use the following command:
# vxplex [-g diskgroup] det plex
For example, to temporarily detach a plex named vol01-02
in the disk group, mydg, and place it in maintenance mode,
use the following command:
# vxplex -g mydg det vol01-02
65. Reattaching plexes
When a disk has been repaired or replaced and is again ready for use, the plexes
must be put back online (plex state set to ACTIVE). To set the plexes to
ACTIVE, use one of the following procedures depending on the state of the
volume.
■ If the volume is currently ENABLED, use the following command to reattach
the plex:
# vxplex [-g diskgroup] att volume plex ...
For example, for a plex named vol01-02 on a volume named vol01 in the disk
group, mydg, use the following command:
# vxplex -g mydg att vol01 vol01-02
As when returning an OFFLINE plex to ACTIVE, this command starts to
recover the contents of the plex and, after the revive is complete, sets the plex
utility state to ACTIVE.
66. If the volume is not in use (not ENABLED), use the
following command to re-enable the plex for use:
# vxmend [-g diskgroup] on plex
For example, to re-enable a plex named vol01-02 in the disk
group, mydg, enter:
# vxmend -g mydg on vol01-02
67. Listing Unstartable Volumes
An unstartable volume can be incorrectly configured or have
other errors or conditions that prevent it from being started.
To display unstartable volumes, use the vxinfo command.
This command displays information about the accessibility and
usability of each volume.
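The volumes that vxinfo reports as Unstartable can be picked out of captured output with a simple filter. The volume names and states in this sketch are illustrative samples, not from a live system.

```shell
# Sketch: filter the unstartable volumes out of captured `vxinfo`
# output (NAME USETYPE STATE per line).
vxinfo_out=$(cat <<'EOF'
home    fsgen   Started
mkting  fsgen   Unstartable
src     fsgen   Started
EOF
)
unstartable=$(printf '%s\n' "$vxinfo_out" |
    awk '$3 == "Unstartable" { print $1 }' | xargs)
echo "unstartable volumes: $unstartable"
```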
68. How to recover and start a Veritas Volume
Manager logical volume where the volume is
DISABLED ACTIVE and has a plex that is
DISABLED RECOVER
# vxprint -ht -g testdg
DG NAME NCONFIG NLOG MINORS GROUP-ID
DM NAME DEVICE TYPE PRIVLEN PUBLEN STATE
RV NAME RLINK_CNT KSTATE STATE PRIMARY DATAVOLS SRL
RL NAME RVG KSTATE STATE REM_HOST REM_DG REM_RLNK
V NAME RVG KSTATE STATE LENGTH USETYPE PREFPLEX RDPOL
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE
SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
SV NAME PLEX VOLNAME NVOLLAYR LENGTH [COL/]OFF AM/NM MODE
dg testdg default default 84000 970356463.1203.alu
dm testdg01 c1t4d0s2 sliced 2179 8920560 -
dm testdg02 c1t6d0s2 sliced 2179 8920560 -
v test - DISABLED ACTIVE 17840128 fsgen - SELECT
pl test-01 test DISABLED RECOVER 17841120 CONCAT - RW
sd testdg01-01 test-01 testdg01 0 8920560 0 c1t4d0 ENA
sd testdg02-01 test-01 testdg02 0 8920560 8920560 c1t6d0 ENA
69. Change the plex test-01 to the DISABLED STALE state:
# vxmend -g diskgroup fix stale <plex_name>
For example:
# vxmend -g testdg fix stale test-01
70. # vxprint -ht -g testdg
DG NAME NCONFIG NLOG MINORS GROUP-ID
DM NAME DEVICE TYPE PRIVLEN PUBLEN STATE
RV NAME RLINK_CNT KSTATE STATE PRIMARY DATAVOLS SRL
RL NAME RVG KSTATE STATE REM_HOST REM_DG REM_RLNK
V NAME RVG KSTATE STATE LENGTH USETYPE PREFPLEX RDPOL
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE
SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
SV NAME PLEX VOLNAME NVOLLAYR LENGTH [COL/]OFF AM/NM MODE
dg testdg default default 84000 970356463.1203.alu
dm testdg01 c1t4d0s2 sliced 2179 8920560 -
dm testdg02 c1t6d0s2 sliced 2179 8920560 -
v test - DISABLED ACTIVE 17840128 fsgen - SELECT
pl test-01 test DISABLED STALE 17841120 CONCAT - RW
sd testdg01-01 test-01 testdg01 0 8920560 0 c1t4d0 ENA
sd testdg02-01 test-01 testdg02 0 8920560 8920560 c1t6d0 ENA
71. Change the plex test-01 to the
DISABLED CLEAN state:
# vxmend -g diskgroup fix clean <plex_name>
For example:
# vxmend -g testdg fix clean test-01
# vxprint -ht -g testdg
DG NAME NCONFIG NLOG MINORS GROUP-ID
DM NAME DEVICE TYPE PRIVLEN PUBLEN STATE
RV NAME RLINK_CNT KSTATE STATE PRIMARY DATAVOLS SRL
RL NAME RVG KSTATE STATE REM_HOST REM_DG REM_RLNK
V NAME RVG KSTATE STATE LENGTH USETYPE PREFPLEX RDPOL
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE
SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
SV NAME PLEX VOLNAME NVOLLAYR LENGTH [COL/]OFF AM/NM MODE
dg testdg default default 84000 970356463.1203.alu
dm testdg01 c1t4d0s2 sliced 2179 8920560 -
dm testdg02 c1t6d0s2 sliced 2179 8920560 -
v test - DISABLED ACTIVE 17840128 fsgen - SELECT
pl test-01 test DISABLED CLEAN 17841120 CONCAT - RW
sd testdg01-01 test-01 testdg01 0 8920560 0 c1t4d0 ENA
sd testdg02-01 test-01 testdg02 0 8920560 8920560 c1t6d0 ENA
72. Start the volume test:
# vxvol -g diskgroup start <volume>
For example:
# vxvol -g testdg start test
# vxprint -ht -g testdg
DG NAME NCONFIG NLOG MINORS GROUP-ID
DM NAME DEVICE TYPE PRIVLEN PUBLEN STATE
RV NAME RLINK_CNT KSTATE STATE PRIMARY DATAVOLS SRL
RL NAME RVG KSTATE STATE REM_HOST REM_DG REM_RLNK
V NAME RVG KSTATE STATE LENGTH USETYPE PREFPLEX RDPOL
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE
SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
SV NAME PLEX VOLNAME NVOLLAYR LENGTH [COL/]OFF AM/NM MODE
dg testdg default default 84000 970356463.1203.alu
dm testdg01 c1t4d0s2 sliced 2179 8920560 -
dm testdg02 c1t6d0s2 sliced 2179 8920560 -
v test - ENABLED ACTIVE 17840128 fsgen - SELECT
pl test-01 test ENABLED ACTIVE 17841120 CONCAT - RW
sd testdg01-01 test-01 testdg01 0 8920560 0 c1t4d0 ENA
sd testdg02-01 test-01 testdg02 0 8920560 8920560 c1t6d0 ENA
73. Recovering an unstartable
volume with a disabled plex in
the RECOVER state
To recover an unstartable volume with a disabled plex in the RECOVER state
Use the following command to force the plex into the OFFLINE state:
# vxmend [-g diskgroup] -o force off plex
Place the plex into the STALE state using this command:
# vxmend [-g diskgroup] on plex
If there are other ACTIVE or CLEAN plexes in the volume, use the following
command to reattach the plex to the volume:
# vxplex [-g diskgroup] att volume plex
If the volume is already enabled, resynchronization of the plex is started
immediately.
If there are no other clean plexes in the volume, use this command to make the
plex DISABLED and CLEAN:
# vxmend [-g diskgroup] fix clean plex
If the volume is not already enabled, use the following command to start it, and
perform any resynchronization of the plexes in the background:
# vxvol [-g diskgroup] -o bg start volume
If the data in the plex was corrupted, and the volume has no ACTIVE or CLEAN
redundant plexes from which its contents can be resynchronized, it must be
restored from a backup or from a snapshot image.
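The steps above can be sketched as a dry run that prints the command sequence for the branch where the volume has no other clean plex, rather than executing anything. The disk group, plex, and volume names used here are illustrative.

```shell
# Dry-run sketch of the recovery sequence for a DISABLED RECOVER plex
# when the volume has no other ACTIVE or CLEAN plex. Prints the
# commands that would be issued; it does not run them.
recover_plex() {
    dg=$1 plex=$2 vol=$3
    echo "vxmend -g $dg -o force off $plex"    # force plex OFFLINE
    echo "vxmend -g $dg on $plex"              # place plex in STALE state
    echo "vxmend -g $dg fix clean $plex"       # mark plex DISABLED CLEAN
    echo "vxvol -g $dg -o bg start $vol"       # start, resync in background
}
recover_plex mydg vol01-02 vol01
```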
74. Clearing the failing flag on a
disk
If I/O errors are intermittent rather than persistent, Veritas Volume
Manager sets the failing flag on a disk, rather than detaching the disk.
Such errors can occur due to the temporary removal of a cable,
controller faults, a partially faulty LUN in a disk array, or a disk with a
few bad sectors or tracks.
If the hardware fault is not with the disk itself (for example, it is caused
by problems with the controller or the cable path to the disk), you can
use the vxedit command to unset the failing flag after correcting the
source of the I/O error.
Warning: Do not unset the failing flag if the reason for the I/O errors is
unknown. If the disk hardware truly is failing, and the flag is cleared,
there is a risk of data loss.
75. To clear the failing flag on a disk
1. Use the vxdisk list command to find out which disks are failing:
# vxdisk list
DEVICE TYPE DISK GROUP STATUS
hdisk10 auto:simple mydg01 mydg online
hdisk11 auto:simple mydg02 mydg online failing
hdisk12 auto:simple mydg03 mydg online
. . .
2. Use the vxedit set command to clear the flag for each disk that is marked
as failing (in this example, mydg02):
# vxedit set failing=off mydg02
3. Use the vxdisk list command to verify that the failing flag has been cleared:
# vxdisk list
DEVICE TYPE DISK GROUP STATUS
hdisk10 auto:simple mydg01 mydg online
hdisk11 auto:simple mydg02 mydg online
hdisk12 auto:simple mydg03 mydg online
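This procedure can be sketched as a filter that generates the vxedit command for each disk reported as failing. The listing below is captured sample output, not from a live system, and per the warning above the generated commands should only be run once the fault is known not to be the disk itself.

```shell
# Sketch: emit a `vxedit set failing=off` command for every disk that
# captured `vxdisk list` output reports as failing.
# Column layout: DEVICE TYPE DISK GROUP STATUS
vxdisk_out=$(cat <<'EOF'
DEVICE       TYPE            DISK         GROUP        STATUS
hdisk10      auto:simple     mydg01       mydg         online
hdisk11      auto:simple     mydg02       mydg         online failing
hdisk12      auto:simple     mydg03       mydg         online
EOF
)
clear_cmds=$(printf '%s\n' "$vxdisk_out" |
    awk '$6 == "failing" { print "vxedit set failing=off " $3 }')
printf '%s\n' "$clear_cmds"
```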
76. Veritas Unstartable Volume
In this example of VxVM 4.0 on a Solaris 8 system, an array
was temporarily unavailable, causing problems with a file
system whose two plexes resided on the array.
77. bash-2.03# cd /files04
bash: cd: /files04: I/O error
The volume was in DISABLED ACTIVE state, and both plexes were in DISABLED RECOVER state.
v vol04 - DISABLED ACTIVE 29360128 SELECT - fsgen
pl vol04-01 vol04 DISABLED RECOVER 29367434 STRIPE 3/128 RW
sd appsdg01-04 vol04-01 cs_array07-f0 8392167 2797389 0/0 c1t0d0 ENA
sd appsdg07-01 vol04-01 cs_array03-f2 0 5594778 0/2797389 c4t2d0 ENA
sd appsdg07-04 vol04-01 cs_array03-f2 11189556 1396899 0/8392167 c4t2d0 ENA
sd appsdg02-04 vol04-01 cs_array07-f1 8392167 2797389 1/0 c1t1d0 ENA
sd appsdg10-02 vol04-01 cs_array06-f1 2797389 5594778 1/2797389 c5t1d0 ENA
sd appsdg10-05 vol04-01 cs_array06-f1 13986945 1396899 1/8392167 c5t1d0 ENA
sd appsdg03-04 vol04-01 cs_array07-f2 8392167 2797389 2/0 c1t2d0 ENA
sd appsdg11-02 vol04-01 cs_array06-f2 8392167 6991677 2/2797389 c5t2d0 ENA
pl vol04-02 vol04 DISABLED RECOVER 29367434 STRIPE 3/128 RW
sd appsdg04-02 vol04-02 cs_array07-f3 2797389 2797389 0/0 c1t3d0 ENA
sd appsdg04-05 vol04-02 cs_array07-f3 0 2797389 0/2797389 c1t3d0 ENA
sd appsdg04-06 vol04-02 cs_array07-f3 16784334 894159 0/5594778 c1t3d0 ENA
78. We confirmed that the storage array was available to the operating system.
# luxadm probe
Found Enclosure(s):
...
SENA Name:cs_array06 Node WWN:5080020000038ba8
Logical Path:/dev/es/ses6
Logical Path:/dev/es/ses7
# luxadm display cs_array06
SLOT FRONT DISKS (Node WWN) REAR DISKS (Node WWN)
0 On (O.K.) 2000002037094289 On (O.K.) 200000203709422e
1 On (O.K.) 2000002037093aaf On (O.K.) 2000002037094220
2 On (O.K.) 200000203709410b On (O.K.) 2000002037093ddd
3 On (O.K.) 2000002037094254 On (O.K.) 200000203709422b
4 On (O.K.) 20000020370940da On (O.K.) 2000002037094247
5 Not Installed Not Installed
6 On (O.K.) 2000002037093df0 On
80. We then followed the "Recovering an Unstartable Volume with a Disabled Plex in the RECOVER
State" procedure in the Volume Manager Troubleshooting Guide.
1. Force plex vol04-01 into the OFFLINE state.
# vxmend -g appsdg -o force off vol04-01
2. Place plex vol04-01 into the STALE state.
# vxmend -g appsdg on vol04-01
3. There are no other clean plexes in the volume, so make plex vol04-01 DISABLED and
CLEAN.
# vxmend -g appsdg fix clean vol04-01
4. Start the volume, and perform resynchronization of the plexes in the background.
# vxvol -g appsdg -o bg start vol04
At this point, the file system was unmounted, checked for file system consistency, and remounted.
# umount /files04
# mount /files04
UX:vxfs mount: ERROR: V-3-21268: /dev/vx/dsk/appsdg/vol04
is corrupted. needs checking
# fsck -F vxfs /dev/vx/rdsk/appsdg/vol04
log replay in progress
replay complete - marking super-block as CLEAN
# mount /files04
81. Restarting a Disabled Volume
If a disk failure caused a volume to be disabled, you must
restore the volume from a backup after replacing the failed
disk. Any volumes that are listed as Unstartable must be
restarted using the vxvol command before restoring their
contents from a backup. For example, to restart the volume
mkting so that it can be restored from backup, use the
following command:
# vxvol start mkting
86. Restoring a Disk Group
Configuration
The following command performs a precommit analysis of
the state of the disk group configuration, and reinstalls the
disk headers where these have become corrupted:
# /etc/vx/bin/vxconfigrestore -p [-l directory]
{diskgroup | dgid}
The disk group can be specified either by name or by ID.