This document provides information about Hitachi Data Systems software including Hitachi TrueCopy, ShadowImage, and Copy-on-Write Snapshot. It outlines software prerequisites, terms, files, and common commands for configuring and managing remote replication and in-system replication functionality. The document includes sections describing command devices, pair volumes, replication modes, configuration files, log files, and parameters for commands like paircreate, pairdisplay and horcmctl.
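Pair status checks with CCI are typically scripted. A minimal sketch, assuming CCI is installed, a HORCM instance is running, and a pair group (the name "oradb" here is hypothetical) is defined in the instance's horcm configuration file:

```python
import os
import subprocess

os.environ["HORCMINST"] = "0"  # select CCI instance 0 (started from horcm0.conf)

def pair_status(group: str) -> str:
    """Return raw pairdisplay output for a pair group; requires a running HORCM daemon."""
    result = subprocess.run(
        ["pairdisplay", "-g", group],  # -g names the pair group from the config file
        capture_output=True, text=True, check=True,
    )
    return result.stdout

print(pair_status("oradb"))  # "oradb" is an illustrative group name
```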
This document provides an overview of the Hitachi Content Platform (HCP) architecture. It describes HCP as a secure, simple and smart web-scale object storage platform that can scale from 4TB to unlimited capacity. It supports a variety of use cases including archiving, regulatory compliance, backup reduction, cloud applications, unstructured data management, and file sync and share. Key features of HCP include unprecedented capacity scaling, multi-protocol access, hybrid storage pools, strong security, extensive metadata and search capabilities, and global access topology.
The document provides an overview of database backup, restore, and recovery. It describes various types of failures that may occur including statement failures, user process failures, instance failures, media failures, and user errors. It emphasizes the importance of defining a backup and recovery strategy that considers business requirements, operational requirements, technical considerations, and disaster recovery issues to minimize data loss and downtime in the event of failures.
An introduction to Linux memory management, the advantages of using huge pages for certain applications, and a final wrap-up covering the benefits of transparent huge pages available in RHEL 6.
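As a quick illustration of the topic, huge page and transparent huge page usage can be inspected from /proc/meminfo; a minimal sketch:

```python
# Print huge page counters (explicit huge pages plus AnonHugePages, which
# reflects transparent huge page usage) from /proc/meminfo.
with open("/proc/meminfo") as f:
    for line in f:
        if line.startswith(("HugePages_", "Hugepagesize", "AnonHugePages")):
            print(line.rstrip())
```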
- Oracle Data Guard is a data protection and disaster recovery solution that maintains up to 9 synchronized standby databases to protect enterprise data from failures, disasters, errors, and corruptions.
- Data Guard uses redo apply and SQL apply technologies to synchronize primary and standby databases by transmitting redo logs from the primary and applying the redo logs on the standby databases.
- Data Guard allows role transitions like switchovers and failovers between primary and standby databases to minimize downtime during planned and unplanned outages.
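As a small illustration of monitoring such a configuration, the current role and protection mode can be read from V$DATABASE. A minimal sketch using the python-oracledb driver; the credentials and DSN are placeholders:

```python
import oracledb

# Placeholder credentials/DSN for illustration only.
conn = oracledb.connect(user="sys", password="...", dsn="primary-host/ORCL",
                        mode=oracledb.AUTH_MODE_SYSDBA)
with conn.cursor() as cur:
    cur.execute("SELECT database_role, protection_mode FROM v$database")
    role, mode = cur.fetchone()
    print(f"role={role} protection_mode={mode}")  # e.g. PRIMARY / MAXIMUM PERFORMANCE
```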
Deep Dive: a technical insider's view of NetBackup 8.1 and NetBackup Appliances (Veritas Technologies LLC)
Together, NetBackup 8.0 and 8.1 are perhaps the two most significant consecutive releases in NetBackup history. Attend this session to learn how the newly released NetBackup 8.1 builds on version 8.0 to deliver the promise of modern data protection and advanced information management like never before. This session will feature a detailed technical overview of the new security architecture in NetBackup 8.1 that keeps data secure across any network, new dedupe to the cloud capabilities that deliver industry-leading performance, instant recovery for Oracle, added support for virtual and next-gen workloads, faster and easier deployments, and many other new features and capabilities.
Percona XtraBackup is an open-source tool for performing backups of MySQL or MariaDB databases. It can create full, incremental, compressed, encrypted, and streaming backups. For full backups, it copies data files and redo logs then runs crash recovery to make the data consistent. Incremental backups only copy changed pages by tracking log sequence numbers. The backups can be prepared then restored using copy-back or move-back options.
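A minimal sketch of that full-plus-incremental cycle, driving the xtrabackup binary from Python; the backup paths are illustrative:

```python
import subprocess

def run(*args):
    subprocess.run(args, check=True)

# Full backup: copies InnoDB data files and the redo log.
run("xtrabackup", "--backup", "--target-dir=/backups/full")

# Incremental: copies only pages whose LSN advanced past the full backup's.
run("xtrabackup", "--backup", "--target-dir=/backups/inc1",
    "--incremental-basedir=/backups/full")

# Prepare: replay redo on the base (apply-log-only keeps it ready for increments),
# then merge the incremental; afterwards the backup can be copied or moved back.
run("xtrabackup", "--prepare", "--apply-log-only", "--target-dir=/backups/full")
run("xtrabackup", "--prepare", "--target-dir=/backups/full",
    "--incremental-dir=/backups/inc1")
```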
The document provides an overview of storage technology options including network attached storage (NAS), storage area networks (SANs), and discusses specific NAS and SAN products. It highlights the key features of an iSCSI SAN brick platform including software for snapshots, replication, and continuous data protection. Appliance strategies and partnerships are also summarized.
Hitachi Virtual Storage Platform and Storage Virtualization Operating System ... (Hitachi Vantara)
This document summarizes Hitachi's Virtual Storage Platform G1000 and Storage Virtualization Operating System. The SVOS allows data to be accessed continuously across sites and on mobile apps. The VSP G1000 uses a virtual storage machine architecture to provide highly available, clustered systems. It can support all-flash configurations with over 600TB of capacity or mixed flash and disk. Management is streamlined through a single view of all virtualized storage assets. The VSP G1000 also aims to redefine technology refresh cycles by allowing nondisruptive data and system migration through its global storage virtualization.
- Oracle Database is a comprehensive, integrated database management system that provides an open approach to information management.
- The Oracle architecture includes database structures like data files, control files, and redo log files as well as memory structures like the system global area (SGA) and process global area (PGA).
- Key components of the Oracle architecture include the database buffer cache, shared pool, redo log buffer, and background processes that manage instances.
MySQL Group Replication is a new 'synchronous', multi-master, auto-everything replication plugin for MySQL introduced with MySQL 5.7. It is the perfect tool for small 3-20 machine MySQL clusters to gain high availability and high performance. It provides high availability because the failure of a replica does not stop the cluster: failed nodes can rejoin and new nodes can be added in a fully automatic way, with no DBA intervention required. It provides high performance because multiple masters process writes, not just one as with MySQL Replication. Running applications on it is simple: no read-write splitting, no fiddling with eventual consistency and stale data. The cluster offers strong consistency (generalized snapshot isolation).
It is based on Group Communication principles, hence the name.
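A minimal sketch of bootstrapping the first group member from Python, assuming the group replication settings (group name, local address, and so on) are already in my.cnf; the host and credentials are placeholders:

```python
import mysql.connector

cnx = mysql.connector.connect(host="node1", user="root", password="...")
cur = cnx.cursor()

# Bootstrap the group on the first node only; later members just START GROUP_REPLICATION.
cur.execute("SET GLOBAL group_replication_bootstrap_group = ON")
cur.execute("START GROUP_REPLICATION")
cur.execute("SET GLOBAL group_replication_bootstrap_group = OFF")

cur.execute("SELECT member_host, member_state "
            "FROM performance_schema.replication_group_members")
print(cur.fetchall())  # expect this node in ONLINE state
```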
Understanding NAS (network attached storage) (sagaroceanic11)
The document discusses network attached storage and storage area networks. It covers various storage models including direct attached storage (DAS), network attached storage (NAS), storage area networks (SANs) and content addressed storage (CAS). For SANs specifically, it describes the key components which include host bus adapters, fibre cabling, fibre channel switches/hubs, storage arrays and management systems. It also discusses SAN connectivity, topologies, management functions and deployment examples.
The document describes IBM DB2's High Availability Disaster Recovery (HADR) multiple standby configuration. It allows a primary database to have one principal standby and up to two auxiliary standbys. The principal standby supports all sync modes, while auxiliary standbys use super async mode. Takeovers can occur from any standby and DB2 will automatically reconfigure other standbys to connect to the new primary if they are in its target list. The document provides details on configuration, initialization, failover behavior and an example deployment across four servers.
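A minimal sketch of pointing a primary at its standbys with the DB2 command line; the host names, ports, and database name SALES are illustrative, and the first entry in HADR_TARGET_LIST becomes the principal standby:

```python
import subprocess

TARGETS = "hostB:4001|hostC:4002|hostD:4003"  # principal first, then auxiliaries

# Assumes the db2 CLP is on PATH and the instance environment is sourced.
subprocess.run(["db2", f"update db cfg for SALES using HADR_TARGET_LIST {TARGETS}"],
               check=True)
subprocess.run(["db2", "start hadr on db SALES as primary"], check=True)
```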
1. A distributed switch functions as a single virtual switch across all associated hosts and is configured in vCenter Server at the data center level. It consists of a control plane in vCenter Server and I/O planes in the VMkernel of each ESXi host.
2. Key components of a distributed switch include distributed ports, uplinks, and port groups. Distributed ports can connect VMs or VMkernel interfaces. Uplinks associate physical NICs across hosts. Port groups define connection configurations.
3. Configuring a distributed switch involves adding the switch in vCenter Server, creating distributed port groups, and defining properties like uplink ports and multicast filtering mode. This provides a consistent network configuration template across all associated hosts.
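A minimal sketch of creating such a switch through the vSphere API with a recent pyVmomi; the host, credentials, and switch name are placeholders, and it assumes the first inventory entity is the target datacenter:

```python
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="...", disableSslCertValidation=True)
dc = si.RetrieveContent().rootFolder.childEntity[0]  # first datacenter, for illustration

# The create spec wraps a VMware DVS config spec; the control plane lives in vCenter.
spec = vim.DistributedVirtualSwitch.CreateSpec(
    configSpec=vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec(name="dvs-lab")
)
task = dc.networkFolder.CreateDVS_Task(spec)  # port groups are added afterwards
```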
This document provides an agenda and overview for a training session on Oracle Database backup and recovery. The agenda covers the purpose of backups and recovery, Oracle data protection solutions including Recovery Manager (RMAN) and flashback technologies, and the Data Recovery Advisor tool. It also discusses various types of data loss to protect against, backup strategies like incremental backups, and validating and recovering backups.
KVM performance optimization for Ubuntu (Sim Janghoon)
This document discusses various techniques for optimizing KVM performance on Linux systems. It covers CPU and memory optimization through techniques like vCPU pinning, NUMA affinity, transparent huge pages, KSM, and virtio_balloon. For networking, it discusses vhost-net, interrupt handling using MSI/MSI-X, and NAPI. It also covers block device optimization through I/O scheduling, cache mode, and asynchronous I/O. The goal is to provide guidance on configuring these techniques for workloads running in KVM virtual machines.
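Two of those techniques are easy to show in a short sketch: pinning a vCPU with virsh and checking the host's transparent huge page mode. The domain name is illustrative and libvirt is assumed to be installed:

```python
import subprocess
from pathlib import Path

DOMAIN = "guest01"  # illustrative libvirt domain name

# Pin vCPU 0 of the guest to host CPU 2: virsh vcpupin <domain> <vcpu> <cpulist>.
subprocess.run(["virsh", "vcpupin", DOMAIN, "0", "2"], check=True)

# Inspect the transparent huge page policy on the host.
thp = Path("/sys/kernel/mm/transparent_hugepage/enabled")
print(thp.read_text().strip())  # e.g. "[always] madvise never"
```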
This document discusses object storage and EMC's object storage solutions. It begins with an overview of how traditional storage is becoming inadequate to handle growing unstructured data and the advantages of object storage. Key characteristics of object storage like scalability, geo-distribution and support for large files are described. Example use cases that can benefit from object storage like global content repositories and IoT data collection are provided. The document then discusses EMC's object storage offerings like ECS and how they address the needs of these use cases through scalability, various access protocols and geo-distribution. It also covers EMC's HDFS data service and how it can address limitations of traditional HDFS.
This document provides instructions on configuring DevStack using the local.conf file. It describes how local.conf has replaced the deprecated localrc file, and discusses settings that can be configured in local.conf such as passwords, network ranges, logging options, and enabling reinstallation of OpenStack components each time stack.sh is run. Examples of local.conf configuration are provided.
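A minimal local.conf along those lines, written from Python for illustration; the password values are placeholders:

```python
from pathlib import Path

# The [[local|localrc]] meta-section replaces the deprecated localrc file.
LOCAL_CONF = """\
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
LOGFILE=/opt/stack/logs/stack.sh.log
RECLONE=yes
"""
Path("devstack/local.conf").write_text(LOCAL_CONF)  # then run ./stack.sh
```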
This document provides an overview and summary of Oracle Data Guard. It discusses the key benefits of Data Guard including disaster recovery, data protection, and high availability. It describes the different types of Data Guard configurations including physical and logical standbys. The document outlines the basic architecture and processes involved in implementing Data Guard including redo transport, apply services, and role transitions. It also summarizes some of the features and protection modes available in different Oracle database versions.
Cosco: An Efficient Facebook-Scale Shuffle Service (Databricks)
Cosco is an efficient shuffle-as-a-service that powers Spark (and Hive) jobs at Facebook warehouse scale. It is implemented as a scalable, reliable and maintainable distributed system. Cosco is based on the idea of partial in-memory aggregation across a shared pool of distributed memory. This provides vastly improved efficiency in disk usage compared to Spark's built-in shuffle. Long term, we believe the Cosco architecture will be key to efficiently supporting jobs at ever larger scale. In this talk we'll take a deep dive into the Cosco architecture and describe how it's deployed at Facebook. We will then describe how it's integrated to run shuffle for Spark, and contrast it with Spark's built-in sort-based shuffle mechanism and SOS (presented at Spark+AI Summit 2018).
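The core idea is easier to see in a toy sketch: mapper rows are buffered per reduce partition and flushed as sorted runs once a buffer fills, so each partition reaches disk in fewer, larger writes. This is a conceptual illustration only, not Cosco's actual implementation:

```python
from collections import defaultdict

BUFFER_LIMIT = 4  # rows buffered per partition before a flush (tiny, for demo)
buffers, spills = defaultdict(list), defaultdict(list)

def accept(partition: int, row):
    """Buffer a mapper output row; spill the partition as a sorted run when full."""
    buffers[partition].append(row)
    if len(buffers[partition]) >= BUFFER_LIMIT:
        spills[partition].append(sorted(buffers.pop(partition)))  # one "disk write"

for i in range(10):
    accept(i % 2, (i, f"value-{i}"))
print({p: len(runs) for p, runs in spills.items()})  # few large runs per partition
```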
This is the presentation I delivered at the Hadoop User Group Ireland meetup in Dublin on Nov 28 2015. It covers, at a glance, the architecture of GPDB and, most importantly, its features. Sorry for the colors - Slideshare is crappy with PDFs.
This document discusses various methods for performing database backups, including Recovery Manager (RMAN), Oracle Secure Backup, and user-managed backups. It covers key backup concepts like full versus incremental backups, online versus offline backups, and image copies versus backup sets. The document also provides instructions on configuring backup settings and scheduling automated database backups using RMAN and Enterprise Manager.
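A minimal sketch of an incremental strategy driven through the RMAN command line, assuming OS authentication to the target database:

```python
import subprocess

# Level 0 is the baseline image; subsequent level 1 backups copy only changed blocks.
RMAN_SCRIPT = "BACKUP INCREMENTAL LEVEL 0 DATABASE PLUS ARCHIVELOG;\nEXIT;\n"
subprocess.run(["rman", "target", "/"], input=RMAN_SCRIPT, text=True, check=True)
```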
This document discusses distributed database systems. It defines centralized, distributed, and decentralized database systems. The key topics covered include distributed database management systems (DDBMS), advantages and disadvantages of DDBMS, distributed database design involving data fragmentation, replication and allocation, functions of a DDBMS, types of DDBMS including homogeneous and heterogeneous, and database transparency and gateways. The document is presented by a group with members Zupash, Sana, Marhaba and a group leader Hira Anwar.
Highly efficient backups with Percona XtraBackup (Nilnandan Joshi)
Percona XtraBackup is an open source, free MySQL hot backup software that performs non-blocking backups for InnoDB and XtraDB databases. In this talk we'll cover the following:
- How it works with MySQL/Percona Server and what features it provides
- The differences between xtrabackup and innobackupex
- How to take full/incremental/partial backups and restore them
- How to use features like streaming, compression, remote and compact backups
- How to troubleshoot issues with xtrabackup
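As one example of the streaming feature listed above, a compressed backup can be piped to another host in xbstream format; the host name and paths are illustrative:

```python
import subprocess

# Stream a compressed backup and unpack it remotely with xbstream.
backup = subprocess.Popen(
    ["xtrabackup", "--backup", "--stream=xbstream", "--compress", "--target-dir=/tmp"],
    stdout=subprocess.PIPE,
)
subprocess.run(["ssh", "backup-host", "xbstream -x -C /backups/stream"],
               stdin=backup.stdout, check=True)
backup.stdout.close()
backup.wait()
```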
1. To create users in ODI, go to the security tab, click the add icon, provide a username and password along with expiration dates, and save.
2. New users initially have no access or profiles assigned. Profiles like CONNECT, DESIGNER, METADATA ADMIN, OPERATOR, and TOPOLOGY ADMIN must be granted from the master repository to allow access to different areas of ODI.
3. Once all necessary profiles are granted, the new user will have full access to create, view, edit and manage objects in various areas of the ODI repository like designer, metadata, operators, and connections.
Automating a PostgreSQL High Availability Architecture with Ansible (EDB)
Highly available databases are essential to organizations depending on mission-critical, 24/7 access to data. Postgres is widely recognized as an excellent open-source database, with critical maturity and features that allow organizations to scale and achieve high availability.
EDB reference architectures are designed to help new and existing users alike to quickly design a deployment architecture that suits their needs. Users can use these reference architectures as a blueprint or as the basis for a design that enhances and extends the functionality and features offered.
This webinar will explore:
- Concepts of High Availability
- Quick review of EDB reference architectures
- EDB tools to create a highly available PostgreSQL architecture
- Options for automating the deployment of reference architectures
- EDB Ansible® roles that help automate the deployment of reference architectures
- Features and capabilities of Ansible roles
- Automating the provisioning of the resources in the cloud using Terraform™
This document discusses optimizing Spark write-heavy workloads to S3 object storage. It describes problems with eventual consistency, renames, and failures when writing to S3. It then presents several solutions implemented at Qubole to improve the performance of Spark writes to Hive tables, including writing directly to the Hive warehouse location. These optimizations include parallelizing renames, writing directly to the warehouse, and making recover partitions faster by using more efficient S3 listing. Performance improvements of up to 7x were achieved.
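S3 has no atomic rename, so a "rename" is a copy plus a delete per object, and parallelizing those calls is where much of the win comes from. A minimal sketch with boto3; the bucket and prefixes are illustrative:

```python
import boto3
from concurrent.futures import ThreadPoolExecutor

s3 = boto3.client("s3")
BUCKET = "warehouse-bucket"

def rename(key: str, dest_prefix: str):
    """Emulate a rename: server-side copy, then delete the source object."""
    new_key = dest_prefix + key.rsplit("/", 1)[-1]
    s3.copy_object(Bucket=BUCKET, Key=new_key,
                   CopySource={"Bucket": BUCKET, "Key": key})
    s3.delete_object(Bucket=BUCKET, Key=key)

resp = s3.list_objects_v2(Bucket=BUCKET, Prefix="tmp/job-1/")
keys = [o["Key"] for o in resp.get("Contents", [])]
with ThreadPoolExecutor(max_workers=16) as pool:
    list(pool.map(lambda k: rename(k, "final/table/"), keys))
```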
ODA Backup Restore Utility & ODA Rescue Live Disk (Ruggero Citton)
When applying maintenance to Oracle Database Appliance, it's best practice to back up the ODA system environment (local system boot disk). DBAs have procedures to back up and recover the database, but it is also important that you are able to back up and recover the environment that runs the database. This is especially useful if you encounter an issue during patching; you can quickly restore the system disk back to the pre-patch state.
Steve Twene has over 12 years of experience as a Storage Consultant with expertise in all aspects of HDS technology and software. He has delivered numerous migration and implementation projects for financial and government institutions. Currently, he is employed by HDS as a Technical Storage Consultant, where he serves as the team lead and lead architect for storage projects at AIB Group.
This document contains a summary of Sikha Mishra's professional experience and qualifications. She worked as a Management Trainee at Tata Hitachi Construction Machinery Company from 2014-2015, where her responsibilities included secretarial work like conducting meetings, regulatory filings, and maintaining statutory records. She also assisted with legal work such as drafting agreements and handling litigation. Sikha holds a B.Com degree and is a qualified Company Secretary. She aims to build her career by contributing her skills and advancing her knowledge in an organization.
Mukesh Balani is seeking a career opportunity where he can apply his skills to continually grow the organization. He has over 10 years of experience in management roles for various organizations focused on livelihood projects and skills training. His experience includes coordinating projects for the Ministry of Rural Development in India and managing centers that provided skills training. He holds an MBA in Financial Management and has experience managing teams and ensuring proper workplace operations.
Symantec delivers on its deduplication everywhere strategy - designed to reduce data everywhere, reduce complexity, and reduce data infrastructure – by announcing Backup Exec 2010 and NetBackup 7.0.
These products both integrate deduplication technology closer to the information source at the client and at the media server to help organizations achieve significant storage and cost savings and simplify their backup and recovery operations through a unified platform.
In addition to deduplication, NetBackup 7 helps enterprise-level organizations protect, store and recover information and adds improved virtual machine protection and faster disaster recovery. Backup Exec 2010 also adds integrated archiving and improved virtual machine protection, helping mid-sized businesses protect more data and utilize less storage - overall saving them time and money.
The document provides a summary of a storage administrator's experience and skills. It summarizes over 10 years of experience working with various storage platforms such as Hitachi, HP, EMC, NetApp, and more. It also lists skills in VMware, Linux, networking, hardware troubleshooting, and tools used. The administrator is currently working as a storage admin at Wipro Technologies in Chennai, India and is seeking new opportunities.
Sowmya Devi M V is a Red Hat Linux administrator with over 8 years of experience supporting Linux, Solaris, HP-UX, and Windows servers. She has extensive experience installing, configuring, and maintaining Linux environments, including with Red Hat, Oracle Solaris, Veritas Clustering, and virtualization software like VMware. Currently she works as a Linux system administrator at DHL Supply Chain, where her responsibilities include application support, database management, scripting, and security administration.
The document provides an agenda for an Infoseminar on 3PAR storage solutions. It includes an introduction, overview of 3PAR positioning and capabilities such as virtualization, high availability, efficiency, and recovery manager for VMware. There will also be a demonstration, questions and answers, and information on next steps.
I invite you to come and listen to my presentation about how OpenStack and Gluster integrate in both Cinder and Swift.
I will give a brief description of the OpenStack storage components (Cinder, Swift and Glance), followed by an intro to Gluster, and then present the integration points and some preferred topologies and configurations between Gluster and OpenStack.
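For reference, a sketch of the Cinder side of that integration, writing the GlusterFS driver settings that older OpenStack releases used; the option names are as they appeared historically, so verify them against your release:

```python
import configparser

cfg = configparser.ConfigParser()
cfg["DEFAULT"] = {
    # Historical GlusterFS volume driver and its share list file.
    "volume_driver": "cinder.volume.drivers.glusterfs.GlusterfsDriver",
    "glusterfs_shares_config": "/etc/cinder/glusterfs_shares",
}
with open("cinder.conf", "w") as f:
    cfg.write(f)
```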
A Step-By-Step Disaster Recovery Blueprint & Best Practices for Your NetBacku... (Symantec)
In this technical session we will share a few customer tested blueprints for implementing DR strategies with NetBackup appliances showing support for onsite and offsite disaster recovery. This includes the architecture design with Symantec best practices, down to execution of the wizards and command lines needed to implement the solution.
Watch the recording of this Google+ Hangout: http://bit.ly/13oTjvp
EMC Starter Kit - IBM BigInsights - EMC Isilon (Boni Bruno)
The document provides an overview of deploying IBM BigInsights v4.0 with EMC Isilon OneFS for HDFS storage. It includes a pre-installation checklist of supported software versions and hardware requirements. The installation overview section describes prerequisites and steps to prepare the Isilon storage, Linux compute nodes, and install IBM Open Platform and value packages. It also covers security configuration and administration after deployment.
Priya Upadhyay is seeking a position that allows her to effectively utilize her knowledge and contribute to an organization's growth. She has a MBA in International Business and HR and over 5 years of work experience in international business development and customer service. Her objective is to work for a progressive organization and gain knowledge while being part of a team focused on growth.
This document provides an overview of EMC's VMAX3 storage array. Key points include:
- VMAX3 offers improved performance, scale, and simplicity over prior VMAX arrays through its new architecture and software capabilities.
- It scales from the 100K model up to the 400K and offers all-flash configurations. Software suites provide different levels of functionality.
- The new engines powering VMAX3 offer increased cores, memory, and bandwidth over prior engines. I/O modules support various host and replication interfaces.
- Features like Service Level Objectives and ProtectPoint backup simplify management and reduce risk. Hardware redundancy and non-disruptive upgrades maximize availability.
Geetha Rajesh is applying for a position as a bank cashier. She has over 3 years of experience working as a bank teller at State Bank of India in India. Her skills include competent customer service, communication, teamwork, and meeting deadlines and targets. She has an MBA in Insurance Management from Pondicherry University and experience handling various bank transactions over the counter as well as registering new customers and accounts.
Deepa Nair is seeking a position in the organization. She has a Bachelor of Engineering degree in Electrical and Computer Engineering from Scope College of Engineering, Bhopal with high marks. Her technical skills include programming in C and C++, data structures, and computer applications. She has work experience in networking through an industrial training and has completed projects in robotics, home appliance control, and switch control. She is proficient in English, Hindi, and Malayalam and has received several awards for her academic and extracurricular achievements.
This document provides an introduction to 100 questions about planning, installing, and managing VMware Server, Workstation, and ESX. It aims to answer the most common questions asked in forums and by customers. Each section addresses a different aspect of VMware and virtualization to help users become more successful with VMware products and solutions.
The document provides details about installation, upgrade, hardware requirements, supported operating systems and databases for VMware ESX Server 3.0.1 and Virtual Center 2.0.1. It discusses the major components, minimum hardware requirements for VirtualCenter Server and Virtual Infrastructure Client. It also lists the supported databases, file extensions, differences between ESX and GSX, current ESX hardware version and various virtualization products.
This document provides release notes for DEFORM-3D Version 6.1 (sp1). Key updates include discontinuing support for some older operating systems, license manager updates requiring a hardware key, issues with antivirus software interfering with the license manager, improvements to the user interface like new templates and visualization enhancements, additions to the FEM engine like ring rolling and induction heating capabilities, and fixes to user routine files and boolean operations in the GUI Pre processor.
The document discusses tools for deploying Informix instances, including the Informix Deployment Assistant (DA) and Informix Deployment Utility (DU). The DA allows users to create snapshots of Informix instances and data for deployment on other computers. The DU is then used to rapidly deploy the packaged instances. Key points covered include the components and usage of each tool, as well as limitations such as difficulty deploying production instances or supporting raw devices. The configuration file used by the DU to customize deployments is also described.
This document discusses embedded software development and image processing using the TI DaVinci platform. It provides an overview of the DaVinci SOC which includes both a GPP and DSP. It describes installing the necessary toolchain and software, setting up an NFS server, building the Linux kernel, cross-compiling programs, and provides an example of a simple image zooming application.
Cloud Firewall (CFW) Logging also known as RFD 163 is a feature where we will start logging specific kinds of firewall records in a manner that doesn’t require as many per compute node resources.
This logging will allow us to pay attention to inbound packets that drop. We want to record new TCP connections or connectionless UDP sessions in a manner that fits nicely into, and can be aggregated across, a proper Triton deployment. To activate this, a user has to opt into logging by marking a firewall rule with the "log" attribute.
The document discusses general-purpose processors and their basic architecture. It explains that general-purpose processors have a control unit and datapath that are designed to perform a variety of computation tasks through software programs. The control unit sequences through instruction cycles that involve fetching instructions from memory, decoding them, fetching operands, executing operations in the datapath, and storing results. Pipelining and other techniques can improve processor throughput and performance. The document also covers programming models and assembly-level instruction sets.
The document discusses Agnostic Device Drivers (ADD), a concept for a slimline boot firmware for Linux on Power Architecture systems. ADD uses a small virtual machine to execute bytecode programs that control low-bandwidth devices like I2C and GPIO in a platform-agnostic way. ADD programs can be packaged in the device tree or inserted at runtime, and the virtual machine can run in hosted mode or as a kernel thread. ADD aims to provide high-level programming capabilities while maintaining flexibility, performance and a small footprint. The document also discusses further opportunities for ADD like real-time support, scheduling, multi-OS compatibility and extended debugging facilities.
The document discusses new features in Informix 11.70, including:
- Table and storage space defragmentation tools to improve performance.
- Enhancements to storage space administration through utilities to generate schemas and commands.
- Tools for deploying and embedding Informix instances through the Deployment Assistant and Utility.
- Increased usability through features like automatic DBA procedures, table location, and event alarms.
Informix User Group France - 30/11/2010 - IDS 11.7 Features (Nicolas Desachy)
Informix 11.70 includes several new features to improve administration, performance, and availability. Key features include:
1) A table defragmenter (OLTR) that can reorganize tables online with no downtime.
2) Enhancements to storage provisioning and the ability to generate schemas for dbspaces, chunks, and logs.
3) An embeddability toolkit including a deployment assistant and utility to rapidly deploy packaged Informix instances.
4) Performance improvements such as forest of trees indexing, multi-index scans, and fragment-level statistics.
The document discusses Blackfin device drivers and provides an overview of the device driver model for Blackfin processors. It describes the common API, dataflow methods, and initialization process. An example UART driver application is presented to demonstrate usage of the driver API and chained dataflow method.
How to Use GSM/3G/4G in Embedded Linux Systems (Toradex)
The number of embedded devices that are connected to the internet is growing each day. Nowadays, they are mostly deployed using a wireless connection and need mobile network coverage to reach the internet. Read our next blog, which covers the various configurations for connecting a device such as the Colibri iMX6S with the Colibri Evaluation Board running Linux to the internet through a PPP (Point-to-Point Protocol) link. Read More: https://www.toradex.com/blog/how-to-use-gsm-3g-4g-in-embedded-linux-systems
This document provides an overview of the peripherals and memory architecture of the TMSLF2407 DSP controller. It describes the event managers, timers, PWM generators, ADC, CAN interface, SPI, SCI, GPIO pins, watchdog timer, PLL clock module, memory spaces, RAM, flash memory, and software tools used for development including Code Composer Studio. It then provides step-by-step instructions for using CCS to open and build a sample project, load and run the program on the DSP, view registers and memory, and debug the code in real-time mode.
This document provides instructions for installing and configuring Videonetics IVMS video management software on a server. It involves:
1. Configuring NIC bonding and installing CUDA and Docker to enable video processing capabilities.
2. Running an installation file to deploy the IVMS master server, media servers, and database.
3. Manually starting the media and master servers initially and accessing the web console to configure settings.
The full document then provides additional steps for adding media servers to the client and demonstrates the live video monitoring capabilities.
The document discusses C programming and embedded C programming concepts. It covers the differences between C and embedded C, embedded C constraints, how to make code more readable through commenting and documenting memory mapped devices. It also discusses data structures, stacks, queues, and provides code examples for stack implementation and operations using arrays.
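The array-backed stack the talk builds in C translates directly; a minimal sketch of the same idea:

```python
class ArrayStack:
    """Fixed-capacity stack over a preallocated array, as in the embedded C version."""

    def __init__(self, capacity: int):
        self._data = [None] * capacity  # fixed-size backing array
        self._top = -1                  # index of the current top element

    def push(self, value):
        if self._top + 1 == len(self._data):
            raise OverflowError("stack full")
        self._top += 1
        self._data[self._top] = value

    def pop(self):
        if self._top == -1:
            raise IndexError("stack empty")
        value = self._data[self._top]
        self._top -= 1
        return value

s = ArrayStack(4)
s.push(1); s.push(2)
assert s.pop() == 2
```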
Wouldn't it be great for a new developer on your team to have their dev environment totally set up on their first day? What about having your CI tests running in the background while you work on new features? What about having the confidence that your dev environment mirrors testing and prod? Containers enable this to become reality, along with other great benefits like keeping dependencies nice and tidy and making packaged code easier to share. Come learn about the ways containers can help you build and ship software easily.
This document discusses embedded systems and microcontrollers. It begins by defining an embedded system as a special-purpose computer system designed to perform dedicated functions as part of a larger machine. It then discusses the essential components of embedded systems including microprocessors, sensors, converters, actuators, and memory. The document goes on to compare microprocessors and microcontrollers, describing the differences in their architecture and components. It also covers embedded system applications, characteristics, and development processes. Finally, it provides details about the specific microcontroller PIC16F887A, describing its features, memory types, registers, and other components.
VMware End-User-Computing Best Practices Poster (VMware Academy)
This document provides best practices for configuring and managing various VMware Horizon and related products in a virtual desktop infrastructure (VDI) environment. It includes recommendations for installing and updating agents in the proper order, sizing infrastructure components appropriately based on the number of users and sessions, optimizing master images, balancing performance and cost considerations, and leveraging tools like App Volumes and User Environment Manager to improve management and end user experience. The document emphasizes the importance of testing, monitoring, and following established norms and limits to ensure a reliable and scalable VDI deployment.
Android 5.0 Lollipop brings huge changes compared to earlier releases.
This report includes statistics derived from the source code, along with data and hidden features uncovered through source code and git log investigation.
The document discusses SUMA, a tool that automates the download of maintenance and technology levels from a fix server on AIX systems. It provides examples of using SUMA to list configuration settings, schedule periodic downloads of the latest fixes, and download specific fixes like APARs or filesets. SUMA allows flexibility in configuring fix types, actions, scheduling, logging and generating reports for download tasks.
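A minimal sketch of two such invocations; the field names RqType and Action follow the SUMA task syntax, but treat the exact values as assumptions to verify on your AIX level:

```python
import subprocess

# Show the current global SUMA configuration.
subprocess.run(["suma", "-c"], check=True)

# Execute an immediate task that previews the latest available fixes.
subprocess.run(["suma", "-x", "-a", "RqType=Latest", "-a", "Action=Preview"],
               check=True)
```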
XPDDS17: Keynote: Shared Coprocessor Framework on ARM - Oleksandr Andrushchen... (The Linux Foundation)
With the growing interest in virtualization from big players around the world, more and more companies are choosing ARM SoCs as their target platform for running server environments. It is also known that the majority of such SoCs come with a broad range of coprocessors on the die, e.g. GPU, DSP, security engines, etc. But at the moment the only way to speed up guests with these is either using a para-virtualized approach or dedicating that HW to a specific guest.
Shared coprocessor framework for Xen aims to allow all guest OSes to benefit from this companion HW with ease while running unmodified software and/or firmware on guest side. You don’t need to worry about setting up IO ranges, interrupts, scheduling etc.: it is all covered, making support of new shared HW way faster.
As an example of the shared coprocessor framework usage a virtualized GPU will be shown.
How GenAI Can Improve Supplier Performance Management.pdf (Zycus)
Data Collection and Analysis with GenAI enables organizations to gather, analyze, and visualize vast amounts of supplier data, identifying key performance indicators and trends. Predictive analytics forecast future supplier performance, mitigating risks and seizing opportunities. Supplier segmentation allows for tailored management strategies, optimizing resource allocation. Automated scorecards and reporting provide real-time insights, enhancing transparency and tracking progress. Collaboration is fostered through GenAI-powered platforms, driving continuous improvement. NLP analyzes unstructured feedback, uncovering deeper insights into supplier relationships. Simulation and scenario planning tools anticipate supply chain disruptions, supporting informed decision-making. Integration with existing systems enhances data accuracy and consistency. McKinsey estimates GenAI could deliver $2.6 trillion to $4.4 trillion in economic benefits annually across industries, revolutionizing procurement processes and delivering significant ROI.
The Power of Visual Regression Testing_ Why It Is Critical for Enterprise App... (kalichargn70th171)
Visual testing plays a vital role in ensuring that software products meet the aesthetic requirements specified by clients in functional and non-functional specifications. In today's highly competitive digital landscape, users expect a seamless and visually appealing online experience. Visual testing, also known as automated UI testing or visual regression testing, verifies the accuracy of the visual elements that users interact with.
Unlock the Secrets to Effortless Video Creation with Invideo: Your Ultimate G... (The Third Creative Media)
"Navigating Invideo: A Comprehensive Guide" is an essential resource for anyone looking to master Invideo, an AI-powered video creation tool. This guide provides step-by-step instructions, helpful tips, and comparisons with other AI video creators. Whether you're a beginner or an experienced video editor, you'll find valuable insights to enhance your video projects and bring your creative ideas to life.
What’s new in VictoriaMetrics - Q2 2024 Update (VictoriaMetrics)
These slides were presented during the virtual VictoriaMetrics User Meetup for Q2 2024.
Topics covered:
1. VictoriaMetrics development strategy
* Prioritize bug fixing over new features
* Prioritize security, usability and reliability over new features
* Provide good practices for using existing features, as many of them are overlooked or misused by users
2. New releases in Q2
3. Updates in LTS releases
Security fixes:
● SECURITY: upgrade Go builder from Go1.22.2 to Go1.22.4
● SECURITY: upgrade base docker image (Alpine)
Bugfixes:
● vmui
● vmalert
● vmagent
● vmauth
● vmbackupmanager
4. New Features
* Support SRV URLs in vmagent, vmalert, vmauth
* vmagent: global aggregation and relabeling
* Stream aggregation
- Add rate_sum aggregation output
- Add rate_avg aggregation output
- Reduce the number of objects allocated on the heap during deduplication and aggregation by up to 5 times! The change reduces CPU usage.
* Vultr service discovery
* vmauth: backend TLS setup
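A minimal sketch of a stream aggregation config using the rate_sum/rate_avg outputs listed above; the match/interval/outputs field names follow the VictoriaMetrics stream aggregation docs, while the exact flag name is an assumption to check against your vmagent version:

```python
from pathlib import Path

AGG_CONFIG = """\
- match: 'http_requests_total'
  interval: 1m
  outputs: [rate_sum, rate_avg]
"""
Path("stream_aggr.yml").write_text(AGG_CONFIG)
# e.g. vmagent -streamAggr.config=stream_aggr.yml -remoteWrite.url=...
```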
5. Let's Encrypt support
All the VictoriaMetrics Enterprise components support automatic issuing of TLS certificates for public HTTPS server via Let’s Encrypt service: https://docs.victoriametrics.com/#automatic-issuing-of-tls-certificates
6. Performance optimizations
● vmagent: reduce CPU usage when sharding among remote storage systems is enabled
● vmalert: reduce CPU usage when evaluating high number of alerting and recording rules.
● vmalert: speed up retrieving rules files from object storages by skipping unchanged objects during reloading.
7. VictoriaMetrics k8s operator
● Add new status.updateStatus field to the all objects with pods. It helps to track rollout updates properly.
● Add more context to the log messages. It must greatly improve debugging process and log quality.
● Changed error handling for reconcile. The operator sends Events to the Kubernetes API if any error happens during object reconciliation.
See changes at https://github.com/VictoriaMetrics/operator/releases
8. Helm charts: charts/victoria-metrics-distributed
This chart sets up multiple VictoriaMetrics cluster instances across multiple Availability Zones:
● Improved reliability
● Faster read queries
● Easy maintenance
9. Other Updates
● Dashboards and alerting rules updates
● vmui interface improvements and bugfixes
● Security updates
● Add release images built from the scratch image. Such images can be preferable in environments with higher security standards
● Many minor bugfixes and improvements
● See more at https://docs.victoriametrics.com/changelog/
Also check the new VictoriaLogs PlayGround https://play-vmlogs.victoriametrics.com/
Boost Your Savings with These Money Management Apps (Jhone kinadey)
A money management app can transform your financial life by tracking expenses, creating budgets, and setting financial goals. These apps offer features like real-time expense tracking, bill reminders, and personalized insights to help you save and manage money effectively. With a user-friendly interface, they simplify financial planning, making it easier to stay on top of your finances and achieve long-term financial stability.
Nashik's top web development company, Upturn India Technologies, crafts innovative digital solutions for your success. Partner with us and achieve your goals.
In this infographic, we have explored cost-effective strategies for iOS app development, focusing on building high-quality apps within a budget. Key points covered include prioritizing essential features, leveraging existing tools and libraries, adopting cross-platform development approaches, optimizing for a Minimum Viable Product (MVP), and integrating with cloud services and third-party APIs. By implementing these strategies, businesses and developers can create functional and engaging iOS apps while minimizing development costs and time-to-market.
Stork Product Overview: An AI-Powered Autonomous Delivery Fleet (Vince Scalabrino)
Imagine a world where, instead of blue and brown trucks dropping parcels on our porches, a buzzing drove of drones delivered our goods. Now imagine those drones are controlled by 3 purpose-built AIs designed to ensure all packages are delivered as quickly and as economically as possible. That's what Stork is all about.
A Comprehensive Guide on Implementing Real-World Mobile Testing Strategies fo...kalichargn70th171
In today's fiercely competitive mobile app market, the role of the QA team is pivotal for continuous improvement and sustained success. Effective testing strategies are essential to navigate the challenges confidently and precisely. Ensuring the perfection of mobile apps before they reach end-users requires thoughtful decisions in the testing plan.
A neural network is a machine learning program, or model, that makes decisions in a manner similar to the human brain, by using processes that mimic the way biological neurons work together to identify phenomena, weigh options and arrive at conclusions.
Streamlining End-to-End Testing Automation with Azure DevOps Build & Release Pipelines
Automating end-to-end (e2e) test for Android and iOS native apps, and web apps, within Azure build and release pipelines, poses several challenges. This session dives into the key challenges and the repeatable solutions implemented across multiple teams at a leading Indian telecom disruptor, renowned for its affordable 4G/5G services, digital platforms, and broadband connectivity.
Challenge #1. Ensuring Test Environment Consistency: Establishing a standardized test execution environment across hundreds of Azure DevOps agents is crucial for achieving dependable testing results. This uniformity must seamlessly span from Build pipelines to various stages of the Release pipeline.
Challenge #2. Coordinated Test Execution Across Environments: Executing distinct subsets of tests using the same automation framework across diverse environments, such as the build pipeline and specific stages of the Release Pipeline, demands flexible and cohesive approaches.
Challenge #3. Testing on Linux-based Azure DevOps Agents: Conducting tests, particularly for web and native apps, on Azure DevOps Linux agents lacking browser or device connectivity presents specific challenges in attaining thorough testing coverage.
This session delves into how these challenges were addressed through:
1. Automate the setup of essential dependencies to ensure a consistent testing environment.
2. Create standardized templates for executing API tests, API workflow tests, and end-to-end tests in the Build pipeline, streamlining the testing process.
3. Implement task groups in Release pipeline stages to facilitate the execution of tests, ensuring consistency and efficiency across deployment phases.
4. Deploy browsers within Docker containers for web application testing, enhancing portability and scalability of testing environments.
5. Leverage diverse device farms dedicated to Android, iOS, and browser testing to cover a wide range of platforms and devices.
6. Integrate AI technology, such as Applitools Visual AI and Ultrafast Grid, to automate test execution and validation, improving accuracy and efficiency.
7. Utilize AI/ML-powered central test automation reporting server through platforms like reportportal.io, providing consolidated and real-time insights into test performance and issues.
These solutions not only facilitate comprehensive testing across platforms but also promote the principles of shift-left testing, enabling early feedback, implementing quality gates, and ensuring repeatability. By adopting these techniques, teams can effectively automate and execute tests, accelerating software delivery while upholding high-quality standards across Android, iOS, and web applications.
Ensuring Efficiency and Speed with Practical Solutions for Clinical OperationsOnePlan Solutions
Clinical operations professionals encounter unique challenges. Balancing regulatory requirements, tight timelines, and the need for cross-functional collaboration can create significant internal pressures. Our upcoming webinar will introduce key strategies and tools to streamline and enhance clinical development processes, helping you overcome these challenges.
Building the Ideal CI-CD Pipeline_ Achieving Visual PerfectionApplitools
Explore the advantages of integrating AI-powered testing into the CI/CD pipeline in this session from Applitools engineer Brandon Murray. More information and session materials at applitools.com
Discover how shift-left strategies and advanced testing in CI/CD pipelines can enhance customer satisfaction and streamline development processes, including:
• Significantly reduced time and effort needed for test creation and maintenance compared to traditional testing methods.
• Enhanced UI coverage that eliminates the necessity for manual testing, leading to quicker and more effective testing processes.
• Effortless integration with the development workflow, offering instant feedback on pull requests and facilitating swifter product releases.
• -x drivescan: Displays the relationship between the Thunder 9500 V Series system's LDEVs and the Windows hard drives
• -x env: Displays environment variables
• -x findcmddev: Searches for Command Devices
• -x mount: Displays/mounts specified drives
• -x portscan: Displays devices on specified port(s)
• -x setenv: Sets environment variables
• -x sleep: Causes CCI to wait/sleep for the specified number of seconds
• -x sync: Flushes unwritten data from Windows to the specified devices. The logical and physical devices to be synchronized must be offline to all other applications. The sync does not propagate to a specified drive that has a directory mount on the Windows 2000/2003 system.
• -x umount: Unmounts the specified logical drive and deletes the drive letter. Before deleting the drive letter, this subcommand executes sync internally for the specified logical drive and flushes unwritten data.
• -x usetenv: Unsets (deletes) environment variables
Details of CCI commands
horcctl:
-d Sets the trace control of the client (command side)
-c Sets the trace control of HORCM
-S Shuts down HORCM
-D Displays the Command Device name currently used by HORCM. If the Command Device is blocked due to online maintenance (microcode replacement) of the Thunder 9500 V Series system, you can check the Command Device name in advance using this option.
-C Changes the control device of HORCM
-u <unitid> Specifies the unitid for the '-D' or '-C' options
-ND Shows the network address and port name currently used
-NC Changes the network address of HORCM
-g <group> Specifies the group name in the HORCM file for the '-ND' or '-NC' options
-l <level> Sets the trace level
-b <y/n> Sets the trace mode
-s <size(KB)> Sets the trace size
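For example, to check which Command Device instance 0 is currently using before online microcode maintenance (a minimal sketch; assumes HORCM instance 0 is already running):
C:\HORCM\etc>set HORCMINST=0
C:\HORCM\etc>horcctl -D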
horcmshutdown: Stops the HORCM application
One (1) CCI instance:
• UNIX: # horcmshutdown.sh
• Windows: > horcmshutdown
Two (2) CCI instances called 0 and 1:
• UNIX: # horcmshutdown.sh 0 1
• Windows: > horcmshutdown 0 1
horcmstart {inst}: Starts the HORCM application
One (1) CCI instance:
• UNIX: # horcmstart.sh
• Windows: > horcmstart
Two (2) CCI instances called 0 and 1:
• UNIX: # horcmstart.sh 0 1
• Windows: > horcmstart 0 1
Notes:
If the argument has no instance number, horcmstart starts one (1) HORCM and uses the environment variables set by the user.
For UNIX-based platforms, if HORCMINST is specified:
• HORCM_CONF = /etc/horcm*.conf (* is the instance number)
• HORCM_LOG = /HORCM/log*/curlog
• HORCM_LOGS = /HORCM/log*/tmplog
For UNIX-based platforms, if no HORCMINST is specified:
• HORCM_CONF = /etc/horcm.conf
• HORCM_LOG = /HORCM/log/curlog
• HORCM_LOGS = /HORCM/log/tmplog
For the Windows NT®/2000 platform, if HORCMINST is specified:
• HORCM_CONF = \WINNT\horcm*.conf (* is the instance number)
• HORCM_LOG = \HORCM\log*\curlog
• HORCM_LOGS = \HORCM\log*\tmplog
For the Windows NT/2000 platform, if no HORCMINST is specified:
• HORCM_CONF = \WINNT\horcm.conf
• HORCM_LOG = \HORCM\log\curlog
• HORCM_LOGS = \HORCM\log\tmplog
If HORCM fails to start:
• Check the contents of the horcm*.conf files
• Verify that the Command Device(s) is valid
inqraid:
• [-inqdump] Dump option for STD inquiry info
• [-fx] Display of the LDEV# in hexadecimal
• [-fp] Display of the H.A.R.D volume with '*' added
• [-fl] Display of the LDEV GUARD volume with '*' added
• [-fg] Display of the host group ID with port
• [-fw] Display of the volstat in wide format
• [-CLI] Display in the command line interface (CLI) format
• [-CLIWP] Displays the Port_WWN for this host in the CLI format
• [-CLIWN] Displays the Node_WWN for this host in the CLI format
• [-sort] Displays and sorts by Serial# and LDEV#
• [-sort -CM] Displays and sorts the cmddev by Serial# in horcm.conf image
• [-fv] Display of Volume{GUID} via $Volume for Windows 2000
• [No arg] Finds the LDEV from harddisk#... in the STDIN
• [-find[c]] Finds the group by using pairdisplay from harddisk#... in the STDIN
• [-gplba] Obtains the logical block access (LBA) for the usable partition from disk#... in the STDIN
• [-gvinf] Obtains a drive layout and makes a layout file from disk#... in the STDIN
• [-svinf[=PTN]] Sets a drive layout to disk[=PTN]# in the STDIN
• [harddisk#...] Finds the LDEV from args (harddisk#...)
• [$DosDevice] Finds the LDEV from DosDevice
$LETALL -> Specifies all of the Drive Letters
$C: -> Specifies the 'C:' drive
$Phys -> Specifies all Physical Drives
$Volume -> Specifies all LDM Vols for Win2K
$Volume{...} -> Specifies a Volume{...} for Win2K
• [echo hd0-10 | inqraid] Finds the LDEV from harddisk#... of the echo
• [echo hd0-10 | inqraid -find] Finds the group from harddisk#... of the echo
• [inqraid $LETALL -CLI] Finds the LDEV from all of the Drive letters
• [inqraid $Volume -CLI] Finds the LDEV from all of the LDM Volumes for Win2K
• [inqraid $Phys -gvinf -CLI] Gets a drive layout and makes a layout file from all of the Physical Drives
• [echo hd0-10 | inqraid -svinf] Sets a drive layout to disk#0-10
• [ls /dev/rdsk/* | /HORCM/usr/bin/inqraid] Finds the LDEV from /dev/rdsk/... of the ls
• [ls /dev/rdsk/* | /HORCM/usr/bin/inqraid -find] Finds the group from /dev/rdsk/... of the ls
• [vxdisk list | grep vg_name | /HORCM/usr/bin/inqraid] Finds the LDEV from vg_name of the vxdisk
• [pairdisplay -l -fd -g VG1 | inqraid -svinf=Harddisk] Sets a drive layout to the disk# related to a group (VG1)
paircreate:
• -g <group> Specifies the group_name
• -d <pair Vol> Specifies the pair_volume_name
• -d[g] <drive#(0-N)> [mun#] Specifies the Physical drive# without the '-g' option
• -d[g] <Seq#> <ldev#> [mun#] Specifies the LDEV# in the RAID without the '-g' option
• -I[#] Sets HORCMINST#
• -IH[#] or -ITC[#] Sets HORC mode [and HORCMINST#]
• -IM[#] or -ISI[#] Sets MRCF mode [and HORCMINST#]
• -f <fence> [CTGID] Specifies the fence_level (never/status/data/async) [TrueCopy and Universal Replicator software]
• -c <size> Specifies the track size for copy (1-15)
• -split [ShadowImage and Copy-on-Write software only] Splits the paired volume after the initial copy operation is complete
• -m <mode> Specifies the create mode: <cyl|trk> sets the bitmap mode for the SVOL; <grp [CTG# (0-127)]> [ShadowImage software only] makes a group for splitting all ShadowImage software pairs specified in a group, like TrueCopy Asynchronous software; <cc> [ShadowImage software only] specifies the Hitachi Volume Migration software (CruiseControl) mode for volume migration
• -nocopy Sets the No_copy_mode (TrueCopy software only)
• -nomsg Does not display messages of paircreate
• -pid <id#> Specifies the pool ID for pooling the SVOL (Copy-on-Write software for enterprise storage systems)
• -jp <id> (HORC/Universal Replicator software only): Universal Replicator software basically has the same characteristic as a TrueCopy Asynchronous software Remote Copy Consistency Group; therefore, this option is used to specify a Journal Group ID for the PVOL
• -js <id> (HORC/Universal Replicator software only): This option is used to specify a Journal Group ID for the SVOL. Both the -jp <id> and -js <id> options are valid when the fence level is set to "ASYNC", and each Journal Group ID is automatically bound to the CTGID
• -vl Specifies the vector (Local_node)
• -vr Specifies the vector (Remote_node)
Warnings for paircreate using CCI:
• Use -vl if this server has the HORCM instance that controls the PVOLs. However, if multiple HORCM instances are running on this server, make sure the correct env variable is set. (Best practice is to use horcm instance 0 and set HORCMINST=0.)
• Use -vr if this server does not have the HORCM instance that controls the PVOLs. If multiple HORCM instances are running on this server, make sure the correct env variable is set, because this server will use the remote instance specified in the HORCM_INST ip_address of the horcm*.conf file selected by the local HORCMINST env variable.
• Before issuing the paircreate command, verify that the SVOL is not mounted on any system. If the SVOL is mounted after paircreate, delete the pair, unmount the SVOL, and reissue the paircreate command.
Note: HiCommand will not create pairs if the SVOL is mounted.
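Putting the options above together, a minimal paircreate sketch (VG01 matches the sample horcm*.conf files later in this reference; the fence level and copy pace shown are illustrative):
C:\HORCM\etc>set HORCMINST=0
C:\HORCM\etc>paircreate -g VG01 -vl -f never -c 15
For a ShadowImage pair, the MRCF environment is set first and the TrueCopy-only fence level is omitted (VG02 is a hypothetical ShadowImage group defined the same way in horcm*.conf):
C:\HORCM\etc>set HORCC_MRCF=1
C:\HORCM\etc>paircreate -g VG02 -vl -split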
paircurchk:
The paircurchk command assumes that the target is an SVOL; it is used to check consistency and is used in conjunction with the horctakeover command.
• -g <group> Specifies the group_name
• -d <pair Vol> Specifies the pair_volume_name
• -d[g] <drive#(0-N)> [mun#] Specifies the Physical drive# without the '-g' option
• -d[g] <Seq#> <ldev#> [mun#] Specifies the LDEV# in the RAID without the '-g' option
• -I[#] Sets HORCMINST#
• -IH[#] or -ITC[#] Sets HORC mode [and HORCMINST#]
• -IM[#] or -ISI[#] Sets MRCF mode [and HORCMINST#]
• -nomsg Does not display messages of paircurchk
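A minimal usage sketch, typically run against the SVOL side before a horctakeover (group name from the sample configuration):
C:\HORCM\etc>paircurchk -g VG01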
pairdisplay:
Displays the pairing status, which enables you to verify the completion of pair creation or pair resynchronization. This command is also used to confirm the configuration of the paired volume connection path (the physical link of paired volumes among the servers).
• -x <command> <arg> ... Specifies the SUB command
• -g <group> Specifies the group_name
• -d <pair Vol> Specifies the pair_volume_name
• -d[g] <drive#(0-N)> [mun#] Specifies the Physical drive# without the '-g' option
• -d[g] <Seq#> <ldev#> [mun#] Specifies the LDEV# in the RAID without the '-g' option
• -I[#] Sets HORCMINST#
• -IH[#] or -ITC[#] Sets HORC mode [and HORCMINST#]
• -IM[#] or -ISI[#] Sets MRCF mode [and HORCMINST#]
• -c Specifies the pair_check
• -l Specifies local only
• -m <mode> Specifies the display_mode (cas/all) for cascading configurations
• -f[x] Specifies display of the LDEV# in hexadecimal
• -f[c] Specifies display of the COPY rate
• -f[d] Specifies display of the Device file name
• -f[m] Specifies display of the Bitmap table
• -f[e] Specifies display of the External LUN mapped to the LDEV
• -CLI Specifies display in the CLI format
• -FHORC Specifies the force operation for cascading HORC_VOL
• -FMRCF [mun#] Specifies the force operation for cascading MRCF_VOL
• -v jnl[t] Specifies display of the journal information interconnected to the group (Universal Replicator software only)
• -v ctg Specifies display of the CT group information interconnected to the group (TrueCopy and Universal Replicator software only)
• -v smk Specifies display of the Marker on the volume
pairevtwait:
• -x <command> <arg> ... Specifies the SUB command
• -g <group> Specifies the group_name
• -d <pair Vol> Specifies the pair_volume_name
• -d[g] <drive#(0-N)> [mun#] Specifies the Physical drive# without the '-g' option
• -d[g] <Seq#> <ldev#> [mun#] Specifies the LDEV# in the RAID without the '-g' option
• -I[#] Sets HORCMINST#
• -IH[#] or -ITC[#] Sets HORC mode [and HORCMINST#]
• -IM[#] or -ISI[#] Sets MRCF mode [and HORCMINST#]
• -nomsg Does not display messages of pairevtwait
• -nowait Sets the No_wait_mode
• -s <status> ... Specifies the status_name (smpl/copy/pair/psus/psue)
• -t <timeout> [interval] Specifies the wait time
• -l Specifies local only
• -FHORC Specifies the force operation for cascading HORC_VOL
• -FMRCF [mun#] Specifies the force operation for cascading MRCF_VOL
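For example, a backup script can block until a group reaches PAIR after a paircreate (a sketch; the timeout value is illustrative, and its units depend on how your CCI version interprets -t):
C:\HORCM\etc>pairevtwait -g VG01 -s pair -t 300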
pairmon:
• -xh Help/Usage for SUB commands
• -x <command> <arg> ... Specifies the SUB command
• -D Sets the Default_mode
• -I[#] Sets HORCMINST#
• -IH[#] or -ITC[#] Sets HORC mode [and HORCMINST#]
• -IM[#] or -ISI[#] Sets MRCF mode [and HORCMINST#]
• -allsnd Sets the All_send_mode
• -resevt Sets the Reset_mode
• -nowait Sets the No_wait_mode
• -s <status> ... Specifies the status_name (smpl/copy/pair/psus/psue)
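A minimal monitoring sketch that reports each pair-status transition event from HORCM, using the flags listed above:
C:\HORCM\etc>pairmon -allsnd -nowait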
pairresync:
• -x <command> <arg> ... Specifies the SUB command
• -g <group> Specifies the group_name
• -d <pair Vol> Specifies the pair_volume_name
• -d[g] <drive#(0-N)> [mun#] Specifies the Physical drive# without the '-g' option
• -d[g] <Seq#> <ldev#> [mun#] Specifies the LDEV# in the RAID without the '-g' option
• -I[#] Sets HORCMINST#
• -IH[#] or -ITC[#] Sets HORC mode [and HORCMINST#]
• -IM[#] or -ISI[#] Sets MRCF mode [and HORCMINST#]
• -nomsg Does not display messages of pairresync
• -c <size> Specifies the track size for copy (1-15)
• -l Specifies local only
• -restore Specifies Re_sync from SVOL to PVOL [ShadowImage software only]
• -FHORC Specifies the force operation for cascading HORC_VOL
• -FMRCF [mun#] Specifies the force operation for cascading MRCF_VOL
• -swapp Specifies Swap_resync for changing the PVOL to an SVOL, issued on the PVOL side
• -swaps Specifies Swap_resync for changing the SVOL to a PVOL, issued on the SVOL side
Warnings for pairresync using CCI:
• Ensure the SVOL is not mounted prior to issuing the pairresync
• Ensure the PVOL is not mounted prior to issuing the pairresync with the restore argument
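For example (a sketch using the sample group; per the options above, the -swaps form is issued from the SVOL side after a takeover):
C:\HORCM\etc>pairresync -g VG01 -c 15
C:\HORCM\etc>pairresync -g VG01 -swaps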
pairsplit:
• -x <command> <arg> ... Specifies the SUB command
• -g <group> Specifies the group_name
• -d <pair Vol> Specifies the pair_volume_name
• -d[g] <drive#(0-N)> [mun#] Specifies the Physical drive# without the '-g' option
• -d[g] <Seq#> <ldev#> [mun#] Specifies the LDEV# in the RAID without the '-g' option
• -I[#] Sets HORCMINST#
• -IH[#] or -ITC[#] Sets HORC mode [and HORCMINST#]
• -IM[#] or -ISI[#] Sets MRCF mode [and HORCMINST#]
• -nomsg Does not display messages of pairsplit
• -r Specifies the split_mode (Read_Only)
• -rw Specifies the split_mode (Read_Write)
• -S Specifies the split_mode (Simplex)
• -R Specifies the split_mode (Svol_Simplex)
• -P Specifies the split_mode (Pvol_Suspend)
• -l Specifies local only
• -FHORC Specifies the force operation for cascading HORC_PVOL
• -FMRCF [mun#] Specifies the force operation for cascading MRCF_PVOL
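A minimal sketch: split the group read-only, then later dissolve the pairs entirely (flags as listed above, group name from the sample configuration):
C:\HORCM\etc>pairsplit -g VG01 -r
C:\HORCM\etc>pairsplit -g VG01 -S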
pairvolchk:
• -x <command> <arg> ... Specifies the SUB command
• -g <group> Specifies the group_name
• -d <pair Vol> Specifies the pair_volume_name
• -d[g] <drive#(0-N)> [mun#] Specifies the Physical drive# without the '-g' option
• -d[g] <Seq#> <ldev#> [mun#] Specifies the LDEV# in the RAID without the '-g' option
• -I[#] Sets HORCMINST#
• -IH[#] or -ITC[#] Sets HORC mode [and HORCMINST#]
• -IM[#] or -ISI[#] Sets MRCF mode [and HORCMINST#]
• -nomsg Does not display messages of pairvolchk
• -c Specifies the Remote_volume_check
• -ss Specifies the encode of pair_status
• -FHORC Specifies the force operation for cascading HORC_VOL
• -FMRCF [mun#] Specifies the force operation for cascading MRCF_VOL
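For example, to return the encoded pair status of the volumes in a group (the encoded values are listed under Return Codes later in this reference):
C:\HORCM\etc>pairvolchk -g VG01 -ss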
raidar:
• -I[#] Sets HORCMINST#
• -IH[#] or -ITC[#] Sets HORC mode [and HORCMINST#]
• -IM[#] or -ISI[#] Sets MRCF mode [and HORCMINST#]
• -x <command> <arg> ... Specifies the SUB command
• -s <interval> [count] Specifies the start and interval (sec)
• -sm <interval> [count] Specifies the start and interval (min)
• -p <port> <targ> <lun> Specifies the port (CL1-A or cl1-a... cl3-a or CL3-A... for the expansion (Lower) port), target_ID, and LUN#
• -pd[g] <drive#(0-N)> Specifies the Physical drive#
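A usage sketch assembled from the options above (the port/TID/LU values follow the sample configuration later in this reference; the 3-second interval and count of 10 are illustrative):
C:\HORCM\etc>raidar -p CL1-B 1 24 -s 3 10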
raidqry:
• -I[#] Sets HORCMINST#
• -IH[#] or -ITC[#] Sets HORC mode [and HORCMINST#]
• -IM[#] or -ISI[#] Sets MRCF mode [and HORCMINST#]
• -x <command> <arg> ... Specifies the SUB command
• -l Specifies the local query
• -r <group> Specifies the remote query
• -f Specifies display for a floatable host
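For example, to list the storage systems and instance information visible to the local host:
C:\HORCM\etc>raidqry -l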
raidscan:
• -I[#] Sets HORCMINST#
• -IH[#] or -ITC[#] Sets HORC mode [and HORCMINST#]
• -IM[#] or -ISI[#] Sets MRCF mode [and HORCMINST#]
• -x <command> <arg> ... Specifies the SUB command
• -p <port> [hgrp#] Specifies the port_name (CL1-A or cl1-a... cl3-a or CL3-A... for the expansion (Lower) port)
• -pd[g] <drive#(0-N)> Specifies the Physical drive#
• -pi <'strings'> Specifies the 'strings' for the -find option without using STDIN
• -t <targ> Specifies the target_ID
• -l <lun> Specifies the LUN#
• -m <mun> Scans the specified MU# only
• -s <Seq#> Specifies the Seq# (Serial#) of the RAID
• -f[f] Display of the volume type
• -f[x] Display of the LDEV# in hexadecimal
• -f[g] Display of the Group name
• -f[d] Display of the Device file name
• -f[e] Display of the External LUN only
• -CLI Specifies display in the CLI format
• -find[g] Finds the LDEV from the Physical drive# via STDIN
• -find inst [-fx] Registers the Physical drive via STDIN to HORCM and permits its volumes in horcm.conf in Protection Mode
• -find verify [mun#] [-f[x][d]] Finds the relation between the Group on horcm.conf and the Physical drive via STDIN
• -find[g] conf [mun#] [-g name] Displays the Physical drive in horcm.conf image
• -find sync [mun#] [-g name] Flushes the system buffer associated with a group
• For example: [C:\HORCM\etc>raidscan -pi $Phys -find]
DEVICE_FILE UID S/F PORT TARG LUN SERIAL LDEV PRODUCT_ID
Harddisk0 0 F CL2-A 25 0 2496 16 DF600F-CM
Harddisk1 0 F CL2-A 25 1 2496 18 DF600F
Harddisk2 0 F CL2-A 25 2 2496 19 DF600F
• For example: [raidscan -pi hd0-10 -find [-fx]]
• For example: [echo hd0-10 | raidscan -find [-fx]]
• For example: [echo $Phys | raidscan -find [-fx]]
The $variable is specified as follows:
$LETALL -> All of the Drive Letters
$Phys -> All of the Physical Drives
$Volume -> All of the LDM Volumes for Windows 2000
Details of Windows Sub Commands
-x drivescan: -x drivescan drive#(0-N)
Example of displaying Windows drives 0-20:
C:\HORCM\etc>raidscan -x drivescan harddisk0, 20
-x findcmddev: -x findcmddev drive#(0-N)
Example of searching for a Command Device in drives 0-20:
C:\HORCM\etc>raidscan -x findcmddev hdisk0, 20
-x mount:
-x mount drive: hdisk# partition# ... (for Windows NT®)
-x mount drive: Volume#(0-N) ... (for Windows 2000/2003)
-x mount drive: [[directory]] Volume#(0-N) ... (for Windows 2000/2003)
Example of displaying all mounted filesystems:
C:\HORCM\etc>raidscan -x mount
-x portscan: -x portscan port#(0-N)
Example of displaying drives on ports 0-20:
C:\HORCM\etc>raidscan -x portscan port0, 20
-x sync:
-x sync A: B: C: ...
-x sync all
-x sync drive#(0-N) ...
-x sync Volume#(0-N) ... (Windows 2000/2003 systems)
-x sync D:directory or directory pattern ... (Windows 2000/2003 systems only)
Example of flushing data to drive D:
C:\HORCM\etc>pairsplit -x sync D:
-x umount:
-x umount drive:
-x umount drive:[[directory]] ... (Windows 2000/2003)
Example of unmounting F: and G: and then splitting the volume group called oradb:
C:\HORCM\etc>pairsplit -x umount F: -x umount G: -g oradb
Environment Variables
HORCC_LOG:
• Specifies the command log directory name; default = /HORCM/log* (* = instance number)
HORCC_MRCF:
• Required for ShadowImage or Copy-on-Write software [formerly QuickShadow]
• To display on Windows: "set h" (lists the variables beginning with "h")
• To set on for Windows: "set HORCC_MRCF=1"
• To set off for Windows: "set HORCC_MRCF="
• To set for the Bourne shell: "# HORCC_MRCF=1" followed by "# export HORCC_MRCF"
• To set for the C shell: "# setenv HORCC_MRCF 1"
• Do not set this env variable if issuing TrueCopy Synchronous/Asynchronous software commands
HORCM_CONF:
• Names the HORCM configuration file; default = /etc/horcm.conf
HORCMINST:
• Specifies the instance number when using two (2) or more CCI instances on the same server. The command execution environment and the HORCM activation environment require an instance number to be specified. Set the configuration definition file (HORCM_CONF) and log directories (HORCM_LOG and HORCC_LOG) for each instance.
• To display on Windows: "set h"
• To set instance 0 on Windows: "set HORCMINST=0"
• To set instance 1 on Windows: "set HORCMINST=1"
• To set off for Windows: "set HORCMINST="
• To set instance 0 for the Bourne shell: "# HORCMINST=0" followed by "# export HORCMINST"
• To set for the C shell: "# setenv HORCMINST 0"
HORCMPROMOD:
• Sets HORCM forcibly to protection mode
• Command Devices in non-protection mode can then also be used in protection mode
HORCMPERM:
• Specifies the file name for the protected volumes. When this variable is not specified, the default name is as follows:
UNIX: /etc/horcmperm*.conf
Windows NT/200X: \WINNT\horcmperm*.conf
(* is the instance number)
Note: The following environment variables are validated only for the Hitachi Universal Storage Platform and Network Storage Controller, and are also validated on TrueCopy-TrueCopy/ShadowImage cascading operations using the "-FMRCF [MU#]" option. To maintain compatibility across RAID subsystems, these variables are ignored by Hitachi Lightning 9900™ V/9900 Series enterprise storage systems, which enables you to use the same script with "$HORCC_SPLT, $HORCC_RSYN, $HORCC_REST" for the Universal Storage Platform/Network Storage Controller and the Lightning 9900 V/9900 storage systems.
HORCC_SPLT (for Enterprise):
• "set HORCC_SPLT=NORMAL": "pairsplit" and "paircreate -split" will be performed in non-quick mode regardless of the setting of mode 122 via the service processor (SVP) (Remote Console).
• "set HORCC_SPLT=QUICK": "pairsplit" and "paircreate -split" will be performed as Quick Split regardless of the setting of mode 122 via the SVP (Remote Console).
HORCC_RSYN (for Enterprise):
• "set HORCC_RSYN=NORMAL": "pairresync" will be performed in non-quick Resync mode regardless of the setting of mode 87 via the SVP (Remote Console).
• "set HORCC_RSYN=QUICK": "pairresync" will be performed in Quick Resync mode regardless of the setting of mode 87 via the SVP (Remote Console).
HORCC_REST (for Enterprise):
• "set HORCC_REST=NORMAL": "pairresync -restore" will be performed in non-quick mode regardless of the setting of mode 80 via the SVP (Remote Console).
• "set HORCC_REST=QUICK": "pairresync -restore" will be performed as Quick Restore regardless of the setting of mode 80 via the SVP (Remote Console).
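For example, to force Quick Split behavior for one session regardless of the SVP mode-122 setting (a sketch; enterprise storage systems only, per the note above):
C:\HORCM\etc>set HORCC_SPLT=QUICK
C:\HORCM\etc>pairsplit -g VG01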
horcm*.conf
HORCM_MON ip_address
• String type with max of 63 characters
• Actual IP address or alias name of this local server
• If all associated instances are in one (1) server, alias of
localhost is OK
• If two (2) or more network addresses on different
subnets, this item must be NONE
HORCM_MON Service
• String or numeric with max of 15 characters
• Port name (requires entry in appropriate services file) or
port number of local server
HORCM_MON Poll (10 ms)
• The interval for polling (health check) of the other instance(s)
• Calculating the value for poll(10ms): 6000 x the number of all associated CCI instances. With two (2) instances this gives a value of 12000 (as in the sample configuration files), which is 120,000 ms, or a poll every two (2) minutes.
• If all the CCI instances are in a single server, turn off polling by entering -1 to increase performance
HORCM_MON Timeout (10 ms)
• Timeout value for no response from the remote server. Default is 3000 x 10 ms, or 30 seconds.
HORCM_CMD dev_name
• String type with a max of 63 characters
• The Command Device must be mapped to a server port running the CCI instance
Examples of Command Devices:
HP-UX®: /dev/rdsk/c0t0d0
Solaris™: /dev/rdsk/c0t0d0s2
OR
/dev/rdsk/c0t50060E80000000000000A9C300000252d0s2
Note: format with no label required
AIX®: /dev/rhdiskX
Note: X = device number created automatically by AIX
Tru64 UNIX: /dev/rdisk/dskXc
Note: X = device number assigned by Tru64 UNIX
Linux®: /dev/sdX
Note: X = device number assigned by Linux
IRIX®: /dev/rdsk/dksXdXlXvol
OR
/dev/rdsk/node_wwn/lunXvol/cXpX
Note: X = device number assigned by IRIX
Windows NT/2000/2003: \\.\PhysicalDriveX
OR
\\.\CMD-Ser#-LDEV#-Port#
Note: Ser# is the Serial Number of the array, LDEV# is the array internal LU number, and Port# is the Cluster/Port to which the command disk is assigned.
OR
\\.\Volume{guid} (Windows 2000/2003 only)
Note: X = device number assigned by Windows NT/2000/2003. If configurations change, Windows may assign a different physical drive number after a subsequent reboot and the Command Device will not be found. To avoid this problem, assign a partition and logical drive (without a drive letter and no Windows format) to the Command Device to get a GUID.
• When a server is connected to two (2) or more Thunder 9500 V systems, HORCM identifies each system using the unit ID (see Figure 2.22). The unit ID is assigned sequentially in the order described in this section of the configuration definition file. If more than one (1) Command Device (maximum of two) is specified in a disk subsystem, the second Command Device has to be described side-by-side with the already-described Command Device on one line. The server must be able to verify that the unit ID corresponds to the same Serial# (Serial ID) among servers when a Thunder 9500 V system is shared by two (2) or more servers; this can be verified using the raidqry command.
HORCM_DEV dev_group
• String type with a max of 31 characters, but the recommended value is eight (8) characters
• Names a group of paired logical volumes and must be unique
• Commands can be executed for all corresponding volumes by group name
HORCM_DEV dev_name
• String type with a max of 31 characters, but the recommended value is eight (8) characters
• Each pair requires a unique dev_name
• Warning: A duplicate dev_name will cause horcmstart to fail.
HORCM_DEV port#
• String type with a max of 31 characters
• The port numbers must be CL1-x or CL2-x
• The port number can also be CL1-x-y, where y is the host storage group number as found on the subsystem
• The Thunder 9500 V system uses the following mapping:
• CL1-A, CL1-B, CL1-C, CL1-D = 9500V/AMS/WMS ports 0A, 0B, 0C, and 0D
• CL2-A, CL2-B, CL2-C, CL2-D = 9500V/AMS/WMS ports 1A, 1B, 1C, and 1D
HORCM_DEV Target ID
• Numeric type (decimal) with a max of seven (7) characters
• Use the TID from raidscan -p <port>
HORCM_DEV LU#
• Numeric type (decimal) with a max of seven (7) characters
• Use LU values from raidscan -p <port>
• Never use hex values, or data corruption may occur; a hex value containing an alpha character may also be read as an invalid MU#
HORCM_DEV MU#
• Numeric type (decimal)
• MU# is left blank for TrueCopy software pairs
• MU# defines the remote copy number of ShadowImage and Copy-on-Write (formerly QuickShadow) volumes
• If the environment variable HORCC_MRCF=1, at least one (1) pair must have an MU#
• The SVOL of ShadowImage or Copy-on-Write (formerly QuickShadow) must be MU#0
HORCM_LDEV dev_group
• String type with a max of 31 characters, but the recommended value is eight (8) characters
• Names a group of paired logical volumes and must be unique
• Commands can be executed for all corresponding volumes by group name
• Only available with CCI 1-16-X and higher; can be used with or instead of HORCM_DEV
HORCM_LDEV dev_name
• String type with a max of 31 characters, but the recommended value is eight (8) characters
• Each pair requires a unique dev_name
• Warning: A duplicate dev_name will cause horcmstart to fail.
• Only available with CCI 1-16-X and higher; can be used with or instead of HORCM_DEV
HORCM_LDEV serial#
• Numeric type with a max of 12 characters
• This is the Serial Number of the subsystem containing the LDEV
• Only available with CCI 1-16-X and higher; can be used with or instead of HORCM_DEV
HORCM_LDEV CU:LDEV (LDEV#)
• Numeric type with a max of six (6) characters
• The format can be CU:LDEV, a decimal value, or a 0xhex value
• Only available with CCI 1-16-X and higher; can be used with or instead of HORCM_DEV
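As a sketch, an HORCM_LDEV section equivalent to the sample HORCM_DEV entry for work01 might look like the following (the CU:LDEV value 00:18 is inferred from the raidscan -fx output later in this reference, and the MU# column is assumed to follow the same rules as in HORCM_DEV; treat the values as illustrative):
HORCM_LDEV
#dev_group dev_name Serial# CU:LDEV(LDEV#) MU#
VG01 work01 65010462 00:18 0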
HORCM_INST dev_group
• All group names defined in the HORCM_DEV section must be entered here
HORCM_INST ip_address
• IP address or alias name of the remote server that contains the dev_group
• If all associated instances are in one (1) server, an alias of 'localhost' is OK
• If two (2) or more network addresses are on different subnets, this item must be NONE
HORCM_INST service
• Port name (requires an entry in the appropriate services file) or port number of the remote server
Cascaded Mirrors Detail
Midrange systems support only 1:3 mirrors; cascading is available only with ShadowImage software on enterprise storage systems.
Return Codes
pairvolchk -ss:
11 SMPL
For TrueCopy Synchronous/ShadowImage software:
22 PVOL_COPY or PVOL_RCPY
23 PVOL_PAIR
24 PVOL_PSUS
25 PVOL_PSUE
32 SVOL_COPY or SVOL_RCPY
33 SVOL_PAIR
34 SVOL_PSUS
35 SVOL_PSUE
For TrueCopy Asynchronous/Universal Replicator software:
42 PVOL_COPY or PVOL_RCPY
43 PVOL_PAIR
44 PVOL_PSUS
45 PVOL_PSUE
52 SVOL_COPY or SVOL_RCPY
53 SVOL_PAIR
54 SVOL_PSUS
55 SVOL_PSUE
pairevtwait -nowait:
Mnemonic   Return Value   Meaning
Smpl       1              Simplex (no mirror)
Copy       2              Copy
Pair       3              Paired
Psus       4              Suspended
Psue       5              Suspended with error
pairevtwait:
0 Normal (success)
232 Timeout waiting for the specified status on the local host
233 Timeout waiting for the specified status
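Because pairvolchk -ss returns the encoded status as the command's exit code, scripts can branch on it; a minimal Windows batch sketch (the value 23 = PVOL_PAIR comes from the TrueCopy Synchronous table above):
C:\HORCM\etc>pairvolchk -g VG01 -ss
C:\HORCM\etc>if errorlevel 23 if not errorlevel 24 echo VG01 local volume is PVOL_PAIR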
Example of TrueCopy Synchronous Software for a Thunder 9500 V Series System (Refer to the Diagram)
Operations and Commands
Display the CCI version:
C:\HORCM\etc>raidqry -h
Model : RAID-Manager/WindowsNT
Ver&Rev: 01-11-03/00
Find the Command Device
Note: HORCM must be shut down to run this command.
C:\HORCM\etc>raidscan -x findcmddev drive#(0,20)
cmddev of Ser# 462 = \\.\PhysicalDrive4
cmddev of Ser# 463 = \\.\PhysicalDrive6
Write the cmd dev into horcm*.conf:
C:\HORCM\etc>notepad c:\winnt\horcm0.conf
C:\HORCM\etc>notepad c:\winnt\horcm1.conf
• Start HORCM
• Set the env variable for HORCM instance 0
• Display the TIDs and LUs for the Thunder 9570V™ high-end system, serial #65010462
• Alter horcm0.conf if required
• HORCM must be shut down and restarted for any changes to the horcm*.conf files to take effect.
C:\HORCM\etc>horcmstart 0 1
starting HORCM inst 0
HORCM inst 0 starts successfully.
starting HORCM inst 1
HORCM inst 1 starts successfully.
C:\HORCM\etc>set HORCMINST=0
C:\HORCM\etc>raidscan -p cl1-b -fx -s 462
PORT# /ALPA/C,TID#,LU#.Num(LDEV#....)...P/S, Status,Fence,LDEV#,P-Seq#,P-LDEV#
CL1-B / ef/ 5, 1, 24.1(18)............SMPL ---- ------ ----, ----- ----
CL1-B / ef/ 5, 1, 25.1(19)............SMPL ---- ------ ----, ----- ----
• Set the env variable for HORCM instance one (1)
• Display the TIDs and LUs for the Thunder 9570V system, serial #65010463
• Alter horcm1.conf if required
• HORCM must be shut down and restarted for any changes to the horcm*.conf files to take effect.
C:\HORCM\etc>set HORCMINST=1
C:\HORCM\etc>raidscan -p cl1-b -fx -s 463
PORT# /ALPA/C,TID#,LU#.Num(LDEV#....)...P/S, Status,Fence,LDEV#,P-Seq#,P-LDEV#
CL1-B / ef / 5, 1, 21.1(15)............SMPL ---- ------ ----, ----- ----
CL1-B / ef / 5, 1, 22.1(16)............SMPL ---- ------ ----, ----- ----
• Set the env variable for HORCM instance 0
• Start the initial copy of volume group VG01
C:\HORCM\etc>set HORCMINST=0
C:\HORCM\etc>paircreate -g VG01 -vl -c 15 -f never
Display the copy status to verify the COPY to PAIR transition:
C:\HORCM\etc>pairdisplay -g VG01 -fc
Group PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#.P/S,Status,Fence, % ,P-LDEV# M
VG01 work01(L) (CL1-B , 1, 24) 462 24..P-VOL PAIR NEVER , 100 21 -
VG01 work01(R) (CL1-B , 1, 21) 463 21..S-VOL PAIR NEVER , 100 24 -
VG01 work02(L) (CL1-B , 1, 25) 462 25..P-VOL PAIR NEVER , 100 22 -
VG01 work02(R) (CL1-B , 1, 22) 463 22..S-VOL PAIR NEVER , 100 25 -
Suspend volume group VG01 and verify that the status went from PAIR to PSUS.
C:\HORCM\etc>pairsplit -g VG01
C:\HORCM\etc>pairdisplay -g VG01 -fc
Group PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#.P/S,Status,Fence, % ,P-LDEV# M
VG01 work01(L) (CL1-B , 1, 24) 462 24..P-VOL PSUS NEVER , 100 21 -
VG01 work01(R) (CL1-B , 1, 21) 463 21..S-VOL SSUS NEVER , 100 24 -
VG01 work02(L) (CL1-B , 1, 25) 462 25..P-VOL PSUS NEVER , 100 22 -
VG01 work02(R) (CL1-B , 1, 22) 463 22..S-VOL SSUS NEVER , 100 25 -
Resync volume group VG01 and verify that the status went from PSUS to PAIR. Make sure to use the -fc argument to display the copy percentage, or the status may display PAIR before the resync has actually completed.
C:\HORCM\etc>pairresync -g VG01
C:\HORCM\etc>pairdisplay -g VG01 -fc
Group PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#.P/S,Status,Fence, % ,P-LDEV# M
VG01 work01(L) (CL1-B , 1, 24) 462 24..P-VOL PAIR NEVER , 100 21 -
VG01 work01(R) (CL1-B , 1, 21) 463 21..S-VOL PAIR NEVER , 100 24 -
VG01 work02(L) (CL1-B , 1, 25) 462 25..P-VOL PAIR NEVER , 100 22 -
VG01 work02(R) (CL1-B , 1, 22) 463 22..S-VOL PAIR NEVER , 100 25 -
Delete the pairs and verify that the status went from PAIR to SMPL (simplex).
C:\HORCM\etc>pairsplit -g VG01 -S
C:\HORCM\etc>pairdisplay -g VG01 -fc
Group PairVol(L/R) (Port#,TID,LU), Seq#,LDEV#.P/S,Status,Fence, % ,P-LDEV# M
VG01 work01(L) (CL1-B , 1, 24) 462 24..SMPL ---- ------,----- ---- -
VG01 work01(R) (CL1-B , 1, 21) 463 21..SMPL ---- ------,----- ---- -
VG01 work02(L) (CL1-B , 1, 25) 462 25..SMPL ---- ------,----- ---- -
VG01 work02(R) (CL1-B , 1, 22) 463 22..SMPL ---- ------,----- ---- -
Shut down HORCM:
C:\HORCM\etc>horcmshutdown 0 1
inst 0:
HORCM Shutdown inst 0 !!!
inst 1:
HORCM Shutdown inst 1 !!!
HORCM_MON
#ip_address service poll(10ms) timeout(10ms)
10.15.11.194 horcm0 12000 3000
HORCM_CMD
#dev_name
\\.\PHYSICALDRIVE4 #0462
HORCM_DEV
#dev_group dev_name port# TargetID LU# MU#
VG01 test01 CL1-B 1 5 0
VG01 work01 CL1-B 1 24 0
VG01 work02 CL1-B 1 25 0
HORCM_INST
#dev_group ip_address service
VG01 10.15.11.194 horcm1
C:\winnt\horcm0.conf
Example of 9500V TrueCopy via Fibre Switch (see the diagram summarized below)
HORCM_MON
#ip_address service poll(10ms) timeout(10ms)
10.15.11.194 horcm1 12000 3000
HORCM_CMD
#dev_name
\\.\PHYSICALDRIVE6 #0463
HORCM_DEV
#dev_group dev_name port# TargetID LU# MU#
VG01 test01 CL1-B 1 3 0
VG01 work01 CL1-B 1 21 0
VG01 work02 CL1-B 1 22 0
HORCM_INST
#dev_group ip_address service
VG01 10.15.11.194 horcm0
C:\winnt\horcm1.conf
[Diagram: A W2K server runs two CCI instances, HORCMINST0 and HORCMINST1, attached through Fibre ports to a Fibre switch. Instance 0 controls the P-VOLs of pairs VG01 work01 and VG01 work02 plus the command device on 9500V #65010462 (Product ID = DF600F); instance 1 controls the S-VOLs plus the command device on the second 9500V, #65010463 (Product ID = DF500F). Each array presents ports 0-A, 0-B, 1-A, and 1-B.]