This document provides information about Hitachi Data Systems software, including Hitachi TrueCopy, ShadowImage, and Copy-on-Write Snapshot. It outlines software prerequisites, terms, files, and common commands for configuring and managing remote replication and in-system replication functionality. The document includes sections describing command devices, pair volumes, replication modes, configuration files, log files, and parameters for commands such as paircreate, pairdisplay, and horcctl.
• -x drivescan: displays the relationship between the Thunder 9500 V Series system's LDEVs and the Windows hard drives
• -x env: displays environment variables
• -x findcmddev: searches for Command Devices
• -x mount: displays/mounts specified drives
• -x portscan: displays devices on the specified port(s)
• -x setenv: sets environment variables
• -x sleep: causes CCI to wait/sleep for the specified number of seconds
• -x sync: flushes unwritten data from Windows to the specified devices. The logical and physical devices to be synchronized must be offline to all other applications. The sync does not propagate to a specified drive that has a directory mount on the Windows 2000/2003 system.
• -x umount: unmounts the specified logical drive and deletes the drive letter. Before deleting the drive letter, this subcommand executes sync internally for the specified logical drive and flushes unwritten data.
• -x usetenv: unsets environment variables
Details of CCI Commands
horcctl:
-d Set to the trace control of the client
-c Set to the trace control of HORCM
-S Shutdown of HORCM
-D Displays the Command Device name currently used
by HORCM. If the command device is blocked due to
online maintenance (microcode replacement) of the
Thunder 9500 V Series system, you can check the
Command Device name in advance using this option.
-C Changes the control device of HORCM
-u <unitid> Specifies the unitid for '-D or -C' options
-ND Show network addr and port name currently used
-NC Changes the network addr of HORCM
-g <group> Specifies the group name in the HORCM file for
'-ND or -NC' options
-l <level> Set to the trace_level
-b <y/n> Set to the trace_mode
-s <size(KB)> Set to the trace_size
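For example (an illustrative session; instance number 0 and the group name oradb are assumptions carried through the examples in this document):
C:\HORCM\etc> set HORCMINST=0
C:\HORCM\etc> horcctl -D
(displays the Command Device name currently used by HORCM)
C:\HORCM\etc> horcctl -ND -g oradb
(displays the network address and port name currently used for group oradb)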
horcmshutdown: Stops HORCM application
One (1) CCI instance:
• UNIX: # horcmshutdown.sh
• Windows: > horcmshutdown
Two (2) CCI instances called 0 and 1:
• UNIX: # horcmshutdown.sh 0 1
• Windows: > horcmshutdown 0 1
horcmstart {inst}: Starts HORCM application
One (1) CCI instance:
• UNIX: # horcmstart.sh
• Windows: > horcmstart
Two (2) CCI instances called 0 and 1:
• UNIX: # horcmstart.sh 0 1
• Windows: > horcmstart 0 1
Notes:
If the argument has no instance number, horcmstart starts one (1) HORCM instance and uses the environment variables set by the user.
For UNIX-based platforms, if HORCMINST is specified:
• HORCM_CONF = /etc/horcm*.conf (* is the instance number)
• HORCM_LOG = /HORCM/log*/curlog
• HORCM_LOGS = /HORCM/log*/tmplog
For UNIX-based platforms, if no HORCMINST is specified:
• HORCM_CONF = /etc/horcm.conf
• HORCM_LOG = /HORCM/log/curlog
• HORCM_LOGS = /HORCM/log/tmplog
For the Windows NT®/2000 platform, if HORCMINST is specified:
• HORCM_CONF = \WINNT\horcm*.conf (* is the instance number)
• HORCM_LOG = \HORCM\log*\curlog
• HORCM_LOGS = \HORCM\log*\tmplog
For the Windows NT/2000 platform, if no HORCMINST is specified:
• HORCM_CONF = \WINNT\horcm.conf
• HORCM_LOG = \HORCM\log\curlog
• HORCM_LOGS = \HORCM\log\tmplog
If HORCM fails to start:
• Check contents of the horcm*.conf files
• Verify that the Command Device(s) is valid.
inqraid:
• [-inqdump] Dump option for STD inquiry info
• [-fx] Display of LDEV# with hexadecimal
• [-fp] Display of the H.A.R.D volume with adding '*'
• [-fl] Display of the LDEV GUARD volume with adding '*'
• [-fg] Display of the host group ID with port
• [-fw] Display of the volstat with wide format
• [-CLI] Display with the command line interface (CLI)
format
• [-CLIWP] Displays the Port_WWN for this host with the
CLI format
• [-CLIWN] Displays the Node_WWN for this host with the
CLI format
• [-sort] Displays and sorts by Serial# and LDEV#
• [-sort -CM] Displays and sorts the cmddev by Serial# in
horcm.conf image
• [-fv] Display of Volume{GUID} via $Volume for Windows
2000.
• [No arg] Find out the LDEV from harddisk#... in the
STDIN
• [-find[c]] Find the group by using pairdisplay from
harddisk#... in the STDIN.
• [-gplba] Obtains the logical block address (LBA) for a usable partition from disk#... in the STDIN.
• [-gvinf] Obtains a drive layout and makes a layout file
from disk#... in the STDIN
• [-svinf[=PTN]] Sets a drive layout to disk[=PTN]# in the
STDIN
• [harddisk#...] Find out the LDEV from args(harddisk#...)
• [$DosDevice] Find out the LDEV from DosDevice
• $LETALL -> Specifies all of the Drive Letter
$C: -> Specifies a 'C:' drive
$Phys -> Specifies all Physical Drives
$Volume -> Specifies all LDM Vols for Win2K
$Volume{...} -> Specifies a Volume{...} for Win2K
• [echo hd0-10 | inqraid] Find out the LDEV from
harddisk#... of the echo
• [echo hd0-10 | inqraid -find] Find out the group from
harddisk#... of the echo
• [ inqraid $LETALL -CLI ] Find out the LDEV from all of
the Drive letter
• [ inqraid $Volume -CLI ] Find out the LDEV from all of
the LDM Volumes for Win2k.
• [ inqraid $Phys -gvinf -CLI ] Gets a drive layout and
makes a layout file from all of the Physical Drives
• [ echo hd0-10 | inqraid -svinf ] Sets a drive layout to
disk#0-10
• [ls /dev/rdsk/* | /HORCM/usr/bin/inqraid] Find out the
LDEV from /dev/rdsk/... of the ls
• [ls /dev/rdsk/* | /HORCM/usr/bin/inqraid -find] Find out
the group from /dev/rdsk/... of the ls
• [vxdisk list | grep vg_name | /HORCM/usr/bin/inqraid]
Find out the LDEV from vg_name of the vxdisk
• [ pairdisplay -l -fd -g VG1 | inqraid -svinf=Harddisk ] Sets
a drive layout to disk# related to a group(VG1).
paircreate:
• -g <group> Specifies the group_name
• -d <pair Vol> Specifies the pair_volume_name
• -d[g] <drive#(0-N)> [mun#] Specifies the Physical drive#
without '-g' option
• -d[g] <Seq#> <ldev#> [mun#] Specifies the LDEV# in the
RAID without '-g' option
• -I[#] Set to HORCMINST#
• -IH[#] or -ITC[#] Set to HORC mode [and HORCMINST#]
• -IM[#] or -ISI[#] Set to MRCF mode [and HORCMINST#]
• -f <fence> [CTGID] Specifies the fence_level
(never/status/data/async) [TrueCopy and Universal
Replicator software]
• -c <size> Specifies the track size for copy (1-15)
• -split [ShadowImage and Copy-on-Write software only]
Splits the paired volume after the initial copy operation is
complete.
• -m <mode..> Specifies the create mode: <'cyl' or 'trk'> for the SVOL; <grp CTG# (0-127)> [ShadowImage software only] makes a group for splitting all ShadowImage software pairs specified in a group, as with TrueCopy Asynchronous software; or <cc> [ShadowImage software only] specifies the Hitachi Volume Migration software (CruiseControl) mode for volume migration
• -nocopy Set to the No_copy_mode (TrueCopy
software only)
• -nomsg Not display message of paircreate
• -pid <id#> Specifies the pool ID for pooling SVOL (Copy-
on-Write software for enterprise storage systems)
• -jp <id> (HORC/Universal Replicator software only):
Basically, Universal Replicator software has the same
characteristic as a TrueCopy Asynchronous software
Remote Copy Consistency Group; therefore, this option
is used to specify a Journal Group ID for the PVOL.
• -js <id> (HORC/Universal Replicator software only): This
option is used to specify a Journal Group ID for the
SVOL. Both the -jp <id> and -js <id> options are valid
when the fence level is set to "ASYNC", and each Journal
Group ID is automatically bound to the CTGID.
• -vl Specifies the vector(Local_node)
• -vr Specifies the vector(Remote_node)
Warnings for paircreate using CCI:
• Use -vl if this server has the HORCM instance that controls the PVOLs. However, if multiple HORCM instances are running on this server, make sure the correct env variable is set. (Best practice is to use HORCM instance 0 and set HORCMINST=0.)
• Use -vr if this server does not have the HORCM instance that controls the PVOLs. If multiple HORCM instances are running on this server, make sure the correct env variable is set, because this server will use the remote instance specified in the HORCM_INST ip_address of the horcm*.conf file selected by the local HORCMINST env variable.
• Before issuing the paircreate command, verify that the
SVOL is not mounted on any system. If the SVOL is
mounted after paircreate, delete the pair, unmount the
SVOL, and reissue the paircreate command.
Note: HiCommand will not create pairs if the SVOL is
mounted.
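For example (an illustrative sketch; the group name oradb and instance number are assumptions, and the fence level must match your own design):
C:\HORCM\etc> set HORCMINST=0
C:\HORCM\etc> paircreate -g oradb -vl -f never
(creates a TrueCopy pair for group oradb; -vl because this server runs the instance controlling the PVOLs)
C:\HORCM\etc> pairevtwait -g oradb -s pair -t 300
(waits for the group to reach PAIR status; -t is the wait time as described under pairevtwait below)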
paircurchk:
The paircurchk command assumes that the target is an SVOL; it is used to check consistency and is typically used in conjunction with the horctakeover command.
• -g <group> Specifies the group_name
• -d <pair Vol> Specifies the pair_volume_name
• -d[g] <drive#(0-N)> [mun#] Specifies the Physical drive#
without '-g' option
• -d[g] <Seq#> <ldev#> [mun#] Specifies the LDEV# in the
RAID without '-g' option
• -I[#] Set to HORCMINST#
• -IH[#] or -ITC[#] Set to HORC mode [and HORCMINST#]
• -IM[#] or -ISI[#] Set to MRCF mode [and HORCMINST#]
• -nomsg Not display message of paircurchk
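For example (illustrative; group oradb is an assumption):
C:\HORCM\etc> paircurchk -g oradb
(checks the consistency of the SVOL side of group oradb, typically before a horctakeover)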
pairdisplay:
Displays the pairing status, which enables you to verify the
completion of pair creation or pair resynchronization. This
command is used to confirm the configuration of the paired
volume connection path (physical link of paired volumes
among the servers).
• -x <command> <arg> ... Specifies the SUB command
• -g <group> Specifies the group_name
• -d <pair Vol> Specifies the pair_volume_name
• -d[g] <drive#(0-N)> [mun#] Specifies the Physical drive#
without '-g' option
• -d[g] <Seq#> <ldev#> [mun#] Specifies the LDEV# in the
RAID without '-g' option
• -I[#] Set to HORCMINST#
• -IH[#] or -ITC[#] Set to HORC mode [and HORCMINST#]
• -IM[#] or -ISI[#] Set to MRCF mode [and HORCMINST#]
• -c Specifies the pair_check
• -l Specifies the local only
• -m <mode> Specifies the display_mode(cas/all) for
cascading configuration
• -f[x] Specifies the display of LDEV#(hex)
• -f[c] Specifies the display of COPY rate
• -f[d] Specifies the display of the Device file name
• -f[m] Specifies the display of the Bitmap table
• -f[e] Specifies the display of the External LUN mapped
to LDEV
• -CLI Specifies the display of the CLI format
• -FHORC Specifies the force operation for cascading
HORC_VOL
• -FMRCF [mun#] Specifies the force operation for
cascading MRCF_VOL
• -v jnl[t] Specifies the display of the journal information
interconnected to the group (Universal Replicator only)
• -v ctg Specifies the display of the CT group
information interconnected to the group (TrueCopy and
Universal Replicator software only]
• -v smk Specifies display of the Marker on the volume
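For example (illustrative; group oradb is an assumption):
C:\HORCM\etc> pairdisplay -g oradb -fc -CLI
(displays the status of group oradb in CLI format with the copy rate, to confirm that a paircreate or pairresync has completed)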
pairevtwait:
• -x <command> <arg> ... Specifies the SUB command
• -g <group> group_name
• -d <pair Vol> pair_volume_name
• -d[g] <drive#(0-N)> [mun#] Specifies the Physical drive#
without '-g' option
• -d[g] <Seq#> <ldev#> [mun#] the LDEV# in the RAID
without '-g' option
• -I[#] Set to HORCMINST#
• -IH[#] or -ITC[#] Set to HORC mode [and HORCMINST#]
• -IM[#] or -ISI[#] Set to MRCF mode [and HORCMINST#]
• -nomsg Not display message of pairevtwait
• -nowait Set to the No_wait_mode
• -s <status> ... Specifies the status_name (smpl/copy/pair/psus/psue)
• -t <timeout> [interval] Wait_time
• -l Specifies the local only
• -FHORC Specifies the force operation for cascading
HORC_VOL
• -FMRCF [mun#] Specifies the force operation for
cascading MRCF_VOL
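For example (illustrative; the group name and timeout value are assumptions):
C:\HORCM\etc> pairevtwait -g oradb -s psus -t 600
(waits until group oradb reaches PSUS status, for instance after a pairsplit, using the -t wait time described above)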
pairmon:
• -xh Help/Usage for SUB commands
• -x <command> <arg> ... SUB command
• -D Set to the Default_mode
• -I[#] Set to HORCMINST#
• -IH[#] or -ITC[#] Set to HORC mode [and HORCMINST#]
• -IM[#] or -ISI[#] Set to MRCF mode [and HORCMINST#]
• -allsnd Set to the All_send_mode
• -resevt Set to the Reset_mode
• -nowait Set to the No_wait_mode
• -s <status> ... Specifies the status_name (smpl/copy/pair/psus/psue)
pairresync:
• -x <command> <arg> ... Specifies the SUB command
• -g <group> group_name
• -d <pair Vol> pair_volume_name
• -d[g] <drive#(0-N)> [mun#] Specifies the Physical drive#
without '-g' option
• -d[g] <Seq#> <ldev#> [mun#] the LDEV# in the RAID
without '-g' option
• -I[#] Set to HORCMINST#
• -IH[#] or -ITC[#] Set to HORC mode [and HORCMINST#]
• -IM[#] or -ISI[#] Set to MRCF mode [and HORCMINST#]
• -nomsg Not display message of pairresync
• -c <size> Specifies the track size for copy (1-15)
• -l Specifies the local only
• -restore Specify Re_sync from SVOL to PVOL
[ShadowImage software only]
• -FHORC Specifies the force operation for cascading
HORC_VOL
• -FMRCF [mun#] Specifies the force operation for
cascading MRCF_VOL
• -swapp Specifies Swap_resync for Changing PVOL to
SVOL on the PVOL side
• -swaps Specifies Swap_resync for Changing SVOL to
PVOL on the SVOL side
Warnings for pairresync using CCI:
• Ensure SVOL is not mounted prior to issuing the
pairresync
• Ensure PVOL is not mounted prior to issuing the
pairresync with the restore argument
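For example (illustrative; group oradb is an assumption, and the mount warnings above apply):
C:\HORCM\etc> pairresync -g oradb
(resynchronizes the pair, copying from PVOL to SVOL)
C:\HORCM\etc> pairresync -g oradb -restore
(ShadowImage software only: copies back from SVOL to PVOL)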
pairsplit:
• -x <command> <arg> ... Specifies the SUB command
• -g <group> Specifies the group_name
• -d <pair Vol> Specifies the pair_volume_name
• -d[g] <drive#(0-N)> [mun#] Specifies the Physical drive#
without '-g' option
• -d[g] <Seq#> <ldev#> [mun#] Specifies the LDEV# in the
RAID without '-g' option
• -I[#] Set to HORCMINST#
• -IH[#] or -ITC[#] Set to HORC mode [and HORCMINST#]
• -IM[#] or -ISI[#] Set to MRCF mode [and HORCMINST#]
• -nomsg Not display message of pairsplit
• -r split_mode(Read_Only)
• -rw split_mode(Read_Write)
• -S Specify the split_mode(Simplex)
• -R split_mode(Svol_Simplex)
• -P split_mode(Pvol_Suspend)
• -l Specifies the local only
• -FHORC Specifies the force operation for cascading
HORC_PVOL
• -FMRCF [mun#] Specifies the force operation for
cascading MRCF_PVOL
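For example (illustrative; group oradb is an assumption):
C:\HORCM\etc> pairsplit -g oradb -rw
(splits the pair and leaves the SVOL readable/writable)
C:\HORCM\etc> pairsplit -g oradb -S
(dissolves the pair; both volumes return to simplex)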
pairvolchk:
• -x <command> <arg> ... Specifies the SUB command
• -g <group> group_name
• -d <pair Vol> pair_volume_name
• -d[g] <drive#(0-N)> [mun#] Specifies the Physical drive#
without '-g' option
• -d[g] <Seq#> <ldev#> [mun#] the LDEV# in the RAID
without '-g' option
• -I[#] Set to HORCMINST#
• -IH[#] or -ITC[#] Set to HORC mode [and HORCMINST#]
• -IM[#] or -ISI[#] Set to MRCF mode [and HORCMINST#]
• -nomsg No message of pairvolchk
• -c Remote_volume_check
• -ss Encode of pair_status
• -FHORC Specifies the force operation for cascading
HORC_VOL
• -FMRCF [mun#] Specifies the force operation for
cascading MRCF_VOL
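For example (illustrative; group oradb is an assumption):
C:\HORCM\etc> pairvolchk -g oradb -ss
C:\HORCM\etc> echo %ERRORLEVEL%
(the -ss option returns the encoded pair status as the exit code, e.g. 23 = PVOL_PAIR; see the Return Codes section)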
raidar:
• -I[#] Set to HORCMINST#
• -IH[#] or -ITC[#] Set to HORC mode [and HORCMINST#]
• -IM[#] or -ISI[#] Set to MRCF mode [and HORCMINST#]
• -x <command> <arg> ... Specifies the SUB command
• -s <interval> [count] Specifies the starting and
interval(sec)
• -sm <interval> [count] Specifies the starting and
interval(min)
• -p <port> <targ> <lun> port(CL1-A or cl1-a... cl3-a or
CL3-A ... for the expansion(Lower) port) target_ID LUN#
• -pd[g] <drive#(0-N)> Physical drive#
raidqry:
• -I[#] Set to HORCMINST#
• -IH[#] or -ITC[#] Set to HORC mode [and HORCMINST#]
• -IM[#] or -ISI[#] Set to MRCF mode [and HORCMINST#]
• -x <command> <arg> ... Specifies the SUB command
• -l Specifies the local query
• -r <group> Specifies the remote query
• -f Specifies display for floatable host
raidscan:
• -I[#] Set to HORCMINST#
• -IH[#] or -ITC[#] Set to HORC mode [and HORCMINST#]
• -IM[#] or -ISI[#] Set to MRCF mode [and HORCMINST#]
• -x <command> <arg> ... Specifies the SUB command
• -p <port> [hgrp#] Specifies the port_name(CL1-A or cl1-
a... cl3-a or CL3-A... for the expansion(Lower) port)
• -pd[g] <drive#(0-N)> Physical drive#
• -pi <'strings'> Specifies the 'strings' for -find option
without using STDIN
• -t <targ> Specifies the target_ID
• -l <lun> Specifies the LUN#
• -m <mun> Scan the specified MU# only
• -s <Seq#> Seq#(Serial#) of the RAID
• -f[f] display of the volume-type
• -f[x] display of the LDEV#(hex)
• -f[g] display of the Group-name
• -f[d] display of the Device file name
• -f[e] display of the External LUN only
• -CLI Specifies display of CLI format
• -find[g] Find out the LDEV from the Physical drive# via
STDIN.
• -find inst [-fx] Registers the Physical drive via STDIN to HORCM and permits its volumes in horcm.conf in Protection Mode
• -find verify [mun#] [-f[x][d]] Finds the relation between the Group in horcm.conf and the Physical drive via STDIN
• -find[g] conf [mun#][-g name] Displays the Physical drive in horcm.conf image.
• -find sync [mun#][-g name] Flushes the system buffer associated with a group.
• For example: [C:\HORCM\etc> raidscan -pi $Phys -find]
DEVICE_FILE UID S/F PORT TARG LUN SERIAL LDEV PRODUCT_ID
Harddisk0 0 F CL2-A 25 0 2496 16 DF600F-CM
Harddisk1 0 F CL2-A 25 1 2496 18 DF600F
Harddisk2 0 F CL2-A 25 2 2496 19 DF600F
• For example: [ raidscan -pi hd0-10 -find [-fx] ]
• For example: [ echo hd0-10 | raidscan -find [-fx] ]
• For example: [ echo $Phys | raidscan -find [-fx] ]
o $variable specifies as follows.
o $LETALL -> All of the Drive Letter
o $Phys -> All of the Physical Drives
o $Volume -> All of the LDM Volumes for Windows2000
Details of Windows Sub Commands
-x drivescan: -x drivescan drive#(0-N)
Example of displaying Windows drives 0-20:
C:\HORCM\etc> raidscan -x drivescan harddisk0,20
-x findcmddev: -x findcmddev drive#(0-N)
Example of searching for a Command Device in drives 0-20:
C:\HORCM\etc> raidscan -x findcmddev hdisk0,20
-x mount:
-x mount drive: hdisk# partition# ... (for Windows NT®)
-x mount drive: Volume#(0-N) ... (for Windows 2000/2003)
-x mount drive: [[directory]] Volume#(0-N) ... (for Windows 2000/2003)
Example of displaying all mounted filesystems:
C:\HORCM\etc> raidscan -x mount
-x portscan: -x portscan port#(0-N)
Example of displaying drives on ports 0-20:
C:\HORCM\etc> raidscan -x portscan port0,20
-x sync:
-x sync A: B: C: ...
-x sync all
-x sync drive#(0-N) ...
-x sync Volume#(0-N) ... (Windows 2000/2003 systems)
-x sync D:\directory or directory pattern ... (Windows 2000/2003 systems only)
Example of flushing data to drive D:
C:\HORCM\etc> pairsplit -x sync D:
-x umount:
-x umount drive:
-x umount drive:[[directory]] ... (Windows 2000/2003)
Example of unmounting F: and G: and then splitting the volume group called oradb:
C:\HORCM\etc> pairsplit -x umount F: -x umount G: -g oradb
Environment Variables
HORCC_LOG:
• Specifies the command log directory name, default =
/HORCM/log* (* = instance number).
HORCC_MRCF:
• Required for ShadowImage or Copy-on-Write software [formerly QuickShadow]
• To display on Windows: "Set h"
• To set on for Windows: "Set HORCC_MRCF=1"
• To set off for Windows: "Set HORCC_MRCF="
• To set for the Bourne shell: "# HORCC_MRCF=1" followed by "# export HORCC_MRCF"
• To set for the C shell: "# setenv HORCC_MRCF 1"
• Do not set this env variable if issuing TrueCopy Synchronous/Asynchronous software commands.
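For example (an illustrative Windows session; the group name bkup is an assumption):
C:\HORCM\etc> set HORCC_MRCF=1
C:\HORCM\etc> paircreate -g bkup -vl -split
(creates a ShadowImage pair and splits it after the initial copy completes)
C:\HORCM\etc> set HORCC_MRCF=
(unset the variable again before issuing TrueCopy commands)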
HORCM_CONF:
• Names the HORCM configuration file.
default = /etc/horcm.conf
HORCMINST:
• Specifies the instance number when using two (2) or
more CCI instances on the same server. The command
execution environment and the HORCM activation
environment require an instance number to be specified.
Set the configuration definition file (HORCM_CONF) and
log directories (HORCM_LOG and HORCC_LOG) for
each instance.
• To display on Windows: "Set h"
• To set to instance 0 on Windows: "Set HORCMINST=0"
• To set to instance 1 on Windows: "Set HORCMINST=1"
• To set off on Windows: "Set HORCMINST="
• To set to instance 0 for the Bourne shell: "# HORCMINST=0" followed by "# export HORCMINST"
• To set for the C shell: "# setenv HORCMINST 0"
HORCMPROMOD:
• Sets HORCM forcibly to protection mode
• Allows Command Devices in non-protection mode to be used in protection mode as well
HORCMPERM:
• Specifies the file name for the protected volumes. When
this variable is not specified, the default name is as
follows:
UNIX: /etc/horcmperm*.conf
Windows NT/200X: \WINNT\horcmperm*.conf
(* is the instance number)
Note: The following environment variables are validated only for the Hitachi Universal Storage Platform and Network Storage Controller, and are also validated on TrueCopy-TrueCopy/ShadowImage cascading operations using the "-FMRCF [MU#]" option. To maintain compatibility across RAID subsystems, these variables are ignored by Hitachi Lightning 9900™ V/9900 Series enterprise storage systems, which enables you to use a script with "$HORCC_SPLT, $HORCC_RSYN, $HORCC_REST" for the Universal Storage Platform/Network Storage Controller and the Lightning 9900 V/9900 storage systems.
HORCC_SPLT (for Enterprise):
• "Set HORCC_SPLT=NORMAL" The "pairsplit" and "paircreate -split" operations will be performed in non-quick mode regardless of the setting of mode (122) via the service processor (SVP) (Remote console).
• "Set HORCC_SPLT=QUICK" The "pairsplit" and "paircreate -split" operations will be performed as Quick Split regardless of mode (122) via the SVP (Remote console).
HORCC_RSYN (for Enterprise):
• "Set HORCC_RSYN=NORMAL" The "pairresync" will be performed in non-quick Resync mode regardless of the setting of mode (87) via the SVP (Remote console).
• "Set HORCC_RSYN=QUICK" The "pairresync" will be performed in Quick Resync mode regardless of the setting of mode (87) via the SVP (Remote console).
HORCC_REST (for Enterprise):
• "Set HORCC_REST=NORMAL" The "pairresync -restore" will be performed in non-quick mode regardless of the setting of mode (80) via the SVP (Remote console).
• "Set HORCC_REST=QUICK" The "pairresync -restore" will be performed as Quick Restore regardless of the setting of mode (80) via the SVP (Remote console).
horcm*.conf
HORCM_MON ip_address
• String type with max of 63 characters
• Actual IP address or alias name of this local server
• If all associated instances are in one (1) server, alias of
localhost is OK
• If two (2) or more network addresses on different
subnets, this item must be NONE
HORCM_MON Service
• String or numeric with max of 15 characters
• Port name (requires entry in appropriate services file) or
port number of local server
HORCM_MON Poll (10 ms)
• The interval for polling (health check) of the other
instance(s)
• Calculating the value for poll(10ms): 6000 x the number of all associated CCI instances. With two (2) instances this is 12000, which in 10-ms units equals 120000 ms, or a poll every two (2) minutes.
• If all the CCI instances are in a single server, turn off polling by entering -1 to increase performance
HORCM_MON Timeout (10 ms)
• Timeout value for no response from remote server.
Default is 3000 x 10ms or 30 seconds.
HORCM_CMD dev_name
• String type with a max of 63 characters
• Command Device must be mapped to a server port
running the CCI instance.
Examples of Command Devices:
HP-UX®: /dev/rdsk/c0t0d0
Solaris™: /dev/rdsk/c0t0d0s2
OR
/dev/rdsk/c0t50060E80000000000000A9C300000252d0s2
Note: format with no label required
AIX®: /dev/rhdiskX
Note: X = device number is created automatically by AIX
Tru64 UNIX: /dev/rdisk/dskXc
Note: X = device number assigned by Tru64 UNIX
Linux®: /dev/sdX
Note: X = device number assigned by Linux
IRIX®: /dev/rdsk/dksXdXlXvol
OR
/dev/rdsk/node_wwn/lunXvol/cXpX
Note: X = device number assigned by IRIX
Windows NT/2000/2003: \\.\PhysicalDriveX
OR
\\.\CMD-Ser#-LDEV#-Port#
Note: Ser# is the Serial Number of the array, LDEV# is the array internal LU number, and Port# is the Cluster/Port to which the command disk is assigned.
OR
\\.\Volume{guid} (Windows 2000/2003 only)
Note: X = device number assigned by Windows NT/2000/2003. If configurations change, Windows may assign a different physical drive number after a subsequent reboot and the Command Device will not be found. To avoid this problem, assign a partition and logical drive (without a drive letter and no Windows format) to the Command Device to get a GUID.
• When a server is connected to two (2) or more Thunder
9500 V systems, the HORCM identifies each system
using the unit ID (see Figure 2.22). The unit ID is
assigned sequentially in the order described in this
section of the configuration definition file. If more than
one (1) Command Device (maximum of two) is specified
in a disk subsystem, the second Command Device has to
be described side-by-side with the already described
Command Device in a line. The server must be able to
verify that the unit ID is the same as the Serial# (Serial
ID) among servers when a Thunder 9500 V system is
shared by two (2) or more servers, which can be verified
using the raidqry command.
HORCM_DEV dev_group
• String type with max of 31, but the recommended value is
eight (8) characters
• Names a group of paired logical volumes and must be
unique
• Commands can be executed for all corresponding
volumes by group name
HORCM_DEV dev_name
• String type with a max of 31, but the recommended value
is eight (8) characters
• Each pair requires a unique dev_name
• Warning: A duplicate dev_name will cause
horcmstart to fail.
HORCM_DEV port #
• String type with a max of 31 characters
• The port numbers must be CL1-x or CL2-x
• The port number can also be CL1-x-y, where y is the host
storage group number as found on the subsystem
• The Thunder 9500 V system uses the following mapping:
• CL1-A, CL1-B, CL1-C, CL1-D = 9500V/AMS/WMS port
0A, 0B, 0C and 0D
• CL2-A, CL2-B, CL2-C, CL2-D = 9500V/AMS/WMS port
1A, 1B, 1C and 1D
HORCM_DEV Target ID
• Numeric type (decimal) with a max of seven (7)
characters
• Use TID from raidscan –p <port>.
HORCM_DEV LU#
• Numeric type (decimal) with a max of seven (7)
characters
• Use LU values from raidscan –p <port>
• Never use hex values, or data corruption may occur. If a
hex value contains an alpha character, it may also be read
as an invalid MU#.
HORCM_DEV MU#
• Decimal
• MU# is left blank for TrueCopy software pairs
• MU# defines the mirror unit (copy) number of
ShadowImage and Copy-on-Write (formerly QuickShadow)
volumes
• If environment variable HORCC_MRCF=1, at least one
(1) pair must have a MU#
• The SVOL of ShadowImage or Copy-on-Write (formerly
QuickShadow) must be MU#0
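A hedged sketch of a HORCM_DEV section mixing TrueCopy and ShadowImage entries (the group and device names, port, TID, and LU values are illustrative):
HORCM_DEV
#dev_group dev_name port#  TargetID LU#  MU#
VGTC       tc_vol1  CL1-A  1        5        # TrueCopy pair: MU# left blank
VGSI       si_vol0  CL1-A  1        6    0   # ShadowImage copy 0
VGSI       si_vol1  CL1-A  1        6    1   # ShadowImage copy 1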
HORCM_LDEV dev_group
• String type with max of 31, but the recommended value is
eight (8) characters
• Names a group of paired logical volumes and must be
unique
• Commands can be executed for all corresponding
volumes by group name
• Only available with CCI 1-16-X and higher – Can be
used with/instead of HORCM_DEV
HORCM_LDEV dev_name
• String type with a max of 31, but the recommended value
is eight (8) characters
• Each pair requires a unique dev_name
• Warning: A duplicate dev_name will cause
horcmstart to fail.
• Only available with CCI 1-16-X and higher – Can be
used with/instead of HORCM_DEV
HORCM_LDEV serial#
• Numeric type with a max of 12 characters
• This is the Serial Number of the subsystem of the LDEV
• Only Available with CCI 1-16-X and higher – Can be
used with/instead of HORCM_DEV
HORCM_LDEV CU:LDEV (LDEV#)
• Numeric type with a max of six (6) characters
• Format can be CU:LDEV, a decimal value, or a 0xhex value
• Only available with CCI 1-16-X and higher – Can
be used with/instead of HORCM_DEV
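A hedged sketch of a HORCM_LDEV section (the serial number and LDEV values are illustrative; the three rows show the three accepted LDEV# formats, with MU# left blank as for TrueCopy):
HORCM_LDEV
#dev_group dev_name serial#   CU:LDEV(LDEV#)  MU#
VGLD       ldev_a   65010462  01:40                # CU:LDEV format
VGLD       ldev_b   65010462  321                  # decimal format
VGLD       ldev_c   65010462  0x142                # hex format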
HORCM_INST dev_group
• All group names defined in HORCM_DEV section must
be entered here.
HORCM_INST ip_address
• IP address or alias name of the remote server that
contains the dev_group.
• If all associated instances are in one (1) server, alias of
‘localhost’ is OK
• If two (2) or more network addresses are on different
subnets, this item must be NONE
HORCM_INST service
Port name (requires entry in appropriate services file) or
port number of remote server.
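A hedged sketch of a HORCM_INST section (the remote host name and service are illustrative; the group names follow the HORCM_DEV sketch above):
HORCM_INST
#dev_group ip_address   service
VGTC       remote_host  horcm1
VGSI       localhost    horcm1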
Cascaded Mirrors Detail
Midrange systems support only 1:3 mirrors; cascading is
available only with ShadowImage software on enterprise
storage systems.
Return Codes
Pairvolchk -ss:
11 SMPL
For TrueCopy Synchronous/ShadowImage software
22 PVOL_COPY or PVOL_RCPY
23 PVOL_PAIR
24 PVOL_PSUS
25 PVOL_PSUE
32 SVOL_COPY or SVOL_RCPY
33 SVOL_PAIR
34 SVOL_PSUS
35 SVOL_PSUE
For TrueCopy Asynchronous/Universal Replicator software
42 PVOL_COPY or PVOL_RCPY
43 PVOL_PAIR
44 PVOL_PSUS
45 PVOL_PSUE
52 SVOL_COPY or SVOL_RCPY
53 SVOL_PAIR
54 SVOL_PSUS
55 SVOL_PSUE
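These codes are returned as the command’s exit status, so scripts can branch on them. A hedged Windows sketch (the group name is illustrative; a value of 23 would indicate PVOL_PAIR for a TrueCopy Synchronous pair):
C:\HORCM\etc>pairvolchk -g VG01 -ss
C:\HORCM\etc>echo %ERRORLEVEL%
23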
Pairevtwait -nowait:
Status Mnemonic   Return Value   Meaning
Smpl              1              Simplex (No Mirror)
Copy              2              Copy
Pair              3              Paired
Psus              4              Suspended
Psue              5              Suspended with Error
Pairevtwait :
0 Normal (Success)
232 Timeout waiting for specified status on the local host
233 Timeout waiting for specified status
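A hedged sketch of blocking a script until a group reaches PAIR status (the group name and timeout value are illustrative; verify the -t units against your CCI version), with the exit code checked afterwards:
C:\HORCM\etc>pairevtwait -g VG01 -s pair -t 360
C:\HORCM\etc>echo %ERRORLEVEL%
0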
5. Example of TrueCopy Synchronous Software for Thunder 9500 V Series System (Refer to Diagram)
Operations Commands
Display CCI version C:\HORCM\etc>raidqry -h
Model : RAID-Manager/WindowsNT
Ver&Rev: 01-11-03/00
Find Command Device
Note: HORCM must be shut down to run this command.
C:\HORCM\etc>raidscan -x findcmddev drive#(0,20)
cmddev of Ser# 462 = \\.\PhysicalDrive4
cmddev of Ser# 463 = \\.\PhysicalDrive6
Write cmd dev in horcm*.conf C:\HORCM\etc>notepad c:\winnt\horcm0.conf
C:\HORCM\etc>notepad c:\winnt\horcm1.conf
• Start horcm
• Set env variable for horcm instance 0
• Display TID and LUs for Thunder 9570V™ system
serial #65010462
• Alter horcm0.conf if required
• HORCM must be shut down and restarted for any changes to
horcm*.conf files to take effect.
C:\HORCM\etc>horcmstart 0 1
starting HORCM inst 0
HORCM inst 0 starts successfully.
starting HORCM inst 1
HORCM inst 1 starts successfully.
C:\HORCM\etc>set HORCMINST=0
C:\HORCM\etc>raidscan -p cl1-b -fx -s 462
PORT# /ALPA/C,TID#,LU#.Num(LDEV#....)...P/S, Status,Fence,LDEV#,P-Seq#,P-LDEV#
CL1-B / ef/ 5, 1, 24.1(18)............SMPL ---- ------ ----, ----- ----
CL1-B / ef/ 5, 1, 25.1(19)............SMPL ---- ------ ----, ----- ----
• Set env variable for horcm instance one (1)
• Display TID and LUs for Thunder 9570V system serial
#65010463
• Alter horcm1.conf if required
• HORCM must be shut down and restarted for any changes
to horcm*.conf files to take effect
C:\HORCM\etc>set HORCMINST=1
C:\HORCM\etc>raidscan -p cl1-b -fx -s 463
PORT# /ALPA/C,TID#,LU#.Num(LDEV#....)...P/S, Status,Fence,LDEV#,P-Seq#,P-LDEV#
CL1-B / ef / 5, 1, 21.1(15)............SMPL ---- ------ ----, ----- ----
CL1-B / ef / 5, 1, 22.1(16)............SMPL ---- ------ ----, ----- ----
• Set env variable for horcm instance 0
• Start initial copy of Volume group VG01
C:\HORCM\etc>set HORCMINST=0
C:\HORCM\etc>paircreate -g VG01 -vl -c 15 -f never
Display the copy status to verify COPY to PAIR status. C:\HORCM\etc>pairdisplay -g VG01 -fc
Group PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#.P/S,Status,Fence, % ,P-LDEV# M
VG01 work01(L) (CL1-B , 1, 24) 462 24..P-VOL PAIR NEVER , 100 21 -
VG01 work01(R) (CL1-B , 1, 21) 463 21..S-VOL PAIR NEVER , 100 24 -
VG01 work02(L) (CL1-B , 1, 25) 462 25..P-VOL PAIR NEVER , 100 22 -
VG01 work02(R) (CL1-B , 1, 22) 463 22..S-VOL PAIR NEVER , 100 25 -
Suspend Volume Group VG01 and verify that status went from
PAIR to PSUS.
C:\HORCM\etc>pairsplit -g VG01
C:\HORCM\etc>pairdisplay -g VG01 -fc
Group PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#.P/S,Status,Fence, % ,P-LDEV# M
VG01 work01(L) (CL1-B , 1, 24) 462 24..P-VOL PSUS NEVER , 100 21 -
VG01 work01(R) (CL1-B , 1, 21) 463 21..S-VOL SSUS NEVER , 100 24 -
VG01 work02(L) (CL1-B , 1, 25) 462 25..P-VOL PSUS NEVER , 100 22 -
VG01 work02(R) (CL1-B , 1, 22) 463 22..S-VOL SSUS NEVER , 100 25 -
Resync Volume group VG01 and verify that status went from
PSUS to PAIR. Make sure to use the –fc argument to display
the copy percentage; otherwise the status may display PAIR
before the copy has completed.
C:\HORCM\etc>pairresync -g VG01
C:\HORCM\etc>pairdisplay -g VG01 -fc
Group PairVol(L/R) (Port#,TID,LU),Seq#,LDEV#.P/S,Status,Fence, % ,P-LDEV# M
VG01 work01(L) (CL1-B , 1, 24) 462 24..P-VOL PAIR NEVER , 100 21 -
VG01 work01(R) (CL1-B , 1, 21) 463 21..S-VOL PAIR NEVER , 100 24 -
VG01 work02(L) (CL1-B , 1, 25) 462 25..P-VOL PAIR NEVER , 100 22 -
VG01 work02(R) (CL1-B , 1, 22) 463 22..S-VOL PAIR NEVER , 100 25 -
Delete the pairs and verify status went from PAIR to SIMPLEX. C:\HORCM\etc>pairsplit -g VG01 -S
C:\HORCM\etc>pairdisplay -g VG01 -fc
Group PairVol(L/R) (Port#,TID,LU), Seq#,LDEV#.P/S,Status,Fence, % ,P-LDEV# M
VG01 work01(L) (CL1-B , 1, 24) 462 24..SMPL ---- ------,----- ---- -
VG01 work01(R) (CL1-B , 1, 21) 463 21..SMPL ---- ------,----- ---- -
VG01 work02(L) (CL1-B , 1, 25) 462 25..SMPL ---- ------,----- ---- -
VG01 work02(R) (CL1-B , 1, 22) 463 22..SMPL ---- ------,----- ---- -
Shut down horcm C:\HORCM\etc>horcmshutdown 0 1
inst 0:
HORCM Shutdown inst 0 !!!
inst 1:
HORCM Shutdown inst 1 !!!
HORCM_MON
#ip_address service poll(10ms) timeout(10ms)
10.15.11.194 horcm0 12000 3000
HORCM_CMD
#dev_name
\\.\PHYSICALDRIVE4 #0462
HORCM_DEV
#dev_group dev_name port# TargetID LU# MU#
VG01 test01 CL1-B 1 5 0
VG01 work01 CL1-B 1 24 0
VG01 work02 CL1-B 1 25 0
HORCM_INST
#dev_group ip_address service
VG01 10.15.11.194 horcm1
C:\winnt\horcm0.conf
HORCM_MON
#ip_address service poll(10ms) timeout(10ms)
10.15.11.194 horcm1 12000 3000
HORCM_CMD
#dev_name
\\.\PHYSICALDRIVE6 #0463
HORCM_DEV
#dev_group dev_name port# TargetID LU# MU#
VG01 test01 CL1-B 1 3 0
VG01 work01 CL1-B 1 21 0
VG01 work02 CL1-B 1 22 0
HORCM_INST
#dev_group ip_address service
VG01 10.15.11.194 horcm0
C:\winnt\horcm1.conf
[Diagram: Example of 9500V TrueCopy over a Fibre Switch. A W2K
server runs HORCMINST0 and HORCMINST1, attached through fibre
ports to two Thunder 9500V systems. 9500V #65010462 (Product ID
DF600F) holds the P-VOLs for VG01 work01/work02 plus a command
device; 9500V #65010463 (Product ID DF500F) holds the matching
S-VOLs plus a command device. Each system exposes ports 0-A, 0-B,
1-A and 1-B.]