This technical paper provides the best practices for implementing the IBM Storwize V7000 Unified system NDMP backup solution using EMC NetWorker. To know more about the IBM Storwize V7000, visit http://ibm.co/TaLb6Q.
The Symantec NetBackup platform is a complete backup and recovery solution optimized for virtually any workload: physical, virtual, array-based, or big data infrastructures. NetBackup delivers flexible target storage options, including tape, third-party disk, cloud, and appliance storage devices such as the NetBackup Deduplication Appliances and Integrated Backup Appliances.
NetBackup 7.6 delivers the performance, automation, and manageability necessary to protect virtualized deployments at scale, where thousands of virtual machines and petabytes of data are the norm today, and where software-defined data centers and IT-as-a-service become the norm tomorrow. Enterprises trust Symantec.
Building High Availability Clusters with SUSE Linux Enterprise High Availabil... (Novell)
SUSE Linux Enterprise Server High Availability Extension provides a range of modules that can be assembled in multiple ways to build high availability clusters to host your critical business services. This session will examine some of the most common solutions and discuss best practices for setting up a new cluster.
In a second step, this session will discuss in more detail how to prepare a cluster with the SUSE Linux Enterprise High Availability Extension to make an SAP application highly available, as certified by Novell and SAP. This scenario is of interest not only to companies looking to make their SAP environment highly available, but also to those that want to migrate from Unix to the SUSE Linux Enterprise platform.
Technical Brief: NetBackup Appliance AutoSupport for NetBackup 5330 (Symantec)
Symantec AutoSupport is a set of infrastructure, processes, and systems that enhances the support experience through proactive monitoring of Symantec appliance hardware and software, together with automated error reporting and support case creation.
Through automation, internet access, and case management integration, Symantec can vastly improve the support process and give its support engineers the tools to solve problems faster. The AutoSupport infrastructure within Symantec analyzes the Call Home data from each appliance to provide proactive customer support and incident response for hardware failures, thus reducing the need for an administrator to initiate support cases. It also enables Symantec to better understand how customers configure and use appliances, and where improvements would be most beneficial.
AutoSupport can also correlate the Call Home data with other site configuration data held by Symantec for technical support and error analysis. With AutoSupport, Symantec greatly improves the customer support experience.
This document provides best practices for implementing and operating Oracle Real Application Clusters (RAC) with Oracle 10g. It covers planning best practices such as understanding the architecture, setting expectations, defining objectives, and project planning. Implementation best practices include installation, configuration, database creation, and application considerations. Operational best practices address backup/recovery, performance monitoring, and production migrations.
Control groups (cgroups) allow administrators to allocate CPU, memory, storage, and other system resources to groups of processes running on the system. The document describes testing done using cgroups on a Red Hat Enterprise Linux 6 system with four Oracle database instances running an OLTP workload. It demonstrates how cgroups can be used for application consolidation, performance optimization, dynamic resource management, and application isolation.
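The resource partitioning described above can be sketched with the cgroups v1 interface and the libcgroup tools that ship with Red Hat Enterprise Linux 6. This is an illustrative sketch only: the group name db1, the limit values, and the DB_PID variable are assumptions, not the document's actual test configuration.

```shell
# Illustrative cgroups v1 sketch (requires root). "db1", the limits,
# and $DB_PID are made-up placeholders for one database instance.
cgcreate -g cpu,memory:/db1                 # create a control group
cgset -r cpu.shares=2048 db1                # relative CPU weight
cgset -r memory.limit_in_bytes=4G db1       # 4 GiB hard memory cap
cgclassify -g cpu,memory:db1 $DB_PID        # move the instance's PID in
```

Because cpu.shares is a relative weight, giving each of the four instances its own group lets the administrator rebalance CPU between them at runtime without restarting anything.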
This document provides instructions for an exercise to familiarize users with cluster administration basics in Data ONTAP. The objectives are to connect to the command shell, explore the command hierarchy, manage privileges and licenses, and install and configure OnCommand System Manager. The tasks include connecting to the cluster shell, exploring commands and options, comparing privilege levels, using tab completion, installing and configuring OnCommand System Manager, and managing feature licenses.
An Introduction to the Design of Warehouse-Scale Computers (Alessio Villardita)
A brief overview of the main factors involved in the design of Warehouse-Scale Computers (WSCs), from the hardware to the cooling system to overall plant energy efficiency, always keeping in mind the costs of such a large architecture.
Co-Author: Pietro Piscione (https://www.linkedin.com/pub/pietro-piscione/84/b37/926)
A work based on "The Datacenter as a Computer: An Introduction to the Design of Warehouse-Scale Machines, Second Edition" by Luiz André Barroso, Jimmy Clidaras, and Urs Hölzle.
Trivadis TechEvent 2017: ACFS Replication as of 12.2, by Mathias Zarick (Trivadis)
Replication for the ASM Cluster File System (ACFS) was introduced as early as version 11.2.0.2. Oracle Database 12.2 brings a fundamental change in the architecture of ACFS replication. This talk sheds some light on that change, explains setup and operational aspects, and discusses possible use cases.
Linux Disaster Recovery Best Practices with rear (Gratien D'haese)
The document discusses Linux disaster recovery best practices using the Relax and Recover (rear) tool. It recommends deciding on a disaster recovery strategy, including which backup mechanism and location to use. It provides details on using the NETFS backup type with rear to back up to network locations like NFS shares. It also discusses configuring rear by editing the /etc/rear/local.conf file to specify settings like the backup location, program, and options.
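A minimal /etc/rear/local.conf for the NETFS method described above might look like the following; the NFS server name and export path are placeholders, not values from the document.

```shell
# /etc/rear/local.conf -- minimal NETFS sketch. nfs.example.com and the
# export path are placeholders; adjust them for your environment.
OUTPUT=ISO                                      # rescue image format
BACKUP=NETFS                                    # built-in file-level backup method
BACKUP_URL=nfs://nfs.example.com/export/rear    # where the archive is written
BACKUP_PROG=tar                                 # archiver used for the backup
NETFS_KEEP_OLD_BACKUP_COPY=yes                  # keep the previous backup on the share
```

With this in place, `rear mkbackup` produces both the bootable rescue ISO and the tar archive on the NFS share.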
This document discusses Checkpoint/Restore In Userspace (CRIU), a tool for live migration of processes and containers. CRIU works by dumping the memory, file descriptors, and other process state of a running process, then restoring it elsewhere. This allows live migration of processes between systems for purposes like load balancing, maintenance, and high performance computing. The document provides details on how CRIU works, its use cases, limitations, and how to install and use it on Red Hat Enterprise Linux 7.
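The dump-and-restore cycle can be sketched with the criu command line; the PID and image directory below are placeholders, and real invocations may need additional options depending on what resources the process holds open.

```shell
# Checkpoint a running process tree into an image directory (as root).
# 1234 and /tmp/ckpt are placeholders, not values from the document.
criu dump --tree 1234 --images-dir /tmp/ckpt --shell-job

# Later, or on another host after copying /tmp/ckpt over:
criu restore --images-dir /tmp/ckpt --shell-job
```

The --shell-job flag is needed when the target was started from a terminal session; processes with open TCP connections, shared memory, or device files typically require further flags or are not dumpable at all.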
The document provides a summary of the hardware, licenses, and features of a Data Domain system. It includes:
- Hardware information such as memory, disks, network cards, and enclosure details.
- License keys for shelf capacities in the active and archive tiers, as well as feature licenses for encryption, expanded storage, and secure multi-tenancy.
- Descriptions of the different licenses and what features they enable, such as encryption of the filesystem or sharing the system among multiple tenants.
This document discusses using Btrfs and Snapper to enable full system rollbacks in Linux. It describes how snapshots are automatically created to capture the state of the system before changes. Using Snapper, administrators can roll back the entire system to a previous snapshot to undo changes or revert to a known good state. The document provides examples of rolling back packages, kernels, and system configuration changes while ensuring system integrity and compliance.
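On a Btrfs root filesystem with Snapper configured, the workflow described above can be sketched as follows; the snapshot numbers and description strings are illustrative assumptions.

```shell
# Record the state before a risky change (numbers below are examples).
snapper create --type pre --description "before package update"

# ...perform the update, then close the pre/post pair:
snapper create --type post --pre-number 42

snapper list                 # show existing snapshots
snapper status 42..43        # files that changed between the two snapshots
snapper undochange 42..43    # revert those changes on the live filesystem
```

The pre/post pairing is what lets Snapper show exactly which files a given operation touched, rather than diffing the whole filesystem against an arbitrary point in time.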
SUSE Expert Days Paris 2018: SUSE HA Cluster Multi-Device (SUSE)
This document summarizes a presentation about SUSE Linux Enterprise High Availability Cluster Multi-Device. It discusses the main features of SUSE HA including policy driven clusters, cluster aware filesystems, and continuous data replication. It then describes the HA storage stack architecture and various options for doing HA storage including DRBD, clustered LVM2, and Cluster-MD. Cluster-MD is presented as a software-based RAID storage that provides redundancy at the device level across multiple nodes. Performance comparisons show Cluster-MD outperforming clustered LVM mirroring. Extensions to Cluster-MD are discussed including expanding the size of a Cluster-MD device.
This document discusses high availability for HDFS and provides details on NameNode HA design. It begins with an overview of HDFS availability and reliability. It then discusses the initial goals for NameNode HA, which were to support an active and standby NameNode configuration with manual or automatic failover. The document also outlines some high-level use cases and provides a high-level overview of the NameNode HA design.
This presentation introduces the components of the SUSE Linux Enterprise High Availability Extension product used to build highly available storage (ha-lvm/drbd/iscsi/nfs, clvm, ocfs2, cluster-raid1).
This document provides an overview of a distributed systems course taught in French. It includes the following key points:
- The course objectives are to understand challenges in distributed systems, implement distributed systems, discover distributed algorithms, study examples of distributed systems, and explore distributed systems research.
- The course consists of 8 sessions over 4 hours each that include lectures, tutorials, labs, presentations, and an exam.
- Distributed systems are defined as independent computers that appear as a single coherent system to users. Key characteristics include concurrency, lack of global state, potential node and message failures, unsynchronized clocks, and heterogeneity.
How to Mount a 3PAR SAN Virtual Copy onto RHEL Servers, by Dusan Baljevic (Circling Cycle)
The document describes how to mount a 3PAR virtual copy volume onto a RHEL server. It involves creating host definitions and exporting volumes from 3PAR to the server. The volumes are then mapped, formatted, and mounted. Finally, a virtual copy is created on 3PAR and exported to the server, where it is detected as a new volume.
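On the RHEL side, detecting and mounting a newly exported volume generally follows the pattern below. Device names and the mount point are placeholders, and the 3PAR-side host-definition and export steps are omitted.

```shell
# Rescan the SCSI bus so a newly exported LUN appears (as root).
# host0 is a placeholder; repeat for each relevant HBA host.
echo "- - -" > /sys/class/scsi_host/host0/scan
multipath -ll                        # confirm the multipath device, e.g. mpathb

# Put a filesystem on it and mount it (placeholders throughout).
mkfs.ext4 /dev/mapper/mpathb
mkdir -p /mnt/3par_vcopy
mount /dev/mapper/mpathb /mnt/3par_vcopy
```

When the virtual copy is later exported, the same rescan detects it as a new device; since it carries a copy of an existing filesystem, it must be mounted under a different mount point.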
This document provides a summary of Sitaram Chalasani's work experience and qualifications. He has over 8 years of experience administering servers running operating systems like Red Hat Linux, AIX, Solaris, and Unix. Some of his responsibilities have included implementing NTP servers, Red Hat clustering, backup solutions, and coordinating with vendors. He has a bachelor's degree from Nagarjuna University and certifications in Red Hat engineering and Microsoft server administration. He is proficient in technologies like Linux, AIX, Solaris, clustering, virtualization, storage solutions, backup tools, databases, networking tools, and monitoring tools.
This document discusses NameNode high availability (HA) in Hadoop Distributed File System (HDFS). It provides an overview of the current HDFS architecture, goals of NN HA, design approaches considered including active-standby with automatic failover, key use cases, design details around failover control, client failover, shared storage, fencing, and operations/administration. It also outlines future work such as alternative methods for sharing metadata and improving client failover.
FreeNAS 9.1.0 is an update to the open source Network Attached Storage system. Key updates include support for the latest ZFS features like LZ4 compression, an improved volume manager, and easier installation of plugins and software using a new AppCafe browser. The update also features alerts that can be dismissed, enhanced shell functionality, and other administrative improvements. FreeNAS is based on FreeBSD and provides enterprise-grade file sharing and storage capabilities for home or business use.
IBM Spectrum Scale Fundamentals Workshop for Americas, Part 2: IBM Spectrum Sca... (xKinAnx)
This document discusses quorum nodes in Spectrum Scale clusters and recovery from failures. It describes how quorum nodes determine the active cluster and prevent partitioning. The document outlines best practices for quorum nodes and provides steps to recover from loss of a quorum node majority or failure of the primary and secondary configuration servers.
This session will use Novell Open Enterprise Server 2 SP2 to demonstrate how to cluster critical services—from NSS and Novell iPrint to Novell GroupWise, AFP and beyond. We'll cover the new features of Novell Cluster Services in the latest release of Novell Open Enterprise Server, and we'll show you how you can ensure consistency by using AutoYaST to build your nodes. This will be a practical session, so be prepared for a few thrills and spills along the way!
Speakers:
Tim Heywood, CTO, NDS 8
Mark Robinson, CTO Linux, NDS8
How Nyherji Manages High Availability TSM Environments Using FlashCopy Manager (IBM Danmark)
This document discusses Nyherji's use of IBM Tivoli Storage Manager (TSM) and FlashCopy Manager (FCM) to create high availability backup environments. Some key points:
- Nyherji manages around 50 TSM servers backing up 5-5,000 TB of data across various operating systems and hardware. They have transitioned to using deduplication and FCM where possible.
- Their goals are to have recovery time objectives (RTO) of less than 1 hour for important data and less than 6 hours for TSM servers. Solutions need to be cost effective.
- For VMware backups, they use FCM to take daily incremental and weekly full backups, achieving much faster
Credit Suisse Research: Global Investment Returns Yearbook 2014 (Sergiy Kurbatov)
This document summarizes a research report on long-term investment returns in emerging markets from 1900 to present. It constructs the first emerging markets index spanning this entire period to analyze historical performance from a global investor perspective. Key findings include:
- Emerging market equities experienced exceptional returns from 2000-2010 but have recently underperformed and faced setbacks.
- Volatility is shown to diminish as countries develop economically.
- International correlations and style returns within emerging markets are examined.
- Trading strategies for long-term investors in emerging markets are explored.
Morgan Stanley had a very successful 2004 fiscal year, with net revenues increasing 14% and earnings per share growing 18%. However, the firm's stock price did not increase and it did not achieve a higher return on equity than its competitors. The letter discusses Morgan Stanley's strategic focus on growth areas like payments, financial advice, asset management and capital markets. It emphasizes the firm's commitment to putting clients and employees first to generate superior long-term returns for shareholders.
Risk Appetite: A New Menu under Basel 3? Pieter Klaassen (UBS, Firm-wide Risk Control & Methodology), presented at the Zanders Risicomanagement Seminar, 1 November 2012.
A distinguished lecture from Martyn Begbour, Executive Director and Wealth Management Head of South and South East UK at UBS, given exclusively for the University of Southampton Investment and Finance Society.
Morgan Stanley: Barclays Financial Services Conference (investorrelation)
Morgan Stanley's Co-President James Gorman and CFO Colm Kelleher presented at the Barclays Financial Services Conference. They discussed Morgan Stanley's strategic priorities which include optimizing Institutional Securities, successfully integrating the Morgan Stanley Smith Barney joint venture, restructuring Asset Management, and developing a strategic alliance with MUFG. They provided an update on the integration of Morgan Stanley Smith Barney, noting that cost synergies exceeded $1.1 billion and revenue synergies were $275 million. Gorman stated that Morgan Stanley Smith Barney is positioned to achieve industry-leading margins of over 20% by 2011.
Handbook of Credit Derivatives and Structured Credit Strategies, Morgan Stanl... (quantfinance)
JP Morgan Remote to Core Implementation
1. December 2006
STRICTLY PRIVATE AND CONFIDENTIAL
Remote to Core – Part 2 - Implementation
This deck is for Remote to Core presentation purposes and process understanding. It is not intended to replace the detailed installation, deployment, or standards documentation maintained by the groups primarily responsible for the tasks described.
Author: J. Napier
3. Solution: NetBackup SnapVault Management (NSVM)
Symantec NetBackup uses Network Appliance SnapVault functionality to perform backups of NFS-mounted and/or CIFS-mapped file systems residing on NetApp filers. The NetBackup job scheduler initiates a NetApp SnapVault backup; a NetApp Snapshot copy is created on the primary NetApp storage system and is then replicated to a NetApp secondary storage system.
Because a Snapshot copy is created quickly and only changed blocks are transferred via NetApp Qtree SnapMirror replication, fast backups and restores can be accomplished easily between local and remote locations.
Backups in the form of Snapshot copies on the primary (Remote) and secondary (Core) NetApp storage systems are managed (creation, retention, deletion) by NetBackup.
All backup data is stored in file format, including all incremental backups, and each incremental backup can be viewed as a full backup image by any administrator or end user with the associated access-control and security rights. Data to be restored can be located quickly, with a full view of what the environment looked like at the time the backup was created, as far back as the data retention permits.
The "file system" view of backups can also be accessed by end users, empowering them to perform self-restores if the administrator permits.
The standard Remote to Core solution provides 30 days of local backup protection and 60 days of remote backup retention, with all backups on disk and online.
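The changed-block transfer described above is what makes backing up remote offices to the Core over the WAN practical. A purely illustrative sketch of the bandwidth difference; the dataset size and 1% daily change rate are assumed figures, not numbers from this deck:

```python
# Illustrative arithmetic only: compares a nightly full backup with a
# SnapVault-style changed-block transfer. Dataset size and change rate
# below are assumptions for the example.

def nightly_transfer_gb(dataset_gb, daily_change_rate, incremental=True):
    """GB sent over the WAN per night."""
    if not incremental:
        return dataset_gb                      # full image every night
    return dataset_gb * daily_change_rate      # changed blocks only

dataset_gb = 500          # assumed remote-office file share size
change_rate = 0.01        # assumed: 1% of data changes per day

full = nightly_transfer_gb(dataset_gb, change_rate, incremental=False)
incr = nightly_transfer_gb(dataset_gb, change_rate)
print(f"full: {full} GB/night, incremental: {incr} GB/night")
```

Even at modest change rates, the incremental transfer is two orders of magnitude smaller, which is why replication over high-latency links remains feasible.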
5. Current State Diagram – Remote Site
Standard Medium Office Deployment
Clustered Novell/NetWare servers or clustered Windows 2000/2003 servers
One pair each for Print, File, and Build/App
EMC/FAStT SAN storage
Local tape backup (NBU or other)
AD
6. Target State Diagram
[Diagram: at the Remote site, the filer initiates a Snapshot copy, which serves as the remote-side backup (30-day retention), and begins SnapVault replication to the Core, where the daily backup image resides (60-day retention). The legacy backup infrastructure is not in use after client migration; AD remains in place.]
7. High-Level Process – Bringing Filer Online
Network configuration – established in the System Configuration sheet
Interfaces to the user network:
Preferred: EtherChannel GbE or 100 Mb (multi-mode VIF, active-active trunk)
Minimum: multi-homed GbE or 100 Mb (single-mode VIF, active-passive)
Interfaces to the WAN (SnapVault replication) – utilize the same interfaces
Volume and share creation – established through the F&P Share Discovery and Backup Policy
Includes NFS mount points for UNIX-based NBU instances
Disable standard Data ONTAP scheduled Snapshots on non-root volumes
Limit qtrees to 6 per volume (to stay within the QSM limit of 255 Snapshot copies per volume)
Join the filer to the AD domain
Map source/target shares for the Robocopy process (if the filer is not initially used as primary storage)
Applications running on F&P shares should be identified in advance; applications typically generate a higher data change rate and need to be tracked accordingly.
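The six-qtree guideline above comes from the per-volume Snapshot copy ceiling. A rough budget check; the assumption that each qtree relationship carries its own retained chain plus a couple of transient copies is a simplification for illustration, not the exact Data ONTAP accounting:

```python
# Rough Snapshot-budget check behind the "6 qtrees per volume" guideline.
# Data ONTAP caps a volume at 255 Snapshot copies; modeling each qtree
# relationship as holding its retained chain plus transient/baseline
# copies is a simplifying assumption.

VOLUME_SNAPSHOT_LIMIT = 255

def snapshots_needed(qtrees, retained_per_qtree, transient_per_qtree=2):
    """Worst-case Snapshot copies a volume might carry."""
    return qtrees * (retained_per_qtree + transient_per_qtree)

def fits(qtrees, retained_per_qtree):
    return snapshots_needed(qtrees, retained_per_qtree) <= VOLUME_SNAPSHOT_LIMIT

print(fits(6, 30))   # guideline configuration: 192 <= 255
print(fits(8, 30))   # 256 > 255 -- over the limit
```

Under this model, six qtrees with 30-day retention stay comfortably inside the limit, while eight would exceed it.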
8. NetBackup SnapVault Mgmt (NSVM) Requirements
NetApp hardware: any platform combination that SnapVault supports
Data ONTAP:
  Primary: Data ONTAP 6.5.2 or later
  Secondary: Data ONTAP 7.1 or later
NetApp software: Snapshot, SnapRestore[1]
  Primary: sv_ontap_pri
  Secondary: sv_ontap_sec
NetBackup: NetBackup Enterprise 6.0 or later
  NDMP Option
  Advanced Client Option
  NetApp SnapVault Option
Protocols: NFS, CIFS
Client: Solaris or Windows, NetBackup 6.0 or later
Media Server: N/A (but the Master must be NetBackup 6.0 or later)
Applications: file sharing; Oracle (8i or later) on Solaris
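The version floors above lend themselves to a quick automated check. A minimal sketch; the component keys and helper names are illustrative, not part of any vendor tool:

```python
# Minimal compatibility check for the NSVM requirements listed above.
# Version floors come from the slide; comparison is a simple
# dotted-numeric tuple compare.

def parse(v):
    return tuple(int(p) for p in v.split("."))

REQUIREMENTS = {
    "ontap_primary":   "6.5.2",   # Primary: Data ONTAP 6.5.2 or later
    "ontap_secondary": "7.1",     # Secondary: Data ONTAP 7.1 or later
    "netbackup":       "6.0",     # NetBackup Enterprise 6.0 or later
}

def meets_requirements(versions):
    """versions: dict of component -> installed version string."""
    return all(parse(versions[k]) >= parse(floor)
               for k, floor in REQUIREMENTS.items())

print(meets_requirements({"ontap_primary": "6.5.2",
                          "ontap_secondary": "7.1",
                          "netbackup": "6.0"}))
```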
9. High-Level NSVM Setup
NetBackup:
NBU 6 Enterprise Server installed on Master & Media Servers
NDMP Option licensed and installed within NBU
Veritas ICS installed
NBU Advanced Client licensed and installed
NetApp/Data ONTAP:
SnapVault Primary and SnapVault Secondary licenses added
NDMP enabled on the filers
SnapVault enabled
Local Snapshot schedules disabled
10. High-Level Process – NSVM Setup (continued)
Set up NDMP credentials for the NDMP host
Verify the information
Grant access on the Primary
Grant access on the Secondary
Create the SnapVault target volume on the SnapVault Secondary (filer)
11. High-Level Process – NSVM Setup (continued)
All NSVM SnapVault backups and restores are managed via NetBackup.
NSVM manages the Snapshot copies (retention/copies) on both the Primary and the Secondary.
The Storage Unit type is set to Disk and the Disk type is set to SnapVault.
The appropriate Media Server is selected.
The SnapVault server should be specified as the name of the NetApp Secondary system.
Specify the absolute pathname of the volume that was created on the Secondary.
12. High-Level Process – Set Up NBU Policy Attributes
Policy Type is WindowsNT
Policy Storage Unit is the storage unit set up in the earlier step
Check off
Set the number of Snapshots to keep on the primary to 30
Schedules:
Name Schedule – Type (FULL) – Frequency
Destination (Snapshots Only)
Retention (manages retention of copies on the Secondary) – 60 days
Clients
Backup Selections
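The policy attributes above can be sketched as a plain structure with a sanity check that the 30-day primary / 60-day secondary standard holds. The field and storage-unit names here are illustrative, not NetBackup's internal attribute names:

```python
# Sketch of the Remote-to-Core NSVM policy attributes as a dict.
# Field names are illustrative; values mirror the slide.

def make_nsvm_policy(name, clients, selections):
    return {
        "name": name,
        "policy_type": "WindowsNT",
        "storage_unit": "stu_snapvault_core",   # assumed storage unit name
        "schedule": {"type": "FULL", "destination": "snapshots_only"},
        "primary_snapshots_retained": 30,       # 30 days on the Remote filer
        "secondary_retention_days": 60,         # 60 days on the Core filer
        "clients": clients,
        "backup_selections": selections,
    }

def retention_ok(policy):
    # secondary (Core) retention must cover at least the primary window
    return policy["secondary_retention_days"] >= policy["primary_snapshots_retained"]

p = make_nsvm_policy("nsvm_site01_vol1", ["filer-site01"], ["/vol/vol1/q1"])
print(retention_ok(p))   # True
```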
13. High-Level Process – Notes
Backup policy changes: backup policies should be aligned to remote volumes and incorporate all qtree paths within the volume that share a similar retention policy. Any policy requiring longer than 60-day retention should have an alternate Secondary (target) vault volume defined.
Currently, the first SnapVault relationship between a new Remote site and the Core should be performed manually by NAS Admin in a create-and-break process. In our environment, we have seen NBU fail to create SnapVault relationships for first-time Remote sites.
NBU configurations must have the Media Server and Advanced Client components running on the same platform: either both on Windows or both on Solaris. If UNIX is the Media Server/Advanced Client platform, then all shares must have NFS exports added.
30-Day Adjustment: remote volumes should initially be configured with extra capacity (estimate 20%), as the data change rate and Snapshot consumption for the first 30 days of backups will not be known. After 30 days of Snapshots/backups have accumulated, the volumes can be readjusted to normal free-space levels. FlexVols allow dynamic growth or size reduction.
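The 30-Day Adjustment above is simple arithmetic and can be sketched as a sizing helper. The 10% post-adjustment free-space target is an assumed figure; only the 20% initial headroom comes from the slide:

```python
# Volume sizing per the 30-Day Adjustment note: provision ~20% extra
# until a month of Snapshot/change-rate history exists, then resize the
# FlexVol toward normal free-space levels. The 10% free_target below is
# an assumption for illustration.

def initial_volume_gb(used_gb, headroom=0.20):
    """Capacity to provision before the change rate is known."""
    return used_gb * (1 + headroom)

def readjusted_volume_gb(used_gb, observed_snap_gb, free_target=0.10):
    """Capacity after 30 days, once Snapshot consumption is measured."""
    return (used_gb + observed_snap_gb) * (1 + free_target)

print(initial_volume_gb(100))          # ~120 GB provisioned up front
print(readjusted_volume_gb(100, 8))    # ~119 GB once 8 GB snap usage is known
```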
15. Reference - FAS270C Wiring Diagram (M. Bachert)
[Wiring diagram: the FAS270C's two controller modules (Module A and Module B) each expose Fibre Channel 1/2 (1 Gb/2 Gb), 10/100/1000 Ethernet, and console ports; Ethernet and console connections run to the network switches and terminal server, and PSU1/PSU2 connect to redundant power feeds.]
Hardware Dimensions (FAS270)
  Height: 5.25 in. (13.3 cm)
  Width: 17.6 in. (44.7 cm)
  Depth: 20 in. (50.85 cm)
  Weight: 77 lbs. loaded (35 kg)
  Rack Units*: 3U (rack mount)

Power (FAS270)
  Amps @ 100-120V: 7 rated / 3.95 actual
  Amps @ 200-240V: 3.5 rated / 1.9 actual
  P/S voltage range: 100 – 240 VAC