In a globally dispersed enterprise with a private cloud environment, where unstructured data is growing exponentially, there is a need to provide 24x7 access to business-critical data and to be able to restore it in case of data loss. IBM Scale Out Network Attached Storage (IBM SONAS), with its integrated IBM Tivoli Storage Manager client, enables enterprises to back up and restore data seamlessly, and the IBM Active Cloud Engine offers the capability to replicate data to remote sites.
This document discusses QNAP's TVS-ECx80+ Edge Cloud Turbo vNAS series:
1. It provides centralized management of multiple NAS units through Q'center CMS, allowing monitoring of system status, firmware updates, and logs from one interface.
2. File Station allows easy file management, sharing, and media playback from any browser, along with features like quick search, thumbnails, and a recycle bin.
3. The Qfile mobile app provides access to NAS files on the go for browsing, sharing, and streaming multimedia.
This document provides an overview of virtualization. It defines virtualization as separating a resource or request for a service from the underlying physical delivery of that service. Virtualization allows for more efficient utilization of IT infrastructure by running multiple virtual machines on a single physical server. There are two main approaches to virtualization - hosted architectures which run on top of an operating system, and hypervisor architectures which install directly on hardware for better performance and scalability. Virtualization can provide benefits like server consolidation, test environment optimization, and business continuity.
The document provides an overview of Oracle Database Backup Service (ODBS), which enables customers to securely store database backups in Oracle's cloud storage. It describes how the Oracle Database Cloud Backup Module (ODCBM) installs on the database server and uses familiar RMAN commands to transparently backup databases to ODBS and restore from ODBS. The document also outlines the steps to set up ODBS, including purchasing storage, installing ODCBM, configuring RMAN and encryption settings, performing backups, and restoring from backups.
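The setup steps above can be sketched as an RMAN fragment. Everything below is illustrative: the library path, configuration file name, and passphrase are placeholder values (not taken from the document), and the exact channel parameters depend on how the ODCBM installer lays out the module on the database server.

```
# Point an RMAN channel at the cloud backup module's SBT library
# (paths are placeholders, not values from the document)
CONFIGURE CHANNEL DEVICE TYPE SBT
  PARMS 'SBT_LIBRARY=/home/oracle/lib/libopc.so,
         ENV=(OPC_PFILE=/home/oracle/config/opcORCL.ora)';

# Backups to the service must be encrypted before leaving the server
SET ENCRYPTION ON IDENTIFIED BY "example_passphrase" ONLY;

# The familiar RMAN commands then back up to, and restore from, the service
BACKUP DEVICE TYPE SBT DATABASE PLUS ARCHIVELOG;
RESTORE DATABASE;
```

The point of the module is visible here: once the SBT channel is configured, the backup and restore commands are the same ones a DBA would use against tape.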
Server virtualization has forever changed the way we think about compute resources. Traditional storage architecture is a mismatch for today's virtualized environments. Gridstore's unique and patented architecture solves this problem and increases performance while decreasing costs. Learn how.
The document provides instructions and guidelines for installing and managing Citrix XenServer Dell Edition. It includes sections on installing and configuring XenServer, using XenCenter management software, configuring storage options like local disks and Dell storage arrays, backup and recovery procedures, best practices, and troubleshooting. The document aims to help users optimize the virtualization platform on Dell servers and storage.
The document provides a reference architecture for deploying 3000 virtual desktops using Nutanix Complete Clusters with VMware vSphere 5 and View 5. It details the setup of a 50 node Nutanix cluster, including compute, storage, and networking configurations. Performance tests were run using the VMware RAWC tool to simulate a morning login boot storm and daily user workload across the 3000 virtual desktops. Test results showed the infrastructure could handle the workload with predictable performance even at scale.
Data Protection Strategies for Virtualization (webhostingguy, www.doubletake.com)
This document discusses data protection strategies for virtualization using Double-Take software. It provides an overview of Double-Take and how it enables real-time replication for virtual machines. It outlines the business benefits of virtualization such as reduced costs, improved resource utilization, and streamlined disaster recovery. It also describes how Double-Take can be used to replicate virtual machines for high availability and disaster recovery purposes.
Virtualization allows multiple operating systems and applications to run on the same server simultaneously, improving hardware utilization. It reduces IT costs while increasing efficiency and flexibility. Virtualization provides hardware independence so operating systems and applications can run on any system, and virtual machines can be easily provisioned and managed.
This document discusses virtualization and cloud computing. It defines virtualization as creating an illusion of computer hardware or resources. Cloud computing is defined as on-demand network access to configurable computing resources. The traditional server concept of dedicating physical servers is described as well as the benefits of the virtual server concept, including scalability, fault tolerance, and efficiency. Virtualization techniques like full virtualization, paravirtualization, and hardware-assisted virtualization are covered. Hypervisors are defined as the software that creates virtual machines, and the differences between type 1 and type 2 hypervisors are explained. KVM is provided as an example of a type 2 hypervisor for Linux.
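The hardware-assisted variant mentioned above depends on CPU extensions that operating systems expose as feature flags: on Linux, `vmx` indicates Intel VT-x and `svm` indicates AMD-V in `/proc/cpuinfo`. A minimal sketch of checking for them, using a made-up cpuinfo excerpt rather than real output:

```python
def hw_virt_support(cpuinfo_text):
    """Return which hardware virtualization extension the CPU flags advertise."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            if "vmx" in flags:
                return "Intel VT-x"   # Intel's extension
            if "svm" in flags:
                return "AMD-V"        # AMD's extension
    return None                       # no hardware assist advertised

# Illustrative excerpt, not real /proc/cpuinfo output:
sample = "processor : 0\nflags : fpu vmx sse2\n"
print(hw_virt_support(sample))  # Intel VT-x
```

On a Linux host the function could be fed `open("/proc/cpuinfo").read()` instead of the sample string.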
The document discusses blade servers and provides information about their history, features, types, advantages, applications and future scope. Some key points:
1. Blade servers are compact, modular servers that fit into blade enclosures to minimize space and resource usage while maintaining functionality.
2. They emerged in the 1970s but were commercialized in 2001. Growth is driven by cost savings from shared infrastructure.
3. Features include lower hardware costs, simplified deployment/maintenance, maximized data center space, and reduced power consumption.
4. Types include blades for switching, routing, storage, and fiber access that can slot into enclosures to provide shared services.
This document provides an architectural overview of cloud computing and describes how a payroll processing application could be migrated to the cloud. It discusses the key attributes and layers of cloud computing including infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). It then describes how the existing payroll application, which is deployed on-premises at many government locations, could be re-architected as a multi-tenant SaaS application in the cloud to reduce costs and maintenance burdens.
Fulcrum Group Storage and Storage Virtualization Presentation (Steve Meek)
The document discusses storage solutions and SANs. Exponential data growth is expected to continue challenging data protection efforts. Different storage types fit different business needs. By understanding storage design and an organization's needs, storage virtualization may be a good fit. SANs can help with general server needs, virtualization, and disaster recovery/backup needs. Planning is key to deploying storage in a centralized way.
Carlos Mayol is a Premier Field Engineer at Microsoft focused on Azure infrastructure services including Azure Site Recovery. Azure Site Recovery allows for replication, failover, and recovery of workloads between on-premises and Azure. It supports various scenarios including replication between on-premises Hyper-V and VMWare sites, as well as migration from on-premises to Azure. The presentation provides an overview of Azure Site Recovery capabilities and scenarios.
ShadowProtect Server and ShadowProtect Small Business Server provide fast and reliable disaster recovery, data protection, and system migration for Windows servers. They maximize business continuity by minimizing recovery time. The software backs up operating systems, applications, configurations, and data. It allows rapid recovery to the same or different hardware, or to virtual environments. Recovery can be of entire servers or individual files and folders.
The document discusses the Distributed Management Task Force (DMTF) Open Virtualization Format (OVF) standard.
OVF is an open standard for packaging and distributing virtual appliances or virtual machines. It allows virtual machines to be transported across different platforms and hypervisors while maintaining their configuration.
The OVF standard uses XML to describe virtual machines, their relationships, configuration details, and other metadata. It packages VMs and disks into a single file for easy distribution. This facilitates automated deployment of pre-configured virtual machines.
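To make the XML packaging concrete, here is a heavily abbreviated sketch of what an OVF descriptor looks like. It follows the general shape of the OVF 1.x envelope (References for packaged files, a VirtualSystem carrying hardware metadata) but omits required sections and attributes, so it is illustrative rather than schema-valid.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Envelope xmlns="http://schemas.dmtf.org/ovf/envelope/1"
          xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1">
  <!-- Files shipped alongside this descriptor in the package -->
  <References>
    <File ovf:id="file1" ovf:href="disk1.vmdk"/>
  </References>
  <!-- One VirtualSystem element per packaged virtual machine -->
  <VirtualSystem ovf:id="example-vm">
    <Name>example-vm</Name>
    <VirtualHardwareSection>
      <Info>CPU, memory, disk, and network items (elided here)</Info>
    </VirtualHardwareSection>
  </VirtualSystem>
</Envelope>
```

Because the descriptor and the disk images travel together (optionally tarred into a single `.ova` file), a target hypervisor can recreate the machine's configuration without manual setup.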
This document discusses data management strategies in a virtualized environment. It covers topics such as storage design impacts on reliability, availability and scalability. It also discusses VMware backup challenges and solutions like VMware Consolidated Backup (VCB), vStorage APIs for Data Protection (VADP), and vStorage APIs for Array Integration (VAAI). Specific solutions mentioned include data deduplication, thin provisioning, replication and snapshots.
This document discusses hybrid cloud storage solutions from Microsoft, focusing on StorSimple. It provides an overview of Carlos Mayol, a Premier Field Engineer at Microsoft, and his expertise in areas like Azure Infrastructure Services. It then summarizes Microsoft's StorSimple product which provides hybrid cloud storage across on-premises and Azure environments, highlighting benefits like cost reduction, simplified management, and support for various workloads. Use cases and customer examples are provided for StorSimple 8000 series appliances and the StorSimple Virtual Array solution.
This solution guide demonstrates the advanced data-protection and management strategies for the Microsoft SharePoint environment on the IBM PureFlex System with the IBM Storwize V7000 storage system. To know more about the IBM PureSystem family, visit http://ibm.co/J7Zb1v.
This paper addresses the redundant array of inexpensive/independent disks (RAID) in the field of diskless clients, where a centralized diskless NFS server shares the operating system image with diskless clients over TCP/IP on a local area network. Diskless client technology is practical and useful where cost-efficient, low-end clients must be made useful without a local disk. A client that has the whole low-end computer system (keyboard, mouse, CPU and motherboard, monitor), and may or may not have a disk, can still gain the advantages of a RAID system through the diskless client-server arrangement, in such a way that any disk fault can be tolerated online. The underlying disk (if any) in a diskless client can also be used for specific purposes.
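The fault tolerance claimed above comes from parity held on the server side. As a toy model (not the paper's implementation), single-parity RAID stores the XOR of the data blocks, so any one failed block can be rebuilt online from the survivors:

```python
from functools import reduce

def parity(blocks):
    """XOR the blocks together byte-by-byte (RAID-5-style single parity)."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

def rebuild(surviving_blocks, parity_block):
    """Recover the one missing data block from the survivors plus parity."""
    return parity(surviving_blocks + [parity_block])

data = [b"AAAA", b"BBBB", b"CCCC"]  # stripes on three data disks
p = parity(data)                    # stored on the parity disk

# Simulate losing the second disk and rebuilding its contents:
recovered = rebuild([data[0], data[2]], p)
assert recovered == b"BBBB"
```

The same XOR identity works regardless of which single block is lost, which is why the diskless clients never notice a single server-side disk fault.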
Dell EMC Backup Solution in Azure Cloud (vipinvips)
The document discusses backup solutions for Azure virtual machines and identifies limitations with native Azure backup. It recommends using Dell EMC Networker and DataDomain as a third-party enterprise backup solution that can meet tier 1 RPO and RTO requirements of less than an hour. It provides details on the proposed solution architecture with Networker and DataDomain instances in each region along with replication capabilities. The solution aims to address limitations of native Azure backup and provide application-aware backups, encryption, short RPOs, and support for workloads like Oracle databases.
Virtualization, A Concept Implementation of Cloud (Nishant Munjal)
This presentation guides you through deploying virtualization in a Linux environment and accessing it from another machine, after introducing the virtualization concept.
This document discusses IBM's cloud storage solution for transforming information infrastructure. It provides three examples of how cloud storage could help organizations by allowing dynamic storage management: 1) A company running out of disk space on a Friday could non-disruptively add storage in the cloud. 2) Old storage systems can be replaced by migrating data to the cloud without downtime. 3) Cloud storage provides disaster recovery by replicating and accessing data in the cloud when primary storage fails.
IBM's cloud storage solution allows organizations to transform their information infrastructure by providing dynamic storage management, scalable capacity and performance, and centralized management. It implements a storage cloud using proven IBM technologies like GPFS and Tivoli Storage Manager. Administrators can dynamically allocate storage space, migrate data between storage tiers automatically using policies, and manage billions of files across multiple petabytes of storage from a single interface.
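The policy-driven tiering mentioned above is expressed in the GPFS policy language. A sketch of what such rules can look like; the pool names and thresholds are invented for illustration, not taken from the IBM solution:

```
/* When the 'system' pool passes 90% full, migrate the least recently
   accessed files to a 'nearline' pool until occupancy falls to 70%. */
RULE 'tier-down' MIGRATE FROM POOL 'system'
     THRESHOLD(90,70)
     WEIGHT(CURRENT_TIMESTAMP - ACCESS_TIME)
     TO POOL 'nearline'

/* New files land in the 'system' pool by default. */
RULE 'default' SET POOL 'system'
```

Because the policy engine evaluates these rules across the whole file system, data moves between tiers without administrators touching individual files.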
ACIC Rome & Veritas: High-Availability and Disaster Recovery Scenarios (Accenture Italia)
A white paper to illustrate High-Availability and Disaster Recovery Scenarios and use-cases developed by Accenture and Veritas in the Accenture Cloud Innovation Center of Rome.
SIOS DataKeeper software allows users to add disaster recovery protection to Windows clusters or create SANless clusters using local storage. It uses efficient block-level replication to synchronize data across servers, enabling continuous operations even after failover. DataKeeper is offered in Standard and Cluster Editions, and can replicate within or across data centers. It protects applications in physical, virtual, and cloud environments with high performance and at a lower cost than traditional solutions.
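Block-level replication of this kind can be illustrated with a short sketch (a generic model, not SIOS's actual protocol): divide the volume into fixed-size blocks, compare checksums, and transfer only the blocks that changed.

```python
import hashlib

BLOCK = 4  # toy block size; real products use much larger blocks

def blocks(data):
    """Split a byte string into fixed-size blocks."""
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

def replicate(source, replica):
    """Return the updated replica and how many blocks had to be transferred."""
    src, dst = blocks(source), blocks(replica)
    sent = 0
    out = []
    for i, sblock in enumerate(src):
        if i < len(dst) and hashlib.sha256(dst[i]).digest() == hashlib.sha256(sblock).digest():
            out.append(dst[i])   # unchanged block: nothing to send
        else:
            out.append(sblock)   # changed or new block: send it
            sent += 1
    return b"".join(out), sent

new, sent = replicate(b"AAAABBBBCCCC", b"AAAAXXXXCCCC")
assert new == b"AAAABBBBCCCC" and sent == 1  # only the middle block moved
```

Transferring only changed blocks, rather than whole files, is what keeps replication traffic low enough to run continuously across data centers.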
New features in PSP2 for the SANsymphony™-V10 software-defined storage platform and DataCore™ Virtual SAN. New enhancements include OpenStack support, deduplication and compression, Veeam backup integration, and a random write accelerator.
Software Defined Everything is infrastructure that virtualizes compute, network, and storage resources and delivers them as a service. Rather than being handled by the hardware components of the infrastructure, the management and control of the compute, network, and storage infrastructure are automated by intelligent software running on the Lenovo x86 platform.
This technical paper provides the essential technical information about the advanced storage management solution for VMware virtual infrastructure using the VMware vSphere 5.0 Storage DRS feature with the IBM SONAS storage system. To know more about the VMware vSphere, visit http://ibm.co/Lx6hfc.
This document discusses how to build a private cloud on IBM Power Systems using Tivoli Service Management software. A private cloud can provide cloud-like attributes like self-service provisioning, monitoring, and chargeback within a company's internal IT environment. Tivoli Service Automation Manager is a key tool that allows non-IT users to easily provision new virtual servers from standardized software stacks without IT assistance. IBM Power Systems provide the virtualization, stability, support, and scalability required to support a private cloud environment. When combined with automation and standardization, a private cloud can reduce IT costs through more efficient use of resources and reduced labor.
The document is a report on cloud computing written by Abdul-Rehman Aslam for his course instructor Mr. Safee. It discusses key topics such as what cloud computing is, the cloud service model of Infrastructure as a Service, Platform as a Service and Software as a Service. It also covers the different types of clouds including public, private, hybrid and community clouds. The report highlights the key characteristics of cloud computing such as cost, device and location independence, multi-tenancy, reliability, scalability and security. It concludes that cloud computing brings many possibilities and is a technology that has taken the software and business world by storm.
OpenStack is an open source cloud computing platform that provides services for managing compute, storage, and networking resources. It allows users to deploy virtual machines and other instances across many servers to handle different cloud computing tasks. Major components of OpenStack include Nova (compute), Swift (object storage), Cinder (block storage), Glance (images), Neutron (networking), Horizon (dashboard), Ceilometer (telemetry), and Heat (orchestration). While OpenStack can be complex to install and configure, its large community and industry adoption suggest it will continue to be an important platform for private and hybrid clouds.
Learn about IBM zEnterprise Strategy for the Private Cloud.The white paper defines the strategy for implementing the zEnterprise System as an integrated, heterogeneous, and virtualized infrastructure, ideal for supporting Infrastructure as a Service (IaaS) in cloud computing deployments.To know more about System z, visit http://ibm.co/PNo9Cb.
White Paper: Best Practices for Data Replication with EMC Isilon SyncIQ (EMC)
This white paper provides a detailed overview of the key features and benefits of EMC Isilon SyncIQ software and describes how SyncIQ enables enterprises to flexibly manage and automate data replication between two Isilon clusters. This paper also describes best practices and use cases to maximize the benefits of cluster-to-cluster replication.
White Paper: Hadoop on EMC Isilon Scale-out NAS (EMC)
This White Paper details how EMC Isilon can be used to support an enterprise Hadoop data analytics workflow. Core architectural components are covered as well as how an enterprise can gain reliable business insight quickly and efficiently while maintaining simplicity to meet the storage requirements of an evolving Big Data analytics workflow.
Lecture #6 - ET-3010
Cloud Computing - Overview and Examples
Connected Services and Cloud Computing
School of Electrical Engineering and Informatics (SEEI/STEI)
Institut Teknologi Bandung (ITB)
Update April 2017
The IBM BladeCenter Foundation for Cloud white paper provides an overview of the platform and its advantages for enterprises. It discusses how the solution combines servers, storage, networking, and software into an optimized unified architecture. The paper highlights how the platform delivers outstanding performance through its converged networking and scalable architecture. It also emphasizes how the solution provides reliability through redundancy and quality support from IBM.
Windows Server Deployment Proposal (WINDOWS SERVER DEPLOYMENT PROPOSAL.docx, aulasnilda)
Windows server deployment proposal
My Name
University of Maryland University College
WINDOWS SERVER / CMIT 369
December 8, 2019
This proposal describes the implementation and configuration of core IT services as a solution for "We Make Windows" Inc. This solution will supply the needs of the company for 2-3 years. As part of this proposal, six topics will be addressed in detail, and both the business and technical reasoning for the choice of each topic will be provided. The six topics addressed in this proposal are: the new features of Windows Server 2016 that the company can take advantage of; deployment and server editions; Active Directory domains; DNS and DHCP designs; deployment of application services; and, last but not least, printer and file sharing. That said, this proposal progresses as follows.
New features of windows server 2016 that WMW can take advantage
Nano server
One of the new features of windows server 2016 that WMW Inc can take advantage of is the nano server feature. At this point in time, it should be understood that the a "nano server is the server that is responsible for refactoring the core pieces of the windows server, turning them into their minimally functional state" (Ferrill, 2015). To expound further on the refactoring aspect, it should be know that refactoring is that process of analyzing a given code, in this case, the core pieces of the windows serve, the goal of which is to simplify it. Having described a nano server, it is time to address both the technical and business reasoning for this feature.
One of the technical reasoning for this new feature is that a nano server can run on a bare-metal operating system. In basic terms, a bare metal operating system is basically a hard disk which is the usual medium on which many computer operating systems are installed. So, the capacity of the nano server running on a bare metal operating system is advantageous in that the system will require fewer updates. At the same time, this means that fewer rebooting of the system when the updates are done will be necessary. From the business standpoint, fewer updates and reboots will ensure the business operations remain online and functional most of the time with little interruptions. In other words, there will be little down times. Since down times are costly to the business, this means that the element of cost due to down times will be addressed by the nano server.
Another technical reasoning for this feature is that nano servers are so small that they could be ported across physical sites, data centers as well as other servers. In fact, compared to other installation options, this feature posses a 92% smaller installation. This means that the installation can connected easily across physical sites, data centers, and even across other server ...
This document discusses best practices for deploying VMware vSphere 5 on IBM SONAS scale-out network attached storage. It provides an overview of new features in vSphere 5 including Storage vMotion, Storage DRS, and centralized logging. It then covers planning the creation of NFS shares on SONAS, installing and configuring vSphere, and adding NFS data stores. Recommendations are provided such as using large SONAS storage pools and fewer larger NFS data stores. The document is intended to help customers implement effective storage solutions for enterprise virtual environments requiring extreme scalability.
Many loosely linked, containerized factors create modern applications nowadays. Discover how automating and simplifying the provision of virtual environments allows container orchestration to arrange the activities of different components and application layers.
Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. The term is generally used to describe data centers available to many users over the Internet
2. Table of contents
Abstract
Executive summary
Private cloud and Active Cloud Engine
Prerequisites
Introduction to IBM SONAS
IBM SONAS architecture
Introduction to Tivoli Storage Manager
Overview of Active Cloud Engine
SONAS – Active Cloud Engine integration
Active Cloud Engine: Caching modes in SONAS
Overview of the solution and use cases
Setting up the application and manual changes
Data backup at home site using Tivoli Storage Manager
Initial data migration to cache site using Active Cloud Engine pre-population and policy engine
Synchronization of data at home and cache site using Snapshots
Restoration of data at the home site
Two-way replication (home to remote and back)
  From home site to cache site
  From cache site to home site
Summary
Acknowledgement
Appendix A: Resources
Appendix B: Glossary
About the authors
Trademarks and special notices
IBM SONAS Enterprise backup and remote replication solution in a private cloud
3. Abstract
In a globally dispersed enterprise with a private cloud environment, where unstructured data is growing exponentially, there is a need to provide 24x7 access to business-critical data and the ability to restore it in case of data loss. IBM Scale Out Network Attached Storage (IBM SONAS), with its integrated IBM Tivoli Storage Manager client, enables enterprises to back up and restore data seamlessly, and the IBM Active Cloud Engine offers the capability to replicate data to remote sites.
This paper provides an end-to-end solution for users running an application, such as Apple Final Cut Pro (video editing software), over one of the protocols currently supported by SONAS, for example Network File System (NFS). The user data is replicated to remote (cache) sites using Active Cloud Engine, and is also backed up at the home (primary) site using Tivoli Storage Manager. The paper also explains the steps for setting up and configuring Active Cloud Engine.
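As a sketch of the client-side setup this workflow assumes, the editing workstation mounts an NFS export from the SONAS home site. The host name, export path, and mount point below are hypothetical; the mount options are common NFSv3 defaults, not SONAS-specific recommendations.

```
# Example /etc/fstab entry on the NFS client (names and paths are illustrative)
sonas-home.example.com:/ibm/gpfs0/fcp-projects  /mnt/fcp-projects  nfs  rw,hard,intr,vers=3  0 0
```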
Executive summary
Small and large enterprises continue to demand storage solutions that can store massive amounts of file-based data in data centers across geographies, with ease of management and the ability to scale on demand. Enterprises with fast-growing file systems often face scalability and performance limitations with traditional network-attached storage (NAS) filers, because they need to work on millions or billions of active files in parallel. IBM® SONAS is a multipetabyte scale-out NAS offering for information storage. It is designed to scale out to store a substantial number of active files with superior performance and ease of management.
The IBM Tivoli® Storage Manager client is fully integrated into IBM SONAS. The integrated solution enables enterprises to back up and restore data in the least possible time. IBM SONAS with Tivoli Storage Manager provides organizations a backup and restore solution that increases efficiency, performance, and reliability. Taking a full backup every time ensures the reliability of backups, but incremental backups are also supported.
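The full-versus-incremental distinction above can be sketched in a few lines: an incremental pass selects only files whose modification time is newer than the last completed backup. This is a simplified illustration, not Tivoli Storage Manager's actual selection logic.

```python
import os
import tempfile
import time

def incremental_candidates(root, last_backup_time):
    """Return the paths under `root` modified after the last completed backup."""
    changed = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) > last_backup_time:
                changed.append(path)
    return sorted(changed)

# Tiny demonstration: a temporary directory stands in for the SONAS file system.
with tempfile.TemporaryDirectory() as root:
    old = os.path.join(root, "old.txt")
    new = os.path.join(root, "new.txt")
    for path in (old, new):
        with open(path, "w") as f:
            f.write("data")
    cutoff = time.time()
    os.utime(old, (cutoff - 3600, cutoff - 3600))  # last touched before the backup
    os.utime(new, (cutoff + 60, cutoff + 60))      # changed after the backup completed
    assert incremental_candidates(root, cutoff) == [new]
```

A full backup corresponds to calling the same walk with `last_backup_time = 0`, which selects every file regardless of age.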
The built-in IBM Active Cloud Engine™ feature of IBM SONAS helps selectively distribute files to local or remote sites globally for sharing, data protection, content distribution, and so on. Active Cloud Engine enables consolidation by centralizing all files under a single file-system view: multiple SONAS systems across the globe can be connected and presented to the user as a single namespace, giving quick and efficient local access to files from anywhere. Active Cloud Engine also helps create a powerful and scalable storage environment and improves data protection by identifying candidates for backup or disaster recovery.
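Because Active Cloud Engine builds on the GPFS policy engine, candidate selection for backup or replication is expressed as SQL-like policy rules. The rule below is illustrative only; the rule name, list name, and threshold are invented, and the exact syntax should be checked against the GPFS policy documentation for the installed release.

```
/* Illustrative policy rule: list files changed in the last day as backup candidates */
RULE 'backup-candidates'
  LIST 'tobackup'
  WHERE (CURRENT_TIMESTAMP - MODIFICATION_TIME) < INTERVAL '1' DAYS
```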
Private cloud and Active Cloud Engine
Applications built on cloud architectures use the underlying computing infrastructure only when it is needed (for example, to process a user request): they draw the necessary resources on demand (such as compute servers or storage), perform a specific job, and then relinquish the resources once the job is done. While in operation, the application scales up or down elastically based on resource needs.
IBM SONAS Enterprise backup and remote replication solution in a private cloud
One definition of a private cloud is an environment in which activities and functions are provided as a
service over a company's intranet. Private clouds are built by an organization for its own users, and
everything is delivered within the organization's firewall (instead of over the Internet). The private cloud
owner does not share resources with any other company, and therefore multi-tenancy is not an issue.
By owning the entire cloud computing environment, an enterprise can provide and govern computing
resources (such as physical servers, application servers, storage space, applications, services, and so on)
in an efficient, compliant, and secure manner. At the same time, by using a private cloud, an enterprise
can also achieve significant cost savings from the infrastructure's consolidation and virtualization.
Active Cloud Engine helps to alleviate some of the challenges of a private cloud environment. Its policy
engine helps to seamlessly and efficiently manage data that is distributed across the globe. Active Cloud
Engine is tightly integrated with IBM General Parallel File System™ (IBM GPFS). The different caching
modes of Active Cloud Engine enable organizations to customize their internal collaboration. Also, Active
Cloud Engine ensures that users always access the latest version of a file even when it is changed at the
remote site, and only incremental changes are sent over the wire, thereby using the network resources
optimally and reducing cost. Active Cloud Engine creates a single namespace for its users.
Prerequisites
The following are the minimum requirements to set up Active Cloud Engine in a private cloud
environment:
A minimum of two SONAS systems running the same version, Release 1.3 or later
([root@sonas1isv.mgmt001st001 ~]# get_version; Node Version: 1.3.0.0-74f)
The SONAS systems should be reachable (ping) from each other
All the nodes and the CTDB status in the SONAS clusters should be healthy (check with the
lsnode CLI command)
In addition, the paper assumes familiarity with the following concepts.
Basic knowledge of IBM SONAS
Basic knowledge of Tivoli Storage Manager
Introduction to IBM SONAS
The demand to manage and store massive amounts of data continues to challenge the cloud environment.
For that reason, IBM has designed the IBM SONAS solution to embrace cloud storage and the petabyte
(PB) age. It can meet today's storage challenges with quick and cost-effective, IT-enabled business
enhancements that can grow to unprecedented scale. It can also deliver computing services that make the
underlying technology and user devices almost invisible. It enables applications and services to be
decoupled from the underlying infrastructure, enabling businesses to adjust quickly to change. As a result,
IBM SONAS can easily integrate with an organization’s strategies to develop a more dynamic enterprise.
Applications and services such as digital media, healthcare, defense and government, interactive games,
and web content are driving enormous data usage in the industry. This is generating more requirements
for architectural scalability and I/O response times than ever before. All industries deploying
network-attached storage (NAS) environments face challenges such as:
Tremendous growth of unstructured data with the same (or less) administrative staff
Current NAS solutions do not scale in both performance and capacity
No parallel access to data
Backup windows are too small for hundreds of terabytes (TB) of data
No easy way to apply policies across independent filers
No policies to automatically migrate data
Difficulty in globally implementing a centrally-managed and centrally-deployed
automatic-tiered storage, for example, information lifecycle management (ILM), that is
essential when TB multiply into PB
IBM SONAS helps to overcome all these limitations with its rich features.
IBM SONAS architecture
IBM SONAS, when attached to a customer’s Internet Protocol (IP) network, serves files using industry-
standard protocols, such as NFS, Common Internet File System (CIFS), File Transfer Protocol (FTP),
Hypertext Transfer Protocol (HTTP), and Hypertext Transfer Protocol Secure (HTTPS). You can use
SONAS for storage needs ranging from 100 TB to 21 PB.
Figure 1: SONAS architecture
As shown in Figure 1, the unique architecture of IBM SONAS offers greater flexibility when compared to
traditional NAS solutions. Depending on the kind of workload involved, you can choose to add multiple
interface nodes or storage pods.
Element                                    Function
Integrated interface / management nodes    Performs both management and console tasks for the
                                           system, and allows clients to connect to the system
                                           for file-based services and the asynchronous
                                           replication function
Fast InfiniBand® network                   Interconnects the management, interface, and
                                           storage nodes
Private management network                 Interconnects all physical components of the system
One or more 42U Enterprise Racks           Houses all the elements of the system
Optional IBM Tivoli Storage Manager nodes  Performs hierarchical storage management (HSM)
                                           and backup services for the system
Table 1: SONAS system components
Introduction to Tivoli Storage Manager
IBM Tivoli Storage Manager is a client/server program that provides centralized, automated data
protection and storage management solutions to customers in a multivendor computer environment. Tivoli
Storage Manager provides a policy-managed backup, archive, and space-management facility for file
servers, workstations, applications, and application servers.
Tivoli Storage Manager includes the following components:
Server
− Server program
− Administrative interface
− Server database and recovery logs
− Server storage
Client nodes
− Backup-archive clients
− Network-attached storage file server
− Application client
− Application program interface (API)
Tivoli Storage Manager for space management
Storage agents
Network clients use Tivoli Storage Manager to store data for any of the following purposes:
Backup and restore
Archive and retrieve
Instant archive and rapid recovery
Migrations and recall
Through the Tivoli Storage Manager server, you can manage the devices and media used to store client data.
The server integrates the management of storage with the policies you define for managing client data.
Overview of Active Cloud Engine
Active Cloud Engine is a scalable, high-performance remote file data caching solution integrated with
GPFS and the single global namespace features of SONAS. In Active Cloud Engine terminology, the
nodes where file system operations originate, leading to Active Cloud Engine operations, are termed
application or compute nodes. Active Cloud Engine remote operations are run on one or more GPFS
nodes called gateway nodes, which have connectivity to remote nodes in the home cluster. All nodes are
compute nodes by default, and no special designation needs to be assigned. Figure 2 shows a high-level
view of the Active Cloud Engine concepts.
Figure 2: Overview of Active Cloud Engine
The caching functionality is enabled by defining an Active Cloud Engine file set at the caching site with a
caching relationship to the file set at the home site exported over NFS.
All the gateway nodes perform the NFS mount of the exported path from the home site. A request on the
application node is satisfied from the cache if the data is already cached. If the data is not in the cache,
the request is sent to the gateway node. The gateway node then brings the data over the WAN and stores
it in the local GPFS file system, from where the application node accesses it directly.
Active Cloud Engine also offers disconnected-mode operations: if the gateway nodes are disconnected
from the home cluster, data that is already cached continues to be served by the application nodes.
Cached data can be configured to expire after being in the disconnected state for a certain amount of
time. This prevents access to stale data, where staleness is defined either as the amount of time the
cache is out of synchronization with the data at home, or as the amount of time the cache is disconnected
from home.
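The expiration behavior described above can be illustrated with a small sketch (plain Python; the names, such as `CACHE_EXPIRY_SECONDS`, are hypothetical stand-ins for SONAS configuration settings, not actual SONAS code):

```python
import time

# Hypothetical expiration window; in SONAS this is a configurable setting.
CACHE_EXPIRY_SECONDS = 3600


class CachedFileset:
    """Illustrative model of a cache file set's disconnected-mode behavior."""

    def __init__(self):
        self.connected = True
        self.disconnected_since = None

    def disconnect(self):
        # Gateway nodes lose connectivity to the home cluster.
        self.connected = False
        self.disconnected_since = time.time()

    def can_serve_cached_data(self):
        # While connected, cached data is always served.
        if self.connected:
            return True
        # While disconnected, cached data is served only until it expires,
        # which prevents access to stale data.
        age = time.time() - self.disconnected_since
        return age < CACHE_EXPIRY_SECONDS


fs = CachedFileset()
fs.disconnect()
# Immediately after disconnection, cached data is still served.
```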
Active Cloud Engine performs whole-file caching, where the complete file is brought into the cache even
when the file is read only partially. However, read requests are split among the gateway nodes for better
performance, and caching of data happens asynchronously.
SONAS – Active Cloud Engine integration
The integration maps Active Cloud Engine concepts onto the SONAS environment, so that SONAS can
use the Active Cloud Engine WAN caching features.
Figure 3 gives a high-level view of mapping GPFS Active Cloud Engine onto SONAS. Some of the
SONAS interface nodes act as Active Cloud Engine gateway nodes. A SONAS configuration CLI is
provided to configure Active Cloud Engine within SONAS by defining gateway node designations. Data
access in SONAS happens through the interface nodes; hence, they also act as application nodes. It is
possible to configure a dedicated network between the home and cache clusters for WAN data transfer.
Figure 3: SONAS – Active Cloud Engine integration
The steps followed at the cache cluster for pulling data from the home cluster, if the data is not already
cached, are:
1. NAS client issues a read request to SONAS cache cluster through the SONAS interface node.
2. If data is not present in the cache, the request is forwarded to the SONAS wcache node.
3. The wcache node then sends a request to the home cluster through NFS mounts from the home
cluster.
4. The wcache node receives data in response.
5. The wcache node stores the data received from home into the local cache cluster storage
persistently.
6. The SONAS interface node, which was waiting for this process, reads the data from the local
cache cluster storage.
7. The SONAS interface node serves data to the NAS client.
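The seven steps above amount to a read-through cache. A minimal sketch (plain Python, with hypothetical dictionaries standing in for the cache cluster storage and the home cluster; the real data transfer happens over NFS mounts):

```python
# Hypothetical stand-ins: home-site data and the persistent local cache storage.
home_cluster = {"/ibm/gpfs1/clip1.mov": b"media-bytes"}
local_cache = {}


def gateway_fetch(path):
    """wcache (gateway) node: pull data from home over the WAN (NFS mount)."""
    return home_cluster[path]


def read(path):
    """Interface node: serve from cache, or fetch via the gateway on a miss."""
    if path not in local_cache:                  # steps 2-3: miss, forward to wcache node
        local_cache[path] = gateway_fetch(path)  # steps 4-5: receive and store persistently
    return local_cache[path]                     # steps 6-7: serve from local cache storage


data = read("/ibm/gpfs1/clip1.mov")  # first read is pulled from the home cluster
data = read("/ibm/gpfs1/clip1.mov")  # second read is satisfied from the local cache
```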
Active Cloud Engine: Caching modes in SONAS
Users can create an Active Cloud Engine file set in different modes. The following three modes are
supported in SONAS:
Single-writer: An Active Cloud Engine file set in this mode has exclusive write permission
to the data that is present at home. No other cache cluster or NAS clients connected to
the home cluster can write to the data at the home cluster; that is, only one cache
cluster can be configured in single-writer mode. This avoids conflicts between the data at
the cache cluster and the home cluster.
SONAS restricts only one cache site to be configured as single-writer through the CLI
commands: mkwcachesource and mkwcache.
Local-updates: An Active Cloud Engine file set is allowed to read data from the home
server, but any writes done to the cache are not synchronized back to the home server.
This is similar to a sandbox or scratch pad environment.
Read-only: An Active Cloud Engine file set is permitted only to read data. Any form of
write operation fails.
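The write semantics of the three modes can be summarized in a small decision sketch (plain Python; an illustration of the rules described above, not SONAS code):

```python
MODES = ("single-writer", "local-updates", "read-only")


def write_allowed(mode):
    """Whether a write to the cache file set is accepted at all."""
    return mode in ("single-writer", "local-updates")


def synced_to_home(mode):
    """Whether an accepted cache write is pushed back to the home site."""
    return mode == "single-writer"


for mode in MODES:
    print(mode, write_allowed(mode), synced_to_home(mode))
```

Only single-writer mode both accepts writes and synchronizes them home, which is why SONAS restricts it to one cache cluster.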
Overview of the solution and use cases
For example, consider a media and entertainment enterprise using Apple Final Cut Pro 10.0.1 for creating
(reading and writing) and editing media files (typically *.mov files).
Figure 4 illustrates a typical enterprise environment, which is spread across geographies, where users are
authenticated through Microsoft® Active Directory server and access SONAS as backend for storage and
backup of their day-to-day data.
Figure 4: A typical enterprise setup, where users write to the home site
Figure 5: Users writing to the cache site after some manual changes in the application
Setting up the application and manual changes
To use the Apple Final Cut Pro and Active Cloud Engine features, the following steps need to be
performed on the Mac system and Final Cut Pro. The user application can interact with SONAS using any
of the SONAS supported protocols; however, the internal communication through Active Cloud Engine
happens over NFS.
1. Click Finder > Go > Connect to Server to mount the SONAS NFS export. Specify the SONAS
home IP address and path.
Figure 6: Connecting to NFS export
Figure 7: SONAS export connection in progress
2. After the export is successfully mounted, it is displayed in the Finder window.
Figure 8: Mac Finder, showing the mounted export
Figure 9: SONAS export on the Mac desktop
3. Start the Final Cut Pro application.
Figure 10: Final Cut Pro main screen
4. Click File > Add SAN Location.
Figure 11: Adding a SAN location in Final Cut Pro
5. In the Select a SAN Location Folder window, the SONAS export is displayed. Double-click the
SONAS export and click Add.
Figure 12: Selecting the SONAS home export
Figure 13: Adding the SONAS home export
6. The newly added SAN location is now updated in the main window of Final Cut Pro. Right-click the
SONAS export and click New Event.
Figure 14: The newly added export seen in the main screen
Figure 15: The home export displayed in the main window with the menu options
You can now create and edit files as required. The newly created files are also reflected in the
Mac Finder window.
Figure 16: Media files displayed in Mac Finder
In the event of a home (primary) site outage, after a few manual changes in the Final Cut Pro application,
users can continue working on the cache site, where the same files are available.
Perform the following steps to configure Final Cut Pro to mount and use the remote (cache) site:
Figure 17: Users switching to the cache site after a few modifications in the application
1. Click Finder > Go > Connect to Server to mount the SONAS NFS export. Specify the SONAS
cache IP address and path.
Figure 18: Mounting the remote site export in Mac
Figure 19: Connection to export in progress
After successful mounting, the remote export is displayed on the Mac desktop.
Figure 20: The remote export as seen on Mac desktop
2. Click File > Add SAN Location to add the new SAN location in Final Cut Pro.
Figure 21: Adding the remote export in the Final Cut Pro SAN location
3. Double-click the cache export.
Figure 22: Selecting the remote export
4. Because the replication was set up (refer to the “Initial data migration to cache site using Active
Cloud Engine pre-population and policy engine” section), the file (data) that was created on the
home site is now displayed on the cache site as well. Click Add.
Figure 23: Adding the remote export
The new SAN location is now displayed in the Final Cut Pro main window.
Figure 24: Remote export displayed in the Final Cut Pro main window
The files can now be imported and editing can continue.
Figure 25: The files from the home site replicated to the cache site, now used by the application
Data backup at the home site using Tivoli Storage Manager
The Tivoli Storage Manager client software is preinstalled on each of the SONAS interface nodes during
the SONAS software installation. Not all the interface nodes need to participate in the backup and restore
activities. The interface nodes that are configured for backup purposes are called backup nodes. The first
backup node configured with the Tivoli Storage Manager server is called the primary backup node; the
other backup nodes are called helper backup nodes.
Perform the following steps to back up data at the home site.
1. Verify the Tivoli Storage Manager nodes configuration by using the lstsmnode command.
[root@sonas1isv.mgmt001st001 ~]# lstsmnode
Node name TSM target node name TSM server name TSM server address TSM node name
int004st001 sonas1 isvp13_lpm 9.11.83.12 sonas1int04
int005st001 sonas1 isvp13_lpm 9.11.83.12 sonas1int05
int006st001 sonas1 isvp13_lpm 9.11.83.12 sonas1int06
EFSSG1000I The command completed successfully.
2. Validate the configured backup file system and backup scheduled task on the SONAS system.
[root@sonas1isv.mgmt001st001 ~]# lsbackupfs
File system TSM server List of nodes
gpfs1 isvp13_lpm int004st001,int005st001,int006st001
EFSSG1000I The command completed successfully.
3. Invoke the file system backup using the following command. The time taken to back up depends
on the size of the file system.
[root@sonas1isv.mgmt001st001 ~]# startbackup gpfs1
EFSSG0543I A backup of file system gpfs1 started with JobID 0.
4. Verify the backup log.
[root@sonas1isv.mgmt001st001 ~]# showlog 0
Primary node: int004st001
Job ID : 0
PID:1431277
Primary backup node: int004st001
Helper backup nodes: int005st001,int006st001
2011-12-19 02:31:19-07:00 Starting backup.
mmbackup: ignoring unrecognized TSM query output:
Accessing as node: SONAS1
stat: cannot stat `/ibm/gpfs1/mmbackup.audit.gpfs1': No such file or directory
stat: cannot stat `/ibm/gpfs1/mmbackup.audit.gpfs1': No such file or directory
stat: cannot stat `/ibm/gpfs1/mmbackup.audit.gpfs1': No such file or directory
--------------------------------------------------------
Backup of /ibm/gpfs1 begins at Mon Dec 19 02:31:23 MST 2011.
--------------------------------------------------------
Mon Dec 19 02:31:30 2011 Could not restore previous shadow file from TSM server isvp13_lpm
Mon Dec 19 02:31:30 2011 Querying files currently backed up in TSM server:isvp13_lpm.
Mon Dec 19 02:31:31 2011 Query of TSM server isvp13_lpm returned 12
Mon Dec 19 02:31:31 2011 No inventory found in TSM server isvp13_lpm
Mon Dec 19 02:31:31 2011 Created empty shadow for first backup to TSM server: isvp13_lpm
Mon Dec 19 02:31:32 2011 Generating policy rules file: /var/mmfs/mmbackup/.mmbackupRules.gpfs1
Mon Dec 19 02:31:33 2011 Scanning file system gpfs1
Mon Dec 19 02:31:33 2011 Determining file system changes for gpfs1 [isvp13_lpm].
Mon Dec 19 02:31:33 2011 Finished calculating lists [19 changed, 0 expired] for server isvp13_lpm.
Mon Dec 19 02:31:33 2011 Sending files to the TSM server [19 changed, 0 expired].
Mon Dec 19 02:44:30 2011 Policy returned 0 Highest TSM error 0
TSM Summary Information:
Total number of objects inspected: 22
Total number of objects backed up: 22
Total number of objects updated: 0
Total number of objects rebound: 0
Total number of objects deleted: 0
Total number of objects expired: 0
Total number of objects failed: 0
Mon Dec 19 02:44:32 2011 Done working with files for TSM Server: isvp13_lpm.
Mon Dec 19 02:44:32 2011 Completed successfully exit 0
----------------------------------------------------------
Backup of /ibm/gpfs1 completed successfully at Mon Dec 19 02:44:33 MST 2011.
----------------------------------------------------------
2011-12-19 02:44:33-07:00 INFO: backup successful (rc=0).
----------------------------------------------------------
End of log - backup completed
----------------------------------------------------------
EFSSG1000I The command completed successfully.
For more detailed information on how to back up and restore files using Tivoli Storage Manager on
SONAS, refer to the references at the end of the paper.
Initial data migration to cache site using Active Cloud Engine pre-population and
policy engine
Usually, the cache (remote) site fetches data from home on a need basis. However, when setting up the
home and remote sites for the first time, there is typically a requirement to move a relatively large amount
of initial data to the cache site. This is achieved using the pre-population feature of Active Cloud Engine,
which is currently the only way to transfer the initial data.
Perform the following steps to set up pre-population for the initial data transfer across sites.
At the home site:
1. Create a regular file set. You can also use an existing file set.
[root@sonas1isv.mgmt001st001 ~]# mkfset gpfs1 home_src1
EFSSG0070I File set home_src1 created successfully.
EFSSG1000I The command completed successfully.
2. Link a regular file set to a junction path.
[root@sonas1isv.mgmt001st001 ~]# linkfset gpfs1 home_src1
EFSSG0015I Refreshing data.
EFSSG0078I File set home_src1 successfully linked.
EFSSG1000I The command completed successfully.
3. Check for the successful creation of the regular file set. You can write data into this file set
using the cp command or any application.
[root@sonas1isv.mgmt001st001 ~]# lsfset gpfs1
ID Name Status Path Is independent Creation time Comment Timestamp
0 root Linked /ibm/gpfs1 yes 12/12/11 4:09 AM root fileset 1/10/12 12:56 AM
1 home_src1 Linked /ibm/gpfs1/home_src1 no 1/10/12 12:56 AM 1/10/12 12:56 AM
EFSSG1000I The command completed successfully.
4. Create an Active Cloud Engine home export in read-only mode. Because the export is in
read-only mode, users accessing the files from the cache (remote) site can only read them, not
edit them. However, creating the NFS export in rw mode allows the files to be edited.
[root@sonas1isv.mgmt001st001 ~]# mkwcachesource homeshare1 /ibm/gpfs1/home_src1 --client '9.11.82.90(ro)'
EFSSG1000I The command completed successfully.
5. Check for the successful creation of Active Cloud Engine home export.
[root@sonas1isv.mgmt001st001 ~]# lswcachesource
WCache-Source Name WCache-Source Path ClientClusterId ClientClusterName WCache-Source Access Mode Is
Cached
homeshare1 /ibm/gpfs1/home_src1 12402779243272901319 sonas3isv.storage.tucson.ibm.com ro no
EFSSG1000I The command completed successfully.
6. Create an NFS export for the application. This is the export that will be used as part of Add SAN
Location in the Apple Final Cut Pro video editing software.
[root@sonas1isv.mgmt001st001 ~]# mkexport rwshare1 /ibm/gpfs1/home_src1 --nfs '*(rw,no_root_squash,insecure)'
EFSSG0019I The export rwshare1 has been successfully created.
EFSSG1000I The command completed successfully.
7. Confirm the successful creation of the export.
[root@sonas1isv.mgmt001st001 ~]# lsexport
Name Path Protocol Active Timestamp
rwshare1 /ibm/gpfs1/home_src1 NFS true 1/10/12 1:00 AM
EFSSG1000I The command completed successfully.
At the remote site:
1. Configure a few interface nodes on the cache site as gateway nodes.
[root@sonas3isv.mgmt001st001 ~]# mkwcachenode --nodelist int001st001,int002st001
EFSSG1000I The command completed successfully.
2. Check for the creation of the gateway nodes. The Is Cache column for the selected interface
nodes should show yes.
[root@sonas3isv.mgmt001st001 ~]# lsnode -v -r
EFSSG0015I Refreshing data.
Hostname IP Description Role Product version Connection status GPFS status CTDB status Username Is
manager Is quorum Daemon ip address Daemon version Is Cache Recovery master Monitoring enabled Ctdb ip address OS name
OS family Serial number Last updated
int001st001 172.31.132.1 interface 1.3.0.0-74f OK active active root yes no
172.31.132.1 1211 yes no yes 172.31.132.1 RHEL 6.1 x86_64 Linux 78998P3 1/10/12 1:04
AM
int002st001 172.31.132.2 interface 1.3.0.0-74f OK active active root yes no
172.31.132.2 1211 yes no yes 172.31.132.2 RHEL 6.1 x86_64 Linux 78998R5 1/10/12 1:04
AM
int003st001 172.31.132.3 interface 1.3.0.0-74f OK active active root yes yes
172.31.132.3 1211 no no yes 172.31.132.3 RHEL 6.1 x86_64 Linux 78998P1 1/10/12 1:04
AM
int004st001 172.31.132.4 interface 1.3.0.0-74f OK active active root yes no
172.31.132.4 1211 no yes yes 172.31.132.4 RHEL 6.1 x86_64 Linux 78998T0 1/10/12 1:04
AM
int005st001 172.31.132.5 interface 1.3.0.0-74f OK active active root yes yes
172.31.132.5 1211 no no yes 172.31.132.5 RHEL 6.1 x86_64 Linux 78998N4 1/10/12 1:04
AM
int006st001 172.31.132.6 interface 1.3.0.0-74f OK active active root yes yes
172.31.132.6 1211 no no yes 172.31.132.6 RHEL 6.1 x86_64 Linux 787271F 1/10/12 1:04
AM
mgmt001st001 172.31.136.2 active management node management,interface 1.3.0.0-74f OK active active root
yes no 172.31.136.2 1211 no no yes 172.31.136.2 RHEL 6.1 x86_64 Linux KQ977R4
1/10/12 1:04 AM
strg001st001 172.31.134.1 storage 1.3.0.0-74f OK active root no yes
172.31.134.1 1211 no no no 127.0.0.1 RHEL 6.1 x86_64 Linux 78977M5 1/10/12 1:04 AM
strg002st001 172.31.134.2 storage 1.3.0.0-74f OK active root no yes
172.31.134.2 1211 no no no 127.0.0.1 RHEL 6.1 x86_64 Linux 78977M4 1/10/12 1:04 AM
EFSSG1000I The command completed successfully.
3. Create a read-only Active Cloud Engine file set.
root@sonas3isv.mgmt001st001 ~]# mkwcache gpfs1 remotefset1 /ibm/gpfs1/remotefset1 --cachemode read-only --homeip
9.11.82.16 --remotepath /ibm/gpfs1/home_src1
EFSSG1000I The command completed successfully.
4. Check for the successful creation of Active Cloud Engine file set.
[root@sonas3isv.mgmt001st001 ~]# lswcache gpfs1
ID Name Status Path CreationTime Comment RemoteFilesetPath CacheState CacheMode
18 remotefset1 Linked /ibm/gpfs1/remotefset1 1/10/12 1:23 AM 9.11.82.28:/ibm/gpfs1/home_src1 enabled read-only
EFSSG1000I The command completed successfully.
5. The initial data transfer (pre-population) can be configured based on a policy. You can specify
the file selection through the prefetch policy. Here, all the files from a storage pool are selected.
[root@sonas3isv.mgmt001st001 ~]# mkpolicy prefetchpolicy -R "RULE 'allfiles' LIST 'files' FROM POOL system WHERE NAME LIKE
'%'"
EFSSG1000I The command completed successfully.
6. Check for the creation of the policy.
[root@sonas3isv.mgmt001st001 ~]# lspolicy
Policy Name Declarations (define/RULE)
default default
prefetchpolicy allfiles
EFSSG1000I The command completed successfully.
7. Start the transfer. This is an asynchronous operation.
[root@sonas3isv.mgmt001st001 ~]# runprepop gpfs1 remotefset1 prefetchpolicy
EFSSG0015I Refreshing data.
EFSSG1000I The command completed successfully.
8. Check the status of prepopulation.
[root@sonas3isv.mgmt001st001 ~]# lsprepop -l 1
Cluster ID filesystem FilesetName Status Last update Timestam Message
12402779243272901319 gpfs1 remotefset1 RUNNING 1/10/12 1:30 AM RUNNING
EFSSG1000I The command completed successfully.
[root@sonas3isv.mgmt001st001 ~]# lsprepop -l 1
Cluster ID filesystem FilesetName Status Last update Timestam Message
12402779243272901319 gpfs1 remotefset1 FINISHED 1/10/12 1:30 AM FINISHED
EFSSG1000I The command completed successfully.
9. The initial contents from the home site are now seen on the remote site.
[root@sonas3isv.mgmt001st001 ~]# ll /ibm/gpfs1/remotefset1
total 15120
-rw------- 1 root root 15734851 Jan 10 01:11 clip-2011-12-21 14;16;39.mov
drwxrwxrwx 3 root root 8192 Jan 10 01:30 Final Cut Events
Synchronization of data at home and cache site using snapshots
Active Cloud Engine provides the peer-snapshot (psnap) utility. A snapshot is a point-in-time copy of an
Active Cloud Engine file set. Peer snapshots are used for backup and disaster recovery purposes. The
psnap utility takes a snapshot of the cache and pushes all the data in the cache to the home site, so that
the data at the home site is consistent with the cache, and then takes a snapshot of home. This results in
a pair of snapshots on both sides referring to the same consistent copy of data.
A psnap can be taken for a single-writer mode file set only; it is currently not supported for the read-only
or local-updates modes. Also, the current limit is a maximum of 100 snapshots per file set.
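The psnap sequence (flush the cache to home, then snapshot both sides) can be sketched as follows (plain Python, with hypothetical in-memory dictionaries standing in for the cache and home file sets; not SONAS code):

```python
# Hypothetical stand-ins for the single-writer cache file set and its home file set.
cache_fs = {"sample1.mov": b"v1"}
home_fs = {}
cache_snaps = {}
home_snaps = {}


def mkpsnap(name):
    """Model of a peer snapshot: flush cached writes home, then snapshot both sides."""
    home_fs.clear()
    home_fs.update(cache_fs)            # push cache data so home is consistent with cache
    cache_snaps[name] = dict(cache_fs)  # point-in-time snapshot at the cache site
    home_snaps[name] = dict(home_fs)    # matching snapshot at the home site


mkpsnap("snap1")
# Both sides now hold a snapshot referring to the same consistent copy of the data.
assert cache_snaps["snap1"] == home_snaps["snap1"]
```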
The various steps involved are as follows:
At the home site:
1. Create a file set named cloud_nfs_src2.
root@sonas1isv.mgmt001st001 ~]# mkfset gpfs1 cloud_nfs_src2
EFSSG0070I File set cloud_nfs_src2 created successfully.
EFSSG1000I The command completed successfully.
2. Link the file set to the /ibm/gpfs1/cloud_nfs_src2 path.
[root@sonas1isv.mgmt001st001 ~]# linkfset gpfs1 cloud_nfs_src2
EFSSG0015I Refreshing data.
EFSSG0078I File set cloud_nfs_src2 successfully linked.
EFSSG1000I The command completed successfully.
3. Create a WAN cache source in the rw mode.
[root@sonas1isv.mgmt001st001 ~]# mkwcachesource homeshare2 /ibm/gpfs1/cloud_nfs_src2 --client '9.11.82.90(rw)'
EFSSG1000I The command completed successfully.
At the remote site:
1. Create a corresponding WAN cache in the single writer mode.
[root@sonas3isv.mgmt001st001 remotefset]# mkwcache gpfs1 remotefset2 /ibm/gpfs1/remotefset2 --cachemode single-writer --
homeip 9.11.82.16 --remotepath /ibm/gpfs1/cloud_nfs_src2
EFSSG1000I The command completed successfully.
2. Add data to remotefset2.
[root@sonas3isv.mgmt001st001 remotefset2]# ls -lrt
total 0
-rw------- 1 root root 5242880000 Dec 16 03:13 sample1.mov
3. Take a peer snapshot, snap1.
[root@sonas3isv.mgmt001st001 /]# mkpsnap gpfs1 remotefset2 snap1
If the maximum limit of psnaps is reached, this will automatically delete the oldest snapshot.
Do you really want to perform the operation (yes/no - default no):yes
EFSSG0019I The psnap snap1 has been successfully created.
EFSSG1000I The command completed successfully.
4. List snap1.
[root@sonas3isv.mgmt001st001 /]# lspsnap gpfs1 -r
EFSSG0015I Refreshing data.
Filesystem name Fileset name Snapshot ID Status Creation ID Timestamp
gpfs1 remotefset2 snap1 Valid 12/16/11 3:22 AM 1 12/16/11 3:23 AM
EFSSG1000I The command completed successfully.
5. Verify the contents of snap1.
[root@sonas3isv.mgmt001st001 /]# cd /ibm/gpfs1/remotefset2/.snapshots/
[root@sonas3isv.mgmt001st001 .snapshots]# ls
snap1-psnap-12402779243272901319-AC1F8405:4EDFB110-12-11-12-16-03-22-53
[root@sonas3isv.mgmt001st001 .snapshots]# cd snap1-psnap-12402779243272901319-AC1F8405:4EDFB110-12-11-12-16-03-22-
53/
[root@sonas3isv.mgmt001st001 snap1-psnap-12402779243272901319-AC1F8405:4EDFB110-12-11-12-16-03-22-53]# ls -lrt
total 0
-rw------- 1 root root 5242880000 Dec 16 03:13 sample1.mov
6. Add a new file, sample5.mov, to remotefset2 and remove the earlier sample1.mov file.
[root@sonas3isv.mgmt001st001 remotefset2]# ls
sample1.mov sample5.mov
[root@sonas3isv.mgmt001st001 remotefset2]# rm sample1.mov
[root@sonas3isv.mgmt001st001 remotefset2]# ls
sample5.mov
7. Take another peer snapshot, snap2.
[root@sonas3isv.mgmt001st001 /]# mkpsnap gpfs1 remotefset2 snap2
If the maximum limit of psnaps is reached, this will automatically delete the oldest snapshot.
Do you really want to perform the operation (yes/no - default no):yes
EFSSG0019I The psnap snap2 has been successfully created.
EFSSG1000I The command completed successfully.
8. List the snapshots created in the previous steps.
[root@sonas3isv.mgmt001st001 /]# lspsnap gpfs1
Filesystem name Fileset name Snapshot ID Status Creation ID Timestamp
gpfs1 remotefset2 snap2 Valid 12/16/11 3:29 AM 2 12/16/11 3:29 AM
gpfs1 remotefset2 snap1 Valid 12/16/11 3:22 AM 1 12/16/11 3:29 AM
EFSSG1000I The command completed successfully.
9. Verify the contents of snap2. Note that sample1.mov is absent from snap2 and sample5.mov
is present in it.
[root@sonas3isv.mgmt001st001 /]# cd /ibm/gpfs1/remotefset2/.snapshots/
[root@sonas3isv.mgmt001st001 .snapshots]# ls
snap1-psnap-12402779243272901319-AC1F8405:4EDFB110-12-11-12-16-03-22-53 snap2-psnap-12402779243272901319-AC1F8405:4EDFB110-12-11-12-16-03-29-42
[root@sonas3isv.mgmt001st001 .snapshots]# cd snap2-psnap-12402779243272901319-AC1F8405:4EDFB110-12-11-12-16-03-29-42/
[root@sonas3isv.mgmt001st001 snap2-psnap-12402779243272901319-AC1F8405:4EDFB110-12-11-12-16-03-29-42]# ls
IBM SONAS Enterprise backup and remote replication solution in a private cloud
24
sample5.mov
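The peer snapshot directory names shown in the listings above appear to encode the snapshot name, the cache cluster ID, a node identifier, and a set of timestamp fields. The following Python sketch splits a name into those apparent fields; the layout is inferred purely from the listings in this paper, not from SONAS documentation, so treat the field names as assumptions.

```python
def parse_psnap_dirname(name: str) -> dict:
    """Split a peer-snapshot directory name into its apparent fields.

    Layout inferred from the listings in this paper (an assumption):
      <snap>-psnap-<cluster_id>-<node_id>-<timestamp fields>
    The timestamp fields are left unparsed because their exact order
    is not documented here.
    """
    snap, marker, rest = name.split("-", 2)
    if marker != "psnap":
        raise ValueError("not a peer-snapshot directory name")
    cluster_id, node_id, *stamp = rest.split("-")
    return {
        "snapshot": snap,
        "cluster_id": cluster_id,
        "node_id": node_id,
        "timestamp_fields": stamp,
    }

info = parse_psnap_dirname(
    "snap1-psnap-12402779243272901319-AC1F8405:4EDFB110-12-11-12-16-03-22-53")
```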
At the home site:
Corresponding peer snapshots, which reflect data consistency, are automatically created on the
home side.
1. List the created snapshots on the gpfs1 file system.
[root@sonas1isv.mgmt001st001]# lssnapshot gpfs1 -r
EFSSG0015I Refreshing data.
Device name Fileset name Snapshot ID Rule Name Status Creation Used (metadata) Used (data) ID Timestamp
gpfs1 snap2-psnap-12402779243272901319-AC1F8405:4EDFB110-12-11-12-16-03-29-42 N/A Valid 12/16/11 3:29 AM 0 0 2 12/16/11 3:30 AM
gpfs1 snap1-psnap-12402779243272901319-AC1F8405:4EDFB110-12-11-12-16-03-22-53 N/A Valid 12/16/11 3:22 AM 0 0 1 12/16/11 3:30 AM
EFSSG1000I The command completed successfully.
2. Further explore the snapshot directory and compare the contents of the corresponding snapshots
with the contents of the cache site.
[root@sonas1isv.mgmt001st001 /]# cd /ibm/gpfs1/.snapshots/
[root@sonas1isv.mgmt001st001 .snapshots]# ls
snap1-psnap-12402779243272901319-AC1F8405:4EDFB110-12-11-12-16-03-22-53
snap2-psnap-12402779243272901319-AC1F8405:4EDFB110-12-11-12-16-03-29-42
[root@sonas1isv.mgmt001st001 .snapshots]# ls snap1-psnap-12402779243272901319-AC1F8405:4EDFB110-12-11-12-16-03-22-53/cloud_nfs_src2
sample1.mov
[root@sonas1isv.mgmt001st001 .snapshots]# ls snap2-psnap-12402779243272901319-AC1F8405:4EDFB110-12-11-12-16-03-29-42/cloud_nfs_src2
sample5.mov
Restoration of data at the home site
There can be instances when SONAS users want to restore objects that were deleted accidentally or,
in some cases, restore a previous version of an object. Restores are performed only on request, and
normally for a small set of objects within a file system. Because individual objects within a file
system can be restored, the restore commands also support wildcards. As opposed to backup, where
multiple interface nodes might be configured to back up to the Tivoli Storage Manager server, a
restore can be performed only on a single interface node.
The data can be restored at the home site using Tivoli Storage Manager. In this example, after
restoring the original file, the changes that were made at the remote site are lost.
You can use the startrestore command to restore SONAS data from the Tivoli Storage Manager server.
This command specifies not only the file pattern and the information on where the file system is
mounted, but also the location where the data has to be restored. Users cannot restore multiple file
systems at the same time.
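As a rough model of the -T behavior demonstrated in the steps that follow, the restored object keeps its path relative to the file-system mount point, recreated under the target directory. The mapping rule below is inferred from the example output later in this paper; it is a sketch of the observed result, not the startrestore implementation.

```python
from pathlib import PurePosixPath

def map_restore_target(object_path, mount_point, target_dir):
    """Map a backed-up object path to its location under the -T target.

    Inferred rule (an assumption based on the example in this paper):
    the object's path relative to the file-system mount point is
    recreated under the target directory.
    """
    rel = PurePosixPath(object_path).relative_to(mount_point)
    return str(PurePosixPath(target_dir) / rel)

dest = map_restore_target(
    "/ibm/gpfs1/cloud_nfs_src1/clip-2011-12-21 14;16;39.mov",
    "/ibm/gpfs1",
    "/ibm/gpfs1/test_restore")
```

The verification steps later in this section show exactly this layout: the restored clip lands under /ibm/gpfs1/test_restore/cloud_nfs_src1/.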
Perform the following steps to restore data at the home site.
1. Restore the file to a different location, /ibm/gpfs1/test_restore, by using the -T option.
[root@sonas1isv.mgmt001st001 ~]# startrestore '/ibm/gpfs1/cloud_nfs_src1/clip-2011-12-21 14;16;39.mov' -T /ibm/gpfs1/test_restore/
EFSSA0184I The restore is started on gpfs1 with JobID 5.
EFSSG1000I The command completed successfully.
2. Verify command completion and then check the restore log.
[root@sonas1isv.mgmt001st001 ~]# showlog 5
Primary node: int004st001
Job ID : 5
2012-01-10 23:17:31+07:00 Restorepattern: /ibm/gpfs1/cloud_nfs_src1/clip-2011-12-21 14;16;39.mov
2012-01-10 23:17:31+07:00 IBM Tivoli Storage Manager
2012-01-10 23:17:31+07:00 Command Line Backup-Archive Client Interface
2012-01-10 23:17:31+07:00 Client Version 6, Release 3, Level 0.4
2012-01-10 23:17:31+07:00 Client date/time: 2012-01-10 23:17:31
2012-01-10 23:17:31+07:00 (c) Copyright by IBM Corporation and other(s) 1990, 2011. All Rights Reserved.
2012-01-10 23:17:31+07:00
2012-01-10 23:17:31+07:00 Node Name: SONAS1INT04
2012-01-10 23:17:32+07:00 Session established with server ISVP13_LPM: AIX
2012-01-10 23:17:32+07:00 Server Version 6, Release 2, Level 2.2
2012-01-10 23:17:32+07:00 Server date/time: 2012-01-10 23:17:31 Last access: 2012-01-10 23:17:30
2012-01-10 23:17:32+07:00
2012-01-10 23:17:32+07:00 Accessing as node: SONAS1
2012-01-10 23:17:32+07:00 Restore function invoked.
2012-01-10 23:17:32+07:00
2012-01-10 23:17:35+07:00
2012-01-10 23:17:35+07:00 Restore processing finished.
2012-01-10 23:17:35+07:00
2012-01-10 23:17:35+07:00 Total number of objects restored: 2
2012-01-10 23:17:35+07:00 Total number of objects failed: 0
2012-01-10 23:17:35+07:00 Total number of bytes transferred: 15.00 MB
2012-01-10 23:17:35+07:00 Data transfer time: 0.12 sec
2012-01-10 23:17:35+07:00 Network data transfer rate: 125,461.99 KB/sec
2012-01-10 23:17:35+07:00 Aggregate data transfer rate: 4,951.26 KB/sec
2012-01-10 23:17:35+07:00 Elapsed processing time: 00:00:03
2012-01-10 23:17:35+07:00 dsmc return code: 0
----------------------------------------------------------
End of log - restore completed
----------------------------------------------------------
EFSSG1000I The command completed successfully.
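The two rates in the log above are worth distinguishing: the network data transfer rate divides the bytes moved by the data-transfer time alone, while the aggregate rate divides the same bytes by the total elapsed processing time. The sketch below is my reading of the standard dsmc summary fields; treat the formulas as illustrative.

```python
def transfer_rates(total_bytes, transfer_sec, elapsed_sec):
    """Return (network, aggregate) rates in KB/sec.

    Assumed definitions: the network rate uses data-transfer time only;
    the aggregate rate uses total elapsed processing time.
    """
    kb = total_bytes / 1024
    return kb / transfer_sec, kb / elapsed_sec

# Using the figures from the restore log above (15,734,851 bytes,
# 0.12 s transfer, ~3 s elapsed). Rounding in the log means the
# results will be close to, but not exactly, the logged values.
net, agg = transfer_rates(15_734_851, 0.12, 3.0)
```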
3. Verify that no errors are reported and then check the error log.
[root@sonas1isv.mgmt001st001 ~]# showerrors 5
Primary node: int004st001
Job ID : 5
----------------------------------------------------------
End of error - restore completed
----------------------------------------------------------
EFSSG1000I The command completed successfully.
4. Verify the file after restoration.
[root@sonas1isv.mgmt001st001 test_restore]# ll
total 8
drwxrwxrwx 2 root root 8192 Dec 21 02:15 cloud_nfs_src1
[root@sonas1isv.mgmt001st001 test_restore]# cd cloud_nfs_src1/
[root@sonas1isv.mgmt001st001 cloud_nfs_src1]# ll
total 15368
-rw------- 1 root root 15734851 Dec 21 02:15 clip-2011-12-21 14;16;39.mov
For more detailed information on how to back up and restore files using TSM on SONAS, refer to the
references at the end of this paper.
Two-way replication (home to remote and back)
This section refers to the movement of data from the home to the remote site and back from remote to the
home site.
From home site to cache site
Refer to the earlier section on “Initial data migration to cache site using Active Cloud Engine pre-population and policy engine” for details.
From cache site to home site
Continuing from the “Initial data migration to cache site using Active Cloud Engine pre-population
and policy engine” section, the next steps to replicate the data back to the home site into a
different (new) file set are as follows:
At the home site:
1. Create a new fileset, home_src2.
[root@sonas1isv.mgmt001st001 /]# mkfset gpfs1 home_src2
EFSSG0070I File set home_src2 created successfully.
EFSSG1000I The command completed successfully.
2. Link the file set to the /ibm/gpfs1/home_src2 path.
[root@sonas1isv.mgmt001st001 /]# linkfset gpfs1 home_src2
EFSSG0015I Refreshing data.
EFSSG0078I File set home_src2 successfully linked.
EFSSG1000I The command completed successfully.
3. Create a WAN cache source in the rw mode.
[root@sonas1isv.mgmt001st001 /]# mkwcachesource homeshare2 /ibm/gpfs1/home_src2 --client '9.11.82.90(rw)'
EFSSG1000I The command completed successfully.
4. List the WAN caches and check whether the home WAN export is created.
[root@sonas1isv.mgmt001st001 /]# lswcachesource
WCache-Source Name WCache-Source Path ClientClusterId ClientClusterName WCache-Source Access Mode Is Cached
homeshare1 "/ibm/gpfs1/home_src1" 12402779243272901319 sonas3isv.storage.tucson.ibm.com ro yes
homeshare2 /ibm/gpfs1/home_src2 12402779243272901319 sonas3isv.storage.tucson.ibm.com rw no
EFSSG1000I The command completed successfully.
5. List the file sets.
[root@sonas1isv.mgmt001st001 home_src1]# lsfset gpfs1
ID Name Status Path Is independent Creation time Comment Timestamp
0 root Linked /ibm/gpfs1 yes 12/12/11 4:09 AM root fileset 1/11/12 12:06 AM
1 home_src1 Linked /ibm/gpfs1/home_src1 no 1/10/12 12:56 AM 1/11/12 12:06 AM
2 home_src2 Linked /ibm/gpfs1/home_src2 no 1/10/12 2:07 AM 1/11/12 12:06 AM
EFSSG1000I The command completed successfully.
6. Verify that there are no contents at the home site before replication from the cache site.
[root@sonas1isv.mgmt001st001 /]# ll /ibm/gpfs1/home_src2
total 0
7. After replication from remotefset2, verify that the home site is now populated with the replicated
data.
[root@sonas1isv.mgmt001st001 /]# ll /ibm/gpfs1/home_src2
total 15120
-rwx------ 1 root root 15734851 Jan 11 00:12 clip-2011-12-21 14;16;39.mov
drwx------ 3 root root 8192 Jan 11 00:12 Final Cut Events
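Step 7 confirms replication only by listing file names and sizes. For stronger evidence that the cache and home copies are identical, the two trees can be checksummed and compared. This generic sketch is my own addition; the SONAS paths in the closing comment come from the example above, but the checksum comparison itself is not a SONAS feature.

```python
import hashlib
import os

def tree_digest(root):
    """Map each file path (relative to root) to its SHA-256 digest."""
    digests = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            full = os.path.join(dirpath, name)
            rel = os.path.relpath(full, root)
            with open(full, "rb") as fh:
                digests[rel] = hashlib.sha256(fh.read()).hexdigest()
    return digests

# After replication completes, the two trees should match, for example:
#   tree_digest("/ibm/gpfs1/remotefset2") == tree_digest("/ibm/gpfs1/home_src2")
```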
At the remote site:
1. List WAN cache.
[root@sonas3isv.mgmt001st001 ~]# lswcache gpfs1
ID Name Status Path CreationTime Comment RemoteFilesetPath CacheState CacheMode
18 remotefset1 Linked /ibm/gpfs1/remotefset1 1/10/12 1:23 AM 9.11.82.28:/ibm/gpfs1/home_src1 enabled read-only
EFSSG1000I The command completed successfully.
2. Create a new export in the rw mode so that the Apple Final Cut Pro application can add it as a
SAN location (where future read and write operations on the files will take place) when the
home or primary site is down for any reason.
[root@sonas3isv.mgmt001st001 remotefset1]# mkexport remoteshare1 /ibm/gpfs1/remotefset1 --nfs '*(rw,no_root_squash,insecure)'
EFSSG0019I The export remoteshare1 has been successfully created.
EFSSG1000I The command completed successfully.
3. List the export.
[root@sonas3isv.mgmt001st001 remotefset1]# lsexport
Name Path Protocol Active Timestamp
remoteshare1 /ibm/gpfs1/remotefset1 NFS true 1/11/12 12:02 AM
EFSSG1000I The command completed successfully.
4. Create a single-writer Active Cloud Engine file set.
[root@sonas3isv.mgmt001st001 /]# mkwcache gpfs1 remotefset2 /ibm/gpfs1/remotefset2 --cachemode single-writer --homeip 9.11.82.16 --remotepath /ibm/gpfs1/home_src2
EFSSG1000I The command completed successfully.
5. Copy the initially replicated data from the remotefset1 directory (where the data was initially
replicated using the prepopulation feature) to the newly created remotefset2 file set. Because the
new caching relationship between the cache and home sites is established in single-writer mode,
the newly copied data (remotefset2) is automatically replicated back to the home site into the
newly created home_src2 file set.
[root@sonas3isv.mgmt001st001 /]# cp -R /ibm/gpfs1/remotefset1/* /ibm/gpfs1/remotefset2
Known limitation: When replicating back from the remote (cache) site to the home site, the data
first needs to be placed in a new file set at the cache site. There are then two options for the home site:
If the existing file set is deleted, the replicated data can be transferred to the same file
set (after it is re-created).
The replicated data can be transferred to a new file set, because the old file set already exists.
(A similar scenario is explained earlier in this paper.)
Summary
The IBM SONAS system is a truly scalable solution for today's enterprise file-based workloads and can
be fully integrated with the Tivoli Storage Manager server. The Tivoli Storage Manager backup-archive
client comes preinstalled with SONAS, and data is reliably backed up directly to external Tivoli
Storage Manager servers through the interface nodes, providing faster performance. Active Cloud
Engine, on the other hand, helps to ensure that the enterprise's data is always replicated and updated
across data centers spread worldwide. The prepopulation feature of Active Cloud Engine helps to
transfer the initial data from the home to the remote site. Also, the peer snapshots feature helps to
ensure that data-consistent snapshots are taken periodically across the home and remote sites, in case
they are required for restoration.
This paper provided guidelines for backing up data on SONAS using Tivoli Storage Manager and
exploited the features of Active Cloud Engine in an enterprise's private cloud environment.
Acknowledgement
This paper could not have been completed without the valuable suggestions from Shekhar Agrawal
(Active Cloud Engine team) and Mandar Vaidya (ISV team). The testing effort went smoothly thanks to
their deep hands-on experience with Active Cloud Engine on SONAS and Tivoli Storage Manager. The
author also thanks Pratap Banthia and Kedar M Karmarkar for supporting this effort.
Appendix A: Resources
The following websites provide useful references to supplement the information contained in this paper:
IBM SONAS administration and user documentation
http://publib.boulder.ibm.com/infocenter/sonasic/sonas1ic/index.jsp
IBM Scale Out Network Attached Storage Administrator's Guide (GA32-0713)
http://publib.boulder.ibm.com/infocenter/sonasic/sonas1ic/topic/com.ibm.sonas.doc/sonas_admin_guide.pdf
IBM SONAS Introduction and Planning Guide (GA32-0716)
http://publib.boulder.ibm.com/infocenter/sonasic/sonas1ic/topic/com.ibm.sonas.doc/sonas_ipg.pdf
IBM Scale Out Network Attached Storage: Architecture, Planning, and Implementation
Basics
ibm.com/redbooks/redbooks/pdfs/sg247875.pdf
Backup and implementing IBM Tivoli Storage Manager solution on the IBM Storwize
V7000 Unified system
ibm.com/partnerworld/wps/servlet/ContentHandler/stg_ast_sto_wp_sonas_tivoli_storwize_v700_unified_system
Integrating IBM SONAS with IBM Tivoli Storage Manager
ibm.com/partnerworld/wps/servlet/ContentHandler/whitepaper/sonas/tivoli/use
Appendix B: Glossary
Application nodes – Nodes (such as interface nodes) within the cache cluster that handle application
requests, where the NFS server performs file operations to access data in order to serve applications
on the NFS client. By default, all the nodes that access a caching file set are application nodes.
Application request - A file system data or metadata request made by an application running on an
application node.
Cache cluster – A SONAS cluster where data is cached from the home cluster. It is also known as the
SONAS Active Cloud Engine cluster.
Cache server – A single server within the cache cluster.
Caching – A process where data from home SONAS cluster is brought into the caching SONAS cluster
upon access. The cached data is persistent and stored on the local SONAS cluster storage.
Caching file set – A GPFS file set that caches data from a single home cluster. GPFS can support
multiple caching file sets. A file set must be cache-enabled when it is created. Existing file sets,
also known as GPFS 3.3 file sets or file sets created before this feature is enabled, cannot be set
up for WAN caching. Caching file sets are GPFS file sets, and all the limitations of normal file sets
also apply to them.
Cache prepopulation daemon – A component of the prepopulation architecture that runs at the cache
cluster.
Disconnected mode – When a SONAS cache cluster loses connectivity to home cluster, it goes into a
state termed as disconnected mode.
FileRefreshLookupInterval, DirRefreshLookupInterval – After a file is cached in the cache cluster,
data is served from the cache itself. However, a validation with the home cluster is needed to ensure
that the cached file contents are up to date with the home cluster. For performance reasons,
revalidation is not performed on every file access; instead, it is done when access happens after a
set interval has passed. This interval is called refreshLookupInterval.
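The refreshLookupInterval behavior can be modeled as a simple time gate: serve from the cache unless the interval has elapsed since the last validation. The following is a conceptual sketch only; the class and parameter names are illustrative, not SONAS internals.

```python
import time

class CachedEntry:
    """Conceptual model of refreshLookupInterval gating (illustrative only)."""

    def __init__(self, refresh_interval_sec):
        self.refresh_interval = refresh_interval_sec
        self.last_validated = 0.0  # epoch seconds; 0.0 forces the first check

    def needs_revalidation(self, now=None):
        """True if the home cluster should be consulted before serving."""
        now = time.time() if now is None else now
        return (now - self.last_validated) >= self.refresh_interval

    def mark_validated(self, now=None):
        """Record a successful validation against the home cluster."""
        self.last_validated = time.time() if now is None else now

entry = CachedEntry(refresh_interval_sec=30.0)
entry.mark_validated(now=100.0)
# Within the interval the cache is served without contacting home;
# after the interval elapses, the next access triggers a revalidation.
```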
Gateway nodes – In Active Cloud Engine terms, these are nodes within the cache cluster that are
enabled for access to the home cluster. These nodes use internal NFS clients to read and write data
from the home cluster. Active Cloud Engine gateway nodes handle remote access for all Active Cloud
Engine-enabled file sets. In the SONAS environment, Active Cloud Engine gateway nodes are called
wcache nodes.
GPFS – General Parallel File System, the IBM cluster file system product.
Home cluster – The SONAS cluster where the data to be pulled in resides. The data is in the form of
a file system exported through NFSv4 or parallel NFS. It is also known as the remote storage cluster.
Home server – A single interface node within the home cluster.
NAS cluster – NAS refers to an appliance or device that stores data and serves it over the network.
Today's NAS appliances include multiple nodes or machines clustered together to make a NAS appliance.
Such appliances are referred to as NAS clusters in this paper.
NFS - A generic term referring to all the versions of the Network File System protocol.
Portable Operating System Interface (POSIX) – All the file-related OS interfaces, such as open, read,
write, close, and so on, are referred to as POSIX operations.
Prepopulation – The activity of bringing data into the cache cluster before it is accessed from the
home cluster.
Recovery mode - The mode of the Active Cloud Engine cluster when it is recovering from a failure
(gateway node, GPFS internal failures, such as queue drops, memory allocation failures, and so on). All
application requests are blocked in this mode.
Replication – The process of copying data or files from one SONAS cluster to another SONAS cluster.
There are many ways (such as rsync, explicit copy, and so on) to replicate data. Caching is another way to
achieve replication.
Revalidation – When the data is brought into the cache, it is necessary to ensure that it is up to
date. The process of verifying the validity of data with the home cluster is termed revalidation.
This normally involves an NFS request to get the file attributes (which contain the time when the
file was changed or modified at the home cluster) and comparing them with the attributes stored from
the previous verification.
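The comparison step of revalidation can be sketched as checking the home file's change and modification times against the attributes recorded at the previous verification. The field names below mirror common NFS attributes and are illustrative, not the actual SONAS data structures.

```python
def is_cache_stale(cached_attrs, home_attrs):
    """True if the home copy changed since the last validation.

    Compares the mtime/ctime recorded at the previous check with the
    attributes just fetched from the home cluster, as the glossary entry
    describes. Field names are illustrative assumptions.
    """
    return (home_attrs["mtime"] != cached_attrs["mtime"]
            or home_attrs["ctime"] != cached_attrs["ctime"])

# Unchanged attributes mean the cached copy can keep being served;
# any change in mtime or ctime means the cache must be refreshed.
fresh = is_cache_stale({"mtime": 100, "ctime": 100},
                       {"mtime": 100, "ctime": 100})
stale = is_cache_stale({"mtime": 100, "ctime": 100},
                       {"mtime": 150, "ctime": 150})
```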
Wcache node – When the Active Cloud Engine gateway node is mapped into SONAS concepts, it is termed
a wcache node.
Synchronization lag - The time delay between the read at the home cluster and the time reflecting the
last write at the cache cluster. This attribute can be set using the mmchconfig command.
Validity lag - The time delay between the read at the cache cluster and the time reflecting the last write at
the home cluster. This attribute can be set system-wide or per file set using the
mmchfileset/mmchconfig command.
WAN – The wide area network connecting two SONAS clusters at different locations.
About the author
Gaurav Chhaunker, a certified Project Management Professional (PMP), is a Senior Staff Software
Engineer in the IBM SONAS ISV Enablement Group. He has more than 8 years of experience working with
various storage and system management technologies. Gaurav holds a Bachelor's degree in Engineering
from Kakatiya University, Andhra Pradesh, India and a Master's degree in Computer and Information
Science from Cleveland State University (CSU), Ohio, USA. You can reach Gaurav at
gaurav.chhaunker@in.ibm.com.
presented here to communicate IBM's current investment and development activities as a good faith effort
to help with our customers' future planning.
Performance is based on measurements and projections using standard IBM benchmarks in a controlled
environment. The actual throughput or performance that any user will experience will vary depending upon
considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the
storage configuration, and the workload processed. Therefore, no assurance can be given that an
individual user will achieve throughput or performance improvements equivalent to the ratios stated here.
Photographs shown are of engineering prototypes. Changes may be incorporated in production models.
Any references in this information to non-IBM websites are provided for convenience only and do not in
any manner serve as an endorsement of those websites. The materials at those websites are not part of
the materials for this IBM product and use of those websites is at your own risk.