IBM Spectrum Scale with Active File Management (AFM) allows storing data safely across geographically distributed sites using a clustered file system cache. AFM moves data between the home cluster where data is primarily stored and cache clusters where data is made available on demand or periodically to increase availability. Modes like read-only, single-writer, and independent-writer define how data is cached, modified, and synchronized between sites.
Ibm spectrum scale fundamentals workshop for americas part 4 Replication, Str... (xKinAnx)
The document provides an overview of IBM Spectrum Scale Active File Management (AFM). AFM allows data to be accessed globally across multiple clusters as if it were local by automatically managing asynchronous replication. It describes the various AFM modes including read-only caching, single-writer, and independent writer. It also covers topics like pre-fetching data, cache eviction, cache states, expiration of stale data, and the types of data transferred between home and cache sites.
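The read-only caching mode described above can be illustrated with a toy model. This is only a sketch: real AFM operates asynchronously at the filesystem layer, and the class and method names here are illustrative, not part of any Spectrum Scale API.

```python
# Toy sketch of AFM-style read-only caching: on a cache miss the file is
# fetched from the home site; subsequent reads are served locally.

class ReadOnlyCache:
    def __init__(self, home):
        self.home = home          # dict: path -> contents at the home site
        self.cache = {}           # locally cached copies
        self.fetches = 0          # count of transfers from home

    def read(self, path):
        if path not in self.cache:        # cache miss: pull from home
            self.cache[path] = self.home[path]
            self.fetches += 1
        return self.cache[path]

home = {"/data/a.txt": "hello"}
cache = ReadOnlyCache(home)
cache.read("/data/a.txt")   # miss: fetched from home
cache.read("/data/a.txt")   # hit: served from the local cache
```

After the second read, `fetches` is still 1: once cached, data is served locally until it is evicted or expires, which is the availability benefit the slides describe.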
This document discusses Spectrum Scale memory usage. It outlines Spectrum Scale basics like clusters, nodes, and filesystems. It describes the different Spectrum Scale memory pools: pagepool for data, shared segment for metadata references, and external heap for daemons. It provides information on calculating memory needs based on parameters like files to cache, stat cache size, nodes, and access patterns. Other topics covered include related Linux memory usage and out of scope memory components.
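A back-of-envelope version of the memory calculation described above can be sketched in a few lines. The per-entry sizes used here (about 3 KiB per `maxFilesToCache` entry and about 400 bytes per `maxStatCache` entry) are rough figures drawn from older GPFS documentation and should be treated as assumptions; check the documentation for your release before sizing a real system.

```python
# Rough sizing of the GPFS shared segment from the two cache parameters.
# The per-entry byte counts are assumptions for illustration, not current
# product numbers.

def shared_segment_estimate(max_files_to_cache, max_stat_cache,
                            bytes_per_file=3 * 1024, bytes_per_stat=400):
    """Return an estimated shared-segment size in bytes."""
    return max_files_to_cache * bytes_per_file + max_stat_cache * bytes_per_stat

# Example: 100,000 cached files plus 400,000 stat cache entries.
est = shared_segment_estimate(100_000, 400_000)
print(f"{est / 2**20:.0f} MiB")   # prints the estimate in MiB
```

The point of the sketch is the shape of the formula: memory scales linearly with both cache parameters, and on every node, so cluster-wide totals grow quickly.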
This document discusses authentication and ID mapping in IBM Spectrum Scale. It provides an overview of authentication basics, UNIX and Windows authentication, and ID mapping. It then describes authentication and ID mapping in IBM Spectrum Scale, including supported authentication methods, ID mapping methods, and configuration prerequisites. Active Directory authentication with automatic, RFC2307, and LDAP ID mapping is explained in more detail.
Ibm spectrum scale fundamentals workshop for americas part 4 spectrum scale_r... (xKinAnx)
This document provides information about replication and stretch clusters in IBM Spectrum Scale. It defines replication as synchronously copying file system data across failure groups for redundancy. While replication improves availability, it reduces performance and increases storage usage. Stretch clusters combine two or more clusters to create a single large cluster, typically using replication between sites. Replication policies and failure group configuration are important to ensure effective data duplication.
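The failure-group rule described above can be sketched as a small placement function: the replicas of each block must land in distinct failure groups, so losing any one group leaves at least one copy. A real file system balances placement by capacity and topology; this toy version just round-robins, and all names here are illustrative.

```python
# Minimal sketch of replica placement across failure groups. Each block gets
# `copies` replicas, each in a different failure group.

from itertools import cycle

def place_replicas(block_ids, failure_groups, copies=2):
    """Map each block to `copies` distinct failure groups."""
    if copies > len(failure_groups):
        raise ValueError("not enough failure groups for the replica count")
    placement = {}
    rotation = cycle(range(len(failure_groups)))
    for block in block_ids:
        start = next(rotation)
        placement[block] = [failure_groups[(start + i) % len(failure_groups)]
                            for i in range(copies)]
    return placement

plan = place_replicas(["b0", "b1", "b2"], ["siteA", "siteB"], copies=2)
# Every block has one copy in each site, so either site can fail.
assert all(len(set(groups)) == 2 for groups in plan.values())
```

In a stretch cluster, each site is typically its own failure group, which is exactly why the replication factor and failure-group configuration must be chosen together.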
IBM Spectrum Scale Authentication for Protocols (Sandeep Patil)
The document discusses IBM Spectrum Scale protocol authentication. It provides an overview of configuring file protocol authentication with Active Directory using RFC2307 ID mapping. It also discusses configuring object protocol authentication with a local user database. The authentication configuration is managed using the mmuserauth service command, which allows creating, listing, checking, and removing authentication configurations for file and object access protocols.
Ibm spectrum scale_backup_n_archive_v03_ash (Ashutosh Mate)
IBM Spectrum Scale can be used as both the source and destination for backup and archiving. As a source, Spectrum Scale data can be backed up to products like Spectrum Protect, Spectrum Archive, and third-party backup software. As a destination, Spectrum Protect can use Spectrum Scale and ESS storage for storing backed up or archived data, providing scalability, performance, and cost benefits over other solutions. Case studies demonstrate how large enterprises and regional hospital networks have consolidated backup infrastructure and improved availability, capacity, and backup/restore speeds by combining Spectrum Scale and Spectrum Protect.
This document provides an overview of EMC's VNX storage solutions. It discusses the VNX and VNXe series which provide unified storage for file, block and object storage. It highlights key features like FAST cache and virtual provisioning which optimize performance and efficiency. Management is simplified through Unisphere which provides centralized management and wizard-based configuration. Software solutions are offered in packaged suites to provide protection, replication and efficiency.
Ibm spectrum scale fundamentals workshop for americas part 1 components archi... (xKinAnx)
The document provides instructions for installing and configuring Spectrum Scale 4.1. Key steps include: installing Spectrum Scale software on nodes; creating a cluster using mmcrcluster and designating primary/secondary servers; verifying the cluster status with mmlscluster; creating Network Shared Disks (NSDs); and creating a file system. The document also covers licensing, system requirements, and IBM and client responsibilities for installation and maintenance.
The document provides an overview of Logical Volume Management (LVM) in Linux. It discusses what LVM is, its main components like physical volumes, volume groups, logical volumes, and how they relate. It then gives steps to use LVM by creating a physical volume, volume group and logical volume. It also discusses how LVM allows expanding logical volumes and live resizing of file systems.
Step by Step Restore rman to different host (Osama Mustafa)
1. Take a backup of the database and archived logs on the source system using RMAN.
2. Copy the backup files to the new target system using the same directory structure.
3. Restore the control file, SPFILE, and database files to the target system using RMAN, changing the data file locations and redo log file locations as needed.
4. Open the database with a resetlogs after restoring the database, control file, and archived redo logs from backup.
Ibm spectrum scale fundamentals workshop for americas part 5 ess gnr-usecases... (xKinAnx)
This document provides an overview of Spectrum Scale 4.1 system administration. It describes the Elastic Storage Server options and components, Spectrum Scale native RAID (GNR), and tips for best practices. GNR implements sophisticated data placement and error correction algorithms using software RAID to provide high reliability and performance without additional hardware. It features auto-rebalancing, low rebuild overhead through declustering, and end-to-end data checksumming.
This document provides an introduction to storage concepts and the history of disk and tape storage. It discusses how storage has evolved from the earliest mainframes using punched cards and magnetic tape, to the introduction of disk drives and disk arrays. The key developments covered include the transition from tape to disk drives for faster direct access storage, the benefits of RAID technology for performance and redundancy, and how storage architectures continue advancing with higher capacity and faster disks.
The document provides information about Engr. Jed G. Concepcion's background and experience in data backup solutions, cloud technology, and IT services. It includes details about his educational background, professional affiliations, certifications, and past work experience in engineering, teaching, and management roles. The document also contains sections about data backup concepts and best practices, different backup architectures, storage options, backup methods, and disaster recovery.
1. DB2 Data Sharing allows applications running on multiple DB2 subsystems to concurrently read and write to the same data, providing high scalability, performance, and continuous availability.
2. It provides benefits like increased capacity, continuous availability during planned and unplanned outages, easier growth accommodation, and dynamic workload balancing.
3. The Parallel Sysplex and Data Sharing architecture, along with features like rolling maintenance and dynamic workload balancing, work to ensure continuous availability even if a DB2 subsystem or z/OS system fails.
Best practices for DB2 for z/OS log based recovery (Florence Dubois)
The need to perform a DB2 log-based recovery of multiple objects is a very rare event, but statistically, it is more frequent than a true disaster recovery event (flood, fire, etc). Taking regular backups is necessary but far from sufficient for anything beyond minor application recovery. If not prepared, practiced and optimised, it can lead to extended application service downtimes – possibly many hours to several days. This presentation will provide many hints and tips on how to plan, design intelligently, stress test and optimise DB2 log-based recovery.
Disaster Recovery & Data Backup Strategies (Spiceworks)
This document discusses data backup strategies and planning. It emphasizes that backups are critical for businesses to protect their data and recover from data loss. The document outlines planning considerations like identifying critical systems and data, recovery objectives, and capacity needs. It then covers various backup methods and factors to consider when developing a backup plan such as repository type, media type, and testing procedures. Regularly monitoring and testing backups is key to ensuring the plan is effective.
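The backup methods the document compares differ in one detail: what they measure change against. A full backup copies everything, a differential copies what changed since the last full backup, and an incremental copies what changed since the last backup of any kind. A sketch using plain modification-time numbers (all names here are illustrative):

```python
# Selecting files for each backup method based on modification times.
# `files` maps name -> mtime; timestamps are plain numbers for illustration.

def select_files(files, method, last_full, last_backup):
    """Return the set of file names the given backup method would copy."""
    if method == "full":
        return set(files)
    if method == "differential":
        return {name for name, mtime in files.items() if mtime > last_full}
    if method == "incremental":
        return {name for name, mtime in files.items() if mtime > last_backup}
    raise ValueError(f"unknown method: {method}")

files = {"a": 1, "b": 5, "c": 9}
# Last full backup at t=4, most recent backup of any kind at t=8.
assert select_files(files, "full", 4, 8) == {"a", "b", "c"}
assert select_files(files, "differential", 4, 8) == {"b", "c"}
assert select_files(files, "incremental", 4, 8) == {"c"}
```

The trade-off follows directly: incrementals are smallest but restoring needs the whole chain, while a differential restore needs only the last full plus the last differential.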
Implementing a Disaster Recovery Solution using VMware Site Recovery Manager ... (Paula Koziol)
IBM Spectrum Virtualize delivers business continuity capabilities using a stretched cluster configuration together with VMware Site Recovery Manager (SRM). The result is an end-to-end disaster recovery solution for organizations of all sizes. Join this session to understand how IBM Spectrum Virtualize, including offerings like IBM SAN Volume Controller (SVC) and IBM Storwize Family, integrates with VMware SRM to automate and optimize disaster recovery operations. Everyone who works in mission critical environments understands the need for high availability and effective solutions for planned and unplanned outages. Organizations demand disaster recovery operations that are fully automated and can be executed in a repeatable manner, so that they are always prepared for disaster situations. This IBM-VMware solution offers SMB and enterprise customers the ability to survive a wide range of failures and enables seamless migration of applications across company sites for various planned activities, enabling zero-downtime application mobility.
Backing up data involves taking copies of data so it can be recovered if the original is lost. Archiving moves less frequently used data to backup storage to free up space. An effective backup strategy includes choosing backup media, determining backup methods and frequency, storing and rotating backups, and being able to recover data from backups. Common media are tapes and external drives, while full, differential, and incremental backups are frequent methods. Rotation schemes like grandfather-father-son improve cost-efficiency and ensure all files are protected. Verification and recovery processes are also important parts of the strategy.
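The grandfather-father-son rotation mentioned above can be sketched as a simple day-to-tier classifier: daily "son" media are reused weekly, weekly "father" media monthly, and monthly "grandfather" media are retained long term. The exact schedule below (a 28-day cycle with weekly fulls on day 0 of each week) is an assumption for illustration; real schemes vary.

```python
# Sketch of a grandfather-father-son rotation over a 28-day cycle.

def gfs_tier(day):
    """Classify a backup by day number (day 0 = first day of the cycle)."""
    if day % 28 == 0:
        return "grandfather"   # monthly full, retained longest
    if day % 7 == 0:
        return "father"        # weekly full, reused monthly
    return "son"               # daily backup, reused weekly

month = [gfs_tier(d) for d in range(28)]
assert month.count("grandfather") == 1
assert month.count("father") == 3     # days 7, 14, 21
assert month.count("son") == 24
```

This is what makes the scheme cost-efficient: only one tape per month is retained long term, yet every file is always recoverable from some combination of the three tiers.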
EMC Data domain advanced features and functions (solarisyougood)
This document provides an overview of advanced features and functions of Data Domain systems. It covers topics such as virtual tape libraries (VTL), snapshots, replication, DD Boost integration, capacity and throughput planning, and system monitoring tools. The document consists of multiple lessons that describe these topics in detail and includes configuration examples.
Linux Memory Management with CMA (Contiguous Memory Allocator) (Pankaj Suryawanshi)
Fundamentals of Linux memory management and CMA (Contiguous Memory Allocator).
Virtual Memory, Physical Memory, Swap Space, DMA, IOMMU, Paging, Segmentation, TLB, Hugepages, ION (Google's memory manager)
The document discusses DB2's use of storage on the mainframe. It notes that DB2 uses VSAM data sets to store tablespaces, indexes, and other objects. These data sets can be managed by DB2 storage groups or SMS. Storage groups are lists of volumes where data sets are placed. The document recommends letting DB2 manage data sets using storage groups for less administrative work, but with less control, or defining your own data sets for more control but more work. It also provides details on where to find storage-related information in the DB2 catalog.
SQL Server High Availability and Disaster Recovery (Michael Poremba)
High availability and disaster recovery strategies for Microsoft SQL Server databases are discussed. Key points include:
1) High availability aims to minimize downtime through redundant components and automatic failover, while disaster recovery protects against total data center outage through redundant systems and facilities.
2) Various SQL Server high availability options are examined, including database mirroring, log shipping, and failover clustering, each with different capabilities like automatic failover speed and hardware requirements.
3) Disaster recovery focuses on having a redundant system in a separate location that can be switched over to if the primary system fails. It requires strategies for backup, offsite storage, and recovery of data at the redundant location.
This document provides guidance on diagnosing and addressing CSM storage issues in z/OS. It describes CSM and how it manages storage, symptoms of CSM storage problems like error messages and abends, how to gather diagnostic information like dumps and traces, common causes like insufficient CSM storage parameters, and recommendations such as increasing IVTPRM00 values and CSA allocation.
Disk and File System Management in Linux (Henry Osborne)
This document discusses disk and file system management in Linux. It covers MBR and GPT partition schemes, logical volume management, common file systems like ext4 and XFS, mounting file systems, and file system maintenance tools. It also discusses disk quotas, file ownership, permissions, and the umask command for setting default permissions.
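The umask behavior mentioned above is simple enough to demonstrate directly: the default permissions of a new file are the requested mode with the umask bits masked off, i.e. `mode & ~umask`.

```python
# Demonstrating how the umask shapes default permissions.

import os

def effective_mode(requested, umask):
    """Return the permission bits actually applied (requested & ~umask)."""
    return requested & ~umask

# A typical umask of 022 turns the default 666 into 644 for files
# and 777 into 755 for directories.
assert effective_mode(0o666, 0o022) == 0o644
assert effective_mode(0o777, 0o022) == 0o755

# os.umask sets the process umask and returns the previous value.
old = os.umask(0o027)
os.umask(old)   # restore the original umask
```

A stricter umask such as 027 removes all permissions for "other", which is why it is a common hardening recommendation.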
This document discusses MongoDB sharding which involves horizontally scaling MongoDB across multiple machines or shards. It describes the components of a sharded MongoDB cluster including shards, config servers, and mongos query routers. It provides examples of when and why sharding would be used such as for large datasets, high throughput, hardware limitations, storage engine limitations, isolating failures, and separating hot and cold data. The document then outlines steps to set up a basic two node sharded cluster with one shard, three config servers, and mongos query routers on the same two machines.
The document discusses Oracle Golden Gate software. It provides real-time data integration across heterogeneous database systems with low overhead. It captures transactional data from source databases using log-based extraction and delivers the data to target systems with transactional integrity. Key features include high performance, flexibility to integrate various database types, and reliability during outages.
Memory management handles allocation of memory to processes and tracks used and free memory. It uses techniques like paging, segmentation, and dynamic allocation from a heap. Paging maps logical addresses to physical pages, avoiding external fragmentation. Segmentation divides memory into logical segments of varying sizes. Dynamic allocation fulfills requests from the heap, managing free blocks and avoiding fragmentation and memory leaks.
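The paging mechanism described above can be shown as a toy translation function: a logical address is split into a page number and an offset, the page table maps the page number to a physical frame, and the physical address is `frame * page_size + offset`. The page size and table contents below are illustrative.

```python
# Toy paging sketch: translating logical addresses through a page table.

PAGE_SIZE = 4096

def translate(logical_addr, page_table):
    """Translate a logical address to a physical one via the page table."""
    page, offset = divmod(logical_addr, PAGE_SIZE)
    if page not in page_table:
        raise LookupError(f"page fault: page {page} not resident")
    return page_table[page] * PAGE_SIZE + offset

# Page 0 lives in frame 5, page 1 in frame 2.
table = {0: 5, 1: 2}
assert translate(100, table) == 5 * PAGE_SIZE + 100
assert translate(PAGE_SIZE + 7, table) == 2 * PAGE_SIZE + 7
```

Because any free frame can back any page, pages need not be contiguous in physical memory, which is exactly how paging avoids external fragmentation.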
This document summarizes a presentation on Oracle RAC (Real Application Clusters) internals with a focus on Cache Fusion. The presentation covers:
1. An overview of Cache Fusion and how it allows data to be shared across instances to enable scalability.
2. Dynamic re-mastering which adjusts where data is mastered based on access patterns to reduce messaging.
3. Techniques for handling contention including partitioning, connection pools, and separating redo logs.
4. Benefits of combining Oracle Multitenant and RAC such as aligning PDBs to instances.
5. How Oracle In-Memory Column Store fully integrates with RAC including fault tolerance features.
TechDay - Toronto 2016 - Hyperconvergence and OpenNebula (OpenNebula Project)
Hyperconvergence integrates compute, storage, networking and virtualization resources from scratch in a commodity hardware box supported by a single vendor. It offers scalability, performance, centralized management, reliability and is software-focused. StorPool is a storage software that can be installed on servers to pool and aggregate the capacity and performance of drives. It provides standard block devices and replicates data across drives and servers for redundancy. StorPool integrates fully with Opennebula to provide a robust hyperconverged infrastructure on commodity hardware using distributed storage.
This document discusses best practices for migrating database workloads to Azure Infrastructure as a Service (IaaS). Some key points include:
- Choosing the appropriate VM series like E or M series optimized for database workloads.
- Using availability zones and geo-redundant storage for high availability and disaster recovery.
- Sizing storage correctly based on the database's input/output needs and using premium SSDs where needed.
- Migrating existing monitoring and management tools to the cloud for familiarity, and automating tasks like backups, patching, and problem resolution.
This document discusses accelerating Spark workloads on Amazon S3 using Alluxio. It describes the challenges of running Spark interactively on S3 due to its eventual consistency and expensive metadata operations. Alluxio provides a data caching layer that offers strong consistency, faster performance, and API compatibility with HDFS and S3. It also allows data outside of S3 to be analyzed. The document demonstrates how to bootstrap Alluxio on an AWS EMR cluster to accelerate Spark workloads running on S3.
Ceph - High Performance Without High CostsJonathan Long
Ceph is a high-performance storage platform that provides storage without high costs. The presentation discusses BlueStore, a redesign of Ceph's object store to improve performance and efficiency. BlueStore preserves wire compatibility but uses an incompatible storage format. It aims to double write performance and match or exceed read performance of the previous FileStore design. BlueStore simplifies the architecture and uses algorithms tailored for different hardware like flash. It was in a tech preview in the Jewel release and aims to be default in the Luminous release next year.
2Windows Server Proposal for Dynamic SolarKelvin L.docxtamicawaysmith
2
Windows Server Proposal for Dynamic Solar
Kelvin Le
CMIT370
Professor Joseph Marshall
10/11/2016
As an Information Technology consultant, I have gathered up important and beneficial ideas that will help Dynamic Solar manufacture and distribute solar panels successfully to the consumer market. The company has three locations which are spread out evenly across the country. These locations are San Diego, Houston, and Baltimore. Due to increase cost in electricity, the demand and growth of solar panels are also increasing which make it necessary to make data security a priority since patent and trademarks are at stake. San Diego will be the headquarters of this operation and the Baltimore and Houston sales personnel will need secure remote access to the San Diego office.
In this proposal, I will cover Active Directory, Group Policy, DNS, File Services, Remote Services, and WSUS.
Active Directory:
Active Directory is a centralized database that contains user account and security information. In a workgroup, security and management takes place on each computer, with each computer holding information about users and resources. With Active Directory, all computers share the same central database. In this case, Dynamic Solar would want the Houston and Baltimore location to share the same central database as the San Diego since it is the headquarters. Dynamic Solar should implement the Trees and Forests model. In this model, multiple domains are grouped together. There will be one Forest for Dynamic Solar which will help establish the relationship between trees that have different DNS name spaces. The Forest would be called Corp.DynamicSolar.Com that will span across the three different locations. The domains in a tree would be connected with a two-transitive trust so that way all domains in the forest would be able to trust one another. They would also share a common schema. This defines the object classes that can be created in Active Directory and the attributes they contain. Lastly, they would have common global catalogs. Within this domain an organization unit will be used to subdivide and organize network resources within the domain. San Diego will have primary and backup server to serve as a global catalog. The global catalog allows for users to authenticate to the domain and to utilize network resources.
The domain controllers should be placed at the San Diego headquarters site since data security is a priority. Read-only domain controller is an additional controller for a domain that hosts read-only partitions of the Active Directory database. The features from RODC that will help improve security measures and prevent the system from being compromised is the Administrator role separation, unidirectional replication, read-only data, and password replication. The administrator role separation allows RODCs to provide a secure mechanism for granting non-administrative domain users the right to log on to a ...
EOUG95 - Client Server Very Large Databases - PaperDavid Walker
The document discusses building large scaleable client/server solutions. It describes breaking the solution into four server components: database server, application server, batch server, and print server. It focuses on the database server, discussing how to make it resilient through clustering and scaleable by partitioning applications and using parallel query options. It also covers backup and recovery strategies.
A Hybrid Cloud Storage solution strategy that integrates local storage with cloud models can dramatically change the performance, cost and reliability parameters. Get detailed insights from Netmagic solutions.
This document discusses high availability strategies for MySQL databases across multiple datacenters. It covers architectural considerations for hot/hot vs hot/cold configurations and disaster recovery approaches. The main sections explore replication techniques like MySQL replication and alternative schemes, application high availability mechanisms, and how Percona can help with high availability solutions and services.
This document discusses memory management in operating systems. It covers topics like how memory management keeps track of allocated and free memory, provides protection using base and limit registers, and different address binding schemes. It also discusses dynamic loading, dynamic linking, logical versus physical addresses, swapping, memory allocation techniques like single allocation and multiple partitions, and issues like fragmentation. Paging and segmentation techniques for managing memory are also summarized.
Alluxio is a data orchestration platform that unifies data access at memory speed across multiple storage systems. It provides a unified namespace and intelligent caching to enable fast access to remote data. Alluxio's architecture includes a master that manages metadata, workers that manage block data on local storage, and clients that access data. New features in version 1.7.0 include asynchronous caching, Kubernetes integration, tiered locality, under store synchronization, and FUSE improvements.
Virtual SAN - A Deep Dive into Converged Storage (technical whitepaper)DataCore APAC
DataCore™ Virtual SAN introduces the next evolution in Software-defined Storage (SDS) by creating high-performance and highly-available shared storage pools using the disks and flash storage in your servers. It addresses
the requirements for fast and reliable access to storage across a cluster of servers at remote sites as well as in high-performance applications.
DataCore Virtual SAN virtualizes the local storage on two or more physical x86-64 servers. It can leverage any combination of magnetic disks (SAS, SATA) and optionally flash, to provide persistent storage services as close to the application as possible without having to go out over the wire (network or fabric). Virtual disks provisioned from DataCore Virtual SAN can also be shared across the cluster to support the dynamic migration and failover of applications between hosts.
DataCore Virtual SAN addresses the challenges that exist today within many IT organisations such as single points of failure, poor application performance (particularly within virtualized environments), low storage efficiency and utilisation, and high infrastructure costs.
Operating System
Topic Memory Management
for Btech/Bsc (C.S)/BCA...
Memory management is the functionality of an operating system which handles or manages primary memory. Memory management keeps track of each and every memory location either it is allocated to some process or it is free. It checks how much memory is to be allocated to processes. It decides which process will get memory at what time. It tracks whenever some memory gets freed or unallocated and correspondingly it updates the status.
The document discusses methods for accelerating Perforce workspace syncs and reducing network storage usage as digital assets grow rapidly. It introduces IC Manage Views, which uses dynamic virtual workspaces and local caching to achieve near-instant workspace syncs and reduce network storage by 4x. Benchmark results show IC Manage Views delivers files 2x faster than traditional methods through intelligent file redirection that separates reads from writes. IC Manage Views is compatible with existing storage technologies and scales as users and data grow.
The document describes the Google File System (GFS), a scalable distributed file system designed and implemented by Google to meet its rapidly growing data storage needs. Key aspects of GFS include using inexpensive commodity hardware, supporting large files and high throughput appending, and providing fault tolerance through replication across multiple servers. GFS differs from previous distributed file systems in its focus on high volume appending over rewriting, use of large files, and relaxed consistency to improve performance for Google's specific workload characteristics.
The document describes the Google File System (GFS), a scalable distributed file system designed and implemented by Google to meet its rapidly growing data storage needs. Key aspects of the GFS design include supporting large files and high throughput appending workloads on inexpensive commodity hardware in the face of frequent component failures. The GFS architecture uses a single master to manage metadata and multiple chunkservers to store and retrieve file chunks, providing fault tolerance through replication.
DataCore™ Virtual SAN introduces the next evolution in Software-defined Storage (SDS) by creating high-performance and highly-available shared storage pools using the disks and flash storage in your servers. It addresses
the requirements for fast and reliable access to storage across a cluster of servers at remote sites as well as in high-performance applications.
DataCore Virtual SAN virtualizes the local storage on two or more physical x86-64 servers. It can leverage any combination of magnetic disks (SAS, SATA) and optionally flash, to provide persistent storage services as close to the application as possible without having to go out over the wire (network or fabric). Virtual disks provisioned from DataCore Virtual SAN can also be shared across the cluster to support the dynamic migration and failover of applications between hosts.
DataCore Virtual SAN addresses the challenges that exist today within many IT organizations such as single points of failure, poor application performance (particularly within virtualized environments), low storage efficiency and utilization, and high infrastructure costs.
Similar to Data Sharing using Spectrum Scale Active File Management (20)
Taking AI to the Next Level in Manufacturing.pdfssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
6. Ideas and approaches to help build your organization's AI strategy.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
This presentation provides valuable insights into effective cost-saving techniques on AWS. Learn how to optimize your AWS resources by rightsizing, increasing elasticity, picking the right storage class, and choosing the best pricing model. Additionally, discover essential governance mechanisms to ensure continuous cost efficiency. Whether you are new to AWS or an experienced user, this presentation provides clear and practical tips to help you reduce your cloud costs and get the most out of your budget.
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Letter and Document Automation for Bonterra Impact Management (fka Social Sol...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on automated letter generation for Bonterra Impact Management using Google Workspace or Microsoft 365.
Interested in deploying letter generation automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Skybuffer AI: Advanced Conversational and Generative AI Solution on SAP Busin...Tatiana Kojar
Skybuffer AI, built on the robust SAP Business Technology Platform (SAP BTP), is the latest and most advanced version of our AI development, reaffirming our commitment to delivering top-tier AI solutions. Skybuffer AI harnesses all the innovative capabilities of the SAP BTP in the AI domain, from Conversational AI to cutting-edge Generative AI and Retrieval-Augmented Generation (RAG). It also helps SAP customers safeguard their investments into SAP Conversational AI and ensure a seamless, one-click transition to SAP Business AI.
With Skybuffer AI, various AI models can be integrated into a single communication channel such as Microsoft Teams. This integration empowers business users with insights drawn from SAP backend systems, enterprise documents, and the expansive knowledge of Generative AI. And the best part of it is that it is all managed through our intuitive no-code Action Server interface, requiring no extensive coding knowledge and making the advanced AI accessible to more users.
5th LF Energy Power Grid Model Meet-up SlidesDanBrown980551
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Mircosoft Teams session or in person at TU/e located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid -Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
Skybuffer SAM4U tool for SAP license adoptionTatiana Kojar
Manage and optimize your license adoption and consumption with SAM4U, an SAP free customer software asset management tool.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
Dive into the realm of operating systems (OS) with Pravash Chandra Das, a seasoned Digital Forensic Analyst, as your guide. 🚀 This comprehensive presentation illuminates the core concepts, types, and evolution of OS, essential for understanding modern computing landscapes.
Beginning with the foundational definition, Das clarifies the pivotal role of OS as system software orchestrating hardware resources, software applications, and user interactions. Through succinct descriptions, he delineates the diverse types of OS, from single-user, single-task environments like early MS-DOS iterations, to multi-user, multi-tasking systems exemplified by modern Linux distributions.
Crucial components like the kernel and shell are dissected, highlighting their indispensable functions in resource management and user interface interaction. Das elucidates how the kernel acts as the central nervous system, orchestrating process scheduling, memory allocation, and device management. Meanwhile, the shell serves as the gateway for user commands, bridging the gap between human input and machine execution. 💻
The narrative then shifts to a captivating exploration of prominent desktop OSs, Windows, macOS, and Linux. Windows, with its globally ubiquitous presence and user-friendly interface, emerges as a cornerstone in personal computing history. macOS, lauded for its sleek design and seamless integration with Apple's ecosystem, stands as a beacon of stability and creativity. Linux, an open-source marvel, offers unparalleled flexibility and security, revolutionizing the computing landscape. 🖥️
Moving to the realm of mobile devices, Das unravels the dominance of Android and iOS. Android's open-source ethos fosters a vibrant ecosystem of customization and innovation, while iOS boasts a seamless user experience and robust security infrastructure. Meanwhile, discontinued platforms like Symbian and Palm OS evoke nostalgia for their pioneering roles in the smartphone revolution.
The journey concludes with a reflection on the ever-evolving landscape of OS, underscored by the emergence of real-time operating systems (RTOS) and the persistent quest for innovation and efficiency. As technology continues to shape our world, understanding the foundations and evolution of operating systems remains paramount. Join Pravash Chandra Das on this illuminating journey through the heart of computing. 🌟
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
leewayhertz.com-AI in predictive maintenance Use cases technologies benefits ...alexjohnson7307
Predictive maintenance is a proactive approach that anticipates equipment failures before they happen. At the forefront of this innovative strategy is Artificial Intelligence (AI), which brings unprecedented precision and efficiency. AI in predictive maintenance is transforming industries by reducing downtime, minimizing costs, and enhancing productivity.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
WeTestAthens: Postman's AI & Automation Techniques
Data Sharing using Spectrum Scale Active File Management
1. IBM Systems Technical Symposium
Store your data safely at a geographically distributed site using Spectrum Scale with AFM
Trishali Nayar (Spectrum Scale Development)
2. Please note
• IBM’s statements regarding its plans, directions, and intent are subject to change or withdrawal without notice at IBM’s sole discretion.
• Information regarding potential future products is intended to outline our general product direction and it should not be relied on in making a purchasing decision.
• The information mentioned regarding potential future products is not a commitment, promise, or legal obligation to deliver any material, code or functionality. Information about potential future products may not be incorporated into any contract.
• The development, release, and timing of any future features or functionality described for our products remains at our sole discretion.
• Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon many factors, including considerations such as the amount of multiprogramming in the user’s job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve results similar to those stated here.
3. Introduction
• Spectrum Scale is a fast, scalable and complete storage solution for today’s data-intensive enterprise.
• Integrated tools designed to help organizations manage petabytes of data and millions of files.
• Active File Management is a clustered file system cache, using the underlying file system.
• Moves data on demand, periodically and continuously, which makes it extremely flexible.
• Helps increase global collaboration and immensely increases data availability.
4. Definitions
Home Cluster/Site
The cluster or main site where data is stored.
Cache Cluster/Site
The cluster where data is cached.
Note:
The home and cache sites are created independently of each other in terms of storage and network configuration. The number of nodes in each of these sites can vary based on workload.
6. Node Definitions
Gateway (GW) Node
On the cache site, a few nodes in the cluster are assigned the special responsibility of acting as gateway nodes. These gateway nodes are used to send and receive data from the home cluster.
Multiple nodes can be configured as gateway nodes for load balancing, workload distribution and better performance. The master GW node manages the entire data transfer for the fileset.
Application Node
An application node is any node in the cache cluster that gets I/O requests from applications.
A node can be both an application node and a GW node.
7. File system Operations
Synchronous Operations
Operations done at the cache, like reads, lookups or stats, which need a response from the home site before the application can be served.
The first access incurs cache “miss” performance, but subsequent accesses become cache “hits”.
Configuring revalidation is possible for some modes.
Asynchronous Operations
Operations done at the cache like creating directories/files, writes, renames, removes, truncates or setting permissions/attributes etc.
Once the operation is completed on the local filesystem at the cache and queued at the GW node, the response is returned to the application.
The GW node maintains a queue of all these asynchronous operations that need to be performed at the home cluster. These are performed at the home cluster after some delay; the process is asynchronous, but continuous.
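The asynchronous path above can be sketched as a toy model: the application returns as soon as the operation lands on the local cache filesystem and in the gateway queue, and the gateway replays the queue at home later. All names below are illustrative; this is a conceptual simulation, not the actual Spectrum Scale implementation.

```python
from collections import deque

class GatewayQueue:
    """Toy model of an AFM gateway node's write-back queue (illustration only)."""
    def __init__(self):
        self.pending = deque()   # async operations waiting to be replayed at home
        self.home = {}           # simulated home-site namespace

    def async_op(self, cache_fs, name, data):
        cache_fs[name] = data                # 1. apply on the local cache filesystem
        self.pending.append((name, data))    # 2. queue for later replay at home
        return "ok"                          # 3. respond to the application immediately

    def flush(self):
        # Later, and continuously, the gateway replays queued operations at home.
        while self.pending:
            name, data = self.pending.popleft()
            self.home[name] = data

cache = {}
gw = GatewayQueue()
gw.async_op(cache, "/data/f1", b"hello")          # application proceeds right away
assert "/data/f1" in cache                        # visible locally at once
assert "/data/f1" not in gw.home                  # not yet pushed to home
gw.flush()                                        # asynchronous push to home
assert gw.home["/data/f1"] == b"hello"
```

The key property shown is write-back caching: local latency for the application, WAN latency hidden behind the queue.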
8. Data Flow
Pull Data
This refers to the direction of data flow when data is pulled into the AFM cache from the home, e.g. on demand.
Push Data
This refers to the direction of data flow when data is pushed from the AFM cache to home, or from the primary site to the secondary site in Disaster Recovery scenarios.
Revalidation
The process of comparing the metadata at cache and home to determine if the data has changed at home, and if it has, fetching the latest contents.
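The revalidation decision can be reduced to a metadata comparison. The sketch below uses a single modification time as the compared metadata; the real AFM check and its intervals are more involved, so treat this purely as an illustration.

```python
def revalidate(cache_meta, home_meta):
    """Toy revalidation check: if home's metadata (here, just an mtime) is
    newer than what the cache last saw, the cached contents are stale and
    the latest contents must be fetched from home."""
    return home_meta["mtime"] > cache_meta["mtime"]

assert revalidate({"mtime": 100}, {"mtime": 150}) is True   # home changed: pull
assert revalidate({"mtime": 150}, {"mtime": 150}) is False  # unchanged: serve cache
```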
9. Modes Available
Read-only (RO)
This is a mode used for pulling data from home. The data can be pulled on demand, i.e. on access, or it can be prefetched as well. The data is modified only at home, and any changes get pulled into the cache after the revalidation duration. The cache behaves like a read-only file system; creating and modifying files is not allowed.
Single-writer (SW)
When a cache is configured in this mode, the cache site can exclusively write data. All asynchronous operations at the cache get pushed to the home site asynchronously, hiding WAN latencies. This also helps provide better performance to any applications which run at the cache, as write-back caching is done. When any asynchronous operation happens, an application can proceed as soon as the operation happens locally on its filesystem at the cache. The same operation also gets queued on the AFM gateway node.
There is a 1:1 relationship between the AFM single-writer cache fileset and the home fileset. This implies that all the data is to be written at the single cache site and the home is used only for reading. AFM cannot detect or prevent home-site modification of data; the administrators need to ensure that the data is not modified or accidentally corrupted.
Local-update (LU)
This is used to pull data from home, but any changes made at the cache are not pushed to the home. When a cache is configured in this mode, the cached data is available for both reading and writing, but the data modified at the cache site is not sent back to the home site. So, this mode serves as a scratch cache. After the data is modified at cache, new updates made at home for that particular data object are not pulled into the cache.
10. Modes Available
Independent-writer (IW)
This mode allows multiple cache filesets, located in different cache clusters, to be associated with a single home fileset; hence this is an example of N:1 mapping. The important point to note is that each cache site should perform asynchronous operations (including writes) on different files. There is no inter-cluster locking for a file getting modified at multiple cache clusters. Each cache makes its updates independently, and these changes in the IW caches are pushed to the home. If multiple sites modify the same file and cause conflicts, then the last writer wins. It is the administrator’s responsibility to control who has write access to files, to avoid such conflicts.
Once data is updated at home, all connected IW caches can fetch those changes on demand, based on the revalidation intervals set. So on the next data access all the IW caches will get synchronized with the home. Data can also be pre-fetched into the cache.
Note:
As seen in the above modes, depending on where the data is created/modified, sometimes the home site can be referred to as the local site and the cache site can be referred to as the remote, edge or geographically dispersed site. E.g., in RO mode, the home cluster can be called the local site and the cache cluster can be considered the remote site.
The reverse is also true: e.g., in the SW/IW modes the cache site is where data is generated and can be considered the local site, and the home site can be considered the remote site. So the terms local and remote site can be applied to both the cache and home sites, based on the location of data creation and the direction of data flow.
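The last-writer-wins behavior of IW mode can be shown with a small simulation: updates from several caches are replayed at home in arrival order, with no inter-cluster locking, so the update that reaches home last overwrites the others. This is a conceptual sketch with made-up names, not the AFM wire protocol.

```python
def replay_at_home(home, updates):
    """Apply queued updates from several IW caches in arrival order.
    With no inter-cluster locking, the last writer to reach home wins."""
    for site, name, data in updates:
        home[name] = (site, data)   # each arrival simply overwrites the previous one
    return home

home = {}
# Two IW caches modify the same file; cache B's update arrives at home last.
replay_at_home(home, [
    ("cacheA", "/shared/f", "A's version"),
    ("cacheB", "/shared/f", "B's version"),
])
assert home["/shared/f"] == ("cacheB", "B's version")   # last writer wins
```

This is why the slide insists each cache site should write to different files: the conflict resolution is silent overwrite, not a merge.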
11. Capabilities
Eviction
When the cache needs to be smaller than home, eviction lets you save storage costs.
Eviction means that the data blocks of files residing in the cache are removed from the local file system, but the metadata of these files is retained at the cache.
Automatic Eviction: automatic eviction is based on fileset quotas.
Manual Eviction: can be done for specific files selected by an Information Lifecycle Management (ILM) policy. This adds more flexibility in specifying which particular files should not be consuming your disk space.
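The defining property of eviction, dropping data blocks while keeping metadata, can be sketched as follows. The quota trigger and the largest-first policy here are simplifications chosen for illustration; real AFM eviction is driven by fileset quotas and ILM policy rules.

```python
class CachedFile:
    def __init__(self, name, data):
        self.name = name
        self.size = len(data)   # metadata: retained at the cache after eviction
        self.data = data        # data blocks: removed from the cache on eviction

def evict_until_under_quota(files, quota_bytes):
    """Evict data blocks (largest files first, for illustration) until the
    cached data fits within the quota. Metadata always survives, so an
    evicted file is re-fetched from home transparently on next access."""
    used = sum(f.size for f in files if f.data is not None)
    for f in sorted(files, key=lambda f: f.size, reverse=True):
        if used <= quota_bytes:
            break
        used -= f.size
        f.data = None           # data blocks gone; name/size/attributes remain
    return used

files = [CachedFile("a", b"x" * 80), CachedFile("b", b"x" * 30)]
assert evict_until_under_quota(files, 50) <= 50
assert files[0].data is None and files[0].size == 80   # metadata survives
assert files[1].data is not None                       # small file kept intact
```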
12. Capabilities
Prefetch Data
Prefetching pre-populates the cache by pulling data in from home in
advance. It can be done for all the data at home, or for selected files
based on an Information Lifecycle Management (ILM) policy, where
you specify which files (by name, modification time, and so on) need
to be prefetched.
Parallel I/O
If files written at the cache or read into the cache are large, above a
configurable threshold, AFM's parallel I/O feature can be used. It
breaks the read or write of a large file into chunks and distributes
those chunks across multiple gateway (GW) nodes in the cluster, so
that multiple channels of communication with the home cluster can
be used to move the data to and from the home site quickly.
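A hypothetical sketch of how such a parallel transfer might be planned: a file above the threshold is cut into chunks, assigned round-robin across the available gateway nodes. The function name, chunk sizes, and gateway names here are illustrative assumptions, not actual AFM parameters:

```python
def plan_transfer(file_size, threshold, chunk_size, gateways):
    """Return {gateway: [(offset, length), ...]} for a parallel transfer,
    or route the whole file through one gateway if it is below threshold."""
    if file_size <= threshold:
        return {gateways[0]: [(0, file_size)]}
    plan = {gw: [] for gw in gateways}
    offset = 0
    i = 0
    while offset < file_size:
        length = min(chunk_size, file_size - offset)
        # Round-robin assignment of chunks across gateway nodes.
        plan[gateways[i % len(gateways)]].append((offset, length))
        offset += length
        i += 1
    return plan

# A 10 MiB file with a 1 MiB threshold and 4 MiB chunks over two gateways:
plan = plan_transfer(file_size=10 * 2**20, threshold=2**20,
                     chunk_size=4 * 2**20, gateways=["gw1", "gw2"])
print(plan)
# gw1 gets the chunks at offsets 0 and 8 MiB; gw2 gets the chunk at 4 MiB
```

Each gateway then moves its own chunks over its own channel to home, which is where the speedup comes from.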
17. Capabilities
• The cache-home relationship is always at the fileset level; only
independent filesets are supported.
• The 'mmafmctl' command has options such as getstate,
flushpending, and resumeRequeued.
• A new home can be created for a SW/IW fileset.
• A new cache can be created from a home.
• Peer snapshots.
18. Disconnection
• The cache continues to operate when connectivity to home is lost
or home is otherwise inaccessible.
• Updates destined for home are queued.
• Data is served from the local cache; there is no revalidation with home.
• Data not available in the cache is returned as a "does not exist" error (ENOENT).
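The disconnected behavior listed above can be sketched as a minimal model: writes succeed locally and are queued, reads are served from the cache without revalidation, and misses surface ENOENT. The class and its methods are hypothetical, for illustration only:

```python
import errno

class DisconnectedCache:
    def __init__(self):
        self.local = {}   # data currently held in the cache
        self.queue = []   # updates waiting to be pushed to home

    def write(self, name, data):
        self.local[name] = data
        self.queue.append((name, data))  # queued; applied at home on reconnect

    def read(self, name):
        if name not in self.local:
            # No revalidation with home while disconnected: a miss is ENOENT.
            raise OSError(errno.ENOENT, "No such file or directory", name)
        return self.local[name]

cache = DisconnectedCache()
cache.write("report.txt", b"draft")
print(cache.read("report.txt"))  # served from local cache: b'draft'
print(len(cache.queue))          # 1 update queued for home
```

When connectivity returns, the queued updates are what AFM pushes to home asynchronously.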
19. Disaster Recovery
Primary and Secondary
AFM can be used for disaster recovery (DR) solutions. In this case
there are two sites/clusters, the primary and the secondary, and
both sites hold the entire data set.
The AFM modes used for DR are accordingly called primary and
secondary filesets.
When an AFM fileset is configured in this mode, the primary (RW,
active) site exclusively creates and writes data; the secondary (RO,
passive) site cannot modify the data. AFM enforces a mandatory 1:1
mapping between primary and secondary filesets, which ensures that
only one particular primary can talk to a given secondary.
Note: The AFM DR feature is disabled by default, and customers
need to review the deployment with Spectrum Scale development
for approval.
21. Disaster Recovery
Failover
An operational mode in which the functions of a system component (such as a
server or network) are assumed by secondary system components when the
primary component becomes unavailable through either failure or scheduled
downtime.
Failback
The process of restoring operations and applications to the primary facility after
they have been moved to a secondary machine or facility during failover.
Recovery Point Objective (RPO)
The interval indicating the amount of data loss that can be tolerated in the
event of a failure or disaster.
Recovery Time Objective (RTO)
The amount of time it takes for an application to fail over when a disaster
occurs.
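As a worked example of the RPO definition above: with asynchronous replication, the worst-case data-loss window is roughly the replication interval plus the time needed to drain the pending queue to the secondary site. The numbers and the formula here are illustrative, not AFM-specific defaults:

```python
def worst_case_rpo(replication_interval_s, queued_bytes, link_bytes_per_s):
    """Worst-case data-loss window (seconds) for asynchronous replication:
    the sync interval plus the time to drain the pending queue over the link."""
    drain_time = queued_bytes / link_bytes_per_s
    return replication_interval_s + drain_time

# A 15-minute sync interval, 9 GB queued, and a 100 MB/s link:
rpo = worst_case_rpo(15 * 60, 9e9, 100e6)
print(rpo)  # 990.0 seconds, i.e. 16.5 minutes
```

This kind of arithmetic is how an achievable RPO is compared against the RPO a business actually requires.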
24. Notice and disclaimers continued
Information concerning non-IBM products was obtained from the
suppliers of those products, their published announcements or
other publicly available sources. IBM has not tested those
products in connection with this publication and cannot confirm
the accuracy of performance, compatibility or any other claims
related to non-IBM products. Questions on the capabilities of
non-IBM products should be addressed to the suppliers of those
products. IBM does not warrant the quality of any third-party
products, or the ability of any such third-party products to
interoperate with IBM’s products. IBM expressly disclaims all
warranties, express or implied, including but not limited
to the implied warranties of merchantability and fitness for
a particular purpose.
The provision of the information contained herein is not intended
to, and does not, grant any right or license under any IBM
patents, copyrights, trademarks or other intellectual
property right.
IBM, the IBM logo, ibm.com, AIX, BigInsights, Bluemix, CICS,
Easy Tier, FlashCopy, FlashSystem, GDPS, GPFS,
Guardium, HyperSwap, IBM Cloud Managed Services, IBM
Elastic Storage, IBM FlashCore, IBM FlashSystem, IBM
MobileFirst, IBM Power Systems, IBM PureSystems, IBM
Spectrum, IBM Spectrum Accelerate, IBM Spectrum Archive,
IBM Spectrum Control, IBM Spectrum Protect, IBM Spectrum
Scale, IBM Spectrum Storage, IBM Spectrum Virtualize, IBM
Watson, IBM z Systems, IBM z13, IMS, InfoSphere, Linear
Tape File System, OMEGAMON, OpenPower, Parallel
Sysplex, Power, POWER, POWER4, POWER7, POWER8,
Power Series, Power Systems, Power Systems Software,
PowerHA, PowerLinux, PowerVM, PureApplication, RACF,
Real-time Compression, Redbooks, RMF, SPSS, Storwize,
Symphony, SystemMirror, System Storage, Tivoli,
WebSphere, XIV, z Systems, z/OS, z/VM, z/VSE, zEnterprise
and zSecure are trademarks of International Business
Machines Corporation, registered in many jurisdictions
worldwide. Other product and service names might
be trademarks of IBM or other companies. A current list of
IBM trademarks is available on the Web at "Copyright and
trademark information" at:
www.ibm.com/legal/copytrade.shtml.
Linux is a registered trademark of Linus Torvalds in the United
States, other countries, or both. Java and all Java-based
trademarks and logos are trademarks or registered
trademarks of Oracle and/or its affiliates.