What do you do when disaster strikes? In part 9 of our DB2 Support Nightmare series we look at another DB2 disaster scenario and how it was resolved by the experts at Triton Consulting.
Imagine the scene – a broken database on an unsupported version of DB2, with no backups or log files to recover the database.
Yes – this one really was the stuff of nightmares!
Number 8 in our Top 10 DB2 Support Nightmares series. This month we take a look at what happens when organisations are not able to keep up to date with the latest DB2 technology.
This document discusses a security issue that occurred when improperly configuring DB2 federation. Specifically:
1. A client site configured DB2-LDAP federation but also enabled the FED_NOAUTH parameter, bypassing authentication.
2. This meant any user could connect to the database as any other user without providing the correct password.
3. If the database owner username was guessed, full access to all data could be obtained, potentially exposing the database to a major security breach.
The issue was caused by incorrectly enabling the FED_NOAUTH parameter when federation was set up. Authentication should have taken place at the database rather than being bypassed. The moral: do not enable FED_NOAUTH unless you fully understand that it switches off password checking at the server.
Drive fragmentation occurs when files are broken into multiple parts saved in different locations on a hard disk drive (HDD), which degrades read and write performance. macOS automatically defragments files smaller than 20MB; for larger files, third-party software such as Stellar Drive Defrag can be used. Defragmentation consolidates free space and improves read/write speeds, but it should not be run too frequently, since the extra writes can shorten the drive's life.
(Dis)Advantages of DHT: A Perspective with Raghavendra Gowdappa - Gluster.org
1) Distributed hash tables (DHTs) used in GlusterFS replicate directory structures across all subvolumes, which can cause performance issues for operations that must visit all subvolumes like readdir and directory removal.
2) DHTs lack a centralized metadata server, making metadata operations like permissions and locks more complex to implement and replicate.
3) Stale in-memory directory layouts can occur when the on-disk layout is changed through rebalancing, which can cause directory healing issues for clients.
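The trade-off in point 2 can be made concrete with a minimal Python sketch (the subvolume names are invented for illustration): each file's location is a pure function of its name over a fixed hash range split, so no metadata server is consulted on lookup, but any whole-directory operation still has to visit every subvolume. Real GlusterFS stores per-directory layout ranges in extended attributes; this toy version just splits the 32-bit hash space evenly.

```python
import hashlib

def subvolume_for(name, subvolumes):
    """Map a filename to a subvolume by splitting the 32-bit hash
    space into equal, contiguous ranges, one per subvolume, roughly
    how a DHT-style layout places files without a metadata server."""
    h = int(hashlib.md5(name.encode()).hexdigest(), 16) % (2 ** 32)
    span = (2 ** 32) // len(subvolumes)
    return subvolumes[min(h // span, len(subvolumes) - 1)]

subvols = ["brick-0", "brick-1", "brick-2", "brick-3"]
placement = {f: subvolume_for(f, subvols) for f in ["a.txt", "b.txt", "c.txt"]}
# Lookup is a pure function of the name, so it is cheap; but listing
# a directory still has to visit every subvolume, which is the
# readdir cost described in point 1.
```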
IOD 2013 - Crunch Big Data in the Cloud with IBM BigInsights and Hadoop lab s... - Leons Petražickis
This document provides instructions for completing a hands-on lab to explore Hadoop and big data technologies including HDFS, MapReduce, Pig, Hive, and Jaql. The lab uses a dataset from Google Books to demonstrate word counting and generating histograms of word lengths. Key steps include using Hadoop commands to interact with HDFS, running the WordCount MapReduce program, writing Pig scripts to analyze the data, and using Hive to load the data and generate results. The overall goal is to gain experience using these big data technologies on a Hadoop cluster.
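The core of the lab's word-count and word-length exercises can be sketched in plain Python. This is a stand-in for the MapReduce/Pig/Hive steps, not the lab's actual code:

```python
from collections import Counter

def word_count(lines):
    """Equivalent of the classic WordCount job: map each line to
    (word, 1) pairs, then reduce by summing counts per word."""
    counts = Counter()
    for line in lines:
        counts.update(line.lower().split())
    return counts

def length_histogram(counts):
    """Histogram of word lengths weighted by occurrences, the same
    aggregation the lab's Pig and Hive steps produce."""
    hist = Counter()
    for word, n in counts.items():
        hist[len(word)] += n
    return hist

lines = ["the quick brown fox", "the lazy dog"]
counts = word_count(lines)
```

On a real cluster the map and reduce phases run in parallel across HDFS blocks; the logic, though, is exactly this.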
Slides from the session we (@perusio @rodricels @NITEMAN_es) gave on Drupal Developer Days Barcelona 2012:
http://barcelona2012.drupaldays.org/sessions/beat-devil-towards-drupal-performance-benchmark
SQL Health in a SharePoint Environment - Enrique Lima
This document discusses how to maintain a healthy SharePoint environment. It emphasizes the importance of properly configuring and managing the SQL Server database that SharePoint runs on. It provides guidance on capacity planning, hardware sizing, maintenance best practices, and understanding SharePoint limitations and thresholds. The goal is to ensure the SQL Server infrastructure can support the SharePoint implementation and meet performance requirements.
This document provides an introduction and overview of installing Hadoop 2.7.2 in pseudo-distributed mode. It discusses the core components of Hadoop including HDFS for distributed storage and MapReduce for distributed processing. It also covers prerequisites like Java and SSH setup. The document then describes downloading and extracting Hadoop, configuring files, and starting services to run Hadoop in pseudo-distributed mode on a single node.
Hadoop Adventures At Spotify (Strata Conference + Hadoop World 2013) - Adam Kawa
Adam Kawa shares his experiences working with a large, rapidly growing Hadoop cluster at Spotify. He details five "adventures" where various problems broke the cluster or made it unstable. These included issues with user permissions causing NameNode instability, DataNodes becoming blocked in deadlocks, Hive jobs being killed by the Fair Scheduler, and the JobTracker becoming slow due to overly large jobs. Each time, the problems were troubleshot and lessons were learned about proper cluster management, testing changes, and making data-driven decisions.
This document summarizes Drobo data storage devices. Drobo offers several models of data storage arrays that provide RAID-like protection without the limitations of traditional RAID. Key advantages include easier access to all data in one location, automatic healing of drives, and online expandability without reformatting. Drobo arrays can mix drive sizes and speeds and automatically manage storage and redundancy. The document outlines the various Drobo models and features and their benefits for professionals and businesses dealing with large volumes of data.
RAID is a storage system that combines multiple disks into an array to provide benefits like improved data integrity, fault tolerance, and increased storage capacity or throughput. Common RAID levels include RAID 0, 1, 5, and 10. If a RAID array fails, it is important to turn off power immediately to prevent further data loss. Data recovery software such as iFinD Data Recovery can then be used to try to restore the lost data by scanning the failed RAID device and recovering files from it.
The document discusses RAID level 5, which stripes data and parity across all disks rather than dedicating a single disk to parity. This allows write operations to be parallelized across disks for improved performance compared to RAID level 4. Key advantages of RAID 5 include high data protection, support for multiple simultaneous reads and writes, and optimized performance for transaction processing workloads. The document also compares the performance characteristics and suitability of different RAID levels.
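The parity mechanism behind RAID 5's protection is plain XOR, which a short sketch makes concrete (toy 4-byte "disks" here, not real device I/O):

```python
def parity(blocks):
    """XOR parity across data blocks: the redundancy RAID 5 stripes
    across all disks instead of keeping a dedicated parity disk."""
    out = bytes(len(blocks[0]))
    for b in blocks:
        out = bytes(x ^ y for x, y in zip(out, b))
    return out

def rebuild(surviving, parity_block):
    """Recover a lost block: XORing the parity with every surviving
    block yields the missing data."""
    return parity(surviving + [parity_block])

d0, d1, d2 = b"disk", b"data", b"test"
p = parity([d0, d1, d2])
assert rebuild([d0, d2], p) == d1  # disk 1 lost, recovered from the rest
```

Because parity is rotated across all member disks, the per-write parity update in RAID 5 is spread out rather than bottlenecked on one disk, which is the improvement over RAID 4 described above.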
The advent of Hadoop has created strong demand for professionals skilled in Hadoop administration, making it a valuable skill for better career, salary and job opportunities.
Learn how to set up a Hadoop cluster with HDFS High Availability here: www.edureka.co/blog/how-to-set-up-hadoop-cluster-with-hdfs-high-availability/
Apache Hadoop YARN, NameNode HA, HDFS Federation - Adam Kawa
The document provides an introduction to YARN, HDFS federation, and HDFS high availability. It discusses limitations of the original MapReduce framework and HDFS, such as single points of failure. It then summarizes improvements in YARN including distributed resource management and the ability to run multiple applications. HDFS federation and high availability address scalability and reliability concerns by partitioning the namespace and introducing redundant NameNodes. Configuration parameters and Apache Whirr are also covered for quickly setting up a YARN cluster.
DB2 is a database manager that runs on Linux, Unix, and Windows operating systems. It allows users to catalog databases, start and stop instances, and configure parameters. Key commands for managing DB2 include db2icrt for creating instances, db2idrop for dropping instances, db2ilist for listing instances, and db2set for setting configuration parameters at the global, instance, and node level. The db2set command provides centralized control over environmental variables.
Administering a Hadoop cluster isn't easy. Many Hadoop clusters suffer from Linux configuration problems that can negatively impact performance. With vast and sometimes confusing config/tuning options, it can be tempting (and scary) for a cluster administrator to make changes to Hadoop when cluster performance isn't as expected. Learn how to improve Hadoop cluster performance and eliminate common problem areas, applicable across use cases, using a handful of simple Linux configuration changes.
Introduction to Hadoop High Availability - Omid Vahdaty
Understand how to create a highly available Hadoop cluster.
Covers an active/passive setup with manual failover, links to help you get started, what to focus on, common mistakes, and more.
1. The document discusses the hardware and software requirements for setting up an institutional repository, noting that repositories can run on a variety of server types from basic to high-powered and require reasonably good server hardware, storage, and memory.
2. It provides examples of specific hardware configurations used by some repositories, including servers from HP, Sun, and Dell with various processors, RAM amounts, and storage capacities.
3. The document states that the repository software installed and resulting user interface are what primarily determine a repository's functionality and appearance to users, giving the example of DSpace which is written in Java and can run on various platforms.
The document provides instructions for installing a font file called diattusa.ttf on Windows and contact information for a person named Avery C. It also lists potential future improvements to a chess program or database including better database organization and handling, separate opening and endgame databases, an improved playing engine, ability to load PGN files and database games, and possible interface improvements.
The document discusses high availability and disaster recovery strategies for IBM PureApplication System. It defines high availability as allowing for short unplanned outages, while disaster recovery refers to reconstructing systems at an alternate site after a full data center outage. The document outlines how PureApplication System provides high availability within a single rack through redundant hardware and workload mobility. It also discusses exporting and importing patterns between racks for cross-site high availability of WebSphere Application Server and using tools like DB2 HADR and MQ multi-instance for database and messaging high availability.
This document provides an overview and comparison of several high availability solutions in SQL Server 2012 including database mirroring, failover clustering, transactional replication, log shipping, and AlwaysOn availability groups. Database mirroring provides redundancy using log transfer but requires separate instances, while failover clustering uses a virtual service name across nodes but requires shared storage. Transactional replication supports load balancing across multiple subscribers. Log shipping and database mirroring both rely on log backups and restores but log shipping allows read access during restore. AlwaysOn maximizes availability across databases in an availability group using Windows clustering without shared disks.
Design Patterns and a Plan for Developing Highly Available Azure Applications - Himanshu Sahu
1. Design patterns for high availability of Azure applications
2. Practical demo of the infrastructure-level points to take care of for high availability (the points discussed in the last seminar)
3. Different patterns for high availability:
3.1 Health Endpoint Monitoring Pattern
3.2 Queue-based Load Leveling Pattern
3.3 Throttling Pattern
3.4 Retry Pattern
3.5 Multiple Datacenter Deployment Guidance
4. Architecture for high availability of Azure applications
5. Best practices for developing highly available Azure applications
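Of the patterns in the outline above, the Retry Pattern is the easiest to make concrete. A minimal sketch with exponential backoff (generic Python, not Azure SDK code) might look like this:

```python
import time

def retry(operation, attempts=3, base_delay=0.01):
    """Retry Pattern sketch: re-invoke an operation that may fail
    transiently, backing off exponentially before giving up."""
    for i in range(attempts):
        try:
            return operation()
        except Exception:
            if i == attempts - 1:
                raise  # out of attempts: surface the failure
            time.sleep(base_delay * (2 ** i))  # back off: d, 2d, 4d...

calls = {"n": 0}
def flaky():
    """Simulated transient failure: fails twice, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

assert retry(flaky) == "ok" and calls["n"] == 3
```

In production code the except clause should catch only errors known to be transient, so that permanent faults fail fast instead of being retried.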
High Availability Options for Modern Oracle Infrastructures - Simon Haslam
Today's enterprise architect has a bewildering array of choices when it comes to building a highly available infrastructure to run Oracle. This presentation considers approaches using the Oracle technology layer, resilient virtualisation (Oracle and other vendors), hardware clustering and storage replication. It covers the core Oracle Database and Fusion Middleware products and, based on practical experience, aims to give attendees a broad picture of alternatives with their pros and cons.
Delivered on 5 December 2011 at UKOUG 2011 by Simon Haslam and Julian Dyke.
The document discusses Oracle's Maximum Availability Architecture (MAA) reference architectures for high availability (HA) and data protection on-premises and in hybrid cloud environments. It describes the Bronze, Silver, Gold, and Platinum reference architectures that align Oracle capabilities with different levels of customer service level requirements. It also discusses using Oracle Database Backup Cloud Service for offsite backups and Data Guard/Active Data Guard for disaster recovery to the Oracle Cloud.
SQLSaturday Bulgaria: HA & DR with SQL Server AlwaysOn Availability Groups - turgaysahtiyan
The AlwaysOn Availability Groups feature is a high-availability and disaster-recovery solution that provides an enterprise-level alternative to database mirroring. Introduced in SQL Server 2012, AlwaysOn Availability Groups maximize the availability of a set of user databases for an enterprise. In this session we will talk about what's coming with AlwaysOn, and how it helps to improve high availability and disaster recovery solutions.
The document discusses the evolution of the DB2 HADR feature from version 8.2 to version 10. It provides an overview of HADR and how it works, describes the key features introduced in each version, gives an example of how to set up HADR, and discusses techniques for optimizing HADR performance and for using HADR beyond high availability, such as for database migration.
This document discusses concepts related to high availability and disaster recovery. It defines key terms like availability, reliability, outages, fault tolerance, and redundancy. It describes strategies for high availability including data replication, virtualization, host clustering, and ensuring reliability of network and middleware components. The document emphasizes the importance of basing HA/DR strategies and investments on business needs and conducting proper scoping and planning.
The document describes IBM DB2's High Availability Disaster Recovery (HADR) multiple standby configuration. It allows a primary database to have one principal standby and up to two auxiliary standbys. The principal standby supports all sync modes, while auxiliary standbys use super async mode. Takeovers can occur from any standby and DB2 will automatically reconfigure other standbys to connect to the new primary if they are in its target list. The document provides details on configuration, initialization, failover behavior and an example deployment across four servers.
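The reconfiguration rule can be modelled with a toy function (plain Python, not DB2 syntax; the server names are invented): after a takeover, a standby follows the new primary only if that database appears in its hadr_target_list.

```python
def takeover(new_primary, standbys, target_lists):
    """Toy model of the multiple-standby rule: return the standbys
    that reconfigure to follow the new primary, i.e. those whose
    target list includes it."""
    return [s for s in standbys
            if s != new_primary and new_primary in target_lists[s]]

# Primary A, principal standby B, auxiliary standbys C and D
# (all names hypothetical).
targets = {
    "B": ["A", "C", "D"],
    "C": ["A", "B"],
    "D": ["A", "B"],
}
# B takes over: C and D both list B, so both follow the new primary.
assert takeover("B", ["B", "C", "D"], targets) == ["C", "D"]
```

A standby whose target list omits the new primary is left orphaned, which is why DB2's documentation recommends keeping target lists symmetric across the deployment.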
The document discusses high availability and disaster recovery in cloud environments. It describes basic, intermediate, and advanced cloud deployment architectures with increasing levels of redundancy. The basic option uses a single cloud zone, intermediate uses multiple zones for failover, and advanced fully duplicates zones. The ultimate option fully duplicates deployments across multiple cloud providers for the highest availability. Challenges discussed include applications not being designed for high availability features like clustering or replication.
Oracle Database High Availability Solutions - Kirill Loifman
This document discusses Oracle database high availability strategies, architectures, and solutions. It covers elements of high availability like eliminating single points of failure through redundancy. It also discusses disaster recovery, Oracle Maximum Availability Architecture (MAA), downtime, service level agreements (SLAs), availability targets and costs, levels of high availability, Oracle's solutions to downtime like RAC and Data Guard, best practices, and examples of high availability configurations including using Oracle RAC and Data Guard together.
Linux Disaster Recovery Best Practices with rear - Gratien D'haese
The document discusses Linux disaster recovery best practices using the Relax and Recover (rear) tool. It recommends deciding on a disaster recovery strategy, including which backup mechanism and location to use. It provides details on using the NETFS backup type with rear to back up to network locations like NFS shares. It also discusses configuring rear by editing the /etc/rear/local.conf file to specify settings like the backup location, program, and options.
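A minimal /etc/rear/local.conf along those lines might look like the following (the NFS server name and export path are placeholders):

```sh
# Build the rescue image as a bootable ISO.
OUTPUT=ISO
# Use the NETFS backup method and send the archive to an NFS share.
BACKUP=NETFS
BACKUP_URL=nfs://backup-server/exports/rear
# Optionally exclude volatile paths from the backup archive.
BACKUP_PROG_EXCLUDE=( "${BACKUP_PROG_EXCLUDE[@]}" '/tmp/*' )
```

With this in place, `rear mkbackup` creates both the rescue ISO and the backup archive on the share, and the rescue system can restore from the same URL during recovery.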
A Step-By-Step Disaster Recovery Blueprint & Best Practices for Your NetBacku... - Symantec
In this technical session we will share a few customer tested blueprints for implementing DR strategies with NetBackup appliances showing support for onsite and offsite disaster recovery. This includes the architecture design with Symantec best practices, down to execution of the wizards and command lines needed to implement the solution.
Watch the recording of this Google+ Hangout: http://bit.ly/13oTjvp
I gave this talk at the Krakow/Poland DevOps meetup. It was a lightning talk covering high availability solutions, architecture, planning and deployment.
SharePoint Backup And Disaster Recovery with Joel Oleson - Joel Oleson
This walks through the various options around backup and restore with SharePoint. This deck was presented at Tech Ed South East Asia 2008 by Joel Oleson
The document discusses architecting for high availability on AWS. It defines high availability as having minimal downtime and being always accessible. It recommends designing for failure by avoiding single points of failure, using multiple availability zones, implementing auto-scaling for flexibility, enabling self-healing through health checks and auto-scaling, and loosely coupling components. AWS services like EC2, EBS, ELB, RDS, SQS help provide high availability when combined with these best practices. The goal is to build applications that can continue functioning even when outages occur.
Best Practices in Disaster Recovery Planning and Testing - Axcient
Axcient and industry expert Paul Kirvan have put together this presentation on avoiding common disaster recovery mistakes and leveraging industry best practices to create a technology disaster recovery plan that works best for you.
This presentation gives you the many elements necessary of a well-executed disaster recovery plan, including:
- Guidelines for creating your own Disaster Recovery plan
- A checklist of key items to consider based on your business objectives
- The common mistakes and pitfalls to avoid
- Technology considerations for Disaster Recovery
- Tips for planning and executing a successful Disaster Recovery test
Whether you're in the process of creating a disaster recovery plan or you already have one in place, this presentation will guide you through the steps you need to follow to help ensure your plan is complete.
AWS re:Invent 2016: Disaster Recovery and Business Continuity for Systemicall...Amazon Web Services
Modern financial services organizations rely heavily on technology and automated systems to run business-as-usual. However, if this technology were interrupted by natural disasters or other events, there could be a devastating impact on investors and market participants, and in turn your reputational brand. In this session, we provide a step-by-step disaster recovery solution employed by a major exchange. This solution leverages Amazon EC2 Container Service to provide Docker containers, Weave Net to support a multicast overlay network that enables high volume multicast feeds in a cloud environment, and AWS CloudFormation for the ability to easily create and manage AWS assets. The session also covers the importance of redundancy (not just operationally, but for SEC compliance reasons as well) and how financial services organizations can increase geographical diversification of their primary and disaster recovery data centers. We dive deep into each major component of the solution.
TSA provides automatic monitoring and availability management of resources configured for high availability in a cluster domain. It monitors DB2 HADR resources and DB2 instance resources, and can start, stop, and fail over these resources between nodes when failures occur. The document provides examples of how DB2 HADR and instance resources are defined and monitored by TSA using the IBM.Application resource type.
The document provides guidance on troubleshooting Cassandra, including determining the root cause of issues. It outlines a troubleshooting process of 1) determining which nodes have problems, 2) examining bottlenecks, 3) finding and understanding errors, 4) asking what changed, 5) determining the root cause, and 6) taking corrective action. It then discusses various tools for troubleshooting like nodetool, OpsCenter, and Cassandra logs and how to configure logging levels.
The document describes the steps involved in resolving a URL to an IP address and retrieving a webpage. It involves:
1. The browser sends a DNS query to resolve the domain name to an IP address, going through a hierarchy of DNS servers starting from the root servers down to the authoritative name servers.
2. Once the IP address is obtained, the browser uses TCP to establish a connection and sends an HTTP request to the web server at that IP address.
3. The web server responds with the HTML content which the browser then parses and renders to display the webpage. Traceroute commands are shown to trace the path packets take from the local network to the destination server.
This document provides an overview of common Active Directory (AD) disasters, including hardware/software failures, human errors, and complete disasters. It discusses specific issues like morphed SYSVOL folders, broken GPT/GPC linkages, DNS aging/scavenging not being enabled, and improper time synchronization. Solutions provided include using tools like NTDSUTIL, GPOTOOL, and REPADMIN to fix issues and prevent disasters. The document emphasizes the importance of logging and monitoring to troubleshoot AD problems.
This document discusses several new features and tools in Oracle 11g R2 for monitoring, performance tuning, and troubleshooting. It introduces the Automatic Diagnostic Repository (ADR) as the new paradigm for monitoring Oracle databases in 11g. ADR provides centralized access to alert logs, trace files, and incidents. The document demonstrates how to use the command line ADR interface (ADRCI) to view alerts, traces, and incidents. It also covers performance tools like Autotrace, SQL Trace, TKPROF, and DBMS_STATS and how to use them with Oracle 11g R2.
The document discusses deadlocks in operating systems. It defines deadlock as a situation where a set of processes are blocked waiting for resources held by other processes in the set, resulting in none of the processes making any progress. Four conditions must be met for deadlock to occur: mutual exclusion, hold and wait, no preemption, and circular wait. The document presents examples to illustrate deadlock and discusses different strategies for dealing with it, including deadlock prevention, avoidance, and detection and recovery. It specifically describes the Banker's Algorithm for deadlock avoidance.
The document discusses new features in Oracle NoSQL Database Release 3.4. Key highlights include improved data center failover and switchover operations to continue operations if a zone fails, new support for complex data types in Apache Hive and Oracle Big Data SQL, a new bulk get operation to retrieve multiple records in parallel for improved performance, and an off-heap cache for better performance and scalability. The release aims to improve ease of adoption, performance, and business continuity for users.
This document contains 12 multiple choice questions about Terraform concepts and functionality. Key points covered include:
- Infrastructure as code allows reusing best practice configurations and settings.
- Features like private module registry, audit logs, and private network connectivity are only available in the Terraform Enterprise edition.
- The Terraform state file tracks the mapping between configurations and real-world infrastructure, dependencies between resources, and cached attribute values for performance.
- Terraform workspaces allow maintaining multiple state files for a single configuration depending on the environment.
- If a resource is successfully created during a plan but fails provisioning, it will be marked as "tainted" rather than automatically rolling back
The document describes the steps to configure Oracle sharding in an Oracle 12c environment. It includes installing Oracle software on shardcat, shard1, and shard2 nodes, creating an SCAT database, installing the GSM software, configuring the shard catalog, registering the shard nodes, creating a shard group and adding shards, deploying the shards to create databases on shard1 and shard2, verifying the shard configuration, creating a global service, and creating a sample schema and shard table to verify distribution across shards.
The document discusses domain generation algorithms (DGAs) used in malware command and control networks and how they have evolved over time. It provides examples of specific DGAs like Conficker, Cryptolocker, and Tinba. It also describes how intelligence can be gathered on DGAs by reverse engineering them, monitoring generated domains for resolutions, and tracking associated IP addresses and nameservers. The goal of DGAs from an adversary perspective is to increase the resilience of command and control structures against takedowns.
The document discusses High Availability Disaster Recovery (HADR) in DB2. It describes how HADR uses log shipping to replicate transactions from a primary database to a standby database. HADR supports three synchronization modes - SYNC, NearSync and Async - which determine how transaction logs are replicated. The document provides steps for setting up and configuring HADR, including required database parameters. It also discusses using reorgchk and runstats utilities to check for table/index reorganization needs and update database statistics.
This document discusses CQRS/DDD patterns and the predaddy PHP framework. CQRS separates reads and writes by having commands handle writes to the domain and events update read models for queries. DDD focuses on modeling the domain by identifying aggregates, entities, and value objects. While predaddy is a framework-agnostic PHP library that supports CQRS and DDD patterns, the document warns that performance and developer velocity can be issues and the patterns can overcomplicate things sometimes.
CPN208 Failures at Scale & How to Ride Through Them - AWS re: Invent 2012Amazon Web Services
At scale, rare and unexpected events will happen. Things eventually will go wrong. This talk dives into what can go wrong at scale and how to architect applications to ride through disaster obliviously. We’ll talk about AWS infrastructure design including Regions and Availability Zones and show how applications can be written and operated to best exploit this industry-unique infrastructure redundancy model. Believing that experience is one of the best teachers, we will go through some of the more interesting and educational industry post mortems including some experienced at AWS to motivate these application design decisions and show how they can mitigate the damage of the truly unexpected.
An iteration of the Hoya slides put up as part of a review of the code with others writing YARN services; looks at what Hoya offers -what we need from Apps to be able to deploy them this way, and what we need from YARN
This document provides an overview of the DB2DART tool and examples of how it can be used to analyze and repair issues with DB2 databases and tables. Key points include:
- DB2DART is an offline tool that can be used to check the architectural correctness of databases and investigate problems like data corruption.
- It allows inspection of entire databases, specific tablespaces, tables, and indexes. Examples demonstrate using it to check for index corruption and reduce high water marks.
- The document shows the command syntax and provides a sample report output. It also provides steps to use DB2DART to export table data in delimited format when the original data is corrupted.
[Hic2011] using hadoop lucene-solr-for-large-scale-search by systexJames Chen
This document discusses using Hadoop/MapReduce with Solr/Lucene for large scale distributed search. It begins with an introduction to the speaker and his experience with Hadoop. The agenda then outlines discussing why search big data, an overview of Lucene, Solr and Zookeeper, distributed searching and indexing with Hadoop, and a case study on web log categorization.
This document discusses HBase on YARN (Hoya), which allows dynamic HBase clusters to be launched on a YARN cluster. Hoya is a Java tool that uses a JSON specification to deploy HBase clusters. The HBase master and region servers run as YARN containers while Zookeeper handles coordination. Hoya's Application Master interfaces with YARN to deploy, manage, and handle failures of the HBase cluster. This allows efficient sharing of resources and elastic scaling of HBase clusters based on workload.
Download if you dare! In part six of our DB2 Nightmares series we see what can happen when an experienced DBA goes on holiday leaving the Junior DBA in charge with no support.
Consultancy on Demand is a specially designed service for customers who need varying levels of DB2 support throughout the year.
You purchase a block of 20, 50 or 100 hours. You can then call off hours as and when you need them. No commitment required!
A Time Traveller's Guide to DB2: Technology Themes for 2014 and BeyondLaura Hood
This document discusses technology themes for DB2 in 2014 and beyond, including cost reduction, high availability, in-memory computing, skills availability, database commoditization, and big data. It summarizes DB2's focus on these areas today and potential future directions, such as further optimization to reduce software licensing fees, expanded data sharing capabilities, increased memory capacities, evolving skills needs, and continued integration with big data platforms. The document aims to help DB2 professionals consider strategies for addressing these themes.
A junior DBA accidentally deleted all rows from a critical table in a pre-production environment. The DBA had connected to the wrong system and used the instance owner userid. The system administrator had enabled the FED_NOAUTH parameter, which bypasses authentication at the instance level. This meant any user could connect as any other user without the correct password and impact the database. The moral is that unintended consequences can occur from small configuration changes and it is important to get skilled DB2 support.
Db2 10 memory management uk db2 user group june 2013 [read-only]Laura Hood
DB2 10 provides significant enhancements to memory management that allow for much greater scalability. Key changes include moving most objects above the 2GB bar, enabling larger buffer pools through 1MB page support, and enhanced real storage monitoring. Migrating to DB2 10 requires ensuring sufficient real storage is available, monitoring real storage usage, and addressing other limiting factors before taking advantage of new features to further scale vertically.
DbB 10 Webcast #3 The Secrets Of ScalabilityLaura Hood
The third in the Migration Month webcast series looking at DB2 10 migration planning. This webcast goes into the scalability benefits available in DB2 10, with Julian Stuhler of Triton Consulting & Jeff Josten of IBM.
DB2 10 Webcast #2 - Justifying The UpgradeLaura Hood
This document discusses justifying an upgrade from DB2 9 or 8 to DB2 10 for z/OS. It outlines potential CPU, productivity, and availability savings from the upgrade. CPU savings can come from improved performance in conversion mode through features like high performance database application transition support. Productivity savings may result from features that improve plan stability and temporal tables. Availability improvements like online reorganization of LOBs can reduce downtime costs. The presentation recommends using IBM's DB2 10 Business Value Assessment Estimator Tool to quantify specific savings for an organization.
DB2 10 for z/OS introduced temporal data support which allows applications to query data as it existed at different points in time. The document discusses system temporal tables, business temporal tables, and bi-temporal tables. It provides examples of temporal DDL, SELECT extensions for querying historical data, and discusses early experiences and performance considerations with temporal data in DB2 10.
DB2DART is a tool that allows DBAs to inspect, format, and repair DB2 databases and objects. It can be used to handle storage reclamation issues by lowering high water marks, detect and repair index corruption, extract data from corrupt tables, and remove backup pending states. DB2DART provides granular analysis at the database, tablespace, and table level and its repair capabilities save DBAs from having to call support or restore from backups in many cases.
Temporal And Other DB2 10 For Z Os HighlightsLaura Hood
The document discusses DB2 10 for z/OS and its new temporal data support feature. It provides an overview of DB2 10, describing new features such as temporal data, virtual storage enhancements, and optimizer enhancements. It then discusses temporal data concepts in more detail, including temporal tables, periods, business temporal tables and system temporal tables. The document provides examples and explains how to implement temporal tables in DB2 10. It concludes by listing further reading materials on DB2 10.
DB210 Smarter Database IBM Tech Forum 2011Laura Hood
DB2 10 for z/OS is a new version of IBM's database software that provides significant performance improvements, new security and temporal data features, and easier migration paths from prior versions. Key enhancements in DB2 10 include 5-20% CPU reductions, up to 10x more threads per subsystem due to virtual storage improvements, row and column access controls, and built-in support for tracking historical data. Customers running DB2 8 or 9 can upgrade directly to DB2 10 using new "skip migration" functionality, or upgrade sequentially from earlier versions. Migrating to DB2 10 requires meeting prerequisites and following steps to move to conversion mode and then normal mode.
Pure Genius: How To Get Mainframe-Like Scalability & Availability For Midrange DB2 discusses pureScale, an optional feature for DB2 that implements shared-disk clustering to provide high scalability and availability. It can support up to 128 members. The architecture uses a shared database, coordination facilities, and InfiniBand networking. Customers experience scalability gains, easy installation, and resilience like continued operation despite coordination facility failure. The presentation evaluates pureScale's benefits and customer experiences.
The document discusses IBM's pureScale technology which allows DB2 databases to scale up to 128 nodes for high availability and scalability. PureScale forms a shared-disk cluster and uses proven "data sharing" technology from DB2 for z/OS. It provides agility to rapidly scale up or down capacity as needed with little application change. The company Triton built a basic 2-node pureScale cluster within a budget of under £1K to validate IBM's claims and gain hands-on experience. Their testing showed the cluster delivered 1000 transactions per second under load. The summary concludes that pureScale provides robust clustering with excellent price/performance.
Episode 4 DB2 pureScale Performance Webinar Oct 2010Laura Hood
DB2 pureScale provides scalability and high performance through its clustered database architecture. It uses a cluster caching facility to manage data consistency across member nodes and leverage low-latency interconnects like InfiniBand. The architecture features two-level buffer pool caching between local and global pools for improved read performance. Monitoring and tuning focuses on optimizing buffer pool hit ratios at both levels. Initial proof points showed near-linear scalability up to 12 nodes and over 80% scalability even at 128 nodes, demonstrating the architecture's ability to transparently scale database workloads across many servers.
DB2 pureScale provides high availability and continuous operations by automatically recovering from component failures through workload redistribution and fast in-flight transaction recovery. It protects databases by balancing workloads across nodes and uses duplexed secondary components to tolerate multiple simultaneous node failures while keeping other nodes online and services available.
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?Speck&Tech
ABSTRACT: A prima vista, un mattoncino Lego e la backdoor XZ potrebbero avere in comune il fatto di essere entrambi blocchi di costruzione, o dipendenze di progetti creativi e software. La realtà è che un mattoncino Lego e il caso della backdoor XZ hanno molto di più di tutto ciò in comune.
Partecipate alla presentazione per immergervi in una storia di interoperabilità, standard e formati aperti, per poi discutere del ruolo importante che i contributori hanno in una comunità open source sostenibile.
BIO: Sostenitrice del software libero e dei formati standard e aperti. È stata un membro attivo dei progetti Fedora e openSUSE e ha co-fondato l'Associazione LibreItalia dove è stata coinvolta in diversi eventi, migrazioni e formazione relativi a LibreOffice. In precedenza ha lavorato a migrazioni e corsi di formazione su LibreOffice per diverse amministrazioni pubbliche e privati. Da gennaio 2020 lavora in SUSE come Software Release Engineer per Uyuni e SUSE Manager e quando non segue la sua passione per i computer e per Geeko coltiva la sua curiosità per l'astronomia (da cui deriva il suo nickname deneb_alpha).
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Building RAG with self-deployed Milvus vector database and Snowpark Container...Zilliz
This talk will give hands-on advice on building RAG applications with an open-source Milvus database deployed as a docker container. We will also introduce the integration of Milvus with Snowpark Container Services.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
Climate impact / sustainability of software testing discussed on the talk. ICT and testing must carry their part of global responsibility to help with the climat warming. We can minimize the carbon footprint but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be added with sustainability, and then measured continuously. Test environments can be used less, and in smaller scale and on demand. Test techniques can be used in optimizing or minimizing number of tests. Test automation can be used to speed up testing.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
20 Comprehensive Checklist of Designing and Developing a WebsitePixlogix Infotech
Dive into the world of Website Designing and Developing with Pixlogix! Looking to create a stunning online presence? Look no further! Our comprehensive checklist covers everything you need to know to craft a website that stands out. From user-friendly design to seamless functionality, we've got you covered. Don't miss out on this invaluable resource! Check out our checklist now at Pixlogix and start your journey towards a captivating online presence today.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Full-RAG: A modern architecture for hyper-personalizationZilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Securing your Kubernetes cluster_ a step-by-step guide to success !
Top 10 DB2 Support Nightmares #9
1. Top 10 DB2 SupportTop 10 DB2 SupportTop 10 DB2 SupportTop 10 DB2 Support
Nightmares & How toNightmares & How toNightmares & How toNightmares & How to
Avoid ThemAvoid ThemAvoid ThemAvoid Them
#9#9#9#9
2. Part 9 – In the event of an emergency call
The situation
Image of a junior DBA
A customer was running HADR for disaster
recovery. They had no cluster software used
for monitoring or failover. HADR was being
monitored on a regular basis using a shell
script.
3. Then what happened…?
Disk failure on the primary site! Some tablespaces
were put into “Rollforward Pending” state.
Transactions accessing data in these tablespaces
failed
4. Image of a junior DBA
The last run of the HADR state monitoring
script indicated a Peer State so it was decided
to issue TAKEOVER command on the DR site to
switch roles. When the application started,
some transactions failed with the same error as
on the primary site.
List tablespaces command showed a number of
tables in Rollforward Pending state. To get out
of the pending state, ROLLFORWARD command
was issued with the list of affected tablespaces.
The rollforward was trying to retrieve a log,
which was a few thousand logs older than the
current one. After a few tries the
ROLLFORWARD was given up.
The database was restored from the latest
backup image
5. • We went through the db2diag.log and the notification logs. We could see that there
were physical errors reported in some of the tablespaces on the DR site around 100 days
prior to the incident. This was reported in the db2diag.log and the affected tablespaces
were “excluded from the rollforward set”
• Based on other entries in the db2diag file, we were able to confirm that the log file
requested for rollforward on the DR site was used at the time the physical errors
occurred there.
• HADR continued to apply logs for the other tablespaces and was reporting to be in “Peer”
State. In reality, some of the tablespaces were being ignored.
Triton Analysis
6. Regular monitoring on the log files is essential to identify and resolve the
issue on the DR site in advance of this incident.
Make sure you know who to call before disaster strikes!
Triton Consulting +44 870 2411 550
www.triton.co.uk
The moral of the story