This document discusses database replication in MySQL. It begins by explaining the concepts and principles of replication, including how it maintains synchronized copies of databases across multiple servers. It then covers how to set up replication between a master and a slave server: enabling binary logging on the master, creating a backup, configuring both servers, restoring the backup on the slave, and starting the replication process. Finally, it discusses how to verify that replication is working properly and how to manage it on an ongoing basis.
The document discusses optimizing database performance in MySQL. It covers optimizing at the database level through proper database structure and indexing. Some key points include structuring tables efficiently with the right data types, columns, and row formats. Indexes should support common queries. The document also discusses optimizing queries through indexing columns used in WHERE clauses and using LIMIT to improve performance. Hardware optimizations involve issues like disk seeks, reads/writes, CPU usage, and memory bandwidth.
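The WHERE-clause indexing point above can be sketched in a runnable form. This is a minimal illustration, not the document's own example: an in-memory SQLite database stands in for MySQL, and the table and column names are invented. The principle carries over directly: index the columns your WHERE clauses filter on.

```python
import sqlite3

# In-memory SQLite stands in for MySQL here; table/column names are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                 [(i % 100, i * 1.5) for i in range(1000)])

query = "SELECT * FROM orders WHERE customer_id = 42 LIMIT 10"

# Without an index, the WHERE clause forces a full table scan.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[3]

# Index the filtered column; the same query becomes an index search.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[3]
```

LIMIT compounds the gain: once the index locates matching rows, the engine can stop after the first ten instead of scanning everything.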
The document discusses how MySQL provides an efficient database management system for storing vast amounts of biological data generated by advances in molecular biology techniques and projects like the Human Genome Project. It describes MySQL's architecture including layers for applications, logical queries and transactions, and storage management. MySQL is an open-source relational database widely used for biological data storage and querying due to its performance and ease of use.
The document discusses Unix kernel parameters that should be monitored and potentially increased after making changes to related Oracle Init.ora parameters. It provides a table matching Init.ora parameters like db_block_buffers and processes to Unix kernel parameters like shmmax and nproc. It also defines several common Unix kernel parameters and provides references on Unix configuration files where semaphores and shared memory can be set for different Unix platforms.
This document provides an overview of various database administration concepts in DB2 including tables, views, indexes, procedures, triggers, tablespaces, and buffer pools. It discusses how tables are used to store column and row data, and how system catalog tables track metadata. It also describes views, indexes, procedures, and triggers, and how they are created and used. The document outlines how tablespaces are used to logically group database objects and storage, and how buffer pools cache data pages in memory to improve performance.
DB2 manages memory at four levels: database instance memory, database global memory, application global memory, and agent private memory. Database instance memory is used for instance-level tasks and is allocated when DB2 starts. Database global memory is allocated for each database and is used for backup, locking, and SQL execution. Application global memory coordinates messages between DB2 agents working for an application. Agent private memory is used by agents to perform tasks like building query plans, executing queries, and sorting. The amount of memory for each level is determined by configuration parameters that are usually set automatically.
The document discusses High Availability Disaster Recovery (HADR) in DB2. It describes how HADR uses log shipping to replicate transactions from a primary database to a standby database. HADR supports three synchronization modes - SYNC, NearSync and Async - which determine how transaction logs are replicated. The document provides steps for setting up and configuring HADR, including required database parameters. It also discusses using reorgchk and runstats utilities to check for table/index reorganization needs and update database statistics.
This document provides instructions for using the BRBACKUP and BRRESTORE utilities to copy an Oracle database. The backup is initiated with the brbackup command, and the log and profile files must then be copied to the new location. The restore is performed with brrestore, which will automatically adjust file names. Several cautions are provided regarding required pre- and post-restore activities, and structure-maintaining backups directly to disk are also described.
DB2 architecture includes databases, the database configuration file, tablespaces, buffer pools, and log files. Databases are independent collections of objects like tables and indexes. The database configuration file defines database properties. Tablespaces are logical storage areas for tables, and the default tablespaces are SYSCATSPACE, TEMPSPACE1, and USERSPACE1. Buffer pools in memory improve query performance by caching table data, and log files record all database operations for recovery.
The document discusses database backup and recovery techniques. It defines database backup as backing up the operational state, architecture and stored data of a database to create a duplicate instance in case of crashes or data loss. It describes different backup methods like transaction log backups and full backups. It also discusses the importance of backups to restore data after damage or deletion. The document then covers different types of backups like full, incremental and differential backups. It further discusses database recovery, causes of database failures, and solutions like log-based recovery and shadow paging. Finally, it discusses the importance of backups and recovery for business continuity.
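The distinction among full, incremental, and differential backups in the summary above can be sketched with invented timestamps. This is a toy model, not any vendor's backup utility: a full backup copies everything, a differential copies what changed since the last full, and an incremental copies what changed since the last backup of any kind.

```python
# Invented modification times (arbitrary units) for three data files.
mtimes = {"a.db": 100, "b.db": 250, "c.db": 400}
last_full = 200      # when the last full backup ran
last_backup = 300    # when the most recent backup of any kind ran

full = set(mtimes)                                              # copy everything
differential = {f for f, t in mtimes.items() if t > last_full}  # changed since last full
incremental = {f for f, t in mtimes.items() if t > last_backup} # changed since last backup
```

Restores mirror the trade-off: a differential restore needs the last full plus one differential, while an incremental restore needs the last full plus every incremental since.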
DB2 uses a storage model with tablespaces to organize database objects logically and physically. Each database must have at least three required tablespaces: 1) SYSCATSPACE stores metadata, 2) TEMPSPACE1 is for temporary data and intermediate query results, and 3) USERSPACE1 stores user-created objects. Tablespaces contain containers, which are made up of pages at the physical layer. The OS file system provides directories to organize DB2 storage on disk.
The document discusses database recovery techniques, including:
- Recovery algorithms ensure transaction atomicity and durability despite failures by undoing uncommitted transactions and ensuring committed transactions survive failures.
- Main recovery techniques are log-based using write-ahead logging (WAL) and shadow paging. WAL protocol requires log records be forced to disk before related data updates.
- Recovery restores the database to the most recent consistent state before failure. This may involve restoring from a backup and reapplying log entries, or undoing and reapplying operations to restore consistency.
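The WAL discipline described above can be sketched in a few lines. The record layout and transaction names here are invented for illustration; real systems log physical page changes and LSNs, but the redo/undo logic is the same idea.

```python
# Minimal write-ahead-logging sketch (invented record format): every update
# appends a log record carrying old and new values BEFORE the data changes,
# so recovery can redo committed work and undo uncommitted work.
db = {"x": 1, "y": 2}
log = []

def wal_update(txn, key, new):
    log.append(("update", txn, key, db[key], new))  # log first (WAL rule)
    db[key] = new                                   # then touch the data

def commit(txn):
    log.append(("commit", txn))

wal_update("T1", "x", 10); commit("T1")   # T1 commits
wal_update("T2", "y", 20)                 # T2 "crashes" before committing

def recover(state, log):
    committed = {r[1] for r in log if r[0] == "commit"}
    updates = [r for r in log if r[0] == "update"]
    for _, txn, key, old, new in updates:
        if txn in committed:
            state[key] = new                 # redo committed updates
    for _, txn, key, old, new in reversed(updates):
        if txn not in committed:
            state[key] = old                 # undo uncommitted updates
    return state

recovered = recover(dict(db), log)   # x keeps T1's value; y is rolled back
```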
The document discusses various database backup, restore, load, and import utilities in DB2. It provides information on taking full and table space backups online and offline, restoring from backups, and loading and importing data. Backup options include incremental and delta backups. The load utility loads data in four phases and supports restarting failed loads. The import utility inserts data from files into tables and supports restarting failed imports.
The document discusses Oracle database memory management. It describes the basic memory structures as software code areas, the system global area (SGA), and the program global area (PGA). It recommends enabling automatic memory management, which allows Oracle to dynamically manage and tune the total instance memory between the SGA and instance PGA. The document provides steps to enable automatic memory management, which involves calculating a MEMORY_TARGET parameter size and restarting the database.
The document discusses DB2 security concepts including authentication, authorization, administrative authorities, and database object privileges. It describes how authentication can be configured on the server and client. The major DB2 administrative authorities like SYSADM, SYSCTRL, and DBADM are explained along with how privileges can be granted and revoked for database objects, schemas, tables, indexes, and packages. Examples are provided for granting privileges using SQL statements. The document also includes a case study about troubleshooting a user not having insert privileges on a table.
This document provides information on setting up high availability disaster recovery (HADR) between two DB2 pureScale clusters. It outlines the basic steps, which include creating a standby database, configuring HADR parameters on the primary and standby servers, and starting HADR. It also discusses some HADR restrictions in pureScale environments and considerations for configuration parameters.
This document discusses C++ memory management. It describes the different memory segments used in a C++ program including the code segment, BSS segment, data segment, heap, and stack. The stack and heap are discussed in more detail. The stack stores function parameters and local variables and uses a last-in, first-out approach. The heap stores dynamically allocated memory using new until explicitly freed. The document also provides details on how the call stack works, including pushing and popping stack frames when functions are called and returned from.
This document provides an agenda and overview for a training session on Oracle Database backup and recovery. The agenda covers the purpose of backups and recovery, Oracle data protection solutions including Recovery Manager (RMAN) and flashback technologies, and the Data Recovery Advisor tool. It also discusses various types of data loss to protect against, backup strategies like incremental backups, and validating and recovering backups.
Recovery Techniques and Need of Recovery, by Pooja Dixit
The document covers recovery techniques and the need for recovery, the three states of database recovery, types of failure (DBMS failure, transaction failure, system crash, disk failure), log-based recovery, recovery with concurrent transactions, and checkpoints.
The document discusses transactions and transaction management in database systems. It defines transactions as logical units of work that must follow the ACID properties of atomicity, consistency, isolation, and durability. Transactions access and update data using operations like read and write. The transaction model ensures concurrent transactions execute reliably by enforcing serializability through techniques like conflict analysis and precedence graphs. Maintaining serializability guarantees the isolation property and prevents anomalous behavior from transaction interleaving.
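The precedence-graph test mentioned above can be made concrete. This is a minimal sketch with invented schedules: two operations conflict when they touch the same item from different transactions and at least one is a write; a cycle in the resulting graph means the schedule is not conflict-serializable.

```python
# Build a precedence graph from a schedule of (txn, op, item) steps,
# then check conflict-serializability by looking for a cycle.
def precedence_edges(schedule):
    edges = set()
    for i, (ti, op_i, x) in enumerate(schedule):
        for tj, op_j, y in schedule[i + 1:]:
            if x == y and ti != tj and "W" in (op_i, op_j):
                edges.add((ti, tj))   # ti's op precedes tj's conflicting op
    return edges

def has_cycle(edges):
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
    def dfs(node, seen):
        return any(nxt in seen or dfs(nxt, seen | {nxt})
                   for nxt in graph.get(node, ()))
    return any(dfs(n, {n}) for n in graph)

# T1 fully before T2 on item A: serializable.
serial_ok = [("T1", "R", "A"), ("T1", "W", "A"), ("T2", "R", "A"), ("T2", "W", "A")]
# Interleaved reads then writes (a lost update): not serializable.
lost_update = [("T1", "R", "A"), ("T2", "R", "A"), ("T1", "W", "A"), ("T2", "W", "A")]
```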
This document provides an overview of memory management concepts including swapping, paging, virtual memory, hardware components like TLB and MMU, and software policies implemented by the operating system. It discusses key topics like locality of reference, thrashing problem, working set model, replacement policy, placement policy, scan rate policy, and fetch policy. The document explains these concepts through examples and diagrams to help the reader understand how memory is managed both in hardware and software.
This document provides an introduction to memory management techniques in operating systems. It discusses the differences between logical and physical addresses, the role of the memory management unit (MMU) in mapping logical to physical addresses, and memory allocation schemes including static contiguous allocation with equal or unequal partitions, dynamic contiguous allocation, and non-contiguous allocation using paging and segmentation. It also covers dynamic relocation, hardware support via relocation and limit registers, swapping, and fragmentation. The goal of memory management is to efficiently allocate and deallocate memory to running processes.
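The logical-to-physical mapping the MMU performs under paging can be sketched as a page-table lookup. The page size and page-to-frame mapping here are invented for illustration.

```python
# Paging translation sketch: split a logical address into (page, offset),
# then map the page number to a frame through the page table.
PAGE_SIZE = 4096
page_table = {0: 5, 1: 2, 2: 7}   # invented page -> frame mapping

def translate(logical):
    page, offset = divmod(logical, PAGE_SIZE)
    if page not in page_table:
        raise LookupError(f"page fault on page {page}")   # not resident
    return page_table[page] * PAGE_SIZE + offset

phys = translate(1 * PAGE_SIZE + 100)   # page 1 maps to frame 2; offset is kept
```

A real MMU does this split with bit masks in hardware, consulting the TLB first and walking the page table only on a TLB miss.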
This document discusses database backup and recovery. It defines backup as additional copies of data for restoration if the primary copy is lost or corrupted. There are several types of backups including full, incremental, differential, and mirror backups. Recovery brings the database back to a prior consistent state, using techniques like log files, check pointing, and immediate or deferred transaction updates. Factors like backup location, test restores, automation, and database design can influence recovery duration. Alternatives to traditional backup and recovery include standby databases, replication, and disk mirroring.
Cache memory is a fast memory located between the CPU and main memory that stores frequently accessed instructions and data. It improves system performance by reducing memory access time. Cache is organized into multiple levels named for their proximity to the CPU: L1 is closest, L2 is next, and some CPUs add an L3. Cache memory uses SRAM instead of DRAM for faster access. It is organized into rows containing a data block, tag, and flag bits. Optimization techniques for cache include improving data locality through code transformations and maintaining coherence across cache levels.
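The row organization above (data block, tag, flags) can be sketched for the simplest case, a direct-mapped cache. The geometry is invented, and the data block and flag bits are omitted to keep the hit/miss logic visible.

```python
# Direct-mapped cache sketch: the block number selects a line (index) and
# the remaining bits form the tag; a line hits when the stored tag matches.
NUM_LINES, BLOCK_SIZE = 8, 16   # invented geometry
lines = [None] * NUM_LINES      # each line stores the tag it currently holds

def access(addr):
    block = addr // BLOCK_SIZE
    index, tag = block % NUM_LINES, block // NUM_LINES
    hit = lines[index] == tag
    lines[index] = tag          # fill (or replace) the line on a miss
    return hit

results = [access(0x00),   # cold miss
           access(0x04),   # same block as 0x00: hit
           access(0x80),   # maps to the same line, different tag: conflict miss
           access(0x00)]   # was just evicted, so it misses again
```

The last two accesses show why set-associative designs exist: addresses that collide on the same line keep evicting each other even though the cache is nearly empty.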
This document discusses processes, threads, and multithreading. It covers:
- Processes have code, data, heap, and stack memory sections. Processes can be in new, ready, running, waiting, or terminated states.
- Each process has a Process Control Block storing its state, IDs, registers, scheduling info, memory management info, and I/O status.
- Threads are units of CPU utilization with a program counter, stack, and registers that can run concurrently in a process, sharing code and data.
- Multithreading models map user threads to kernel threads, as in the many-to-one, one-to-one, and many-to-many models. Linux implements threads as standard processes that share resources, created through the clone() system call.
This document discusses memory management techniques including paging, segmentation, and page replacement algorithms. It begins with an overview of memory hierarchy and basic memory management. It then covers topics such as swapping, virtual memory, page tables, TLBs, page replacement algorithms like FIFO, LRU and clock, and design issues for paging systems including page size and locality. The document also discusses segmentation, its implementation, and examples like MULTICS and the Pentium that use both paging and segmentation.
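The FIFO and LRU policies named above can be sketched and compared on a small reference string. The string and frame count are invented; on this particular input LRU faults less because it keeps the recently touched pages resident.

```python
from collections import OrderedDict

def fifo_faults(refs, frames):
    mem, faults = [], 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.pop(0)               # evict the page loaded earliest
            mem.append(p)
    return faults

def lru_faults(refs, frames):
    mem, faults = OrderedDict(), 0
    for p in refs:
        if p in mem:
            mem.move_to_end(p)           # touched: now most recently used
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)  # evict the least recently used
            mem[p] = True
    return faults

refs = [1, 2, 3, 1, 2, 4, 1, 2]          # invented reference string
fifo, lru = fifo_faults(refs, 3), lru_faults(refs, 3)
```

Clock approximates LRU with a single reference bit per frame, which is why it shows up in real kernels where true LRU bookkeeping is too expensive.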
The document provides an overview of DB2 and discusses key concepts such as instances, databases, tablespaces, and recovery. It describes how to install and configure DB2, create instances and databases, load and move data between databases, and perform backups and recovery. Examples are given of commands used to create tablespaces and load data. The document also mentions tools for visualizing queries and monitoring performance.
The document discusses database backup and recovery. It describes four basic facilities for database backup and recovery: 1) backup facility, 2) journalizing facility, 3) checkpoint facility, and 4) recovery manager. It also describes five types of recovery techniques: 1) disk mirroring, 2) restore/rerun, 3) transaction integrity, 4) backward recovery, and 5) forward recovery. The types of recovery used depend on the nature of the database failure.
This document discusses database views in MySQL. It defines a view as a virtual table composed of the result set of a SELECT query. Views allow users to retrieve and update data as if it were a table. The document outlines how to create views from SELECT statements, how to make views updatable by following certain rules, and how to manage views using SHOW, ALTER, and DROP commands.
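As a runnable illustration of the view life cycle described above, here is a minimal sketch with sqlite3 standing in for MySQL and invented table data; the CREATE VIEW and DROP VIEW statements carry over to MySQL essentially unchanged.

```python
import sqlite3

# sqlite3 stands in for MySQL; the view syntax is the same idea.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, dept TEXT, salary REAL)")
conn.executemany("INSERT INTO employees VALUES (?, ?, ?)",
                 [("Ann", "IT", 90.0), ("Bob", "HR", 70.0), ("Cy", "IT", 80.0)])

# A view is a virtual table defined by the result set of a SELECT query.
conn.execute("CREATE VIEW it_staff AS "
             "SELECT name, salary FROM employees WHERE dept = 'IT'")
rows = conn.execute("SELECT name FROM it_staff ORDER BY name").fetchall()

conn.execute("DROP VIEW it_staff")   # views are managed like tables
```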
The document discusses how to use the mysqldump utility in MySQL to back up and restore databases. It explains how to back up individual tables or entire databases using mysqldump and various options. It also discusses how to restore databases from backup files by running the SQL statements in the backup file. The document emphasizes that binary log files should also be used to update the restored database to the most recent state after the backup was performed. It provides examples of restoring data directly from binary log files or exporting the binary log data to a text file first before restoring.
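mysqldump itself cannot run here, so this sketch uses sqlite3's iterdump() as a stand-in to show the dump-then-replay idea the document describes: the backup is a file of SQL statements, and restoring means executing them against a fresh database. The data is invented.

```python
import sqlite3

# sqlite3's iterdump() plays the role of mysqldump: it emits the SQL
# statements needed to recreate the database.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE t (id INTEGER, val TEXT)")
src.execute("INSERT INTO t VALUES (1, 'hello')")
src.commit()

dump_sql = "\n".join(src.iterdump())   # the "backup file"

restored = sqlite3.connect(":memory:")
restored.executescript(dump_sql)       # replay the dump to restore
row = restored.execute("SELECT val FROM t WHERE id = 1").fetchone()
```

In MySQL the remaining step the document stresses would follow here: apply the binary log entries written after the dump to roll the restored copy forward to the most recent state.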
Creative Computing textbook (Lập trình sáng tạo), mastercode.vn
Creative programming is, at its core, about creating. Computer science and computing-related fields have long been introduced to young people in a fragmented way, overemphasizing technique at the expense of creativity. Creative programming makes an individual's development with computing better by supporting work that draws on creativity, imagination, and personal interests.
This document discusses various topics relating to query optimization in database management systems, including:
- Optimizing SQL statements and database structure to improve query performance
- Understanding query execution plans and how they are generated by the optimizer
- Using the EXPLAIN statement to analyze queries and identify optimizations
- Common join types like nested loops, indexes that can improve joins, and optimizing column data types for joins
- Estimating query performance based on factors like disk seeks and index usage
- Measuring actual performance with tools like BENCHMARK()
- Internal optimizations in MySQL like those for NULL values and different join types
The document provides examples of using EXPLAIN to optimize a join query involving multiple tables.
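The join types listed above can be contrasted in miniature with pure Python and invented data: a nested-loop join compares every pair of rows, while an index on the join column (a dict here) lets the join probe matching rows directly. This is a sketch of the concept, not MySQL's executor.

```python
# Toy rows: (id, name) customers and (order_id, customer_id) orders.
customers = [(1, "Ann"), (2, "Bob"), (3, "Cy")]
orders = [(101, 1), (102, 3), (103, 1)]

# Nested-loop join: every order is compared against every customer.
nested = [(name, oid) for (oid, cid) in orders
                      for (c, name) in customers if c == cid]

# Index join: build a lookup on the join column first, then probe it.
by_id = {c: name for (c, name) in customers}
indexed = [(by_id[cid], oid) for (oid, cid) in orders]
```

Both produce the same result set; the index version does one probe per order instead of one pass over all customers per order, which is exactly why indexing the joined column matters.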
This document discusses database indexing and how indexes are used in MySQL. It begins with an introduction to indexing and describes several types of indexes, including single-level ordered indexes, multilevel indexes, and dynamic multilevel indexes using B-trees and B+-trees. It then provides examples of how to create and use indexes on tables in MySQL, including creating indexes on single or multiple columns and viewing existing indexes. The document aims to explain how database indexes improve query performance in MySQL.
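The create-and-view workflow described above can be sketched with sqlite3 standing in for MySQL; the CREATE INDEX statements carry over, PRAGMA index_list plays the role MySQL's SHOW INDEX plays, and the names are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (last TEXT, first TEXT, age INTEGER)")

# Single-column and multiple-column (composite) indexes.
conn.execute("CREATE INDEX idx_last ON people (last)")
conn.execute("CREATE INDEX idx_name ON people (last, first)")

# List the existing indexes on the table (rows are (seq, name, unique, ...)).
names = [row[1] for row in conn.execute("PRAGMA index_list('people')")]
```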
This document discusses database transactions in three parts:
1) It introduces transaction types including explicit, implicit, and auto-commit transactions.
2) It explains the ACID properties that transactions must satisfy - Atomicity, Consistency, Isolation, and Durability.
3) It provides examples of using transactions in MySQL, including starting a transaction, rolling back or committing changes, and setting the isolation level.
This document discusses database transactions in three parts:
1) It introduces transaction types including explicit, implicit, and auto-commit transactions.
2) It explains the ACID properties that transactions must satisfy - Atomicity, Consistency, Isolation, and Durability.
3) It provides examples of using transactions in MySQL, including starting a transaction, rolling back or committing changes, and setting the isolation level.
This document provides an overview of setting up MySQL replication between a master database server and one or more slave servers. It discusses creating a user for replication, configuring the master with binary logging and a server ID, configuring slaves with unique server IDs, obtaining the replication information from the master, and using mysqldump or raw data files to initialize slaves with data from the master. It also covers starting new replication environments and adding additional slaves.
This document describes how to configure MySQL database replication between a master and slave server. The key steps are:
1. Configure the master server by editing its configuration file to enable binary logging and set the server ID. Create a replication user and grant privileges.
2. Export the databases from the master using mysqldump.
3. Configure the slave server by editing its configuration file to point to the master server. Import the database dump. Start replication on the slave.
4. Verify replication is working by inserting data on the master and checking it is replicated to the slave.
MySQL replication allows data from a master database server to be copied to one or more slave database servers. It provides advantages like improving performance through load balancing, increasing data security with backups on slaves, and enabling analytics on slaves without impacting the master. Basic replication involves setting up a master server and slave server with unique IDs, configuring the master to log binary changes, and pointing the slave to the master so it can copy the binary log entries.
The document provides an overview of MySQL database including:
- A brief history of MySQL and descriptions of some early and modern storage engines.
- Explanations of the physical and logical architectures of MySQL, focusing on InnoDB storage engine components like the tablespace, redo logs, and buffer pool.
- An overview of installing, configuring, and optimizing MySQL for production use, including storage engine, server variable, and hardware recommendations.
- Descriptions of MySQL administration tools and methods for monitoring performance and activity.
- Explanations of MySQL replication including configuration, best practices, and use of global transaction identifiers.
- Discussions of backup strategies including logical dumps and binary backups.
Replication allows data from a MySQL master database to be synchronized with one or more slave databases. The master records all data changes in its binary log. Slave databases connect to the master and receive the binary log transactions, which they then apply locally to stay synchronized with the master database. Replication can be used for load balancing reads across multiple slave servers or for high availability by failing over to a slave if the master fails.
This document provides instructions for setting up different types of MySQL replication architectures:
1) It describes how to configure basic master-slave replication between two servers with step-by-step instructions for configuring the master and slave servers.
2) It also provides a second method for implementing master-slave replication with additional details on configuring the replication user and importing databases.
3) Finally, it outlines how to set up a master-master replication configuration between two MySQL servers to provide high availability, with each server acting as both a master and slave.
MySQL Replication Evolution -- Confoo Montreal 2017Dave Stokes
MySQL Replication has evolved since the early days with simple async master/slave replication with better security, high availability, and now InnoDB Cluster
ConFoo MySQL Replication Evolution : From Simple to Group ReplicationDave Stokes
MySQL Replication has been around for many years but how wee do you under stand it? Do you know about read/write splitting, RBR vs SBR style replication, and InnoDB cluster?
The document provides information about MySQL, including that it is an open source database software that is widely used. It describes how to install and configure MySQL on Linux, and provides examples of common SQL queries like creating tables, inserting/updating/deleting data, and exporting/importing databases. Key topics covered include the benefits of MySQL, installing it on Linux, basic configuration, and using SQL statements to define schemas and manipulate data.
The document provides information about MySQL training including:
- Installing MySQL and configuring multiple instances on a single host
- Taking backups with mysqldump including full database backups and consistent backups for InnoDB tables
- Restoring from MySQL backups by executing the SQL dump files
- Common MySQL commands for checking the server status, creating databases and users
- Storage engines like MyISAM and InnoDB and how to check the current storage engine for a table
Pacemaker is a high availability cluster resource manager that can be used to provide high availability for MySQL databases. It monitors MySQL instances and replicates data between nodes using replication. If the primary MySQL node fails, Pacemaker detects the failure and fails over to the secondary node, bringing the MySQL service back online without downtime. Pacemaker manages shared storage and virtual IP failover to ensure connections are direct to the active MySQL node. It is important to monitor replication state and lag to ensure data consistency between nodes.
Database Mirror for the exceptional DBA – David Izahksqlserver.co.il
1. Rafael Advanced Defense Systems designs, develops, manufactures and supplies high tech defense systems for air, land, sea and space applications with sales exceeding $1.851 billion in 2010 and about 7000 employees.
2. The presentation discusses database mirroring architecture, automation, monitoring and best practices. Automation options covered include T-SQL with SQLCMD, PowerShell and linked servers.
3. Rafael chooses asynchronous database mirroring without automatic failover for high performance, with manual failover when the principal fails to avoid possible data loss from unsynchronized transactions.
Mysql replication allows data to be replicated from a master database server to slave database servers. It works by having the master record all write queries to its binary log which is then used by slaves to replicate the same queries. Replication can be synchronous for high data integrity or asynchronous for higher performance. Configuring replication involves setting up a replication user on the master, enabling binary logging, taking a snapshot of data, and configuring the slaves to connect to the master and replay the binary logs.
MariaDB Auto-Clustering, Vertical and Horizontal Scaling within Jelastic PaaSJelastic Multi-Cloud PaaS
Availability and performance have a direct business impact for most of the companies nowadays. No one wants to lose money because of occasional downtime or data loss. Thus, to minimize the risk and ensure an extra level of redundancy, clustering and automatic scaling should be used. In this video Ruslan Synytsky presented how Jelastic PaaS implemented auto-clustering of MariaDB by providing the customers with different replication options out-of-box with no need in manual configurations. It is also detailed how to automate vertical and horizontal scaling of databases running in the cloud.
Video recording of the session https://www.youtube.com/watch?v=6MND3feb5zM
The document describes the Google File System (GFS), which was developed by Google to handle its large-scale distributed data and storage needs. GFS uses a master-slave architecture with the master managing metadata and chunk servers storing file data in 64MB chunks that are replicated across machines. It is designed for high reliability and scalability handling failures through replication and fast recovery. Measurements show it can deliver high throughput to many concurrent readers and writers.
The document provides information about installing and configuring MySQL database on Linux and Windows systems. It discusses downloading and installing MySQL using RPM packages on Linux and running the installer on Windows. It also covers verifying the MySQL installation, setting the root password, creating user accounts, and configuring the MySQL configuration file. The document then provides an overview of important MySQL commands and functions for connecting to and manipulating data in MySQL databases from PHP scripts.
MySQL HA and Capacity Planning and ArchitectureAbishek V S
This architecture provides simultaneous protection against several failure modes: primary-zone server infrastructure failure, single-zone block-storage degradation, or full-zone outage.
The application or database layer replication is not required because regional Persistent Disks provide continuous and synchronous block-level data replication, which is fully managed by Google Cloud. A regional Persistent Disk automatically detects errors and slowness, switches the replication mode, and performs catch up of data that is replicated to only one zone.
If there are storage problems in a primary zone, a regional Persistent Disk automatically performs reads from the secondary zone. This operation can result in increased read latency, but your application can continue to operate without any manual action.
Replication allows data to be shared between multiple MySQL databases. The master database records all changes in binary logs which are used to replicate data to slave databases. Slaves pull data from the master's binary logs and execute the same statements locally to match the master's data. This allows for high availability, load balancing, and off-site processing capabilities.
MySQL Database Replication - A Guide by RapidValue SolutionsRapidValue
For many years, MySQL replication used to be based on binary log events. It was considered that all a slave knew was the exact event and the exact position it just read from the master. Any single transaction from a master could have ended in different binary logs, and also, in different positions in these logs. GTID was introduced along with MySQL 5.6. It has brought, along, some major changes in the way MySQL operates. Every transaction has a unique identifier which identifies it in a same way on every server. It’s not important, anymore, in which binary log position a transaction was recorded, all you need to know is the GTID.
Database replication is used to handle multiple copies of data, automatically, from the master database server to slave database servers. If we have changed data or schema in the master database, it will, automatically, update the slave database. The main advantage of replication is that it prevents the data loss. If the master database server is crashed, the exact copy of data will be there in the slave server. In MySQL, you can use MySQL Utility for implementing database replication between master and slave. MySQL Utility is a package that is used for maintenance and administration of MySQL servers. You can install MySQL utility, along with MySQL Workbench, or install it as a stand-alone package.
MySQL Replication.
This article explains how it is implemented, with an example. In this example, two servers have been used – one master and one slave. Both servers are configured in the same manner with MySQL server and MySQL Utility.
The document provides information on MongoDB replication and sharding. Replication allows for redundancy and increased data availability by synchronizing data across multiple database servers. A replica set consists of a primary node that receives writes and secondary nodes that replicate the primary. Sharding partitions data across multiple machines or shards to improve scalability and allow for larger data sets and higher throughput. Sharded clusters have shards that store data, config servers that store metadata, and query routers that direct operations to shards.
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Skybuffer SAM4U tool for SAP license adoptionTatiana Kojar
Manage and optimize your license adoption and consumption with SAM4U, an SAP free customer software asset management tool.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
Your One-Stop Shop for Python Success: Top 10 US Python Development Providersakankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
What is an RPA CoE? Session 1 – CoE VisionDianaGray10
In the first session, we will review the organization's vision and how this has an impact on the COE Structure.
Topics covered:
• The role of a steering committee
• How do the organization’s priorities determine CoE Structure?
Speaker:
Chris Bolin, Senior Intelligent Automation Architect Anika Systems
Connector Corner: Seamlessly power UiPath Apps, GenAI with prebuilt connectorsDianaGray10
Join us to learn how UiPath Apps can directly and easily interact with prebuilt connectors via Integration Service--including Salesforce, ServiceNow, Open GenAI, and more.
The best part is you can achieve this without building a custom workflow! Say goodbye to the hassle of using separate automations to call APIs. By seamlessly integrating within App Studio, you can now easily streamline your workflow, while gaining direct access to our Connector Catalog of popular applications.
We’ll discuss and demo the benefits of UiPath Apps and connectors including:
Creating a compelling user experience for any software, without the limitations of APIs.
Accelerating the app creation process, saving time and effort
Enjoying high-performance CRUD (create, read, update, delete) operations, for
seamless data management.
Speakers:
Russell Alfeche, Technology Leader, RPA at qBotic and UiPath MVP
Charlie Greenberg, host
Freshworks Rethinks NoSQL for Rapid Scaling & Cost-EfficiencyScyllaDB
Freshworks creates AI-boosted business software that helps employees work more efficiently and effectively. Managing data across multiple RDBMS and NoSQL databases was already a challenge at their current scale. To prepare for 10X growth, they knew it was time to rethink their database strategy. Learn how they architected a solution that would simplify scaling while keeping costs under control.
In the realm of cybersecurity, offensive security practices act as a critical shield. By simulating real-world attacks in a controlled environment, these techniques expose vulnerabilities before malicious actors can exploit them. This proactive approach allows manufacturers to identify and fix weaknesses, significantly enhancing system security.
This presentation delves into the development of a system designed to mimic Galileo's Open Service signal using software-defined radio (SDR) technology. We'll begin with a foundational overview of both Global Navigation Satellite Systems (GNSS) and the intricacies of digital signal processing.
The presentation culminates in a live demonstration. We'll showcase the manipulation of Galileo's Open Service pilot signal, simulating an attack on various software and hardware systems. This practical demonstration serves to highlight the potential consequences of unaddressed vulnerabilities, emphasizing the importance of offensive security practices in safeguarding critical infrastructure.
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
How information systems are built or acquired puts information, which is what they should be about, in a secondary place. Our language adapted accordingly, and we no longer talk about information systems but applications. Applications evolved in a way to break data into diverse fragments, tightly coupled with applications and expensive to integrate. The result is technical debt, which is re-paid by taking even bigger "loans", resulting in an ever-increasing technical debt. Software engineering and procurement practices work in sync with market forces to maintain this trend. This talk demonstrates how natural this situation is. The question is: can something be done to reverse the trend?
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
An English 🇬🇧 translation of a presentation to the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on Communications and signalling systems on Railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 on-line followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
Introduction of Cybersecurity with OSS at Code Europe 2024Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
Fueling AI with Great Data with Airbyte WebinarZilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
Conversational agents, or chatbots, are increasingly used to access all sorts of services using natural language. While open-domain chatbots - like ChatGPT - can converse on any topic, task-oriented chatbots - the focus of this paper - are designed for specific tasks, like booking a flight, obtaining customer support, or setting an appointment. Like any other software, task-oriented chatbots need to be properly tested, usually by defining and executing test scenarios (i.e., sequences of user-chatbot interactions). However, there is currently a lack of methods to quantify the completeness and strength of such test scenarios, which can lead to low-quality tests, and hence to buggy chatbots.
To fill this gap, we propose adapting mutation testing (MuT) for task-oriented chatbots. To this end, we introduce a set of mutation operators that emulate faults in chatbot designs, an architecture that enables MuT on chatbots built using heterogeneous technologies, and a practical realisation as an Eclipse plugin. Moreover, we evaluate the applicability, effectiveness and efficiency of our approach on open-source chatbots, with promising results.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
1. Database Management Systems
Replication
Dư Phương Hạnh
Department of Information Systems
Faculty of Information Technology, VNU University of Engineering and Technology
Vietnam National University, Hanoi
hanhdp@vnu.edu.vn
2. Outline
Replication concepts and principles
Setting up replication on master and slave servers
Managing replication
Replication settings
Read more at:
http://mysql-tools.com/en/replication-in-mysql.html
Hệ quản trị CSDL @ BM HTTT
4. Introduction
Besides performing regular backups of your databases, you can also replicate your databases. This means you maintain a copy of the database that is kept up-to-date (synchronized) with the original database. If the original database becomes unavailable, the replicated database can continue to provide users immediate access to the same data with minimal downtime.
5. Replicating MySQL Databases
Updates made to one database copy are automatically propagated to all the other replicas.
Generally, one of the replicas is designated as the master: updates are directed to the master, while read queries can be addressed to either the master or the slaves.
If replicas other than the master handle updates, then keeping the replicas identical becomes more complex.
6. Benefits of database replication
Availability: Any replica can be used as a "hot" backup.
– If the master database becomes unavailable, a replica can take over and be designated as the new master. The failure can then be fixed and the failed server can rejoin as a slave replica.
Backups: Replicas can serve as active backups and can be used to perform tedious offline backups to file systems without locking up the primary instance.
7. Benefits of database replication
Load Balancing: Replicas can serve split loads, also called load balancing, for heavily used databases.
– Read queries can be distributed to the different replicas while updates are handled by the master. This scenario works well when the number of read queries is far greater than the number of update queries.
Proximity: Some replicas can be closer to users, leading to improved response time.
Security: It is harder for malicious users to damage all the replicas.
8. Availability
One database is the master while the other is designated as a slave. Only updates to the master are logged.
9. Load Balancing
Read queries are directed to either the master or any of the replicas, while update queries are directed only to the master.
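The read/write split described on this slide can be sketched in application code. This is a minimal illustration, not MySQL client code: the server names and the routing policy are hypothetical stand-ins.

```python
import itertools

class ReplicatedPool:
    """Routes update statements to the master and spreads reads
    round-robin across the master and replicas, as on the slide.
    'Servers' here are plain strings standing in for connections."""

    def __init__(self, master, replicas):
        self.master = master
        # Reads may go to the master or any replica.
        self._readers = itertools.cycle([master] + replicas)

    def route(self, sql):
        # Crude classifier: anything that is not a SELECT counts as
        # an update and must go to the master to avoid conflicts.
        if sql.lstrip().upper().startswith("SELECT"):
            return next(self._readers)
        return self.master

pool = ReplicatedPool("master", ["replica1", "replica2"])
write_target = pool.route("UPDATE t SET x = 1")             # always the master
read_targets = {pool.route("SELECT * FROM t") for _ in range(6)}
```

In a real deployment the routing usually lives in a proxy or driver layer; the point of the sketch is only that the write path is fixed while the read path fans out.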
10. MySQL Replication Model
MySQL replication is based on a number of principles:
1. Replication is a one-way, asynchronous process: changes are always propagated from the master server to the slave server, never the other way around.
2. The primary MySQL server acts as the master server, and the servers that contain the copied databases are considered the slave servers.
3. Data always moves from the master server to the slave server. As a result, only databases on the master server should be updated. The updates are then propagated to the slave servers.
11. MySQL Replication Model
4. The master server must be configured with a user account that grants replication privileges to the slave server. The account allows the slave server to access the master server in order to receive updates.
5. Replication is based on the master database server recording all updates in a binary log, so binary logging must be enabled on the master server. The logged updates are used to synchronize the database on the slave server.
6. To avoid conflicts, update queries are directed to the master, while read queries can go either to the master or to the slaves.
12. MySQL Replication Model
7. The replicas connect to the master to read the binary log and then apply the updates to catch up with the master.
8. The slave server uses replication coordinates to track updates.
– The coordinates are based on the name of a binary log file on the master server and the position in that file. The file and position represent where MySQL left off when the last update was applied on the slave server. The coordinates, along with other logon information, are stored in the master.info file on the slave host.
13. MySQL Replication Model
9. Each server that participates in the replication process must be assigned a unique numerical server ID. You assign the ID by specifying the server-id option in the [mysqld] section of the option file for each server.
10. A master server can replicate data to one or more slave servers.
11. To set up replication, the master server and slave server must begin with databases in a synchronized state. In other words, the databases to be replicated must be identical when replication is initiated.
14. MySQL Replication Model
12. No slave server can ever have two master servers.
13. It is generally best to have the master server and slave servers run the same version of MySQL.
14. There are two core types of replication format:
– Statement Based Replication (SBR): replicates entire SQL statements.
– Row Based Replication (RBR): replicates the changed rows.
– You may also use a third variety, Mixed Based Replication (MBR), which is the default mode within MySQL 5.1.14 and later.
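The model above — a binary log on the master, a replication coordinate on the slave, statements replayed to catch up — can be illustrated with a toy simulation. This is a conceptual sketch in Python, not MySQL internals; the "statements" are simplified to key/value assignments.

```python
class Master:
    """Records every update in an in-memory 'binary log' (SBR-style)."""
    def __init__(self):
        self.binlog = []          # stand-in for a binary log file
        self.data = {}

    def execute(self, stmt):
        key, value = stmt         # toy statement: a (key, value) assignment
        self.data[key] = value
        self.binlog.append(stmt)  # every update is logged

class Slave:
    """Tracks a replication coordinate (a position in the master's log)
    and replays statements from there to catch up with the master."""
    def __init__(self, master):
        self.master = master
        self.position = 0         # stand-in for the log position coordinate
        self.data = {}

    def catch_up(self):
        for stmt in self.master.binlog[self.position:]:
            key, value = stmt
            self.data[key] = value   # re-execute the logged statement
        self.position = len(self.master.binlog)

master = Master()
slave = Slave(master)
master.execute(("a", 1))
master.execute(("b", 2))
slave.catch_up()                  # replays log entries 0..1
master.execute(("a", 3))
slave.catch_up()                  # only the new entry is replayed
```

Because the slave remembers its position, each catch-up replays only the log entries it has not yet applied — the same idea MySQL records in master.info.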
15. Setting up Replication
Enable binary logging on the master server.
Make a backup of the master database.
Start a new binary log immediately after making the backup.
Set up a user account on the master server that grants replication privileges to the slave server. The account allows the slave server to access the master server in order to receive updates.
16. Setting up Replication (2)
Assign a unique numerical server ID to each server that participates in the replication process.
Block all updates to the master.
Create a slave instance.
Load the backup of the master database into the slave.
Apply the updates from the binary log to the slave to sync up with the master.
Get both the master and slave running.
17. Replication Files on the Slave (1)
When replication is implemented, the slave server maintains a set of files to support the replication. MySQL automatically creates three types of files on the slave server:
1. <host>-relay-bin.<extension>: Contains the statements to be used to synchronize the replicated database with the database on the master server; once applied, the file is deleted.
– The relay log files receive their data from the binary log files on the master server.
18. Replication Files on the Slave (2)
2. master.info: Contains connection information such as the master server hostname, user account, and its password. It also maintains information about the last binary log file on the master server to be accessed and the position in that file.
3. relay-log.info: Contains information about the relay log files and tracks the last position in those files at which the replicated database was updated.
20. Set up Replication User (on Master)
To allow a master server to replicate data to a slave server, you must set up a user account on the master server. The slave server then uses that account to establish a connection to the master server.
21. Set up Replication User (on Master)
GRANT REPLICATION SLAVE ON *.* TO '<slave account>'@'<slave host>' IDENTIFIED BY '<password>';
The REPLICATION SLAVE privilege at the global level allows
all changes to a database to be replicated to the copy of the
database on the slave server.
The TO clause defines the username for the account and the host
from which that account can connect. This is the host where the
slave server resides.
The IDENTIFIED BY clause identifies the password that the slave
server uses when it logs on to the master server.
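As a concrete illustration of the statement above (the account name, host, and password here are hypothetical placeholders):

```sql
-- Hypothetical replication account 'repl_user', connecting
-- from the host where the slave server resides.
GRANT REPLICATION SLAVE ON *.*
    TO 'repl_user'@'slavehost.example.com'
    IDENTIFIED BY 'secret_pw';
```

Note that this GRANT … IDENTIFIED BY form applies to the MySQL 5.x series described in this deck; newer versions require a separate CREATE USER statement first.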
22. Making Initial Backup
Make a backup of the databases that you want to replicate.
Use the --master-data option in the mysqldump command.
The --master-data option adds a CHANGE MASTER
statement similar to the following to your backup file:
CHANGE MASTER TO MASTER_LOG_FILE='mastsrv-bin.000201', MASTER_LOG_POS=64;
The CHANGE MASTER statement identifies the binary log file
and the position in that file at the time the backup file is
created. You use this information later, when you set up
replication on the slave server, to synchronize the slave
server with the master server.
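The backup step might be run as follows (the user name and output file name are illustrative only):

```shell
# Dump all databases from the master and record the current
# binary log coordinates as a CHANGE MASTER statement at the
# top of the dump file (--master-data).
mysqldump -u root -p --all-databases --master-data > master_backup.sql
```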
23. Configuration Changes on Master (1)
Shut down the master server.
Modify the [mysqld] group in the option file on the master
server to specify a server ID for the master server.
– The master server and any slave servers must each be assigned a
unique numerical ID.
If you don't want to replicate a specific database, such as the
mysql or test databases, you can add a binlog-ignore-db option
for each database to prevent changes to that database from being
logged to the binary log.
Restart the master server.
24. Configuration Changes on Master (2)
[mysqld]
log-bin
binlog-do-db=sakila
binlog-ignore-db=mysql
binlog-ignore-db=test
server-id=<master server ID>
The log-bin option specifies that binary logging should be
enabled.
The two binlog-ignore-db options specify that changes to the
mysql and test databases should not be logged to the binary
log files.
The server-id option specifies the numerical ID for the master
server.
Note: If you use an existing option file, a server-id option may
already be present. If multiple server-id options are specified
and the numerical IDs are different, replication might not work.
25. Configuration Changes on the Slave
Shut down the slave server.
Modify the option file on the slave server so that
the [mysqld] section includes the following settings:
server-id=<slave server id>
Make certain that this server ID is different from the
master server ID and from any other slave server IDs.
Also be sure that this is the only server-id option
defined on the slave server.
Restart the slave server.
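A minimal slave option file might therefore look like this (assuming the slave is assigned ID 2 and the master uses ID 1):

```
[mysqld]
server-id=2
```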
26. Restore Backup on Slave
Use the backup file created on the master server to load
the databases into the slave server.
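Assuming the dump file from the earlier backup step is named master_backup.sql (a hypothetical name), the restore could be run on the slave as:

```shell
# Load the dump created on the master into the slave server.
# The CHANGE MASTER line embedded by --master-data records the
# binary log coordinates needed in the next step.
mysql -u root -p < master_backup.sql
```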
27. Set up Connection to Master
Specify the settings that the slave server will use to
connect to the master server and to determine which binary
log file to access. Launch the mysql client utility on the
slave server, and execute a CHANGE MASTER statement.
Syntax:
CHANGE MASTER TO
MASTER_HOST='<master host>',
MASTER_USER='<user account>',
MASTER_PASSWORD='<password>',
MASTER_LOG_FILE='<log file>',
MASTER_LOG_POS=<position>;
The slave server adds this information to the master.info file,
which is used when connecting to the master server.
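Filled in with the hypothetical host, account, and password used earlier, and with binary log coordinates like those written by --master-data into the backup file, the statement might look like:

```sql
-- Host, user, and password are illustrative placeholders;
-- the log file and position come from the CHANGE MASTER line
-- at the top of the backup file.
CHANGE MASTER TO
    MASTER_HOST='masterhost.example.com',
    MASTER_USER='repl_user',
    MASTER_PASSWORD='secret_pw',
    MASTER_LOG_FILE='mastsrv-bin.000201',
    MASTER_LOG_POS=64;
```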
28. Start Replication on Slave
The final step is to start the replication process on the
slave server. To do so, execute the following SQL
statement on the slave server:
START SLAVE;
The statement initiates the threads that connect
from the slave server to the master server.
29. Verifying Replication
Once replication is set up, update a table on the master
server and then confirm whether that change has been
replicated to the slave server.
To support administering replication, MySQL provides a
number of SQL statements that allow you to view information
about the replication environment or take a specific action.
MySQL supports statements for both the master server and
the slave server.
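A quick check along these lines could be (the table and database names are hypothetical):

```sql
-- On the master: make a change to a replicated table.
INSERT INTO sakila.test_table (msg) VALUES ('replication check');

-- On the slave: confirm the row arrived, then inspect the
-- replication threads.
SELECT * FROM sakila.test_table;
SHOW SLAVE STATUS\G
-- Slave_IO_Running and Slave_SQL_Running should both be 'Yes'.
```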
30. Verifying Replication
Managing the Master Server:
– RESET MASTER Statement
– SHOW MASTER STATUS Statement
– SHOW SLAVE HOSTS Statement
31. Verifying Replication
Managing the Slave Server:
– SHOW SLAVE HOSTS Statement
– CHANGE MASTER Statement
– RESET SLAVE Statement
– SHOW SLAVE STATUS Statement
– START SLAVE Statement
– STOP SLAVE Statement
32. Exercise
Synchronize the test database from the previous lesson.
– Configure a simulated slave server on a single machine
– Enable --log-bin on the master server
– Back up the test database on the master server with the
--master-data option
– Create a replication user for the slave
– Restore the test database on the slave server
– Enter the connection information needed to access the binary
log for synchronization
– Start the synchronization process