Case Study: Replication for PCMS

Dated: 19th July 2009
By: Shahzad Sarwar
To: Related Project Managers/Consultants, Client

Case Study:
To synchronize data across the different branch offices that run the Comsoft application PCMS, using SQL Server replication.


  1. Replication of PCMS Database
     A cross-referenced white paper
     Dated: 19th July 2009
     By: Shahzad Sarwar, PSE, Comsoft
     To: Related Project Managers/Consultants, Client
  2. Table of Contents
  3. Case Study:
     To synchronize data across the different branch offices that run the Comsoft application PCMS, using SQL Server replication.

     Environment:
     • MS .NET 2.0
     • MS SQL Server 2005
     • WAN-connected MS SQL Servers at the different branch offices

     Business problem:
     The client has regional offices or entities that collect and process data that must be sent to a central location. For example:
     • Estimation/quotation/job data can be "rolled up" or consolidated from a number of servers at local warehouses/parties into a central server at corporate headquarters.
     • Information from autonomous business divisions within a company can be sent to a central server.

     In some cases, data is also sent from the central site to the remote sites. This data is typically read-only at the remote sites, such as the base/administration tables that are updated only at the central site.

     It is useful to divide replication into two broad categories: replicating data in a server-to-server environment and replicating data between a server and clients. This document describes scenarios that involve replicating data between servers. Data is typically replicated between servers to support the following applications and requirements:
  4. • Improving scalability and availability. Maintaining continuously updated copies of data allows read activity to be scaled across multiple servers. The redundancy that results from maintaining multiple copies of the same data is crucial during planned and unplanned system maintenance.
     • Data warehousing and reporting. Data warehouse and reporting servers often use data from online transaction processing (OLTP) servers. Use replication to move data between OLTP servers and reporting and decision-support systems.
     • Integrating data from multiple sites. Data is often "rolled up" from remote offices and consolidated at a central office. Similarly, data can be replicated out to remote offices.
     • Integrating heterogeneous data. Some applications depend on data being sent to or from databases other than SQL Server. Use replication to integrate data from non-SQL Server databases.
     • Offloading batch processing. Batch operations are often too resource-intensive to run on an OLTP server. Use replication to offload processing to a dedicated batch-processing server.

     Common Requirements for This Scenario
     Applications for regional offices typically have the following requirements, which an appropriate replication solution must address:
     • The system must maintain transactional consistency.
     • The system should have low latency: updates at the remote sites must reach the central site quickly.
     • The system should have high throughput: it should handle the replication of a large number of transactions.
     • Replication processing should require minimal overhead at the remote sites.
     • Data changes might flow in both directions: in some cases, read-only data is sent to remote sites, in addition to data being consolidated from the remote sites to the central site.
     • The data required at the central site might be a subset of the data available at each remote site.
     Technical Architecture of the Solution:

     Replication Publishing Model Overview
     Microsoft SQL Server uses a publishing-industry metaphor to describe the components of the replication system: the Publisher, Subscribers, publications and articles, and subscriptions.
     • In this model, each remote site is a Publisher. Some or all of the data at the remote site is included in the publication, with each table of data being an article (articles can also be other database objects, such as stored procedures). The central site is a Subscriber to these publications, receiving schema and data as subscriptions.
     • The central site also serves as a Publisher for the data that is sent to the remote sites. Each remote site subscribes to the publication from the central site.
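The publishing model described above can be sketched in T-SQL using the standard SQL Server 2005 replication stored procedures. This is a minimal illustration, not the PCMS deployment script: the database name PCMS_Branch, publication name PCMS_BranchPub, and table Quotation are hypothetical placeholders, and the script assumes a Distributor has already been configured for the server.

```sql
-- Run at a remote-site Publisher (assumes the Distributor is already configured).
-- 'PCMS_Branch', 'PCMS_BranchPub', and 'Quotation' are hypothetical names.

-- Enable the branch database for transactional publishing.
EXEC sp_replicationdboption
    @dbname  = N'PCMS_Branch',
    @optname = N'publish',
    @value   = N'true';

-- Create a transactional publication for the branch data.
USE PCMS_Branch;
EXEC sp_addpublication
    @publication = N'PCMS_BranchPub',
    @status      = N'active',
    @repl_freq   = N'continuous',  -- transactional, not snapshot-only
    @sync_method = N'concurrent';  -- allow activity during snapshot generation

-- Publish one table as an article; each published table is one article.
EXEC sp_addarticle
    @publication   = N'PCMS_BranchPub',
    @article       = N'Quotation',
    @source_owner  = N'dbo',
    @source_object = N'Quotation';
```

In practice one sp_addarticle call would be issued per published PCMS table; the same steps can also be performed through the New Publication Wizard in SQL Server Management Studio.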
  5. For more information on the components of the system, see Replication Publishing Model Overview.

     Microsoft SQL Server 2005 provides the following types of replication for use in distributed applications:
     • Transactional replication
     • Merge replication
     • Snapshot replication

     Our scenario is best implemented with transactional replication, which is well suited to the requirements outlined in the previous section. By design, transactional replication addresses the principal requirements for this scenario:
     • Transactional consistency
     • Low latency
     • High throughput
     • Minimal overhead

     Transactional Replication Overview
     Transactional replication typically starts with a snapshot of the publication database objects and data. As soon as the initial snapshot is taken, subsequent data changes and schema modifications made at the Publisher are usually delivered to the Subscriber as they occur (in near real time). The data changes are applied to the Subscriber in the same order and within the same transaction boundaries as they occurred at the Publisher; therefore, within a publication, transactional consistency is guaranteed.

     Transactional replication is implemented by the SQL Server Snapshot Agent, Log Reader Agent, and Distribution Agent. The Snapshot Agent prepares snapshot files containing the schema and data of published tables and database objects, stores the files in the snapshot folder, and records synchronization jobs in the distribution database on the Distributor. The Log Reader Agent monitors the transaction log of each database configured for transactional replication and copies the transactions marked for replication from the transaction log into the distribution database, which acts as a reliable store-and-forward queue.

  6. The Distribution Agent copies the initial snapshot files from the snapshot folder, and the transactions held in the distribution database tables, to Subscribers. Incremental changes made at the Publisher flow to Subscribers according to the schedule of the Distribution Agent, which can run continuously for minimal latency, or at scheduled intervals. Because changes to the data must be made at the Publisher (when transactional replication is used without the immediate updating or queued updating options), update conflicts are avoided. Ultimately, all Subscribers reach the same values as the Publisher.
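To complete the picture, the central site is registered as a Subscriber and a continuously running Distribution Agent job is created, matching the low-latency requirement above. Again a hedged sketch: the server name CENTRAL-SQL and database names are hypothetical placeholders, while sp_addsubscription and sp_addpushsubscription_agent are the standard SQL Server 2005 procedures for push subscriptions.

```sql
-- Run at the remote-site Publisher to push the publication to the central site.
-- 'CENTRAL-SQL', 'PCMS_Branch', and 'PCMS_Central' are hypothetical names.
USE PCMS_Branch;

-- Register the central server as a Subscriber to the branch publication.
EXEC sp_addsubscription
    @publication       = N'PCMS_BranchPub',
    @subscriber        = N'CENTRAL-SQL',
    @destination_db    = N'PCMS_Central',
    @subscription_type = N'push';

-- Create the Distribution Agent job for this subscription.
-- @frequency_type = 64 means run continuously, keeping latency minimal.
EXEC sp_addpushsubscription_agent
    @publication    = N'PCMS_BranchPub',
    @subscriber     = N'CENTRAL-SQL',
    @subscriber_db  = N'PCMS_Central',
    @frequency_type = 64;
```

A push subscription keeps agent administration at the Publisher/Distributor side, which suits this scenario because the remote sites should carry minimal replication overhead.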
  7. Implementation Plan:
     • Test-case requirements:
        - 2 deployments of PCMS with SQL Server 2005
        - 3 days of replication monitoring
     • Query the client about the production environment setup.
     • Transactional replication implementation and monitoring for 1 week.
     • Training for the client's database administrators on replication monitoring.
     • User guide / monitoring guide for the client.

     Reference:
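For the monitoring tasks in the plan above, SQL Server 2005 introduced tracer tokens, which measure end-to-end replication latency through the Log Reader and Distribution Agents; they can supplement Replication Monitor during the 3-day and 1-week monitoring windows. The database and publication names below are the same hypothetical placeholders used earlier, not PCMS objects.

```sql
-- Run at the Publisher, in the published database, to measure latency.
-- A tracer token is written to the transaction log and timed as it moves
-- from Publisher to Distributor to Subscriber.
USE PCMS_Branch;  -- hypothetical branch database name

DECLARE @tokenId int;
EXEC sp_posttracertoken
    @publication     = N'PCMS_BranchPub',
    @tracer_token_id = @tokenId OUTPUT;

-- Later, report Publisher-to-Distributor and Distributor-to-Subscriber
-- latency for that token.
EXEC sp_helptracertokenhistory
    @publication = N'PCMS_BranchPub',
    @tracer_id   = @tokenId;
```

Posting a token once or twice a day during the monitoring period gives the client's database administrators a simple, repeatable latency check they can keep using after handover.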