How we set up Rsync-powered Incremental Backups

  • TTP Conference 2011 – Celebrating 25 years of supporting Novell Solutions
  • According to the man page, --link-dest hard links to files in the link destination when the file in the backup is the same as the source (i.e. identical files).
  • We set our link dest to current, a symbolic link always pointing to the current backup, then perform the backup. Next we point our current link at the latest backup and delete backups older than 30 days.
  • This is how we backed up a NetWare server using NCP.
  • ..and a Windows one, using CIFS.
  • This is a screenshot of alpine-backup's code. As you can see, it is quite similar to our bash scripts, but adds error checking and is written in PHP. If you want to have a look at alpine-backup, the URLs are on the bottom.
  • This is what our backup directory looks like. As you can see, only the first backup is large; the rest are quite small, only about 200MB each.
  • This is an example of the configuration file format alpine-backup uses. It is an INI file. (I used this format because PHP has a built-in parser for it.)
  • This is what our mount drivers look like. This particular one actually stops and restarts services for the backup. (We use it for iPrint backups.)
  • Here is a screenshot of the mail we get from cron. As you can see from the summary, the backups are all successful!
  • This is a diagram of Selkirk College's backup system. All our servers are backed up in at least two locations. As you can see, there are two islands: the Nelson-centric one and the Castlegar-centric one, because there are no fibre links between Castlegar and Nelson.
  • ...about half. (We sometimes use different parameters for our main and offsite backups.)
  • At this point I'd like to show you our first demo: How no-change syncing works. QNAP doing a Live Backup
  • QNAP in Slave Mode
  • Well, this does not explain much, let's have a look at snapup.sh
  • The first two commands SSH into our GroupWise server and use the perl scripts to create and activate a snapshot. Then we mount the snapshot and perform the backup via SSH.
  • Now, I'm going to move on to our next demo. Live Post Office
  • We start up the POA using a backup.
  • ..and now: some old mail.
  • alp-dbrestore does not like dashes in DB Names.
  • Next, I'm going to show you how we restore our MySQL backups. First, we create a test article.
  • Then, we create a new DB with an old version of the site. (Avoid dashes in the database name.)
  • We switch Drupal to the restore DB.
  • The article has now been rolled back.
  • This is a screenshot of our Oracle backups in action. Since we're running early, I'm going to show you what alpine-backup's code is like.
  • Any Questions? TTP Conference 2011 – Celebrating 25 years of supporting Novell Solutions

Presentation Transcript

    • TTP Conference 2011
      Celebrating 25 years of supporting Novell Solutions
  • Rsync Backups
      How we set up rsync-powered incremental backups for Linux.
    Nicholas and Nick Paun
  • The Plan
    • Because our obsolete tape backups were very slow, unreliable and hard to manage, we decided to move to hard disks.
    • We needed to be able to keep the backups for at least 30 days.
    • The only way we could do this was by using incremental backups.
    • After ruling out large, expensive proprietary systems, we chose rsync.
  • About rsync
    • Using --link-dest, rsync can make incremental backups look like full ones without wasting any disk space, by using hard links. This makes for easy restoring!
    • Because of rsync's algorithm we do not transmit the full data over the network (except for the first time).
    • Rsync is very reliable and rarely breaks due to file changes.
  • Version 1: The Scripts
    • Our scripts were based on some heavily tweaked examples on the rsync site.
    • These simple scripts had no error checking, but they worked very well!
    • We needed to back up NetWare servers too, so we used NCP mounting, which was very slow.
  • The Scripts

      echo "Here we rsync DATA volume on CASADMLX server to /bkup/casadmlx/DATA/ folder on CASARKLX server"
      rsync -av --link-dest=/bkup/casadmlx/DATA/current root@casadmlx:/media/nss/DATA/ /bkup/casadmlx/DATA/$date
      rm /bkup/casadmlx/DATA/current
      ln -s /bkup/casadmlx/DATA/$date /bkup/casadmlx/DATA/current
      rm -r /bkup/casadmlx/DATA/`date --date='30 days ago' +%F`
  • The Scripts

      echo "Here we rsync DATA volume on CASADM server to /bkup/casadm/DATA/ folder on CASARKLX server"
      ncpmount -S casadm -A casadm -U .bkup.cs.users.selkirk -P provo /mnt/casadm/
      rsync -av --link-dest=/bkup/casadm/DATA/current /mnt/casadm/DATA/ /bkup/casadm/DATA/$date
      rm /bkup/casadm/DATA/current
      ln -s /bkup/casadm/DATA/$date /bkup/casadm/DATA/current
      rm -r /bkup/casadm/DATA/`date --date='30 days ago' +%F`
      umount /mnt/casadm/
  • The Scripts

      echo "Here we rsync INSTRUCTORS volume on FS1 server to /bkup/fs1/INSTRUCTORS/ folder on CASARKLX server"
      mount.cifs //fs1/INSTRUCTORS /mnt/fs1/ -o user=Administrator,pass=provo
      rsync -av -e 'ssh -p7774' --link-dest=/bkup/fs1/INSTRUCTORS/current /mnt/fs1/INSTRUCTORS/ /bkup/fs1/INSTRUCTORS/$date
      rm /bkup/fs1/INSTRUCTORS/current
      ln -s /bkup/fs1/INSTRUCTORS/$date /bkup/fs1/INSTRUCTORS/current
      rm -r /bkup/fs1/INSTRUCTORS/`date --date='30 days ago' +%F`
      umount /mnt/fs1/
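Stripped of the rsync transfer itself, the rotation logic the three scripts share boils down to a few coreutils commands. A sketch against a throwaway temp directory standing in for /bkup/<server>/<volume>:

```shell
set -e
bkup=$(mktemp -d)        # stand-in for /bkup/<server>/<volume>
today=$(date +%F)
mkdir "$bkup/$today"
# (the real scripts run rsync -av --link-dest=$bkup/current ... $bkup/$today here)
# Re-point the "current" symlink at tonight's snapshot:
rm -f "$bkup/current"
ln -s "$bkup/$today" "$bkup/current"
# Prune: delete only the snapshot directory named for the exact date 30 days
# back -- which is why a missed night leaves a stray directory behind.
rm -rf "$bkup/$(date --date='30 days ago' +%F)"
target=$(readlink "$bkup/current")
```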
  • Version 2: alpine-backup
    • Eventually we decided that we needed a proper backup system.
    • We wanted to have good error checking, replication, reporting and central control.
    • We also wanted to actually understand what was going on and we wanted a system that would fit our exact needs. So we wrote our own.
  • Version 2: alpine-backup
    • After learning PHP programming and experimenting for a while, I wrote alpine-backup, our current backup solution.
    • We decided to implement only the bare minimum that we needed.
    • The result was a 100-line script that supports custom mount drivers and performs an incremental backup using rsync.
  • alpine-backup

      function backup($opt, $src, $dest, $date, $name) {
          $_opt = OPTIONS;
          passthru("rsync $_opt $opt --link-dest $dest/current $src $dest/$date", $return);
          if ($return != 0) {
              echo("* Backup Failure ($name): rsync did not finish successfully.\n");
              die(1);
          }
          unlink("$dest/current");
          symlink("$dest/$date", "$dest/current");
          echo("* Backup Successful: $name\n");
      }
      • Website: http://www.icebergsystems.ca/
      • Google Code: http://alpine-backup.googlecode.com
  • Our backup directory
  • Configuration files

      ;; silvoes2lx backup config
      ;; September 2010 -- Nicholas Paun
      [APPS]
      source = "root@silvoes2lx:/media/nss/APPS/"
      dest = "/bkup/silvoes2lx/APPS"
      opt = "-e 'ssh -p7774'"
      delete_after = "+30"

      [HOME]
      source = "root@silvoes2lx:/media/nss/HOME/"
      dest = "/bkup/silvoes2lx/HOME"
      opt = "-e 'ssh -p7774'"
      delete_after = "+30"
  • Mount Drivers

      function mountfs($user, $server, $command) {
          $return = passthru("ssh $user@$server '$command stop'");
          return(!$return);
      }

      function umountfs($user, $server, $command) {
          $return = passthru("ssh $user@$server '$command start'");
          return(!$return);
      }
  • Logging
  • Experiments
    • We set up offsite mirroring
    • Experimented with rsync daemons
    • We used batch files for replication
    • We ran rsync on Cygwin (too slow)
    • Backed up Windows servers via CIFS mounting
    • We set up GroupWise backups
    • We also started iPrint backups
  • Syncing using Batch Files
    • Batch files allow you to send the same file list to multiple backup servers at once.
      • i.e. mirroring, replication, etc.
    • We used them to sync our backups between 2 campuses.
    • This was our first attempt at offsite backups.
    • It had the drawback of losing sync if files had changed in between batches.
  • Syncing using Batch Files
    • The first command uses --write-batch to create a batch file.
    • The second one uses --read-batch to perform the backup.
    • rsync -av --write-batch=SCRATCH.batch --link-dest=/bkup/caslablx/SCRATCH/current /mnt/caslablx/SCRATCH/ /bkup/caslablx/SCRATCH/$date
    • ssh n10arklx rsync -av --read-batch=- --link-dest=/bkup/caslablx/SCRATCH/current /bkup/caslablx/SCRATCH/$date <SCRATCH.batch
  • Offsite Backups
    • The college bought 5 QNAPs for offsite backups.
    • The QNAPs are NAS boxes running some mystery-meat Linux, which was the source of many headaches.
    • Initially, we added more configuration files that would copy the backups from the main backup server to the offsite one, but this grew quite complicated.
  • Selkirk College Backup System
  • No-change Syncing
    • So, we came up with a better idea: the configuration files would not be modified, and would still point at the real backup server.
    • But, when a QNAP reads the configuration files, it runs in slave mode and pulls the backups from the backup server instead of the live server.
    • This change removed the need for half of our configuration files.
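As a hypothetical illustration of the idea (alpine-backup's actual slave-mode code is not shown in these slides): the config entry keeps its usual source and dest fields, and slave mode simply derives its pull source from dest plus the main backup server's host name, here casarklx:

```shell
# Values as they appear in an ordinary config entry (see the INI example):
source="root@silvoes2lx:/media/nss/APPS/"   # live server -- ignored in slave mode
dest="/bkup/silvoes2lx/APPS"
# In slave mode the QNAP builds its pull source from dest instead, fetching
# the finished backup tree from the main backup server rather than the live one:
slave_src="casarklx:$dest/"
echo "$slave_src"
```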
  • Demo: No-change Syncing
  • Demo: No-change Syncing
  • GroupWise Backups
    • Because dbcopy was slow and inefficient for backups, we decided to snapshot our GroupWise volumes.
    • This means that we can apply rsync's deduplication (--link-dest) to GroupWise too.
    • We save 90-95% disk space and it is 10 times faster than a full backup.
  • GroupWise Backups
    • As an added benefit, GroupWise and FS backups can now be handled by the same tool.
    • To snapshot NSS volumes we used Dean Giles' perl scripts (found on Cool Solutions).
    • If we ran GroupWise on an LVM volume, we could use LVM snapshots instead.
  • GroupWise Mount Driver

      function mountfs($user, $server, $command) {
          $return = passthru("snapup.sh");
          return(!$return);
      }

      function umountfs($user, $server, $command) {
          $return = passthru("snapdn.sh");
          return(!$return);
      }
  • Snapup.sh
      ssh casgrplx perl /usr/local/bin/AddPoolSnapshot.pl /dev/evms/CASGRPLXPOOL sdb 200000000 CASGRPLXPOOL_S1
      ssh casgrplx perl /usr/local/bin/ActivatePoolSnapshot.pl CASGRPLXPOOL_S1
      ssh casgrplx mount GRPWISE_SV /media/nss/GRPWISE_SV -t nssvol -o rw,name=GRPWISE_SV
  • Demo: GroupWise PO Restore
  • Demo: GroupWise PO Restore
  • Demo: GroupWise PO Restore
  • MySQL Backups
    • We wanted to back up a database incrementally, but still use native dumping tools.
    • So we added a MySQL agent to our tools.
    • To gain maximum space savings, we split the SQL dump into one file per table.
    • Now we can use rsync's --link-dest feature on our DBs too.
  • MySQL Backups
    • These are the commands we use to back up databases:

      mysql -p password db -e 'show tables' | grep -v 'Tables_in_'        # list all tables in the database
      mysqldump -p password --skip-dump-date db $table > $dumpdir/$table  # dump each table separately (no timestamp, for linking)
      find $dumpdir -exec touch -d 2010-04-01 {} \;                       # touch dumps to a consistent date (for linking)
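The --skip-dump-date and touch steps are what make --link-dest effective here: rsync's default quick check compares size and modification time, so an unchanged table's dump must be byte-identical and carry the same mtime from night to night. A small sketch of the pinning step (the table name is made up):

```shell
set -e
dumpdir=$(mktemp -d)
# Without --skip-dump-date, mysqldump embeds a timestamp comment, so two
# dumps of an unchanged table would never be byte-identical.
printf 'CREATE TABLE articles (...);\n' > "$dumpdir/articles.sql"
# Pinning every dump to one fixed mtime lets rsync's size+mtime quick check
# treat repeated dumps as unchanged, so --link-dest hard-links instead of copying.
touch -d 2010-04-01 "$dumpdir/articles.sql"
mtime=$(date -r "$dumpdir/articles.sql" +%F)
```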
  • Demo: Drupal DB Restore
  • Demo: Drupal DB Restore
  • Demo: Drupal DB Restore
  • Demo: Drupal DB Restore
  • Oracle Backups
    • One day, we got a user request to set up Oracle backups on a Windows server.
    • We had never seen an Oracle database before, so we did some searching and figured out how to use expdp.
    • First we logged in to the database and set up a directory handle:
    • CREATE DIRECTORY sgrcdev AS '/g/sgrcdev';
  • Oracle Backups
    • Next, we use cygwin to run expdp on the server:
    • expdp system/$passwd@$db directory=$db full=y dumpfile=$db.dmp logfile=$db.log
    • … And then, we just copy the backups over and reset the dump directory.
  • Oracle Backups
  • Conclusion
    • In 2007, we didn't have a real backup, even though we had bought an enterprise solution.
    • Now, we can control all of our backups (FS, GroupWise and Database) from one place, using one lightweight tool.
    • We save a lot of disk space by using rsync.
    • Most importantly, we achieved peace of mind.
    • TTP Conference 2011
      Celebrating 25 years of supporting Novell Solutions