A32 Database Virtualization Technologies



Published in Technology
  • "Thanks for posting my presentation. Had a wonderful time coming to Tokyo and meeting everyone and seeing the city." – Kyle
  • 1. Database Virtualization Technologies
  • 2. Database Virtualization
    – Comes of age: CloneDB had 3 talks @ OOW
      • Clone Online in Seconds with CloneDB (EMC)
      • CloneDB with the Latest Generation of Database (Oracle)
      • Efficient Database Cloning with Clonedb (CERN)
    – Companies: Delphix, EMC, NetApp, VMware
    – Oracle 12c: new feature
    – What is it? Database virtualization is to the data tier what VMware is to the compute tier
    – Who needs it? Do you have a copy?
      • Support: do you have a copy?
      • Dot-com days: 6 copies; most shops have 8-10
      • OEM 10g: a cloning manager, but no clone technology
      • Consulting: if only I had a copy to test on
  • 3. Problem
    [Diagram: Production feeds a first copy, which in turn feeds Reports, QA and UAT, and Developers]
    • CERN – European Organization for Nuclear Research
    • 145 TB database
    • 75 TB growth each year
    • 10s of developers want copies
  • 4. Partial Solutions
    [Diagram: Production feeds a first copy for Reports, plus subset copies for QA/UAT and for Developers]
  • 5. Full copies: problematic, sometimes impossible
    • Space consuming
      – 100 devs x 10 TB production = 1 petabyte
      – That is roughly 100x the actual unique data
      – Unique data: 10 TB original + 2 TB of changed data = 12 TB total
    • Time consuming
      – Time to make copies
      – Meetings: system, storage, database, and network admins, plus manager coordination
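The space arithmetic on this slide is worth checking explicitly; a quick back-of-the-envelope calculation, using the illustrative numbers from the slide:

```python
# Back-of-the-envelope storage math from the slide (illustrative numbers).
TB = 1  # work in terabytes

devs = 100
prod_size = 10 * TB        # size of the production database
changed = 2 * TB           # total changed blocks across all copies

full_copies = devs * prod_size       # every developer gets a full copy
unique_data = prod_size + changed    # one shared copy + thin change layers

print(full_copies)                   # 1000 TB = 1 petabyte
print(unique_data)                   # 12 TB of actual unique data
print(full_copies // unique_data)    # ~83x (the slide rounds to ~100x)
```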
  • 6. Partial solutions create more problems
    • Shared copies -> slow projects down
      – Stale copies -> long delays for new copies
      – Incomplete results
      – Performance results may be wrong
    • Subset copies -> misleading and/or wrong
      – Incomplete results
      – Performance results may be wrong
  • 7. Solution: Clone and Share
    Instead of full copies of the same data (Copy 1 ... Copy 5), keep one read-only
    copy plus a thin layer of changes per clone (Clone 1 ... Clone 5).
    Each clone manages only its modified blocks; the read-only data stays in a shared snapshot.
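The clone-and-share idea can be modeled as one shared read-only block map plus a thin per-clone overlay of changed blocks. This toy sketch shows the concept only, not any vendor's implementation:

```python
# Toy model: one shared read-only snapshot plus a thin writable layer per clone.
class Clone:
    def __init__(self, base):
        self.base = base      # shared read-only block map (the snapshot)
        self.delta = {}       # only this clone's modified blocks

    def read(self, block):
        # Reads fall through to the shared snapshot unless overwritten here.
        return self.delta.get(block, self.base[block])

    def write(self, block, data):
        # Writes never touch the shared base; they land in the thin layer.
        self.delta[block] = data

base = {0: "hdr", 1: "rows-v1"}
c1, c2 = Clone(base), Clone(base)
c1.write(1, "rows-v2")
print(c1.read(1))   # rows-v2 (from c1's thin layer)
print(c2.read(1))   # rows-v1 (still the shared snapshot)
```

Space usage is the sum of the deltas, not the number of clones, which is why 100 clones of a 10 TB database can fit in a few terabytes.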
  • 8. Goal: Virtualization
    [Diagram: Production sends an initial copy, then incrementals and redo, to a shared store feeding the clones]
    Clones: fast to create, small footprint, can be created from any point in time
  • 9. Technologies:
    1. CloneDB (Oracle)
    2. ZFS Storage Appliance (Oracle)
    3. Delphix
    4. Data Director (VMware)
    5. EMC
    6. NetApp
    7. Oracle 12c Snap Manager Utility (SMU)
  • 10. Virtualization: Advantages
    • Space – clones share a single snapshot
      – 100 copies of 10 TB goes from 1 petabyte down to ~5 TB with compression
    • Speed
      – Eliminate coordination among system, storage, database, and network admins, plus managers
      – Creation time = time to start a database
    • Agility
  • 11. What do the technologies offer?
    1. Snapshot – all (some more limited than others)
    2. Roll snapshot forward – NetApp, Delphix, ZFS
    3. Clone – all (some more limited than others)
    4. Provision – Oracle 12c, Delphix
    5. Automate – Delphix
  • 12. Automation
    • Automated
      – Regular incremental backups, redo collection, retention windows
      – Presentation via FC LUNs, NFS mounts, iSCSI, etc.
      – Provisioning: assigning SID, parameters, file structure, recovery
    • User provisioning
      – Security and segregation: who can provision which databases, and how often
      – Chain of custody (auditing who has VDBs and when they are used)
      – Cycle of ownership: dev -> QA -> UAT -> production
    • Cloud ready
      – Hardware agnostic, multi-database support
    • Masking data
      – For QA, test, dev
    • Load balancing
      – Mobility between servers; provision to servers that have resources
  • 13. Types of solution (part I)
    • Hardware vendor versus software
      – Hardware lock-in: EMC, NetApp, Oracle ZFS Storage Appliance
      – Software: CloneDB, Delphix, Data Director
    • Database-specific versus general-purpose copies
      – Oracle specific: CloneDB
      – General purpose: EMC, NetApp, Oracle ZFS Appliance, Data Director
      – Multi-database specific: Delphix*
    • LUNs/file system versus datafiles
      – LUNs: EMC, NetApp, Data Director
      – Datafiles with scripting: Oracle ZFS Appliance
      – Datafiles: Delphix, CloneDB
    * Oracle, SQL Server, user data; other DBs coming
  • 14. Types of solution (part II)
    • Golden copy
      – Required: EMC, Data Director, CloneDB
      – Not required: Delphix, Oracle ZFS Appliance, NetApp
    • Clones of clones (snaps of snaps)
      – No: CloneDB; on EMC, none on the low end and one level on the high end
      – Limited: NetApp, Data Director
      – Unlimited: Delphix, Oracle ZFS Appliance
    • Performance issues
      – Data Director (linked clones)
  • 15. You should be cloning now
    If you have any of:
    • Oracle
    • Oracle ZFS Storage Appliance
    • NetApp
    • EMC
    • VMware Data Director
    Gives you:
    • Storage savings
    • More importantly: time savings and agility
    Ask yourself: How many copies of the database are made? What size are these databases? How often are the copies made?
  • 16. CloneDB (Tim Hall's walkthrough)
    • Source: RMAN backup as copy
    • Target:
      – Get a copy of the RMAN backup (local or NFS)
      – Create an NFS mount
      – Set up dNFS and /etc/oranfstab
      – initSOURCE.ora and the generated SQL scripts take their settings from the environment:
        export MASTER_COPY_DIR="/backuplocal"    # backup location, NFS or not
        export CLONE_FILE_CREATE_DEST="/clone"   # requires NFS mount
        export CLONEDB_NAME="clone"
        export ORACLE_SID="clone"
      – sqlplus / as sysdba @clone_crtdb.sql
        • startup nomount PFILE=/clone/initclone.ora
        • create a control file pointing at the backup location
        • dbms_dnfs.clonedb_renamefile('/backup/sysaux01.dbf', '/clone/ora_data_clone0.dbf');
        • alter database open resetlogs;
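The dbms_dnfs.clonedb_renamefile step has to be repeated for every datafile in the backup. A small helper like the following could generate those calls; the function name and paths are hypothetical, mirroring the slide's layout:

```python
def clonedb_rename_calls(backup_dir, clone_dir, datafiles):
    """Build one dbms_dnfs.clonedb_renamefile call per backup datafile.
    Directory names and the output file naming scheme are illustrative."""
    calls = []
    for i, df in enumerate(datafiles):
        src = f"{backup_dir}/{df}"                    # RMAN backup copy
        dst = f"{clone_dir}/ora_data_clone{i}.dbf"    # sparse clone file
        calls.append(f"exec dbms_dnfs.clonedb_renamefile('{src}', '{dst}');")
    return calls

for line in clonedb_rename_calls("/backup", "/clone",
                                 ["system01.dbf", "sysaux01.dbf", "undotbs01.dbf"]):
    print(line)
```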
  • 17. CloneDB: requires dNFS
    Three machines: 1. physical (source), 2. target, 3. NFS server holding the RMAN copy
    Problem: no versioning – all clones hang off a single read-only backup
    Space used (du -sk), backup files vs. sparse clone files:
      830264 /backup/sysaux01.dbf    vs.  760 /clone/ora_data_clone0.dbf
      727764 /backup/system01.dbf    vs.  188 /clone/ora_data_clone1.dbf
      425388 /backup/undotbs01.dbf   vs.  480 /clone/ora_data_clone2.dbf
  • 18. CloneDB: everything could be on NFS
    [Diagram: physical source -> RMAN copy (read only) on the NFS server; Target A runs Clones 1-3 and Target B runs Clones 4-6 off the same copy]
  • 19. CloneDB refresh: either destroy or duplicate
    [Diagram: same layout as slide 18, refreshed from a level 0 + level 1 backup]
  • 20. ZFS Appliance (cloning-solution-353626.pdf, 44 pages; only a partial solution)
    ZFS Appliance:
      – Create backup project db_master with 4 file systems: datafile, redo, archive, alerts
      – Create project db_clone with the same 4 file systems
    Source:
      – NFS-mount the backup locations from the ZFS Appliance
      – Back up with RMAN as copy, archive logs as well
    ZFS Appliance:
      – Log in to the appliance shell and snapshot the backup location:
        set pool=storage_pool_name
        select db_master
        snapshots snapshot snap_0
      – Then clone each file system on db_master onto db_clone
    Target host:
      – Mount the db_clone directories over NFS from the ZFS Appliance
      – Start up and recover the clone
  • 21. Oracle ZFS Appliance
    [Diagram: physical source -> RMAN copy to an NFS mount on the ZFS Storage Appliance; ZFS snapshot (instantaneous, read only) -> ZFS clone (instantaneous, read-write) -> Clone 1 on Target A]
  • 22. Oracle ZFS Appliance: RMAN incrementals
    [Diagram: Production sends a full backup then incremental backups via RMAN to the ZFS Storage Appliance; snapshots feed Clones 1-2 on Targets A and B and Clones 3-4 on Target C]
  • 23. Oracle ZFS Appliance: archived redo
    [Diagram: Production sends a full backup, incremental backups, and redo to the ZFS Storage Appliance; snapshots feed Clone 1 on Target A and Clone 2 on Target B]
  • 24. [Diagram: RMAN level 0 plus successive level 1 backups; each snapshot captures level 0 + level 1 + redo, and clones hang off the snapshots]
  • 25. [Diagram: same as slide 24, showing that an older level 1 backup becomes freeable once no snapshot needs it]
  • 26. ZFS
    • Prehistory: 1 disk = 1 file system
    • ~1990: volume managers – N disks : 1 FS
    • 2001-2005: ZFS development
    • 2005: ZFS ships, code open-sourced
    • 2006-2008: ZFS matures
    • 2008: ZFS is enterprise-class; ZFS Storage Appliance ships
      – Enables several ZFS-based startups, including Delphix, Nexenta, Joyent
    • 2010: ZFS development moves to Illumos
      – Headed by Delphix
  • 27. FS/Volume Model vs. Pooled Storage
    Traditional volumes:
      • Abstraction: virtual disk
      • Partition/volume for each FS
      • Grow/shrink by hand
      • Each FS has limited bandwidth
      • Storage is fragmented, stranded
    ZFS pooled storage:
      • Many file systems in one pool
      • No partitions to manage
      • Grow automatically
      • All bandwidth always available
      • All storage in the pool is shared
    [Diagram: one volume per FS on the left vs. multiple ZFS file systems over one storage pool on the right]
    Delphix Proprietary and Confidential
  • 28. Always consistent on disk (COW)
    1. Initial block tree
    2. COW some blocks
    3. COW indirect blocks
    4. Rewrite uberblock (atomic)
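A minimal sketch of steps 1-4: modified blocks and the indirect blocks above them are rewritten to new locations, and publishing the new root (the uberblock rewrite) is the single atomic step. This is a toy persistent tree, not ZFS code:

```python
# Persistent (copy-on-write) binary block tree: writes never modify blocks
# in place, so the old root keeps describing a fully consistent tree.
def cow_update(node, path, value):
    if not path:                    # leaf reached: this is the new data block
        return value
    left, right = node              # indirect block: pair of children
    if path[0] == 0:                # copy the indirect block on the way down
        return (cow_update(left, path[1:], value), right)
    return (left, cow_update(right, path[1:], value))

old_root = (("a", "b"), ("c", "d"))
new_root = cow_update(old_root, [1, 0], "C")   # rewrite one leaf
# "Rewriting the uberblock" is just publishing new_root atomically.
print(new_root)   # (('a', 'b'), ('C', 'd'))
print(old_root)   # unchanged: the previous tree is still intact on disk
```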
  • 29. Always consistent on disk (COW)
    [Diagram: initial block tree – uberblock at the top, then metadata, file systems, dnodes, files, and file data blocks]
  • 30. Bonus: Snapshots
    • At the end of a TX group, don't free COWed blocks
    • Clones are writable snapshots
    [Diagram: snapshot root and live (file system) root sharing blocks]
  • 31. Bonus: Constant-Time Snapshots
    • A snapshot younger than the block exists => keep the block
    • No younger snapshots => free it
    [Diagram: snapshot root, live (file system) root, and the ZIL – sync writes go out to the intent log immediately; data and metadata are batch-written later]
  • 32. ZFS Data Relationships
    • Snapshot: a read-only point-in-time copy of a file system
      – Instantaneous
      – Unlimited
      – No additional space
    • Clone: a writable copy of a snapshot
      – Instantaneous
      – Unlimited
      – No additional space
    • Send/receive: replication
      – Can send a full snapshot
      – Can send incremental changes between snapshots
      – Incremental send/receive quickly locates modified blocks: it traverses the new snapshot's blocks, comparing each block's birth time with the source snapshot's birth time
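The birth-time comparison behind incremental send/receive can be sketched like this (a simplified model; real ZFS compares birth transaction groups while walking the block tree, pruning whole subtrees whose birth txg is old enough):

```python
# Each block records the transaction group (txg) in which it was written.
# An incremental send from snapshot A (taken at txg_a) to snapshot B only
# needs the blocks of B whose birth txg is newer than txg_a.
def incremental_send(snapshot_b, txg_a):
    return {blk: (data, birth)
            for blk, (data, birth) in snapshot_b.items()
            if birth > txg_a}

snap_b = {
    0: ("hdr",  5),    # untouched since txg 5
    1: ("rows", 12),   # rewritten in txg 12
    2: ("idx",  14),   # rewritten in txg 14
}
delta = incremental_send(snap_b, txg_a=10)
print(sorted(delta))   # [1, 2] -> only blocks modified after txg 10 are shipped
```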
  • 33. ZIL (ZFS Intent Log) Overview
    • The ZIL is per file system
      – Responsible for honoring synchronous write semantics
    • Creates log records for events that change the file system (write, create, etc.)
    • Log records carry enough information to replay them
    • Log records are stored in memory until:
      – the transaction group commits, or
      – a synchronous write requirement is encountered (i.e. fsync() or O_DSYNC)
    • In the event of power failure or panic, the log records are replayed
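The ZIL lifecycle above (buffer records in memory, force them to the log on fsync()/O_DSYNC, replay the survivors after a crash) can be sketched as a toy model, not the real ZIL code:

```python
class Zil:
    """Toy intent log: records buffer in memory, go stable on a sync
    event, and only stable records can be replayed after a crash."""
    def __init__(self):
        self.pending = []   # log records still only in memory
        self.stable = []    # records persisted to the on-disk intent log

    def log(self, record):
        self.pending.append(record)

    def fsync(self):
        # Synchronous write semantics: push pending records to the log now.
        # (A transaction group commit would drain them the same way.)
        self.stable.extend(self.pending)
        self.pending.clear()

    def replay(self):
        # After power failure / panic, only records that reached the log survive.
        return list(self.stable)

zil = Zil()
zil.log(("write", "file1", b"abc"))
zil.fsync()                     # fsync()/O_DSYNC forces this record out
zil.log(("create", "file2"))    # still only in memory: lost on a crash
print(zil.replay())             # [('write', 'file1', b'abc')]
```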
  • 34. ZFS at Delphix
    • Compression: typically ~2-4x
    • Block sharing
      – Via clones: faster and cheaper than deduplication, which is too slow and has too much overhead
    • Link source DB
      – Create new file systems for datafiles, archive, etc.
      – Set the recordsize of the datafile FS to match the DB
    • Snapshot source
      – Take a ZFS snapshot of the datafile FS
      – Retain relevant log files in the archive FS
    • Clone/provision a VDB
      – Create a clone of the source's datafile snapshot
      – Shares the dSource's blocks; no additional space used
      – New data takes space
    • SAN provides redundancy; no mirror/raidz in ZFS (except the root pool)
  • 35. ZFS anti-patterns
    • 128K record size for data blocks
    • Pools more than 80% full
    • Mixed-size LUNs, with some full
      – Delphix has improved this in the Delphix appliance
    • Scrubs run in the middle of the business day
    ZFS improvements at Delphix:
    • Single-copy ARC
    • Multi-threaded space map compression
    • NPM mode
    • Fast snapshot delete (100x)
  • 36. Delphix
    Source: RMAN backup sets of the datafiles are sent to Delphix
      – Allows control over the send
      – 2-4x compression
      – Unused blocks are not sent
    Delphix:
      – Rebuilds the datafiles, including unused blocks
      – Compresses datafiles; zero regions compress highly
    An analysis (link fragment: pression-on-the-sun-storage-7000/) shows lzjb compression comes at no performance cost
  • 37. Delphix
    [Diagram: Production sends a full backup, then incrementals and redo via RMAN to the Delphix Appliance; snapshots feed Clone 1 on Target A and Clone 2 on Target B; the original full backup becomes freeable]
  • 38. Delphix
  • 39. Data Director: Linked Clones (VMware)
    • Performance issues
      – "Having several linked clones can affect the performance of the source database and the performance of the linked clones."
      – "If you are focused on performance, you should prefer a full clone over a linked clone."
      – Performance worsens with more snapshots
      – Can only take 16 snapshots
      – Performance worsens with more concurrent users
    • Golden-copy issue
      – The original copy always has to exist
    • x86 hosts only
      – Linux
      – OpenSolaris
  • 40. NetApp
    [Diagram: production database LUNs on a NetApp filer; file-system-level snapshots on a second filer feed Clone 1 (Target A), Clone 2 (Target B), and Clones 3-4 (Target C)]
  • 41. NetApp
    [Diagram: as slide 40, with snapshots rolled forward on the second filer feeding Clone 1 (Target A) and Clone 2 (Target B)]
  • 42. NetApp
    [Diagram: as slide 41, starting from the physical database's LUNs on the NetApp filer]
  • 43. NetApp Limits
    • Limit of 255 snapshots
    • Snaps are limited to the same aggregate (storage pool)
    • Aggregates have size limits depending on the controller:

      Controller                   Size limit
      32-bit controllers           16 TB
      FAS3140/FAS3040/FAS3050      40 TB
      FAS3160/FAS3070              50 TB
      FAS6040/FAS3170              70 TB
      FAS6080                      100 TB

    • All sources have to be in the same aggregate to be snapshot together
  • 44. EMC
    • Point of view: DR, backup, and testing off a full copy
      – Create a BCV (a full copy)
      – Promote the BCV to make it accessible
      – Take snaps of the BCV (limit 32?)
      – Zone and mask the LUN to the target host
      – Full copy of the disk; now recover (may have to rename the LUNs)
    • "Golden copy"
      – Not a pointer-based file system like NetApp and ZFS
      – EMC uses a save area: the amount of space reserved for changes to the snapshot
      – No time flow, i.e. the initial snapshot has to stay
      – To get rid of the "golden copy" you have to recreate it with the new changes
    • Snapshots
      – Can't take a snap of a snap on the low end
      – Can take only one level of snap-of-a-snap on the high end
  • 45. Oracle 12c
    • Oracle Snap Manager Utility for the ZFS Appliance
    • Pay-for option
    • Requires the source database to be hosted on a ZFS Appliance
    • Principally a manual GUI: a utility to snapshot source databases and provision virtual databases
    • No concept of time flow
      – Virtual databases have to be provisioned from snapshots
  • 46. Conclusion
    • EMC TimeFinder, VMware Data Director
      – Offer limited ability to benefit from cloning
    • CloneDB
      – Fast, easy way to create many clones of the same copy
      – Limited to dNFS and systems with sparse-file capability
      – Suffers the golden-image problem
    • NetApp FlexClone, Snap Manager for Oracle
      – Offers a rolling solution
      – Limited database awareness (file system clones)
      – Limited snapshots
      – Vendor lock-in
    • Oracle ZFS Appliance
      – Vendor lock-in
    • Delphix
      – Agility: automation, unlimited snapshots and clones, multi-database
  • 47. Matrix of features

                                   CloneDB  ZFS Appliance  Delphix    Data Director  NetApp  EMC
    Time flow capable              No       Yes            Yes        No             Yes     No
    Hardware agnostic              Yes      No             Yes        Yes            No      No
    Snapshots per FS or LUN        No       Unlimited      Unlimited  31             255     16 (96 read only)
    Snapshots of snapshots         No       Unlimited      Unlimited  30             255     1
    Automated snapshots            No       No             Yes        No             Yes     No
    Automated provisioning         No       No             Yes        No             No      No
    Any DB host O/S (Windows?)     Yes*     Yes            Yes        No             Yes     Yes
  • 48. Appendix
    • CloneDB
    • ZFS
    • ZFS Appliance – solution-353626.pdf
    • Data Director – &externalId=1015180
    • EMC – 45992/h8728-snapsure-oracle-dnfs-wp.pdf
    • NetApp
      – RAC provisioning example: flexclone-rac-database/
      – basic info
  • 49. END