Transcript

  • 1. Moving to OpenEdge® 10.1B. Change is difficult, but moving to OpenEdge isn’t! Tom Harris, Director, OpenEdge RDBMS Technology, OpenEdge Product Management
  • 2. Agenda
    • General Migration Strategy
      • The Fast and Easy Upgrade
      • Physical Upgrade
    • Tuning Opportunity
  • 3. General Migration Strategy
    • Preparation
      • Truncate BI, Disable AI, 2PC, and Replication
      • Backup the database
      • Install OpenEdge
      • Don’t need to uninstall V9
    • Upgrade
      • Upgrade DB to OpenEdge 10
      • Do your backups!
      • Recompile/Re-deploy your ABL code
    • Run your application - no ABL changes (usually)
    First detail the plan, review the plan, and test the plan, THEN execute… (Prepare → Install → Upgrade → Run). A command-line sketch of these steps follows.
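    As a rough illustration only, the preparation and upgrade steps above might look like the following. This is a minimal sketch assuming a hypothetical database named mydb, not a definitive runbook:

      proutil mydb -C truncate bi        # truncate the before-image file
      rfutil mydb -C aimage end          # disable after-imaging
      proutil mydb -C 2phase end         # disable two-phase commit
      probkup mydb mydb_pre10.bck        # full offline backup of the V9 database
      # ...install OpenEdge 10 (no need to uninstall V9), then:
      proutil mydb -C conv910            # upgrade the schema to OpenEdge 10
      probkup mydb mydb_post10.bck       # backup again immediately after conversion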
  • 4. Agenda
    • General Migration Strategy
      • The Fast and Easy Upgrade
      • Physical Upgrade
    • Tuning Opportunity
  • 5. Database Server Side Conversion
    • Preparation
      • Truncate BI, disable AI, 2PC, Replication (V9)
      • Backup database (V9)
      • Validate backup
      • Store backup away from database
    • Install OpenEdge 10 on server machine
      • And everywhere else!
    • Rebuild and re-deploy application (if need be)
      • Remote V9 to OpenEdge 10 disallowed
    Prepare Install
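    One way to validate the V9 backup called for above; a sketch assuming the backup was written to mydb_pre10.bck:

      prorest mydb mydb_pre10.bck -vp    # partial verify: confirms the backup is readable
      # prorest ... -vf instead does a full block-for-block verify against the database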
  • 6. Database Conversion Steps
    • Run conversion utility
    • proutil <db> -C conv910 -B 512
      • Conversion runs in five minutes or less
    • Basically just a schema upgrade
    • Backup your new database
    • Re-start application
    Run
  • 7. Done - 5 minutes later
    • You are now “Good To Go” with OpenEdge
    • No physical changes to existing user data
      • Data remains in Type I storage areas
      • Data fragmentation exists as before
    • Optimize physical layout when time permits...
    or…
  • 8. Agenda
    • General Migration Strategy
      • The Fast and Easy Upgrade
      • Physical Upgrade
    • Tuning Opportunity
  • 9. Why Do A Physical Database Change?
    • Performance, Scalability & Maintenance
    • Take advantage of new features
    • No adverse application effects
      • Physical reorg does NOT change the application
      • Definitions are abstracted from the language by an internal mapping layer
      • Different physical deployments can run with the same compiled r-code
    Get your database in good physical shape… Upgrade
  • 10. Database Physical Change Overview
    • Preparation (same as before)
      • Truncate BI, disable AI, backup, validate, install…
    • Before Physical Reorg
      • Upgrade database to OpenEdge 10
        • conversion utility
        • prostrct create (a must if changing block size)
    • Physical Updates (no r-code changes required)
      • Separate schema from user data
      • Create new storage areas
        • Specify records per block
        • Specify Type II cluster sizes
    Time for a change… a database re-org Reorg
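    On the block-size point above: if the block size is changing, the new database must be created from a structure file. A minimal sketch (database name, .st file name, and block size are hypothetical):

      prostrct create mydb10 mydb10.st -blocksize 8192   # new empty structure with 8K blocks
      procopy $DLC/empty8 mydb10                         # copy in the matching empty metaschema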
  • 11. Database Physical Reorg Overview (cont.)
    • Physical Reorg
      • Spread data out amongst new areas
      • Move indexes
    • Online options vs offline options
      • E.g. Database block size changes are offline
    • After Reorg
      • Reclaim Unused Space
        • Truncate old data area
        • Delete old data area
    Time for a change… Reclaim
  • 12. Physical Database Update Options
    • In Place (same database)
      • Transformation done all at once or over time
      • Add new storage areas to existing database
      • Migrate data from old areas to new
      • Reclaim space from old areas
    • New database
      • Transformation done in one maintenance window
      • Dump old data
      • Create new database
      • Load into new database
      • Prodel old database
    • Mix of Option #1 and Option #2 (custom)
      • Create new database
      • Move data from old database to new database
      • Reclaim space by deleting old database
    Option 1 Option 2 Option 1+2
  • 13. Upgrade – An Example of the Steps. Getting started: in-place changes; separate user data from schema. Option 1
  • 14. Moving schema tables. Separate schema from user data (in place): proutil <db> -C mvsch (offline operation). First, the existing schema area is renamed to “Old Default Area”. Option 1
  • 15. Moving schema tables (cont.). Next, mvsch creates a new schema area and moves the schema tables and indexes into it. Option 1
  • 16. Moving schema tables (cont.). You then move the user data to new user areas and truncate the Old Default Area. Option 1
  • 17. Physical Changes: Location, Location, Location
    • Create .st file with new layout
    • Use Type II Storage Areas
      • Tables: 64- or 512-block clusters
      • Indexes: 8- or 64-block clusters
    d "Cust/Bill Indexes",1;8 /d_array2/myDB_7.d1 f 512000
    d "Cust/Bill Indexes",1;8 /d_array2/myDB_7.d2
    #
    d "Customer Data",16;64 /d_array2/myDB_8.d1 f 1024000
    d "Customer Data",16;64 /d_array2/myDB_8.d2
    #
    d "Billing Data",32;512 /d_array2/myDB_9.d1 f 1024000
    d "Billing Data",32;512 /d_array2/myDB_9.d2
    Option 1
  • 18. Physical Changes
    • Validate first
    • prostrct add <db> new.st -validate
    • Then update
    • prostrct add <db> new.st
    • OR:
    • prostrct addonline <db> new.st
    The Structure file format is valid. (12619) Option 1
  • 19. Moving Tables and Indexes
    • Table move and Index move
      • Online (somewhat)
    • Dump and Load
      • With and w/out index rebuild
      • Application must be offline (without additional customization)
    • Suggestion: a mix of option #1 and #2 (custom)
      • Table move small tables
      • D&L everything else
      • First, purge/archive unneeded data
    3 Options for Data Movement: Option 1
  • 20. Option #1: Table/Index Move
    • Advantages:
      • Online (with no-lock access)
      • Dump & load in one step
      • Schema is automatically updated
      • No object # changes to deal with
      • Parallelism
    • Disadvantages:
      • Only No-lock accessible during move
      • Moves but doesn’t “rebuild” indexes
      • Too slow for large tables
      • Changes are logged!
        • BI growth may be of concern
    Option 1
  • 21. Table Move
    • proutil <db> -C tablemove [ owner-name . ] table-name table-area [ index-area ]
    • Move/reorg a table by its primary index
    • Move a table AND its indexes
      • Preferred performance
    • Fast for small tables
    • No object # changes!
    Option 1
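    As a concrete illustration (database, table, and area names are hypothetical, matching the .st example earlier):

      proutil mydb -C tablemove customer "Customer Data" "Cust/Bill Indexes"   # moves the table and its indexes in one pass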
  • 22. Index Move
    • proutil <db> -C idxmove [ owner-name. ] table-name.index-name area-name
    • Move an index from one area to another
    • Does NOT alter/optimize index block layout
    • Fast but does not “rebuild” indexes
    • Online but changes to owning table blocked
    Option 1
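    Illustration (database, index, and area names are hypothetical):

      proutil mydb -C idxmove customer.cust-name "Cust/Bill Indexes"   # relocates one index into the index area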
  • 23. Upgrade – An Example of the Steps. Getting started: make a new database; separate user data from schema. Option 1
  • 24. Option #2: New Database - Dump and Load
    • Textual Data
      • Data Dump
      • Data Load
      • Bulk Load followed by index rebuild
    • Binary
      • Binary dump
      • Binary load
        • With index rebuild
        • Followed by index rebuild
    • Custom (Textual or raw)
      • Dump/Load w/triggers
      • buffer-copy
      • Export/Import
    3 Dump and Load Flavors: Option 2
  • 25. Dump and Load General Strategy
    • Create new database structure
      • Add to existing DB
      • New database
    • Run tabanalys
    • Dump table data
    • Data definitions
      • Dump
      • Modify
      • Load definitions*
    • Load table data
    • Build indexes (if needed)
    • Backup
    If you have the disk space, creating a new db saves time. *If done “in place”, the existing tables must be deleted prior to the load (but you knew that already). Option 2
  • 26. Dumping the data Option 2
  • 27. Using Binary Dump (Fastest)
    • Advantages:
      • Fastest and Easy
      • No 2 GB file size limit
      • No endian issues
      • Can choose dump order (by index)
      • Can dump table data in portions
      • Multi-threaded (10.1B)
      • Can dump multiple tables concurrently (parallel)
    • Disadvantages:
      • Must ensure no table changes between D&L
      • Not character based
    Option 2
  • 28. Binary Dump Threaded (Fastest)
    • proutil <db> -C dump <table> <dir> -index <index #> -thread 1 -threadnum <n> -dumpfile <filelist> -Bp 64
    • -index <n>
      • Choose index based on read order
      • -index 0
        • Faster dump, slower read
        • Assumes coming from Type II
    • -thread indicates threaded dump
      • # threads automatic (# CPUs)
      • -threadnum max of # CPUs * 2
      • Threads only available in multi user mode
      • Division of labor according to root block
    • -dumpfile used as input for load
    Option 2
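    A concrete invocation might look like this (database, table, and paths are hypothetical):

      # threaded binary dump of "customer" in storage order (-index 0)
      proutil mydb -C dump customer /dumps -index 0 -thread 1 -threadnum 4 -dumpfile /dumps/customer.lst -Bp 64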
  • 29. Binary Dump Specified
    • proutil <db> -C dumpspecified <table.field> <operator> '<value>' <dir> -preferidx <idx-name>
    • Switches
      • table.field MUST be lead participant in index
      • Valid operators: LT, LE, GT, GE, EQ
      • -preferidx determines specific index to use
      • -index, -thread are ignored
    • Performance
      • Same as w/two threads (until 10.1B03)
      • Inherent max of two concurrent dumpers per table (until 10.1B03)
    • Cautions
      • Must use an ascending index
      • Output dir must be different!
    Option 2
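    Illustration (field, value, and index names are hypothetical):

      # dump only rows where cust-num is greater than 1000, using the cust-num index
      proutil mydb -C dumpspecified customer.cust-num GT '1000' /dumps2 -preferidx cust-num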
  • 30. Dictionary Data Dump
    • Database Admin tool
    • OR: run prodict/dump_d.p("<table>", "<dir>", "u1108").
    • Advantages:
      • Fast and Easy
      • Parallel
      • No endian issues
    • Disadvantages:
      • 2 GB File size limit (10.1B)
      • Can’t choose dump order
      • Have to dump entire table
      • Must ensure no one changes table between D&L
    Option 2
  • 31. Reorganizing Areas/Objects. Data dump completed. Option 2
  • 32. Dump & Modify data definitions
    • Use Data Administration tool
    • OR:
      • run prodict/dump_df.p("ALL", "<mydb>.df", " ").
      • If using bulk load (slowest approach of all):
      • run prodict/dump_fd.p("ALL", "<mydb>.fd").
    Option 2
  • 33. Dump & Modify Data Definitions
    • Update .df files
      • Optionally delete old table
      • Change table’s area information
    • Delete/Drop tables
    • Load Data Definitions
      • Data Administration tool
      • OR: run prodict/load_df.p("<mytable>.df").
    Option 2
    Before: ADD TABLE "mytable2" AREA "Old Default Area"
    After:  DROP TABLE "mytable2"
            ADD TABLE "mytable2" AREA "New Data Area"
  • 34. Alternative Data Definition Modification
    • Truncate objects for fast move/delete
    • proutil <db> -C truncate area "Old Default Area"
    • Warns then deletes data (but NOT schema)
    • Rebuild/activate empty indexes (if moving)
    • proutil <db> -C idxbuild inactiveindexes
    • Move “empty” tables/indexes to new area
    • proutil <db> -C tablemove <table> <area> [ index-area ]
    • No effect on CRCs
    If all data in area dumped… Option 2
  • 35. Dump completed. Now we reload. Load the data back in (finally). Remember to load all data files. Be sure to validate the loaded data. Option 2
  • 36. Bulkload (The slowest alternative)
    • proutil <db> -C bulkload <fd-file> -B 1000 -i -Mf 10
    • Data input from dictionary or custom data dump
      • Mentioned here for completeness only
    • Drawbacks:
      • 2 GB file limit (10.1B)
      • Loads one table at a time (single-user)
      • Does not insert index entries
        • Requires index rebuild as separate step
      • No advantage over other loads
      • Slower than all other loads
    Option 2
  • 37. Dictionary Load
    • Data Administration Tool
    • OR:
    • run prodict/load_d.p("table1", "table1.d").
    • Data input from dictionary or custom data dump
      • 2 GB file limit (per load) in 10.1B
    • Load data in parallel (to separate tables)
      • Inserts index entries
      • Index tree not perfect
    • Performance close to binary load + index rebuild
      • (when loading multiple tables)
    Option 2
  • 38. Binary Load (The fastest option)
    • proutil <db> -C load <table>.bd [ build indexes ]
    • Load to new or truncated area
      • Truncated rather than “emptied”
    • Parallel load to different tables
      • Same or different areas w/o scatter!
    • Optionally load with build indexes
      • Somewhat better performance
      • Restart db before running (bug)
      • Suggested not to use online yet
    Option 2
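    Illustration (database and path are hypothetical):

      proutil newdb -C load /dumps/customer.bd build indexes   # load and build indexes in one pass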
  • 39. Binary Load
    • proutil <db> -C load <table>.bd -dumplist <filename>
    • Dump List File:
    • /usr1/richb/mytable.bd
    • /usr1/richb/mytable2.bd
    • /usr1/richb/mytable3.bd
    • Must load ALL dumps (.bd, .bd2, .bd3, …)
    From a threaded dump or dumpspecified… Option 2
  • 40. Tuning the Process
    • Dump with
      • -RO, high -B and/or -Bp
      • Dump on index w/fewest # blocks (if possible)
    • Load with
      • High -B, -r** or -i**
      • BIW, 1.5 APWs per CPU
      • Large BI clusters w/16K BI blocks
      • No AI or 2PC
    • Spread data, BI and temp files across disks / controllers
    Tune for high (non-recoverable) activity. ** only use -r & -i when complete data recovery is possible. Option 2
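    Putting these flags together, one plausible dump/load pairing (values are illustrative, not tuned for any particular system):

      proutil olddb -C dump customer /dumps -RO -B 50000      # read-only dump with a large buffer pool
      proutil newdb -C load /dumps/customer.bd -i -B 50000    # no-integrity load; safe only because we can restart from backup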
  • 41. After the Load
    • proutil <db> -C idxbuild [ table <table> | area <area>]
    • Many new idxbuild choices
    • Helpful parameters
      • -SG 64 (sort groups)
      • -SS filename (file containing sort file list)
      • -TM 32 (merge buffers)
      • -TB 31 (temp block size)
      • -B 1000
    • Run tabanalys
      • validate # records
    • Backup your database
    Build Indexes (where applicable) Option 2
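    Illustration of the post-load steps (parameter values are those from the slide; database and file names are hypothetical):

      proutil newdb -C idxbuild all -SG 64 -SS sortdirs.srt -TM 32 -TB 31 -B 1000
      proutil newdb -C tabanalys > tabanalys.new   # compare record counts against the pre-dump tabanalys
      probkup newdb newdb_reorg.bck                # backup the reorganized database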
  • 42. Reclaim space
    • proutil <db> -C truncate area <area-name>
    • Warns, then deletes data (but the area is not physically emptied)
    • proutil <db> -C truncate area
    • Only truncates “empty” areas (but all of them)
    • Area logically truncated (option #2)
    • Extents can be deleted
    • prostrct remove <db> d <old-area-name>
    For areas that were emptied… Option 1 or Option 2. DB Ready!
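    Illustration (area name is hypothetical; prostrct remove deletes one extent per run):

      proutil mydb -C truncate area "Old Default Area"   # logically empty the area
      prostrct remove mydb d "Old Default Area"          # repeat once per extent to delete them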
  • 43. Agenda
    • General Migration Strategy
      • The Fast and Easy Upgrade
      • Physical Upgrade
    • Tuning Opportunity
  • 44. Tuning Opportunity
    • -Bt (temp tables are Type II storage areas)
    • 10.1B changes default Temp table block size
      • From 1K to 4K
      • -tmpbsize 1 restores old behavior
    • Monitor BI Cluster Size
      • BI notes are bigger in OpenEdge 10
    • Grow the BI ahead of time (proutil <db> -C bigrow <n>)
    • Run UPDATE STATISTICS for SQL
    Run
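    For the SQL side, statistics can be refreshed per table. A minimal sketch using the sqlexp command-line client (database name, port, and file name are hypothetical); updstats.sql would hold one such statement per table:

      sqlexp -db mydb -S 6748 -infile updstats.sql
      -- updstats.sql:
      UPDATE TABLE STATISTICS AND INDEX STATISTICS AND ALL COLUMN STATISTICS FOR PUB.customer;
      COMMIT;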
  • 45. In Summary
    • Conversion is quick
      • Physical Upgrade at your leisure
      • Lots of physical re-org options
    • Rollout is simple
    • 10,000 customers on OpenEdge
  • 46. Questions?
  • 47. Thank you for your time
