
Designing Information Structures For Performance And Reliability


  1. Designing Information Structures for Performance and Reliability
     Key elements to maximizing DB server performance
     Bryan Randol, IT/Systems Manager
  2. Designing Information Structures for Performance and Reliability: Discussion Outline
     DAY 1: Hardware Performance
       - Systematic Tuning Concepts
       - CPU
       - Memory Architecture and Front-Side Bus (FSB)
       - Data Flow Concepts
       - Disk Considerations
       - RAID
     DAY 2: Database Performance
       - OLAP vs. OLTP
       - GreenPlum vs. PostgreSQL
       - PostgreSQL Concepts and Performance Tweaking
       - PSA v.1: GreenPlum on AOpen mini-PCs ("dbnode1"-"dbnode6")
       - PSA v.2: Tyan Transport with PostgreSQL
       - PSA v.3: Current PSA implementation, DELL PowerEdge 2950 with PostgreSQL 8.3
  3. I. Database Server Performance: Hardware & Operating System Considerations
     DAY 1: Hardware Performance
  4. Designing Information Structures for Performance and Reliability: Discussion Outline
     Systematic tuning essentially follows these six steps:
       1. Assess the problem and establish numeric values that categorize acceptable behavior. (Know the system's specifications and set realistic goals.)
       2. Measure the performance of the system before modification. (Benchmark)
       3. Identify the part of the system that is critical for improving performance. This is called the "bottleneck". (Analyze)
       4. Modify that part of the system to remove the bottleneck. (Upgrade/Tweak)
       5. Measure the performance of the system after modification. (Benchmark)
       6. Repeat steps 2-5 as needed. (Continuous Improvement)
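     Steps 2 and 5 both call for a repeatable benchmark. A minimal sketch of a before/after baseline, assuming a Linux host, a hypothetical data device /dev/sdb, and a scratch database named benchdb (pgbench ships with PostgreSQL):

     ```sh
     # Raw sequential read timing of the data device (hypothetical /dev/sdb).
     hdparm -t /dev/sdb

     # Database-level baseline with pgbench: create a scratch database,
     # load a scale-50 test set, then run 8 concurrent clients for
     # 1000 transactions each and record the reported TPS.
     createdb benchdb
     pgbench -i -s 50 benchdb
     pgbench -c 8 -t 1000 benchdb
     ```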
  5. I. Database Server Performance: Data Flow Concepts
     - DB files are stored in the filesystem on disk, in blocks.
     - A "job" is requested, initiating a process thread; the associated file blocks are read into memory "pages".
     - Memory pages are read into the CPU's cache as needed.
     - "Page-outs" to disk occur to make space as needed.
     - "Page-ins" from disk are what slow performance down.
     - Once in the CPU cache, jobs are processed as threads per CPU (or "core").
  6. I. Database Server Performance: Hardware & Operating System Considerations
     Server Performance Considerations: CPU
     - Each CPU has at least one core; each core processes jobs (threads) sequentially based on the job's priority. Higher-priority jobs get more CPU time. Multi-threaded jobs are distributed evenly across all cores ("parallelized").
     - Internal clock speed: operations the CPU can process internally per second, in MHz, as advertised.
     - External clock speed: speed at which the CPU interacts with the rest of the system, also known as the front-side bus (FSB).
     - Memory clock speed: speed at which RAM is given requests for data.
     Important PostgreSQL performance note:
     PostgreSQL uses a multi-process model, meaning each database connection has its own Unix process. Because of this, any multi-CPU operating system can spread multiple database connections among the available CPUs. However, if only a single database connection is active, it can use only one CPU; PostgreSQL does not use multi-threading to allow a single process to use multiple CPUs.
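     A quick way to see the one-process-per-connection model for yourself, as a minimal sketch (run the same statement from two separate psql sessions):

     ```sql
     -- Each client connection is served by its own backend (Unix process).
     -- Run this from two different psql sessions: the PIDs differ, and the
     -- operating system is free to schedule each backend on a different core.
     SELECT pg_backend_pid();
     ```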
  7. I. Database Server Performance: Hardware & Operating System Considerations
     Server Performance Considerations: Memory Architecture and FSB (Front-Side Bus)
     - On Intel-based computers the CPU interfaces with memory through the "Northbridge" memory controller, across the FSB (front-side bus).
     - FSB speed and the Northbridge MMU (memory management unit) drastically affect the server's performance, as they determine how fast data can be fed into the CPU from memory.
     - Unless special care is taken, a database server running even a simple sequential scan on a table will spend 95% of its cycles waiting for memory to be accessed.
     - This memory-access bottleneck is even more difficult to avoid in more complex database operations such as sorting, aggregation and join, which exhibit random access patterns.
     - Database algorithms and data structures should therefore be designed and optimized for memory access from the outset.
  8. I. Database Server Performance: Hardware & Operating System Considerations
     Intel "Xeon" based systems: memory access challenges
     - The FSB runs at a fixed frequency and requires a separate chip to access memory.
     - Newer processors still run at the same fixed FSB speed, and memory access is delayed by passing through the separate controller chip.
     - Both processors share the same front-side bus, effectively halving each processor's bandwidth to memory and stalling one processor while the other is accessing memory or I/O.
     - All processor-to-system I/O and control must use this one path.
     - One interleaved memory bank serves both processors, again effectively halving each processor's bandwidth to memory: half the bandwidth of a two-memory-bank architecture.
     - All program access to graphics, PCI(e), PCI-X or other I/O must pass through this bottleneck.
  9. I. Database Server Performance: Hardware & Operating System Considerations
     Multiprocessing Memory Access Approaches
     Intel Xeon multiprocessing ("1st gen."):
       - FSB cuts bandwidth per CPU
       - Northbridge controller produces overhead
       - UMA (Uniform Memory Access): access to memory banks is "uniform"
     AMD multiprocessing:
       - "HyperTransport": the memory controller is integrated on the CPU, replacing the external FSB
       - NUMA (Non-Uniform Memory Access): latency to each memory bank varies
 10. I. Database Server Performance: Hardware & Operating System Considerations
     Intel "Harpertown" Xeon improvements
     - DELL PowerEdge 2950 III (2 x Xeon E5405 = 8 cores)
     - 4 cores/CPU plus a faster FSB (>= 1333MHz)
     - Northbridge controller bandwidth increased to 21.3GB/s for reads from memory and 10.7GB/s for writes into memory, roughly 32GB/s overall.
     - DELL PowerEdge 1950 (2 x Xeon E5405 = 8 cores)
 11. I. Database Server Performance: Hardware & Operating System Considerations
     Disk Considerations (secondary storage):
     1. Seek Time / Rotational Delay:
        How fast the read/write head can be positioned for reading/writing, and how fast the addressed area rotates under the read/write head for data transfer.
        - SATA (Serial Advanced Technology Attachment) drives are cheap and come in sizes up to 2.5TB, typically maxing out at 7,200 RPM (the 10,000 RPM "VelociRaptor" is the exception).
        - SAS (Serial Attached SCSI) drives spin roughly twice as fast (15,000 RPM), typically cost twice as much, and offer roughly 1/5 the maximum capacity of SATA (~450GB).
     2. Bandwidth / Throughput (Transfer Time):
        The raw rate at which data is transferred from disk into memory. This can be aggregated using RAID, which is discussed later.
        - SATA-I signals at 1.5Gb/s, which translates into ~150MB/s of real throughput (the difference is 8b/10b encoding overhead).
        - SATA-II and SAS signal at 3Gb/s, which translates into ~300MB/s of real throughput.
 12. I. Database Server Performance: Hardware & Operating System Considerations
     Disk Considerations (secondary storage):
     3. Buffer/Cache:
        - Disks contain intelligent controllers with read cache and write cache. When you ask for a given piece of data, the disk locates the data and sends it back to the motherboard. It also reads the rest of the track and caches that data on the assumption that you will want the next piece of data on the disk.
        - That data is stored locally in the drive's read cache. If you later request the next piece of data and it is in the read cache, the disk can deliver it with almost no delay.
        - Write-back cache improves performance because a write into the drive's high-speed cache completes faster than a write that must reach the platters; this cache helps address the disk-to-memory-subsystem bottleneck.
        - Most good drives feature a 32MB buffer cache.
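     To check whether a drive's on-board write cache is actually enabled, something like the following works on Linux (a sketch assuming a hypothetical device /dev/sdb; hdparm reports and toggles the drive's write-caching feature):

     ```sh
     # Report the current write-caching setting of the drive (hypothetical /dev/sdb).
     hdparm -W /dev/sdb

     # Enable the drive's write-back cache. Use with care: cached writes can be
     # lost on power failure unless the drive or controller is battery-backed.
     hdparm -W1 /dev/sdb
     ```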
 13. I. Database Server Performance: Hardware & Operating System Considerations
     Disk Considerations:
     4. Track Data Density:
        - Defines how much information can be stored on a given track; the higher the track data density, the more information the disk can store.
        - If a disk can store more data on one track, it does not have to move the head to the next track as often.
        - This means that the higher the recording density, the lower the chance that the head will have to be moved to the next track to get the required data.
 14. I. Database Server Performance: Hardware & Operating System Considerations
     Disk Considerations:
     5. RAID (n = number of drives in the array):
        "Redundant Array of Inexpensive Disks." Pools disks together to aggregate their throughput by "striping" data in segments across each disk; it also provides fault tolerance.
        - RAID0, "striping" (usable capacity: n): fastest due to no parity, raw cumulative speed. A single drive failure causes the entire array to fail: all or none.
        - RAID1, "mirroring" (n/2): each drive is mirrored; speed and capacity are 1/2 of RAID0, and an even number of disks is required. An entire source or mirror set can fail before data is jeopardized.
        - RAID5, "striping with parity" (n - 1): fast, with one drive's worth of capacity set aside for fault tolerance. Only one drive can fail before the array is lost.
        - RAID6, "striping with dual parity" (n - 2): fast, with two drives' worth of capacity set aside for fault tolerance. Two drives can fail before the array is lost.
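     Our production arrays are built in the hardware controller's own setup utility, but the same n-1 layout can be sketched with Linux software RAID, assuming the mdadm tool and hypothetical member disks /dev/sdb through /dev/sdg:

     ```sh
     # Create a 6-disk RAID5 array: 5 disks' worth of capacity for data,
     # 1 disk's worth for distributed parity, with a 64KB chunk (stripe unit).
     mdadm --create /dev/md0 --level=5 --raid-devices=6 --chunk=64 /dev/sd[b-g]

     # Watch the initial parity build and, later, any rebuild after a failure.
     cat /proc/mdstat
     ```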
 15. I. Database Server Performance: Hardware & Operating System Considerations
     Disk Considerations: RAID Controller
     - The device responsible for managing the disk drives in an array.
     - Stores the RAID configuration while also providing additional disk cache, and offloads costly checksum routines from the CPU in parity-driven RAID configurations (e.g. RAID5 and RAID6).
     - The type of internal and external interface dramatically impacts the overall I/O performance of the array.
     - The internal bus interface should be PCIe v2.0 (500MB/s of throughput per lane). The most common cards are x2, x4, and x8 "lanes", providing 1GB/s, 2GB/s, and 4GB/s of throughput respectively.
     - Notable external storage interfaces to the array enclosure include:
 16. I. Database Server Performance: Hardware & Operating System Considerations
     Filesystem Considerations
     - As an easy performance boost with no downside, make sure the filesystem on which your database is kept is mounted "noatime", which turns off access-time bookkeeping.
     - XFS is a 64-bit filesystem and supports a maximum filesystem size of 8 binary exabytes minus one byte. On 32-bit Linux systems, XFS is "limited" to 16 binary terabytes.
     - Journal updates in XFS are performed asynchronously to prevent a performance penalty.
     - Files and directories in XFS can span allocation groups, and each allocation group manages its own inode tables (unlike EXT2/EXT3), providing scalability and parallelism: multiple threads and processes can perform I/O operations on the same filesystem simultaneously.
     - On a RAID array, a "stripe unit" can be specified within XFS at creation time. This maximizes throughput by aligning inode allocations with the RAID stripe size.
     - XFS provides a 64-bit sparse address space for each file, which allows both very large file sizes and holes within files for which no disk space is allocated.
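     A minimal sketch of both ideas, assuming a hypothetical data volume /dev/md0 (e.g. the software-RAID sketch shown earlier) with a 64KB stripe unit across 5 data disks and a mount point of /var/lib/postgresql:

     ```sh
     # Create the XFS filesystem aligned to the RAID geometry:
     # su = stripe unit per disk, sw = number of data disks in the stripe.
     mkfs.xfs -d su=64k,sw=5 /dev/md0

     # Mount it with access-time bookkeeping turned off.
     mount -o noatime /dev/md0 /var/lib/postgresql

     # Or persist the option in /etc/fstab:
     # /dev/md0  /var/lib/postgresql  xfs  noatime  0  2
     ```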
 17. I. Database Server Performance: Hardware & Operating System Considerations
     Takeaways from hardware performance concepts:
     - Keep relevant data closest to the CPU, in memory, once it has been read from disk. More memory reduces the need for costly "page-in" operations from disk by reducing the need to "page-out" data to make space for new data.
     - Memory bus speed is still much slower than CPU speed and often becomes a bottleneck as CPU speeds increase. It's important to have the fastest memory speed and FSB that your chipset will support.
     - More CPU cores allow you to parallelize workloads. A multithreaded database takes advantage of multiprocessing by distributing a query into several threads across multiple CPUs, drastically increasing the query's efficiency while reducing its processing time.
     - Faster disks with high bandwidth and low seek times maximize read performance into memory for CPUs to process complex queries. OLAP databases benefit from this because they scan large datasets frequently.
     - Using RAID allows you to aggregate disk I/O by striping data across several spindles, drastically decreasing the time it takes to read data into memory and to write back onto the disks during commits, while also providing massive storage space, redundancy and fault tolerance.
 18. I. Database Server Performance: Hardware & Operating System Considerations
     DAY 2: Database Performance
 19. II. Software & Application Considerations: OLAP and OLTP
     OLAP (Online Analytical Processing):
     - Provides the big picture, supports analysis, needs aggregate data, evaluates whole datasets quickly, uses a multidimensional model.
     - DB size is typically 100GB to several TB (even petabytes).
     - Mostly read-only operations, lots of scans, complex queries.
     - Benefits from multi-threading, parallel processing, and fast drives with high read throughput and low seek times.
     - Key performance metrics: query throughput, response time.
     OLTP (Online Transaction Processing):
     - Provides a detailed audit trail, supports operations, needs detailed data, finds one dataset quickly, uses a relational model.
     - DB size is typically < 100GB.
     - Short, atomic transactions; heavy emphasis on lightning-fast writes.
     - Key performance metrics: transaction throughput, availability.
 20. II. Software & Application Considerations: OLAP and OLTP
     Database Types: OLAP (Online Analytical Processing)
     - OLAP databases should receive only historical business data and remain isolated from OLTP (transactional) databases: summaries, not transactions.
     - Data in an OLAP database never changes; OLTP data changes constantly.
     - OLAP databases typically contain fewer tables, arranged into a "star" or "snowflake" schema.
     - The central table in a star schema is called the "fact table". The leaf tables are called "dimension tables". The values within a dimension table are called "members".
     - The joins between the dimension and fact tables let you browse the facts across any number of dimensions.
     - The simple design of the star schema makes queries easier to write, and they run faster. An equivalent OLTP query could involve dozens of tables, making query design complicated; the resulting query could take hours to run.
     - OLAP databases make heavy use of indexes because they help find records in less time. In contrast, OLTP databases avoid them because they lengthen the process of inserting data.
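     As an illustration of how a star schema is browsed, here is a sketch of one fact table joined to two dimension tables (all table and column names are hypothetical):

     ```sql
     -- Roll revenue up by month and product category across two dimensions.
     SELECT d.calendar_month,
            p.product_category,
            sum(f.revenue) AS total_revenue
     FROM   fact_sales   f
     JOIN   dim_date     d ON d.date_key    = f.date_key
     JOIN   dim_product  p ON p.product_key = f.product_key
     GROUP BY d.calendar_month, p.product_category;
     ```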
 21. II. Software & Application Considerations: OLAP and OLTP
     Database Types: OLAP (Online Analytical Processing)
     - The process by which OLAP databases are populated is called Extract, Transform, and Load (ETL). No direct data entries are made into an OLAP database, only summarized bulk ETL transactions.
     - A cube aggregates the facts at each level of each dimension in a given OLAP schema.
     - Because the cube contains all of the data in aggregated form, it seems to know the answers to queries in advance.
     - This arrangement of data into cubes overcomes a limitation of relational databases.
 22. II. Software & Application Considerations: OLAP and OLTP
     OLAP (Online Analytical Processing): What happens during a query?
     1. The client statement is issued.
     2. The database server processes the query by locating the relevant extents.
     3. The data is found on disk.
     4. Results are sent back through the database server to the client.
 23. II. Software & Application Considerations: PostgreSQL Query Flow
     PostgreSQL: The Path of a Query
     1. Connection from the application
     2. Parsing stage
     3. Rewrite stage
     4. Cost comparison and plan/optimization stage
     5. Execution stage
     6. Result
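     The planning and execution stages can be inspected directly with EXPLAIN ANALYZE, which plans the statement, runs it, and reports the chosen plan with actual timings. A minimal sketch against a hypothetical table:

     ```sql
     -- Show the plan the optimizer chose and the real execution times.
     EXPLAIN ANALYZE
     SELECT customer_id, sum(amount)
     FROM   orders                              -- hypothetical table
     WHERE  order_date >= DATE '2009-01-01'
     GROUP BY customer_id;
     ```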
 24. II. Software & Application Considerations: OLAP and OLTP
     GreenPlum and PostgreSQL:
     - Of the open source database options, PostgreSQL is the most robust object-relational database management system.
     - GreenPlum is a commercial DBMS based on PostgreSQL, adding enterprise (OLAP) oriented enhancements and promising the following features:
       - Economical petabyte scaling
       - Massively parallel query execution
       - Unified analytical processing
       - Shared-nothing, massively parallel processing architecture
       - Fault tolerance
       - Linear scalability
       - "In-database" compression, a 3-10x disk space reduction with corresponding I/O improvement
     - The license was $20,000 every 6 months ($40,000/yr.).
     - It's important to note that PostgreSQL is free and can be modified to perform similarly to GreenPlum. We did just that with our PSA server reconstruction project.
 25. II. Software & Application Considerations: PostgreSQL Tweaks
     PostgreSQL tweaks explained:
     - PostgreSQL is tweaked through a configuration file called "postgresql.conf". This flat file contains several dozen parameters which the master PostgreSQL process, "postmaster", reads at startup.
     - Changes made to this file require the postgresql service to be bounced (restarted) via the command, as root: "service postgresql restart".
     Corresponding postgresql.conf parameters affecting query performance:
     - Maximum connections (max_connections): determines the maximum number of concurrent connections to the database server. Keep in mind that this figure acts as a multiplier for work_mem.
     - Shared buffers (shared_buffers): determines how much memory is dedicated to PostgreSQL for caching data. On a system with 1GB or more of RAM, a reasonable starting value is 1/4 of the system's memory.
     - Working memory (work_mem): if you do a lot of complex sorts and have a lot of memory, increasing work_mem allows PostgreSQL to do larger in-memory sorts which, unsurprisingly, are faster than their disk-based equivalents.
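     The current values can be read back from a live session, and work_mem can also be raised for a single heavy session rather than globally. A sketch (the report query and its table are hypothetical):

     ```sql
     -- Inspect the three settings discussed above.
     SELECT name, setting, unit
     FROM   pg_settings
     WHERE  name IN ('max_connections', 'shared_buffers', 'work_mem');

     -- Give just this session extra sort memory for one big report query,
     -- then fall back to the value from postgresql.conf.
     SET work_mem = '256MB';
     SELECT * FROM weekly_sales ORDER BY region, sale_date;   -- hypothetical
     RESET work_mem;
     ```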
 26. II. Software & Application Considerations: PostgreSQL Tweaks
     PostgreSQL tweaks explained: Shared Buffers
     - PostgreSQL does not directly change information on disk. Instead, it requests that data be read into the PostgreSQL shared buffer cache. PostgreSQL backends then read/write these blocks, and finally flush them back to disk.
     - Backends that need to access tables first look for the needed blocks in this cache. If they are already there, the backend can continue processing right away. If not, an operating system request is made to load the blocks, either from the kernel's disk buffer cache or from disk. These can be expensive operations.
     - The default PostgreSQL configuration allocates 1000 shared buffers; each buffer is 8 kilobytes. Increasing the number of buffers makes it more likely that backends will find the information they need in the cache, avoiding an expensive operating system request. The change can be made with a postmaster command-line flag or by changing the value of shared_buffers in postgresql.conf... up to a limit.
 27. II. Software & Application Considerations: PostgreSQL Tweaks
     PostgreSQL tweaks explained: Shared Buffers, "How much is too much?"
     - Setting shared_buffers too high results in expensive "paging", which severely degrades the database's performance.
     - If everything doesn't fit in RAM, the kernel starts forcing memory pages out to a disk area called swap, moving pages that have not been used recently. This operation is called a swap page-out. Page-outs are not a problem in themselves because they happen during periods of inactivity.
     - What is bad is when those pages have to be brought back in from swap, meaning an old page that was moved out to swap has to be moved back into RAM. This is called a swap page-in. It is bad because while the page is being moved back from swap, the program is suspended until the page-in completes.
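     Whether the box is actually paging can be watched while a heavy query runs. A sketch using vmstat on Linux: the "si" and "so" columns are swap page-ins and page-outs per second, and sustained non-zero "si" is the symptom to avoid:

     ```sh
     # Print memory/swap activity every 5 seconds; watch the si/so columns.
     vmstat 5
     ```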
 28. II. Software & Application Considerations: PostgreSQL Tweaks
     PostgreSQL tweaks explained: Horizontal "Range" Partitioning
     - Also known as "sharding", this involves putting different rows into different tables for improved manageability and performance.
     - Benefits of partitioning include:
       - Query performance can be improved dramatically in certain situations, particularly when most of the heavily accessed rows of the table are in a single partition or a small number of partitions. The partitioning substitutes for the leading columns of indexes, reducing index size and making it more likely that the heavily used parts of the indexes fit in memory.
       - When queries or updates access a large percentage of a single partition, performance can be improved by taking advantage of a sequential scan of that partition instead of using an index and random-access reads scattered across the whole table.
       - Seldom-used data can be migrated to cheaper, slower storage media.
 29. II. Software & Application Considerations: PostgreSQL Tweaks
     PostgreSQL tweaks explained: Partitioning (cont.)
     - The benefits will normally be worthwhile only when a table would otherwise be very large. The exact point at which a table will benefit from partitioning depends on the application, although a rule of thumb is that the size of the table should exceed the physical memory of the database server.
     - The following forms of partitioning can be implemented in PostgreSQL:
       - Range partitioning (aka "horizontal"): the table is partitioned into "ranges" defined by a key column or set of columns, with no overlap between the ranges of values assigned to different partitions. For example, one might partition by date ranges, or by ranges of identifiers for particular business objects.
       - List partitioning: the table is partitioned by explicitly listing which key values appear in each partition.
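     In the PostgreSQL 8.3 era, range partitioning is implemented with table inheritance plus CHECK constraints, which is exactly what constraint_exclusion = on takes advantage of. A minimal sketch with hypothetical table and column names:

     ```sql
     -- Parent table: holds no rows itself, only defines the columns.
     CREATE TABLE sales (
         sale_id   bigint,
         sale_date date NOT NULL,
         amount    numeric
     );

     -- One child table per month, each with a CHECK constraint naming its range.
     CREATE TABLE sales_2009_01 (
         CHECK (sale_date >= DATE '2009-01-01' AND sale_date < DATE '2009-02-01')
     ) INHERITS (sales);

     CREATE TABLE sales_2009_02 (
         CHECK (sale_date >= DATE '2009-02-01' AND sale_date < DATE '2009-03-01')
     ) INHERITS (sales);

     -- With constraint_exclusion = on, the planner skips every child whose
     -- CHECK constraint cannot satisfy the WHERE clause, so this query
     -- scans only sales_2009_01.
     SELECT sum(amount)
     FROM   sales
     WHERE  sale_date >= DATE '2009-01-01'
       AND  sale_date <  DATE '2009-02-01';
     ```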
 30. II. Software & Application Considerations: PostgreSQL Tweaks
     PostgreSQL tweaks explained: VACUUM
     - Supports PostgreSQL's ACID guarantees: Atomic, Consistent, Isolated, Durable.
     - PostgreSQL uses MVCC (Multi-Version Concurrency Control), eliminating read locks on records by allowing several versions of data to exist in the database.
     - VACUUM removes old versions of this multi-versioned data from base tables. These old versions waste space once a commit is made.
     - To keep a PostgreSQL database performing well, you must ensure VACUUM is run correctly.
     - AUTOVACUUM suffices for our query-based, low-transaction database, keeping dead space to a minimum.
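     A sketch of running it by hand (the table name is hypothetical; VACUUM ANALYZE also refreshes the planner's statistics while reclaiming dead row versions):

     ```sql
     -- Reclaim dead row versions and refresh planner statistics for one table.
     VACUUM ANALYZE weekly_sales;   -- hypothetical table

     -- Report what was reclaimed, database-wide.
     VACUUM VERBOSE;
     ```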
 31. III. PSA Server Case Studies: AOpen mini-PCs + GreenPlum
     PSA Server (v1): "dbnode1" - "dbnode6"
     - Originally, PSA was hosted on GreenPlum using 6 AOpen mini-PC nodes.
     - Performance was slow: realized disk I/O was roughly 90MB/s, and Sysco's weekly reports took roughly 15 minutes. The database volume was constantly around 90% capacity, forcing Mike to manually delete tables; space was at a premium.
     - Licensing with GreenPlum was expensive ($20,000 every 6 months, $40,000/yr.) and the system didn't deliver the performance promised (in either PSA or NewOps). NewOps' performance should have been significantly better given its more robust hardware (12 x DELL PowerEdge 2950s).
     - Since GreenPlum is based on PostgreSQL, it made sense to leverage the underlying free open-source code and scrap the proprietary distributed DB solution, opting for a standalone server with enhanced space and I/O. Migrating the existing tables to PostgreSQL required very little modification.
     - The mini-PCs we used to cluster GreenPlum were limited in capacity and scalability; each box was sealed and didn't allow for expansion.
     Mini-PC details:
     - AOpen MP965-D
     - Intel Core 2 Duo CPU T7300 @ 2GHz
     - 3.24GB memory
     - Bus speed: 800MHz
     - 150GB SATA drive
 32. III. PSA Server Case Studies: TYAN Transport + PostgreSQL
     PSA Server (v2): "sentrana-psa-dw"
     - This is our second-generation PSA box, this time using PostgreSQL 8.3 instead of GreenPlum.
     - Formerly used as a testing box at the colo, named "econ.sentrana.com". It consists of a basic Tyan Transport GX28 (B2881) commodity chassis with a Tyan Thunder K8SR (S2881) motherboard, 2 dual-core AMD Opteron 270s @ 1000MHz with 2MB L2 cache, 8GB of memory, and 4 SATA-I drive bays (SATA-II drives are backwards compatible and fit in these bays, but run at SATA-I speed).
     - Filesystem: EXT3 (4KB block size = kernel page size)
     - Storage configuration: 4 drive bays = 1 OS drive + 3 RAID5 DB drives @ SATA-I speed (150MB/s)
     - Read performance: ~76.75MB/s
 33. III. PSA Server Case Studies: DELL PowerEdge 2950 + PostgreSQL
     PSA Server (v3): "psa-dw-2950"
     - This is our third (and current) generation PSA box, still using PostgreSQL; only the server platform has evolved, to a DELL PowerEdge 2950 with dual quad-core Xeon processors @ 2.5GHz, 16GB DDR memory, a 1333MHz FSB, and 6 SATA-II/SAS drive bays configured via a PCIe PERC6/i integrated RAID controller.
     - Formerly used as one of the NewOps DB nodes with GreenPlum, this box was rebuilt from the OS out, using Ubuntu 8.10 Linux as the OS and PostgreSQL 8.3 as the DB system.
     - Filesystem: XFS (4KB block size = kernel page size)
     - Storage configuration: 6 x 1TB drives @ 7.2K RPM (SATA-II, 3Gb/s) in a single RAID5 array; ~5TB of actual storage space (the equivalent of 5 drive spindles used for data, 1 for RAID5 parity)
     - Read performance: ~507MB/s
 34. III. PSA Server Case Studies: DELL PowerEdge 2950 + PostgreSQL
     PSA Server (v3): "psa-dw-2950", postgresql.conf settings:
     - max_connections = 25
     - shared_buffers = 4096MB (1/4 of total physical memory; the amount of memory the database server uses for shared memory buffers)
     - temp_buffers = 1024MB (the maximum number of temporary buffers used by each database session)
     - work_mem = 4096MB (the amount of memory used by internal sort operations and hash tables before switching to temporary disk files; too high = paging will occur, too low = spilling to temporary files on disk)
     - maintenance_work_mem = 256MB
     - random_page_cost = 2.0 (query planner constant stating that a non-sequential disk page fetch costs 2.0)
     - effective_cache_size = 12288MB (query planner constant)
     - constraint_exclusion = on (the query planner uses table constraints to optimize queries, e.g. partitioned tables)
 35. 1725 Eye St. NW, Suite 900, Washington DC 20006
     OFFICE 202.507.4480
     FAX 866.597.3285
     WEB sentrana.com
