SAP Sybase IQ

Scaling Out Query Performance with SAP® Sybase® IQ PlexQ™

A Shared-Everything Architecture for Massively Parallel Processing
Table of Contents

Introduction
  Sybase IQ PlexQ – A Multiplex Foundation
  Balancing Parallelism Benefits with Saturation Risk
  The Importance of SAN Performance in DQP
Understanding Parallelism in SAP Sybase IQ
  Query Processing and Data Flow Model
  Dynamic Parallelism
  Intraoperator Parallelism Enhancements in SAP Sybase IQ
Understanding Distributed Query Processing
  What Is DQP?
  How Does DQP Work?
  Logical Servers
  Multiplex Internode Communication
  Prerequisites for DQP
  Types of Queries That Can Be Distributed Across a PlexQ Grid
  Knowing Whether a Query Was Distributed
  How Errors Are Handled
Scalability of DQP
  Queries Highly Likely to Benefit from DQP
  Queries Generally Unlikely to Benefit from DQP
  What You Can Do to Influence DQP Scalability
  Sizing Shared Temporary Storage
DQP Single Query Workload Test Results
Summary
  Find Out More
SAP® Sybase® IQ with PlexQ™ Architecture

Introduction

The SAP® Sybase® IQ server, release 15.3, introduces PlexQ™, a massively parallel processing (MPP) architecture that accelerates highly complex queries by distributing work to many computers in a grid. PlexQ uses a shared-everything approach that dynamically manages and balances query workloads across all the compute nodes in a PlexQ grid. PlexQ works to avoid contention among users for system resources, thereby providing high performance and resource efficiency for concurrent workloads.

At the heart of PlexQ is an exciting and broadly applicable new functionality commonly known within the database community as distributed query processing, or DQP. DQP can improve the performance of a query by breaking it up into pieces and distributing those pieces for concurrent execution across multiple SAP Sybase IQ servers in a grid. This approach builds on the "scale-up" parallel processing model initially delivered in SAP Sybase IQ 15.0. It adds a "scale-out" parallel processing model to leverage more independent compute resources for faster answers to the increasingly complex and time-critical business questions that must be met by IT departments under strict service-level agreements (SLAs).

DQP can dramatically speed up many queries, and it does so cost-effectively by leveraging all of the existing compute resources in a cluster rather than forcing yet another hardware upgrade to get faster speeds and feeds. However, it is important to understand that it is not as simple as some vendors might lead you to believe.
This paper gives you a holistic perspective of the technical concepts behind DQP in general and the DQP implementation in SAP Sybase IQ 15.3 using PlexQ, an innovative and efficient shared-everything massively parallel processing (MPP) architecture. We describe parallelism and DQP in general, explore the systemic considerations that are key to attaining the desired speed-up and scale-out benefits, and discuss the types of queries DQP benefits most. Finally, we quantify some of the performance improvements observed in the labs so far.

Sybase IQ PlexQ – A Multiplex Foundation

In the mid-1990s, Sybase (since acquired by SAP) pioneered the concept of column-oriented databases with its Sybase IQ product (now known as SAP Sybase IQ). At the time, the benefits of column-oriented database technology were debated primarily within the academic community, but as data volume requirements began to explode in the early 2000s, the true value of column stores became more evident. Originally architected to support heavy ad hoc queries and large numbers of concurrent users, SAP Sybase IQ took a hybrid approach when it came to clustered configurations, which it called multiplex.

Some vendors, especially those with upstart "parallel databases," took the academically pure shared-nothing MPP approach, where data is physically partitioned across a number of independent servers, each with its own memory and storage subsystems. While this model can provide good performance, the administrative and maintenance challenges are not trivial. By far the greatest operational challenge for this purist approach isn't observable during an initial trial or proof of concept. It only surfaces after months, when data distributions across the different independent nodes naturally start to skew. The administrative overhead of keeping relatively balanced amounts of data on each node becomes a nearly impossible feat, requiring extensive monitoring and data movement during ever-shrinking maintenance windows. In addition, as more concurrent users come online in production environments, the shared-nothing architecture tends to slow down considerably due to system saturation and bottlenecks arising out of a single path through one "master" node for all concurrent queries.

For these reasons, a "shared disk cluster" model was adopted for both SAP Sybase IQ and its online transaction processing (OLTP)-focused sibling, SAP Sybase Adaptive Server® Enterprise (SAP Sybase ASE). This has now been extended in SAP Sybase IQ with the "shared-everything MPP" model. This approach makes perfect sense for analytic and data warehousing workloads because it allows data center operations to scale out data storage volume and concurrent user request volume independently, thereby balancing great performance with operational simplicity. This approach also appeals to most companies because it leverages their existing capital and staff skill set investments in highly resilient storage area networks (SANs).
Balancing Parallelism Benefits with Saturation Risk

Like the laws of physics, there are certain characteristics of both computer hardware and software layers that are immutable, at least with respect to the current products and technologies you own. In the storage layers, disks (spindles, flash, and so forth) each have a maximum number of I/O operations per second (IOPS) they can service, simply due to the mechanics (or physics) of the technology or a hardware vendor's specific implementation. Similarly, host bus adapters (HBAs) and other disk controllers all have a maximum bandwidth (throughput measured in MB/second) of data they can transfer. In the networking layer, different networks (1GbE, 10GbE, InfiniBand, and so on) and network interface cards (NICs) have effectively the same types of limits.

Different processor architectures, not to mention processor models, all have differing numbers, sizes, and sharing capabilities for the on-chip resources such as caches (L1, L2, and L3) and execution units. In addition, they each have different memory interconnect architectures, often with huge differences in both bandwidth and latency. With the processor trends implementing lighter-weight hardware threads (or strands) within the processor itself, it is not surprising to see various processor architectures and models offer both differing amounts of parallelism as well as a different "quality" of parallelism.

The software layers – from the DBMS, through the operating system, and into the device drivers – also impact performance, often significantly. For example, some DBMSs, especially traditional row stores, place constraints on parallelism by mapping its usage to tables' partitioning characteristics. DBMSs and operating systems often have limits on I/O, in terms of size but also concurrent numbers, which could impact performance. Most important, SQL queries are not all created equal – some lend themselves to parallelism quite naturally, while others do not. Fundamentally, this is the key issue that IT must understand in order to set the right expectations with business users. The following two examples should help explain this.

First, consider a query that scans a billion rows applying simple filtering types of predicates (such as WHERE clauses). If the selection criteria are such that the DBMS must return tens or hundreds of thousands of rows to the application, does it make sense to parallelize the query? The answer is no. Why? Because all of those rows have to be sent back to the requesting application through a single TCP/IP connection, which implies a single DBMS instance on one of the PlexQ nodes. The time to funnel all the data back to a single node and send the result set back to the application may dominate the overall execution time of the query. Thus, there is little or no value in distributing the query across the PlexQ grid.

DQP can improve the performance of a query by breaking it up into pieces and distributing those pieces for concurrent execution across multiple SAP Sybase IQ servers in a grid – providing faster answers to complex and time-critical business questions.
A second, less obvious example is when a query is "I/O bound" – that is, the time waiting for physical I/O is significantly higher than the time needed to perform calculations. If threads execute for only a brief amount of time, say tens to hundreds of microseconds, before they must wait several milliseconds for an I/O to return, there is little to be gained from the overhead of distributing the workload across the cluster, because the threads use so little CPU time as a percentage of the overall query execution time.

One of the challenges to enabling parallelism and DQP on a system is the risk that it could saturate system components. This most often comes into play when the system is servicing large numbers of concurrent queries from multiple users, each of which then spawns multiple threads to complete requests as quickly as possible. With more traditional databases, parallelism is often quite static – numbers of threads are fixed and unchanging because they were assigned as part of the optimization phase. While customers should certainly monitor their systems for saturation, the SAP Sybase IQ PlexQ platform helps to minimize this risk by implementing an advanced and dynamic model of parallelism as a runtime optimization. This model can be scaled up or back depending on the current resource availability of the system and workload demands. This innovation operates even within the execution of a single query to help ensure the best possible performance and resource utilization irrespective of when a request is made (for example, during a highly concurrent online time period or at low-concurrency times such as nightly batch reporting).

The Importance of SAN Performance in DQP

Since the data in SAP Sybase IQ is stored centrally on network-attached storage, most often an enterprise-class SAN, this becomes a critical resource that can make the difference between meeting and missing an application's SLA with its business users. From a performance perspective, general parallel and DQP configurations tend to allocate more simultaneous work from a larger number of concurrent threads performing work in support of applications' queries. It is this volume of concurrent work that often can stress the systems' components. Often, components in the SAN are the first to show distress as they reach various saturation points.

The good news is that SANs are very extensible, in terms of both storage capacity and raw performance. In addition, most storage teams already have the tools and skills to identify and resolve storage performance problems, often before they become perceptible to the business users. As companies begin to deploy increased parallelism and DQP configurations, they must closely involve their storage teams to ensure the storage subsystem has sufficient performance, in terms of IOPS and bandwidth (MB/sec), to support their business requirements.

The shared disk cluster approach makes perfect sense for analytic and data warehousing workloads because it allows data center operations to scale out data storage volume and concurrent user request volume independently, thereby balancing great performance with operational simplicity.
Maximizing Resource Use to Improve Query Performance

Understanding Parallelism in SAP Sybase IQ

Query Processing and Data Flow Model

Most traditional databases create a base table of data, stored as sequential rows of contiguous columns. In SAP Sybase IQ, the columns of a table are stored separately from each other, and a row is only a virtual entity until it is constructed dynamically during the course of running a query.

Like any other database, SAP Sybase IQ accepts a query from a front-end tool, parses it, and then passes the parsed query to the optimizer. While SAP Sybase IQ is optimizing a query, it builds a "tree" of objects (joins, group by clauses, subqueries, and the like). Tables are "leaves" at the bottom of the tree, and rows of data flow up the tree from the leaves to a "root" query node at the top, where the data is passed from SAP Sybase IQ to the requesting user.

The data flow tree begins execution at the root query node. It starts by requesting a first row from the query node below. This child query node "wakes up" and begins asking for rows from the next query node below it. This continues down the tree until the execution reaches the leaf query nodes, which read the actual data from the tables. Figure 1 depicts this common approach.

Figure 1: Query Execution Processing and Data Flow Model (a query tree with a root node at the top, group by and join nodes in the middle, and table leaf nodes at the bottom; rows flow up the query tree)

A leaf query node performs two functions in an SAP Sybase IQ query. First, it processes the local table predicates – the parts of the WHERE clause that access only one table. These local predicates are processed vertically, meaning that individual columns are evaluated individually using the SAP Sybase IQ indexes. The second function is to project the set of rows that satisfy all the conditions of the local predicates up to the leaf's parent node. The data is now processed horizontally, as rows (tuples).

SAP Sybase IQ supports two types of parallelism as it processes a query:
• Interoperator parallelism: Multiple query nodes in the query tree execute in parallel
• Intraoperator parallelism: Multiple threads execute in parallel within a single query node

Interoperator parallelism is accomplished using two different parallelism models: pipeline parallelism and bushy parallelism. With pipelining, a parent node can begin consuming rows as soon as a child node produces its first row. With bushy parallelism, two query nodes are independent of each other and can execute in parallel without waiting for data from each other.

Intraoperator parallelism is accomplished by partitioning the operator's input rows into subsets and assigning the data subsets to different threads. SAP Sybase IQ makes heavy use of both inter- and intraoperator parallelism to optimize the performance of queries.
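As a rough illustration of this model, consider a query of the following shape (the table and column names are hypothetical and used only to show how a statement maps onto the tree in Figure 1): the two leaf nodes scan and filter the orders and customers tables, a join node combines the surviving rows, a group by node aggregates them, and the root node returns the result to the requesting user.

```sql
-- Hypothetical query illustrating the data flow tree in Figure 1.
SELECT c.country,
       COUNT(*) AS order_count
FROM   orders o                               -- leaf node: scans the orders table
JOIN   customers c ON c.cust_id = o.cust_id   -- join node: rows flow up from both leaves
WHERE  o.status = 'SHIPPED'                   -- local predicate, evaluated column by column
GROUP BY c.country;                           -- group by node feeding the root
```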
Dynamic Parallelism

Parallelism allows maximum utilization of resources to improve the performance of a query. However, it is often undesirable for one "big" query to starve out other queries running at the same time. The SAP Sybase IQ query engine adapts to changes in server activity by increasing or decreasing parallelism dynamically. For example, a resource-intensive query running alone might use many or all of the CPUs, now potentially on all the servers in the PlexQ grid. Then, as other users start queries, even while the first query is still running, SAP Sybase IQ will gracefully scale back CPU resources (threads) and their associated memory, dynamically allocating them to these new queries. As these other queries complete, their resources can be reallocated back to queries that are still running so they leverage more compute resources to complete faster.

Figure 2 illustrates how resource availability is balanced. As different queries start, the CPU resources used by each are reduced to ensure the total system does not overcommit the total resources in use and become saturated to the point that the entire system starts to "thrash." When CPU availability increases because queries complete, these resources are made available almost immediately to leverage the idle capacity and allow the running queries to complete as quickly as possible.

Figure 2: Balancing Parallelism and Resource Availability (as query 2 and query 3 start, the percentage of total CPU used per query drops; as each query ends, resources are rebalanced to the queries still running; x-axis: elapsed time)

Intraoperator Parallelism Enhancements in SAP Sybase IQ

The SAP Sybase IQ 15.0 release significantly enhanced intraoperator parallelism. Many query operations can now be performed in parallel using many threads:
• Most table join operations
• Group by operations
• Sorting (order by and merge joins)
• Predicate execution in tables (for example, "WHERE last_name like '%son%'", range predicates, IN conditions, "Top N" operations, and many others)

Prior to SAP Sybase IQ 15.3, inter- and intraoperator parallelism within a single query could only use the CPU resources on a single server. During that time, SAP Sybase IQ multiplex configurations were a very effective way to scale up support for more and more concurrent users or queries. However, they did nothing to reduce query execution times by leveraging all the compute bandwidth across the grid. SAP Sybase IQ PlexQ lifts that restriction, allowing a query to use the CPU resources on potentially all the machines in the grid.
How DQP Works in an SAP Sybase IQ PlexQ Shared-Everything MPP Architecture

Understanding Distributed Query Processing

What Is DQP?

Distributed query processing (DQP) spreads query processing across multiple servers in an SAP Sybase IQ PlexQ grid, which is a group of servers that each runs SAP Sybase IQ. The servers in a grid connect to a central store, such as a shared disk array, for permanent shared data. SAP Sybase IQ PlexQ has a hybrid cluster architecture that involves shared storage for permanent IQ data and independent node storage for catalog metadata, private temporary data, and transaction logs.

When the SAP Sybase IQ query optimizer determines that a query might require more CPU resources than are available on a single node, it will attempt to break the query into parallel "fragments" that can be executed concurrently on other servers in the grid. DQP is the process of dividing the query into multiple, independent pieces of work, distributing that work to other nodes in the grid, and collecting and organizing the intermediate results to generate the final result set for the query.

It is important to emphasize that if a query does not fully utilize the CPU resources on a single machine, then it will usually not be advantageous to distribute it. For example, if the optimizer is going to parallelize a query seven ways (keep seven threads at a time busy) on an eight-core box, it will probably not distribute it. Distribution requires network and storage overhead to assign work and store and transmit intermediate results. The objective in a DBMS is to execute queries as quickly as possible. A simple query will run fastest on a single machine. However, large and complex queries that can exceed the CPU capacity on a machine may be better served by incurring the overhead of distribution. If performance is improved, then distribution is a win.

How Does DQP Work?

DQP is available to any organization that has deployed SAP Sybase IQ 15.3 in a PlexQ grid. When you install SAP Sybase IQ, DQP is turned on by default, and all servers in the grid may be utilized for distributed processing.

DQP introduces the concept of "leader" and "worker" nodes. The leader node is the node where a query originates. A worker node can be any node in the grid that is capable of accepting distributed query processing work. All grid node types (reader, writer, or coordinator) may serve as leader or worker nodes.

In Figure 3, execution of query 1 and query 2 is distributed across subsets of nodes in the PlexQ grid. The two queries are serviced by different leader nodes and sets of worker nodes. This is one possible operational scenario. You can configure the set of nodes that participate in a distributed query very flexibly (see "Logical Servers" below).

Figure 3: A Distributed Query in Action (two queries, each originating at its own leader node (L) and fanning out to a different subset of worker nodes (W), all connected through the SAN fabric)
SAP Sybase IQ 15.3 also incorporates a new shared DBSpace, called shared temporary store, to support DQP. This DBSpace is named IQ_SHARED_TEMP and must reside on shared disk storage accessible and writable by all nodes in the grid. These are the same requirements that exist for IQ_SYSTEM_MAIN and user-defined DBSpaces for user data. The purpose of IQ_SHARED_TEMP is to allow transmission of intermediate data in both directions for servers involved in a distributed query. IQ_SHARED_TEMP and the local temporary store, IQ_SYSTEM_TEMP, both use the temporary cache for in-memory buffering of data.

When a client submits a query to the SAP Sybase IQ server, the query optimizer uses cost analysis to choose whether to parallelize or distribute execution of the query. A parallelizable query is broken into query fragments – predicates and data flow subtrees. A query fragment is considered eligible for distribution only if the SAP Sybase IQ engine supports parallel and distributed execution of all of the query operators contained in the fragment.

When a query is distributed, the leader node assigns query fragments to workers and collects intermediate results from the worker servers. Workers do not make decisions about query distribution. They simply execute the work assigned to them and return results.

If the query optimizer determines that a distributed query will not scale appropriately, or might even degrade in performance, then the query will not be distributed and will be executed on a single node in the grid. Queries are classified as follows:
• Not distributed: No fragments are executed on other nodes of the PlexQ grid. The query is run completely on the leader node.
• Partially distributed: One or more fragments are executed on other nodes of the PlexQ grid as well as the leader node.
• Fully distributed: All fragments are executed on multiple nodes of the PlexQ grid.

Logical Servers

You may not always want to use all the servers in a grid for distributed query processing, and you may want to provision a subset of these resources by application or user. For this purpose, SAP Sybase IQ introduces the concept of a logical server. A logical server allows one or more servers of a grid to be grouped together and represented as a logical entity. Users are granted access to logical servers via the login policy associated with the user.

There are some built-in logical servers. In particular, the built-in OPEN logical server includes all servers that are not members of any user-defined logical server. If you do not create any logical servers, all nodes in the grid may participate in DQP, because they are part of the OPEN server.

A user's login policy may allow access to one or more logical servers. A user will connect to a physical server to run a query. SAP Sybase IQ looks at the login policy of the user and determines which logical server the physical server is a part of. It then distributes the query execution to only those nodes that are members of the logical server. Although a physical server may belong to more than one logical server, it may not belong to more than one logical server assigned to the same login policy. For example, if a user may connect to logical server A and logical server B, physical server C may not be a member of both logical servers. This ensures that if user X connects to physical server C, there will be no ambiguity in selecting the logical server to execute the query. You can dynamically add or drop logical server member servers to accommodate the changing resource needs of applications.
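As a minimal sketch of how this provisioning might look in SQL, the statements below group two grid nodes into a logical server and attach it to a login policy. The node, logical server, and policy names are invented for illustration, and the exact clauses should be verified against the SAP Sybase IQ reference documentation for your release.

```sql
-- Group two multiplex nodes into a logical server (names are illustrative).
CREATE LOGICAL SERVER ls_reporting MEMBERSHIP ( mpx_node1, mpx_node2 );

-- Users governed by this login policy run DQP only on ls_reporting members.
ALTER LOGIN POLICY reporting_policy LOGICAL SERVER ls_reporting;

-- Membership can be changed dynamically as application resource needs change.
ALTER LOGICAL SERVER ls_reporting ADD MEMBERSHIP ( mpx_node3 );
```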
Multiplex Internode Communication

In order to support streamlined communication among nodes participating in the distribution of a query, SAP Sybase IQ 15.3 introduces the multiplex interprocess communication (MIPC) framework. The MIPC mesh is a peer-to-peer internode communication infrastructure that supplements the internode communication (INC) protocol added in SAP Sybase IQ 15.0. INC is used for two-way heartbeat monitoring, version data synchronization, and other types of message and data propagation required in a PlexQ grid. INC allows nodes to talk to each other only via the coordinator and has been adequate for the more limited communication requirements of single-node queries. MIPC allows PlexQ grid nodes to talk directly with each other, and it supports the more robust communication requirements of DQP.
There are both public and private configuration options for MIPC. The private option allows you to specify host-port pairs (TCP/IP protocol only at this time) that PlexQ grid servers will use exclusively for DQP-related communications. If no private interconnection configuration is provided, MIPC uses the host-port pairs specified for other types of communication, such as external user connections and INC connections.

A private MIPC network has been found during internal testing to provide significant performance benefits over a shared MIPC network. In one particular instance, a distributed query running on two nodes over a private MIPC network executed almost as quickly as a three-node configuration using a shared MIPC network.

Prerequisites for DQP

You do not need to set any configuration options to activate distributed query processing. Unless you disable DQP by turning off the dqp_enabled login policy option or dqp_enabled temporary database option, DQP occurs automatically for qualifying queries when:
• The server is part of a PlexQ grid.
• There is a logical server with login permissions, and at least one node available. By default, there is a built-in logical server called the OPEN logical server, so this requirement is satisfied out of the box.
• The shared temporary DBSpace has writable files available. Initially there are no DBFiles in the shared temporary DBSpace, and the PlexQ grid administrator must add at least one raw device DBFile to it in order to activate distributed query processing.

Types of Queries That Can Be Distributed Across a PlexQ Grid

In order for a query operator to be distributed, it must be able to be executed in parallel. When an operator is executed in parallel, multiple threads can be applied to execute the processing in parallel. In SAP Sybase IQ 15.3, most query operators can be parallelized, but not all are distributed.

The following table shows which query operators are distributed:

Class                                    Operator
JOIN                                     Nested loop / Nested loop pushdown
                                         Hash / Hash pushdown
                                         Sort merge / Sort merge pushdown
GROUP BY                                 GROUP BY SINGLE
                                         GROUP BY (HASH)
                                         GROUP BY (SORT)
DISTINCT                                 DISTINCT (HASH)
                                         DISTINCT (SORT)
SORT                                     ORDER BY
                                         ORDER BY (N)
                                         SORTED IN
SUBQUERY                                 Uncorrelated
PREDICATES                               Condition execution (using FP/LF/HG indexes)
OLAP                                     OLAP RANK and WINDOW with PARTITION
SELECT component of INSERT operations    INSERT...SELECT
                                         INSERT...LOCATION

Query fragments that do the following are never distributed:
• Write to the database (including DDL, INSERT, LOAD, UPDATE, and DELETE)
• Reference temporary tables
• Reference tables that reside in the SYSTEM DBSpace
• Reference proxy tables
• Utilize nondeterministic functions, such as NEWID

Note that a LOAD operation can still be "distributed" by loading individual tables in parallel using multiple writer nodes in the grid.
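Referring back to the shared temporary DBSpace prerequisite above, the following is a minimal sketch of adding the first raw-device DBFile to IQ_SHARED_TEMP so that DQP can be activated. The logical file name and device path are placeholders; substitute the raw device appropriate to your platform and check the exact syntax in the SAP Sybase IQ documentation.

```sql
-- Add a raw-device DBFile to the shared temporary DBSpace (path is a placeholder).
ALTER DBSPACE IQ_SHARED_TEMP
    ADD FILE shared_temp_file01 '/dev/rdsk/c1t2d0s1';
```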
Knowing Whether a Query Was Distributed

The SAP Sybase IQ query plan gives you visibility into whether or not a query was distributed. The query plan provides details indicating which servers participated in the query processing, measures how the work was distributed, and displays timing information.

DQP begins when a client connects to a physical server and initiates a query. This server is the leader node for the query. The leader node invokes the query optimizer to build the execution plan for the query. The query optimizer builds a query tree and divides the query into fragments. A fragment is either:
• A leaf condition (a predicate)
• A data flow subtree with a particular partitioning: a range of rows or keys

Fragments are portions of the query tree that can be executed independently. When two fragments may execute in either order, they may execute concurrently. If one fragment depends on intermediate results from another fragment, then the two must execute in the proper order. If all the query operators in a fragment are parallelizable and distributable, then the fragment is eligible for distribution across all the worker nodes. A fragment that cannot be distributed will execute completely on the leader node. The optimizer divides each query operator in a fragment into a set of "work units." A work unit is a subset of data for a processing thread to work on.

Figure 4 illustrates a query plan broken into query fragments. You will not actually see the dotted lines in the real query plans. This just gives you a feel for how a query might be fragmented by the optimizer. In this example, fragments 1, 2, and 3 will execute concurrently.

Figure 4: Sample Query Plan Illustrating Distributed Processing Fragments (fragment 1 contains the root, group by, parallel combiner, and sort-merge join nodes; fragments 2 and 3 are order by/leaf subtrees over the tbansf and tbanmbssf tables, each headed by a "Root of a Distributed Query" node and feeding fragment 1 through "DQP Leaf" nodes)
When you turn on the database options to create query plan files, the query plan for the whole query will be created on the leader node. When a query fragment is a data flow subtree, and it is distributed, each worker that participates in executing the fragment will generate a local query plan for that fragment. (Note that you need to turn on the query plan database options only on the leader node, not the worker nodes, for query fragment plans to be created on the workers.) The query operator at the top of a fragment manages the assignment of the fragment's work units to threads across all the workers.

Allocation of threads to work units is a highly dynamic process that allows threads to be added and removed from a query as it executes. Threads are scaled up and down based on machine load and resource availability. Availability of temp buffers and CPU time are the dominant factors in decisions to add or remove threads. In SAP Sybase IQ DQP, physical servers can be added to a logical server dynamically and, after some initialization, can begin performing DQP work as soon as a new query fragment is assigned for distribution. (See Figure 5.)

Figure 5: Displaying the Remote (Distributed) Rows (a query plan in which a triple bar marks distributed processing; the width of the rightmost bar reflects remote processing, and mousing over a row count – for example, "349226 rows (est.), 73430 remote rows" – shows the number of rows processed by workers)

If a part of a query is distributed, you will see a triple black line between nodes that were distributed. When you hover a mouse cursor over the row count next to the parallel lines in the display, it will show the number of remote rows (how many were distributed). The width of the rightmost bar is sized depending on the number of remote rows.

Below the query tree is the timing diagram. At the top, for each node in the query tree, you will see timings for each phase of its execution. This now includes timings across all the servers in the grid. The CPU utilization portion of the timing diagram will also show aggregated times across all servers.

Below the node phase timings is the threads display. This shows which threads on which servers are performing work at a particular time. Thread assignments are shown as a stacked bar graph (see Figure 6).

Figure 6: Query Plan Section Showing Threading and Distribution Across the PlexQ™ Grid
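The query plan and timing displays described above are produced by the query plan database options, which you set on the connection that issues the query (that is, on the leader node); the worker fragment plans are then generated automatically. A minimal sketch of the options involved follows; the option names are the standard SAP Sybase IQ query plan options, and the values shown are typical rather than mandatory.

```sql
-- Generate detailed query plans, including post-execution row counts and timings,
-- for queries run on this connection.
SET TEMPORARY OPTION Query_Plan           = 'ON';
SET TEMPORARY OPTION Query_Detail         = 'ON';
SET TEMPORARY OPTION Query_Plan_As_Html   = 'ON';   -- HTML plans show the tree, timing, and thread displays
SET TEMPORARY OPTION Query_Plan_After_Run = 'ON';   -- include actual statistics gathered after execution
```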
If you mouse over a thread block, you will see various statistics, such as:
• #53: The number of the node at the root of the query fragment that is executing
• S:2: Server ID (2) that owns the threads doing the processing
• T: 0–3: Range of threads doing the processing
• A:2: Average number of threads (2) doing the processing during that slice of time
• N:3: Number of samples taken (3) to calculate thread statistics during that slice of time
• 23:25:13…–23:25:14: Start and end times of the time slice

If a query fragment is executing on multiple servers at the same time, you will see thread blocks for the same root node of the fragment stacked above each other.

Below the timing diagram are node-specific details (see Figure 7).

Figure 7: Query Plan Detailing Node-Specific Information (for each fragment: the work-unit distribution by server and thread – for example, "Work Units – kwd_nc16165: 25 (2, 6, 4, 4, 3, 2, 3, 1)" – the actual private and shared temp space used, the fragment ID, the first work unit assigned to a worker, and the total "parallel sink work units" for the fragment)
For a particular node, you will see how work was distributed to servers and threads. In "fragment 1" in Figure 7, the value of work units for server "kwd_nc16165" is "25 (2, 6, 4, 4, 3, 2, 3, 1)". This means that 25 work units were assigned to this server, and 2, 6, 4, 4, 3, 2, 3, and 1 of those work units respectively were assigned to eight different threads. You can also see how much private and shared temp space was used to execute the fragment.

"Fragment 2" in Figure 7 shows the number of the first work unit assigned to a worker. A number greater than 1 means that the leader executed some work first, before a worker was able to begin processing. This is probably due to a delay getting the worker what it needs to begin performing work.

"Fragment 3" in Figure 7 shows "parallel sink work units," which are the total number of work units for the entire fragment.

How Errors Are Handled

DQP is tolerant of worker/network failures and slow workers. If a worker node fails to complete a work unit due to an error or a time-out violation, the work unit is retried on the leader node. If this occurs, the worker node will be given no more work units for the duration of the fragment execution.

Although a worker might fail while executing work for one query fragment, it may still be assigned work units for a different query fragment later in the process.

The SAP Sybase IQ PlexQ platform helps minimize the risk of saturation by implementing a dynamic model of parallelism as a runtime optimization. This model can be scaled up or back depending on the system's current resource availability and workload demands.
Optimizing Query Performance

Scalability of DQP

A query is likely to benefit from DQP only if it is fully parallel and CPU bound on a single node. In addition, the SAP Sybase IQ main and shared temporary stores must not be I/O bound.

DQP uses the available memory and CPU resources of all nodes of the logical server. In general, the more nodes and resources that are available, the better the query performance. There is an upper boundary, based on the number of work units. If there are not enough work units to pass to all the available CPUs in the grid, only a subset of the CPUs will be used. The current workload of the nodes in the logical server will obviously affect performance.

Allocating more memory to temp cache promotes hash-based algorithms that are more likely to scale. A large temp cache is more important than a large main cache for DQP. I/O bandwidth of the shared temporary store, which is used to assign work and transmit intermediate results, is critical for the performance of a distributed query. Consequently, if your storage layer offers tiered performance characteristics, placing IQ_SHARED_TEMP on the fastest storage will yield the best results.

This may seem obvious, but all distributed fragments must complete processing before the final result set can be generated and returned to the requesting application. So it should be noted that the slowest-performing fragment will limit overall performance of the query. In addition, although queries are being distributed and load balanced automatically within the DQP layer of SAP Sybase IQ 15.3, it is still a good idea to load balance connections across the grid in order to spread the more intensive leader node responsibilities across all the nodes of the grid.

Queries Highly Likely to Benefit from DQP

DQP is intended for PlexQ grid environments that are heavily report intensive. Load performance is not affected by the DQP option, although loads can be parallelized by configuring multiple PlexQ writer nodes. Also, DQP will operate best when memory and CPU resources are balanced across the PlexQ grid.

Certain types of queries will scale better than others. Queries that are likely to distribute well have the following attributes:
• Compute-intensive column scans, such as LIKE conditions
• Complex queries involving aggregation, expensive expressions, and numeric data types
• Query fragments that reduce the size of intermediate or final results; an example of this is a chain of hash joins with a "group by hash" at the top

Low-cardinality data often uses hash-based processing, which is more likely to scale. This occurs with star schemas, which are characterized by a large fact table with low-cardinality dimension tables.

If you have medium-cardinality data, you may be able to tune database options and allocate more memory to temp cache, to bias the query optimizer to choose more hash-based algorithms.
Queries Generally Unlikely to Benefit from DQP

As discussed earlier, certain types of queries inherently do not scale well, and the optimizer may decide not to distribute them at all because they will probably perform best on a single node. Examples include:
• Queries that return many rows, so that returning rows is a large part of the query execution time. Note that producing rows out of the "top" of a query is a serial operation that cannot be distributed.
• Small queries. Queries of less than 2 seconds in duration are unlikely to benefit from DQP; those between 2 and 10 seconds are somewhat more likely to benefit; and those greater than 10 seconds are generally more likely to benefit.
• Queries with many fragments. If there are many fragments, this usually means that sorts are involved. This can lead to less scalability, because sorting large amounts of data uses disk storage in the IQ_SHARED_TEMP DBSpace. This is another reason that the shared temporary DBSpace should be placed on the fastest storage possible.

Joining large, high-cardinality tables with each other will lead to merge joins. These do not scale as well as hash joins.
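To make the two profiles above concrete, here is a hypothetical pair of queries against an invented star schema. The first has the shape that tends to distribute well: a compute-intensive scan, a chain of hash joins against small dimension tables, and a GROUP BY that collapses the result. The second returns a large raw result set, so funneling rows back through the single leader node dominates its run time and DQP has little to offer.

```sql
-- Likely to benefit: CPU-intensive scan plus hash joins, reduced by a GROUP BY.
SELECT   d.region, p.category, SUM(f.sales_amount) AS total_sales
FROM     fact_sales f
JOIN     dim_store   d ON d.store_id   = f.store_id
JOIN     dim_product p ON p.product_id = f.product_id
WHERE    p.description LIKE '%premium%'        -- compute-intensive column scan
GROUP BY d.region, p.category;

-- Unlikely to benefit: most of the elapsed time is spent returning rows
-- through the single leader node, which cannot be distributed.
SELECT   *
FROM     fact_sales
WHERE    sale_date >= '2011-12-01';
```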
What You Can Do to Influence DQP Scalability

There are various server and database options that affect the parallelism and performance of a query:
• Max_query_parallelism: This database option sets an upper boundary that limits how parallel the optimizer will permit query operators, such as joins, GROUP BY, and ORDER BY. The default value is 64. Systems with more than 64 CPU cores often benefit from a larger value – up to the total number of CPU cores on the system, to a maximum of 512.
• Minimize_storage: Set this database option to "on" prior to loading data into tables, or utilize IQ_UNIQUE on column definitions. FP(1), FP(2), and FP(3) indexes that use lookup tables will be created instead of flat FP indexes. These take up less space and decrease I/O – although FP(3) indexes consume a lot of memory, so use them judiciously.
• Force_no_scroll_cursors: If you do not need backward-scrolling cursors, set this database option to "on" to reduce temporary storage requirements.
• Max_IQ_threads_per_connection: This controls the number of threads for each connection. With large systems, you may see some performance benefit by increasing this value.
• Max_IQ_threads_per_team: This controls the number of threads allocated to perform a single operation (such as a LIKE predicate on a column). With large systems, you may see some performance benefit by increasing this value.
• Max_hash_rows: Set this database option to 2.5 million for each 4 GB of RAM on the host. For example, set it to 40 million on a 64 GB system. This will encourage the query optimizer to utilize hash-based join and group by algorithms, which scale better. However, there is a caveat here: with very large hash tables, it is possible for performance to regress when distributed due to the time required to flush hash tables on one node and reconstitute them on another. DQP will attempt to compensate for this and not distribute hash-based operators when the hash table becomes prohibitively large, even if memory can accommodate it.
• -iqgovern: This server option specifies the number of concurrent queries on a particular server. By specifying the -iqgovern switch, you can help SAP Sybase IQ maintain throughput by giving queries adequate resources to commit quickly. The default value is (2 x number of CPUs) + 10. For sites with large numbers of active connections, you might want to set this value lower.
• -iqtc: This server option sets the temp cache size. Temp cache is used by both the local and shared temporary stores. DQP must utilize IQ_SHARED_TEMP in order to do its processing, and therefore requires adequate temp cache. You may want to allocate more memory to it than to the main cache for DQP workloads.

There are a couple of DQP-specific database options that are offered as well:
• MPX_work_unit_timeout: When a worker node does not complete processing of its query fragment within the mpx_work_unit_timeout value, the work is passed back to the leader to retry. If you find that time-outs are occurring and adversely affecting the performance of DQP, you can increase the time-out value to allow a worker to complete. Generally, though, you are unlikely to hit a time-out issue unless you have some other underlying problem.
• DQP_enabled: This is an option you can set for a database connection. If DQP is occurring but you are not seeing benefits from it, you can turn it off.

DQP has been designed to take advantage of the CPU power of a PlexQ grid to scale the performance of large and complex CPU-bound queries.
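A combined sketch of the settings above, expressed as SQL statements and server start-up switches, follows. The values are illustrative only (for example, the Max_hash_rows figure assumes a 64 GB host, per the sizing rule above) and should be tuned to your own hardware and workload; verify option names and units against the documentation for your release.

```sql
-- Database options that influence parallelism and DQP (example values only).
SET OPTION PUBLIC.Max_Query_Parallelism   = 128;        -- useful on hosts with more than 64 cores
SET OPTION PUBLIC.Max_Hash_Rows           = 40000000;   -- ~2.5 million per 4 GB of RAM (64 GB host)
SET OPTION PUBLIC.Force_No_Scroll_Cursors = 'ON';       -- if backward-scrolling cursors are not needed
SET OPTION PUBLIC.Minimize_Storage        = 'ON';       -- set before loading data into tables

-- DQP-specific options.
SET OPTION PUBLIC.MPX_Work_Unit_Timeout   = 3600;       -- raise only if work-unit time-outs are observed
SET TEMPORARY OPTION DQP_Enabled          = 'OFF';      -- disable DQP for this connection if it does not help

-- Server options such as -iqgovern (admission control) and -iqtc (temp cache, in MB)
-- are supplied when the server is started, for example:
--   start_iq @params.cfg -iqgovern 30 -iqtc 16000 mydb.db
```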
Sizing Shared Temporary Storage

An adequate amount of shared temporary space on fast storage hardware is critical for the performance of distributed queries. While it is difficult to calculate in advance how much shared temporary storage you will need for a distributed query, there are some trends that have been observed:
• Use of shared temporary space can vary widely among nodes in the PlexQ grid as they are executing a distributed query.
• The amount of shared temporary space used does not correlate with the scalability of the query. Queries that do not scale well may use as much or more shared temporary space as queries that do scale well.
• Queries that use more temporary cache/space when running on a single node will tend to use more shared temporary space when running distributed, but there is not an obvious multiplier that can be derived.
• The maximum amount of shared temporary space used across the PlexQ grid stays constant regardless of the number of nodes executing a particular distributed query.
• The amount of shared temporary space required on a node increases with the number of concurrent users executing the same distributed query. In other words, a higher workload requires more shared temporary storage.

Make sure that you have available storage to add to the shared temporary store if you find that it is not sized properly. You can add space dynamically without stopping the SAP Sybase IQ server.

In SAP Sybase IQ DQP, physical servers can be added to a logical server dynamically and, after some initialization, can begin performing DQP work as soon as a new query fragment is assigned for distribution.
Best Performance in a Test Environment

DQP Single Query Workload Test Results

The range in performance of a distributed query varies significantly depending on the nature of the query and the configuration and workload of the SAP Sybase IQ PlexQ grid it is executed on. The following results are the best achieved so far in a controlled, internal test environment.

These tests (a single large query initiated by a single client) were run on an SAP Sybase IQ PlexQ grid with the following configuration:
• Dell PowerEdge M1000e Blade Enclosure
  – 16x M610 Blade server; 56XX processors (224-8593)
  – 2x quad-core (Intel Xeon E5620 2.4 GHz)
  – 48 GB memory
  – 2x 300 GB SAS drives (RAID)
  – Dual-channel 8 Gbps Fibre HBA
  – Dual-port 10GbE network card
  – 2x Fibre switch: Brocade M5424 FC8 switch+AG, 24 ports
  – 2x 10 GB network switch: Cisco Catalyst 3130G, Gigabit Ethernet (223-5382)
• 10 GB Private Network NFS Server
  – Dell R710
  – Quad-core
  – 24 GB memory
  – 8x 1 TB near-line SAS drives
• Storage
  – 6x PAC Storage 12-bay 4 Gb dual RAID controllers with 12x 300 GB 15K SAS drives
  – 6x PAC Storage 12-bay EBOD (expansion shelves) with 12x 300 GB 15K SAS drives
  – RAID-0 striping with LUN stripe size = 64 KB

Each test in Figures 8, 9, and 10 shows the query plan of the particular query from the leader node and a bar chart showing performance scaling from one to eight server nodes. The name of the query has no particular significance other than to uniquely identify it. In the query plans, note the "three bar" annotations indicating distribution of query processing.

The SAP Sybase IQ query plan gives you visibility into whether a query was distributed. It indicates which servers participated in the query processing, measures how the work was distributed, and displays timing information.
Figure 8: Scaling Query_A from One to Eight PlexQ™ Nodes (the leader-node query plan is a chain of hash joins over a large fact table and the dim_collateral, dim_v_periods_securities, and dim_issuers dimension tables, topped by GROUP BY (HASH) operators and a parallel combiner)

DQP performance for Query_A (elapsed time units): 1 node: 33.47; 2 nodes: 19.69; 4 nodes: 12.36; 8 nodes: 6.97
Figure 9: Scaling Query_B from One to Eight PlexQ™ Nodes (the plan is fully distributed – a chain of hash joins whose inputs are DQP Leaf nodes plus the AT_DOC_FICO table, topped by GROUP BY (HASH) operators, a parallel combiner, and a "Root of a Distributed Query" node)

DQP performance for Query_B (elapsed time units): 1 node: 70.94; 2 nodes: 44.37; 4 nodes: 27.14; 8 nodes: 15.23
Figure 10: Scaling Query_C from One to Eight PlexQ™ Nodes (the plan joins two fact tables with the dim_v_periods_securities, dim_collateral, and dim_v_curr_spread dimension tables through hash joins and parallel combiners, topped by GROUP BY (HASH) operators)

DQP performance for Query_C (elapsed time units): 1 node: 357.96; 2 nodes: 214.97; 4 nodes: 125.15; 8 nodes: 76.81
Faster Answers to Business Questions for a Competitive Edge

Summary

This document has given you an overview of PlexQ, an exciting new architecture introduced in SAP Sybase IQ 15.3, which includes distributed query processing to enable a high-performance, resource-efficient, and operationally simple platform. DQP has been designed to take advantage of the CPU power of a PlexQ grid to scale the performance of large and complex CPU-bound queries. DQP can dramatically improve the performance of a query by breaking it up and distributing the pieces for concurrent execution across multiple SAP Sybase IQ servers. This advances SAP Sybase IQ to a shared-everything MPP architecture that maximizes use of distributed resources to drive optimum query performance and resource utilization. Faster answers to time-critical business questions give organizations the edge in today's increasingly complex and competitive world.

To support streamlined communication among nodes participating in query distribution, SAP Sybase IQ 15.3 introduces the MIPC framework, which allows PlexQ grid nodes to talk directly with each other and supports the more robust communication requirements of DQP.

Find Out More

To learn more about SAP Sybase IQ PlexQ, please call your SAP representative or visit us at www.sap.com/solutions/technology/database/big-data-management/index.epx.
www.sap.com/contactsap

CMP21936 (12/09) ©2012 SAP AG. All rights reserved.

SAP R/3, SAP NetWeaver, Duet, PartnerEdge, ByDesign, SAP BusinessObjects Explorer, StreamWork, SAP HANA, and other SAP products and services mentioned herein as well as their respective logos are trademarks or registered trademarks of SAP AG in Germany and other countries.

Business Objects and the Business Objects logo, BusinessObjects, Crystal Reports, Crystal Decisions, Web Intelligence, Xcelsius, and other Business Objects products and services mentioned herein as well as their respective logos are trademarks or registered trademarks of Business Objects Software Ltd. Business Objects is an SAP company.

Sybase and Adaptive Server, iAnywhere, Sybase 365, SQL Anywhere, and other Sybase products and services mentioned herein as well as their respective logos are trademarks or registered trademarks of Sybase Inc. Sybase is an SAP company.

Crossgate, m@gic EDDY, B2B 360°, and B2B 360° Services are registered trademarks of Crossgate AG in Germany and other countries. Crossgate is an SAP company.

All other product and service names mentioned are the trademarks of their respective companies. Data contained in this document serves informational purposes only. National product specifications may vary. These materials are subject to change without notice. These materials are provided by SAP AG and its affiliated companies ("SAP Group") for informational purposes only, without representation or warranty of any kind, and SAP Group shall not be liable for errors or omissions with respect to the materials. The only warranties for SAP Group products and services are those that are set forth in the express warranty statements accompanying such products and services, if any. Nothing herein should be construed as constituting an additional warranty.
