Database performance management
1. Introduction:
It is widely held that 80% of performance problems are a direct result of a poorly tuned
environment, such as server configuration and resource contention. Assuming you have
tuned your servers and followed the guidelines for your database server, application
server, and web server, most of your performance problems can be addressed by tuning
the PeopleSoft Application.
This article presents methodologies and techniques for optimizing the performance of
PeopleSoft applications. The methodologies that are discussed are intended to provide
useful tips that will help to better tune your PeopleSoft applications. These tips focus on
tuning several different aspects within a PeopleSoft environment ranging from servers to
indexes. You will find some of these tips provide you with a significant improvement in
performance while others may not apply to your environment.
2. Server Performance:
In general, the approach to application tuning starts by examining the consumption of
resources. The entire system needs to be monitored to analyze resource consumption on
an individual component basis and as a whole.
The key to tuning servers in a PeopleSoft environment is to implement a methodology to
accurately capture as much information as possible without utilizing critical resources
needed to serve the end-users.
Traditional tools used to measure utilization impact the system being measured and
ultimately the end-user experience. Commands like the following provide snapshot data,
but not without an associated cost. These tools can consume a significant amount of
resources, so care should be taken when executing them:
a) df
b) iostat
c) ipcs
d) netstat
e) ps
f) sar
g) swapinfo
h) size
i) timex
j) top
k) uptime
l) vmstat
m) glance and gpm (HP-UX)
The goal of using these native commands is to identify, if and where, a bottleneck is in
the server. Is the problem in the CPU, I/O or memory? These native tools provide
indicators, but at the same time could skew the results because of the overhead associated
with them. Typically, additional third party tools are needed to complete the analysis.
The last hurdle being faced in tuning the server is making timing decisions on when to
upgrade the hardware itself. To do this, much more information needs to be collected and
stored in order to understand if an historical spike in resource utilization was a one-time
aberration or a regular occurrence building over time. The recommendation is to look at
third party vendors for solutions that can collect key performance indicators while
minimizing overhead on the system. The collected data can then be put in a repository for
detailed historical analysis.
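The delta arithmetic these sampling tools perform internally can be sketched in a few lines. This is an illustrative example only, not a replacement for sar or vmstat; the counter values below are invented to show the calculation.

```python
# Hypothetical sketch: deriving CPU utilization from two snapshots of
# cumulative (user, system, idle) time counters, the same delta
# arithmetic that tools like sar and vmstat perform between samples.

def cpu_utilization(before, after):
    """Percent of non-idle CPU time between two (user, system, idle) snapshots."""
    d_user = after[0] - before[0]
    d_sys = after[1] - before[1]
    d_idle = after[2] - before[2]
    total = d_user + d_sys + d_idle
    if total == 0:
        return 0.0
    return 100.0 * (d_user + d_sys) / total

# Illustrative counter snapshots taken, say, 60 seconds apart.
t0 = (1_000, 500, 8_500)
t1 = (1_600, 700, 8_700)
print(round(cpu_utilization(t0, t1), 1))  # 600 + 200 busy out of 1000 total -> 80.0
```

Storing a series of such deltas in a repository, rather than raw snapshots, is what makes the historical trend analysis described above practical.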
3. Web Server Performance:
The release of PeopleSoft Pure Internet Architecture(TM) introduces new components to
PeopleSoft architecture--the web server and application server. The application server is
where most shops struggle with appropriate sizing. Web servers are used for handling the
end-user requests from a web browser to eliminate the administrative costs associated
with loading software (fat clients) on individual desktops. The benefit is a significant
savings on software deployment costs, maintenance, and upgrades. While the shift from
fat clients to thin lessens the administrative burden, it increases the need to ensure the
web servers are finely tuned, since they will service a large number of clients. Achieving
optimal performance from these web servers is vital, given the mission-critical role
PeopleSoft plays in today's enterprise.
Recommendations for ensuring good performance for web servers:
o Ensure load balancing strategy is sound
o Implement a solution to verify and highlight changes in traffic volumes
o Closely monitor the response times to verify that the strategy is optimizing the web
servers
o Measure and review historical patterns on server resource utilization (see server section
above).
o Increase the heap size (for example, to 200, 250, 300, or 380 MB) in the WebLogic startup script.
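The response-time monitoring recommended above can be sketched as follows. This is an illustrative example, not PeopleSoft-specific tooling; the sample values and the 1.5x degradation tolerance are assumptions chosen for the demonstration.

```python
# Illustrative sketch: summarizing web server response-time samples and
# flagging degradation against a baseline, per the recommendations above.

def percentile(samples, pct):
    """Nearest-rank percentile of a list of response times (seconds)."""
    ordered = sorted(samples)
    rank = max(0, int(round(pct / 100.0 * len(ordered))) - 1)
    return ordered[rank]

def degraded(samples, baseline_p95, tolerance=1.5):
    """Flag the current window if its 95th percentile exceeds the
    baseline by more than the given tolerance factor (assumed policy)."""
    return percentile(samples, 95) > baseline_p95 * tolerance

# One monitoring window of response times, in seconds (invented data).
window = [0.2, 0.3, 0.25, 0.9, 0.4, 0.35, 0.3, 0.28, 0.45, 2.1]
print(percentile(window, 95))
print(degraded(window, baseline_p95=0.8))
```

Tracking the percentile per window over time gives the historical traffic and response-time picture the bullet list calls for.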
4. Tuxedo Performance Management:
Tuxedo is additional middleware PeopleSoft utilizes to manage the following Internet
application server services:
o Component Processor--Responsible for executing PeopleSoft Components--the core
PeopleSoft application business logic
o Business Interlink Processor-- Responsible for managing the interactions with third-
party systems
o Application Messaging Processor--Manages messages in a PeopleSoft system
o User Interface Generator--Generates the user interface based on the Component or
Query definition and generates the appropriate markup language (HTML, WML, or
XML) and scripting language (JavaScript, WMLScript) based on the client accessing the
application
o Security Manager--Authenticates end-users and manages their system access privileges
o Query Processor--Executes queries using the PeopleSoft Query tool
o Application Engine--Executes PeopleSoft Application Engine processes
o Process Scheduler--Executes reports and batch processes and registers the reports in the
Portal's Content Registry
o SQL Access Manager--Manages all interaction with the relational DBMS via SQL
This Tuxedo middle tier is another critical and influential component of performance.
Similar to the web server, what is needed is a way to see into the "black box" to further
understand some of the key performance metrics.
Some of the performance metrics to capture when analyzing Tuxedo are:
o Transaction volumes by domain, server, and application
o Response time for each end-user request
o The Tuxedo service generating a poorly performing SQL statement
o Breakdown of Tuxedo time into service time and queue time
o Origin of the problem - Tuxedo or the database
o Response time comparisons across multiple Tuxedo servers
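The service-time versus queue-time breakdown above is simple arithmetic once the request timestamps are captured. The sketch below is illustrative; the three timestamps are invented and do not correspond to any real Tuxedo log format.

```python
# Hypothetical sketch: splitting a Tuxedo request's elapsed time into
# queue time and service time from three timestamps (enqueued,
# dequeued by a server, completed). Units are whatever the inputs use.

def breakdown(enqueued, dequeued, completed):
    """Return (queue_time, service_time)."""
    return (dequeued - enqueued, completed - dequeued)

# Two invented requests, timestamps in milliseconds.
requests = [
    (0, 5, 30),    # mostly service time: the work itself is slow
    (0, 40, 55),   # mostly queue time: a sign of too few server processes
]
for enq, deq, done in requests:
    q, s = breakdown(enq, deq, done)
    print(f"queue={q}ms service={s}ms")
```

A consistently high queue-time share across a domain is the pattern that suggests adding another domain or more server processes, rather than faster hardware.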
Experience has shown that too often companies throw hardware at a Tuxedo performance
problem when a more effective solution can be as simple as adding another domain to the
existing server(s). This happens largely because PeopleSoft and Tuxedo lack management
solutions that provide historical views of performance.
5. Application Performance:
It is an accepted fact that 80% of application and database problems reside in the
application code. However, there are other technical items to consider that could
influence application performance. Here are some specific items to focus on when
evaluating the database environment:
o Make sure the database is sized and configured correctly
o Make sure that the hardware and O/S environments are set up correctly
o Verify that patch levels are current
o Fix common SQL errors
o Review documentation of known problems with PeopleSoft supplied code
o Be sure to check available patches from PeopleSoft that might address the problem
o Review PeopleSoft suggested kernel parameters
o Set up the right number of processes
o Review the application server for blocking caused by long-running queries
o Make sure not to undersize the version 8 application server
It is also recommended to continue to review these items on a periodic basis.
6. Database Performance:
The performance of an application depends on many factors. We will start with the
overall general approach to tuning SQL statements. We will then move to such areas as
indexes, performance monitoring, queries, the Tempdb (Tempdb is often referred to as
plain "TEMP"), and, finally, servers and memory allocation.
To understand the effect of tuning, we must compare 'time in Oracle' with 'request wait
time'. Request wait time is the time that a session is connected to Oracle but not issuing
SQL statements. Time in Oracle is the amount of time spent resolving a SQL statement
once it has been submitted to Oracle for execution. If time in Oracle is not significantly
smaller than the request wait time, then application tuning should be examined. Request
wait time is almost always much greater than time in Oracle, especially for online users,
because of think time.
One exception to this is a batch job that connects to Oracle, submits SQL statements, and
then processes the returned data. For such a job, a high ratio of request wait time to time
in Oracle could indicate a loop in the application outside of Oracle. This should be
identified and eliminated before continuing the performance analysis.
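The comparison described above reduces to simple arithmetic on two measured quantities. The sketch below is illustrative; the 0.5 threshold is an assumption chosen for the example, not an Oracle-documented value.

```python
# Illustrative arithmetic for comparing 'time in Oracle' with request wait
# time: given total connected time and time spent executing SQL, derive
# the request wait time and a rough hint on where to focus tuning effort.

def tuning_focus(connected_time, in_oracle_time):
    """Both arguments in seconds; returns a coarse focus hint."""
    request_wait = connected_time - in_oracle_time
    ratio = in_oracle_time / request_wait if request_wait else float("inf")
    # Assumed rule of thumb: if time in Oracle is not significantly
    # smaller than request wait time, look at SQL/application tuning.
    return "examine SQL tuning" if ratio >= 0.5 else "likely think time"

print(tuning_focus(3600, 2400))  # heavy SQL time relative to waiting
print(tuning_focus(3600, 120))   # session mostly idle between statements
```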
The next step focuses on tuning the SQL statements that use the most resources. To find
the most resource consuming SQL statements, the scheduled collection approach can be
used. Duration is a commonly used criterion for locating the offending SQL
statements. Other useful criteria include the following wait states: I/O, row lock, table
lock, shared pool, buffer, rollback segment, redo log buffer, internal lock, log switch and
clear, background process, CPU, and memory. For each offending SQL statement, the
execution plan and database statistics are analyzed. The following statistics are important:
table and column selectivity, index clustering factor, and storage parameters. First, all the
joins of the SQL are considered. For each join, the ordering of the tables is analyzed. It is
of major importance to have the most selective filter condition for the driving table. Then,
the type of the join is considered. If the join represents a nested loop, forcing it into a
hash join can be advantageous under some conditions.
The analysis stage usually results in several modification proposals, which are applied
and tested in sequence. Corrective actions include database object changes and SQL
changes. The typical database object changes are: index change, index rebuild and table
reorganization.
The typical SQL changes are: replacing subquery with a join, splitting a SQL into
multiple SQLs, and inserting Oracle hints to direct the Optimizer to the right execution
plan.
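One of the SQL changes above, replacing a subquery with a join, can be sketched concretely. SQLite (from the Python standard library) stands in for Oracle here, and the table and column names are invented for illustration; the point is only that the two forms return the same rows while giving the optimizer different options.

```python
# Sketch: replacing a subquery with an equivalent join. SQLite stands in
# for Oracle; dept/emp are invented illustration tables.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dept (deptno INTEGER PRIMARY KEY, dname TEXT);
    CREATE TABLE emp  (empno INTEGER PRIMARY KEY, ename TEXT, deptno INTEGER);
    INSERT INTO dept VALUES (10, 'ACCOUNTING'), (20, 'RESEARCH');
    INSERT INTO emp  VALUES (1, 'SMITH', 20), (2, 'ALLEN', 10), (3, 'WARD', 20);
""")

# Subquery form: a nested lookup the optimizer may execute per candidate row.
subquery = """SELECT ename FROM emp
              WHERE deptno IN (SELECT deptno FROM dept WHERE dname = 'RESEARCH')"""

# Equivalent join form, which optimizers can often execute more efficiently.
join = """SELECT e.ename FROM emp e
          JOIN dept d ON d.deptno = e.deptno
          WHERE d.dname = 'RESEARCH'"""

a = sorted(r[0] for r in conn.execute(subquery))
b = sorted(r[0] for r in conn.execute(join))
print(a, b)  # both return ['SMITH', 'WARD']
```

As the text notes, any such rewrite must be verified against the execution plan and tested for identical results before being promoted.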
7. Indexes:
Tuning indexes is another important factor in improving performance in a PeopleSoft
environment. Index maintenance is crucial to maintaining good database performance.
Statistics about data distribution are maintained in each index. These statistics are used by
the optimizer to decide which, if any, indexes to use. The statistics must also be
maintained so that the optimizer can continue to make good decisions. Thus, procedures
should be setup to update the statistics as often as is practical.
Keep in mind that objects that do not change, do not need to have their statistics created
again. If the object has not changed, the stats will be the same. In this case, recreating the
same statistics over again will waste resources.
Since PeopleSoft uses a lot of temp tables that are loaded and then deleted, but not
dropped, it is helpful to create the statistics when those tables are full of data. If the
statistics are created when the table is empty, the stats will reflect that fact. The
Optimizer will not have correct information when it chooses an access path.
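The effect of stale statistics on a temp table can be illustrated with a toy model. This is not Oracle's optimizer; the row-count threshold and access-path rule are invented solely to show why empty-table statistics mislead the real one.

```python
# Toy illustration: an access path chosen from a recorded row-count
# statistic. Stats captured while the temp table was empty steer the
# "optimizer" toward the wrong plan after the table is reloaded.

def choose_access_path(stat_rowcount, threshold=100):
    """Assumed rule of thumb: tiny tables are cheapest to full-scan."""
    return "full scan" if stat_rowcount < threshold else "index range scan"

actual_rows = 50_000                   # temp table after its nightly load
stale_stat = 0                         # stats gathered while it was empty
fresh_stat = actual_rows               # stats gathered while it was full

print(choose_access_path(stale_stat))  # full scan: wrong for 50,000 rows
print(choose_access_path(fresh_stat))  # index range scan
```

This is why the text recommends gathering statistics on PeopleSoft temp tables while they hold their working data.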
Periodically, indexes should be rebuilt to counter index fragmentation. An index creation
script can be generated via PeopleTools to drop and rebuild indexes. This procedure will
eliminate index wasted space on blocks that are created as a result of Oracle logical
deletes. It is only necessary on tables that change often (inserts, updates, or deletes).
The index scheme is also important to look at. The indexes in a standard PeopleSoft
installation may not be the most efficient ones for all installations. Closely examine the
data's pattern and distribution, and modify the indexes accordingly. For example, the index on
PS_VOUCHER (BUSINESS_UNIT, VOUCHER_ID) could be changed to
(VOUCHER_ID, BUSINESS_UNIT) for an implementation with only a few business
units. Use ISQLW Query Options (Show Query Plan and Show Stats I/O) to determine
the effectiveness of new indexes. However, be careful to thoroughly test the new index
scheme to find all of its ramifications.
8. Queries:
It is a good idea to examine queries to try and fix a problem that is affecting the
application. Query analyzer can be used to see optimizer plans of slow SQL statements.
Choose "Query/Display Plan" to see a graphical representation of a query plan.
Alternatively, issuing "set showplan_text on" and then running the statement will produce
a textual representation of the plan, showing the indexes used, the order in which the
tables were accessed, and so on.
When investigating queries, the number of worktables created per second should also be
examined. If a large number of worktables are being created per second (i.e., hundreds per
second), a large amount of sorting is occurring. This may not be a serious problem,
especially if it does not correspond with a large amount of I/O.
However, performance could be improved by tuning the queries and indexes involved in
the sorts and, ideally, this will eliminate some sorting.
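Checking whether an index can eliminate a sort can be sketched with an execution plan inspection. SQLite's EXPLAIN QUERY PLAN (from the Python standard library) stands in for the Query Analyzer workflow described above; the table and column names are invented.

```python
# Sketch: confirming that an index removes a sort step from a query plan.
# SQLite stands in for the DBMS; orders/ship_date are invented names.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, ship_date TEXT)")

def plan(sql):
    """Concatenate the detail column of EXPLAIN QUERY PLAN output."""
    return " | ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT id FROM orders ORDER BY ship_date"
before = plan(query)   # expect a temp b-tree sort step
conn.execute("CREATE INDEX ix_ship ON orders(ship_date)")
after = plan(query)    # the index supplies the order; the sort disappears

print(before)
print(after)
```

The same before-and-after plan comparison applies when testing any index change intended to reduce worktable (sort) activity.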