2. • There are several ways to tune performance in Mule. I’ve just finished
a Performance Tuning page in the Mule 2.x User Guide that walks
through the available tuning options and provides formulas for
calculating threads. Following is an excerpt of the high-level
information from that page.
3. Overview
• Essentially, a Mule application is a set of collaborating services.
Messages are processed by services in three stages:
• Connector receiving stage
• Service component processing stage
• Connector dispatching stage
Tuning performance in Mule involves analyzing and improving these three
stages for each service. You can start by applying the same tuning approach
to all services and then further customize the tuning for each service as
needed.
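As a point of reference, the three stages map directly onto the parts of a service definition. The following is a minimal sketch of a Mule 2.x service; the model, service, endpoint, and class names are hypothetical, and the exact schema location should be checked against your Mule version:

```xml
<mule xmlns="http://www.mulesource.org/schema/mule/core/2.2">
  <model name="exampleModel">
    <service name="orderService">
      <!-- Stage 1: connector receiving stage (inbound endpoint) -->
      <inbound>
        <inbound-endpoint address="vm://orders.in"/>
      </inbound>
      <!-- Stage 2: service component processing stage -->
      <component class="com.example.OrderProcessor"/>
      <!-- Stage 3: connector dispatching stage (outbound endpoint) -->
      <outbound>
        <pass-through-router>
          <outbound-endpoint address="vm://orders.out"/>
        </pass-through-router>
      </outbound>
    </service>
  </model>
</mule>
```

Tuning each service means looking at the threads and instances available at each of these three points.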
4. About Thread Pools
• Each request that comes into Mule is processed on its own thread. A
connector’s receiver has a thread pool with a certain number of
threads available to process requests on the inbound endpoints that
use that connector.
5. • If you are using synchronous processing, the same receiver thread
carries the message all the way through Mule. If you are doing
asynchronous processing, the receiver thread carries the message only
as far as the component; there the message is handed off to a
component thread, and the receiver thread is released back into the
receiver thread pool so it can carry another message. After the component
has finished processing an asynchronous message, the message is handed
to a dispatcher thread and sent on its way.
• Therefore, the receiver, component, and dispatcher all have separate
thread pools that are in use during asynchronous processing, whereas only
the receiver thread pool is in use for synchronous processing.
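In Mule 2.x the processing mode is controlled per endpoint with the `synchronous` attribute (the default varies by transport). A hedged sketch, with hypothetical addresses:

```xml
<!-- Synchronous: the receiver thread carries the message end to end -->
<inbound>
  <inbound-endpoint address="http://localhost:8080/orders" synchronous="true"/>
</inbound>

<!-- Asynchronous: receiver, component, and dispatcher thread pools
     each handle their own stage -->
<inbound>
  <inbound-endpoint address="jms://orders.in" synchronous="false"/>
</inbound>
```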
6. About Threading Profiles
• The threading profile specifies how the thread pools behave in Mule.
You specify a separate threading profile for each receiver thread pool,
component thread pool, and dispatcher thread pool. The most
important setting of each is maxThreadsActive, which specifies how
many threads are in the thread pool.
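Default threading profiles can be set globally under the `<configuration>` element. The element names below are as best recalled from the Mule 2.x schema, and the value 16 is purely illustrative; verify both against your version:

```xml
<configuration>
  <!-- Sizes the receiver, component, and dispatcher pools respectively -->
  <default-receiver-threading-profile maxThreadsActive="16"/>
  <default-service-threading-profile maxThreadsActive="16"/>
  <default-dispatcher-threading-profile maxThreadsActive="16"/>
</configuration>
```

Receiver and dispatcher profiles can also be set on an individual connector to override the defaults for the endpoints that use it.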
7. About Pooling Profiles
• Unlike singleton components, pooled components each have a
component pool, which contains multiple instances of the component
to handle simultaneous incoming requests. A service’s pooling profile
configures its component pool. The most important setting is
maxActive, which specifies the maximum number of instances of the
component that Mule will create to handle simultaneous requests.
Note that this number should be the same as the maxThreadsActive
setting on the receiver thread pool, so that you have enough
component instances available to handle the threads. You can use
Mule HQ to monitor your component pools and see the maximum
number of components you’ve used from the pool to help you tune
the number of components and threads.
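A pooling profile is configured on a pooled component inside the service. A sketch under the same assumptions as above (hypothetical class name, illustrative numbers); note `maxActive` matching the receiver pool’s `maxThreadsActive`:

```xml
<service name="orderService">
  <inbound>
    <inbound-endpoint address="vm://orders.in"/>
  </inbound>
  <pooled-component class="com.example.OrderProcessor">
    <!-- maxActive should match the receiver pool's maxThreadsActive -->
    <pooling-profile maxActive="16" maxIdle="16" maxWait="60000"/>
  </pooled-component>
</service>
```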
8. Calculating Threads
• So how do you calculate the number of threads to set? There are
several factors to consider, including concurrent user requests,
processing time, response time, and timeout time. All of these factors
are described in detail on the Performance Tuning page, along with
formulas you can use to determine the number of threads to set for
the receiver, service, component, and dispatcher, and the number of
component instances to configure.
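The formulas on that page are more detailed, but the underlying idea is Little’s law: the number of requests in flight (and hence the threads needed to serve them) is the request arrival rate times the average processing time. As a hedged illustration with made-up numbers:

```latex
\text{threads needed} \approx \text{request rate} \times \text{avg.\ processing time}
                     = 50\ \text{req/s} \times 0.2\ \text{s}
                     = 10
```

In practice you would set maxThreadsActive (and maxActive on the component pool) somewhat above this figure to leave headroom for bursts.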