Performance Testing with IBM Rational Integration Tester
Note: Before using this information and the product it supports, read the information in the "Legal Notices" section.
©	Copyright	IBM	Corporation	2001,	2012.
1  Introduction
2  Background
3  Performance Test Infrastructure
   3.1  Introduction
   3.2  Engines
   3.3  Probes
   3.4  Agents
4  Architecture School
   4.1  Introduction
   4.2  Basic System Setup
   4.3  Agent & Engine Setup
   4.4  Probe Setup
5  Creating the Load Generating Test
   5.1  Re-using Functional Test Resources
   5.2  Basic Setup
   5.3  Timed Sections
6  Creating Performance Tests
   6.1  Introduction
   6.2  Initial Setup
   6.3  Adding Tests
   6.4  Engine Settings
   6.5  Managing Probes
7  Running Performance Tests and Analyzing Results
   7.1  Running the Test
   7.2  Viewing Results
   7.3  Multiple Data Sets
8  Data Driving Performance Tests
   8.1  Differences from Functional Tests
   8.2  Driving a Load Generating Test with External Data
9  Load Profiles
   9.1  Performance Testing Scenarios
   9.2  Constant Growth
   9.3  Externally Defined Load Profiles
10 Advanced Topics
   10.1  Background Tests
   10.2  Log Measurement
   10.3  Creating the Measurement Test
   10.4  Adding the Measurements to a Performance Test
11 Legal Notices




	

1 Introduction
	
This	document	serves	as	a	training	manual	to	help	familiarize	the	user	with	the	performance	testing	
capabilities	available	in	IBM®	Rational®	Integration	Tester.	It	is	expected	that	the	reader	has	already	
been	through	the	basic	Rational	Integration	Tester	training,	and	understands	the	workflow	of	Rational	
Integration	Tester.		
	
In this course we will:

- Create performance tests
- Set up agents, probes, and engines to execute and monitor performance tests
- Analyze results of performance tests
- Manage the amount of load driven by a performance test over time
- Data drive performance tests
	




	

2 Background
	
When testing a service-oriented architecture (SOA), simply verifying functional requirements is not always enough. Many systems come with service level agreements (SLAs) that state a minimum level of performance that must be satisfied.
This	level	of	performance	may	have	a	number	of	components.	In	particular,	system	uptimes	and	
message	response	times	will	be	important.	However,	it	will	not	be	enough	to	test	the	system	to	check	
that	it	can	respond	to	a	single	message	within	a	given	amount	of	time	–	the	system	will	need	to	hold	up	
under	a	certain	amount	of	load	as	well.	This	load	may	take	the	form	of	a	large	number	of	messages,	
extreme	message	rates,	or	large	amounts	of	data.	In	addition,	accurately	modeling	the	load	on	the	
system	may	require	us	to	generate	message	requests	from	a	number	of	different	sources.	
For	experienced	performance	testers,	this	will	all	be	fairly	familiar.	However,	SOA	environments	bring	
challenges	on	top	of	the	traditional	client‐server	model.	For	example,	services	are	often	shared	among	
several	applications	and	failure	can	occur	anywhere	along	the	transaction	path.	Considering	both	the	
number	of	services	in	place	and	the	many	points	at	which	they	intersect—any	one	of	which	may	not	be	
performant—how	can	we	ensure	that	performance	levels	satisfy	the	nonfunctional	requirements?	
In	addition,	there	is	a	fundamental	difference	between	SOA	performance	testing	and	a	traditional,	
client‐server	approach.	Performance	testers	who	are	familiar	with	the	traditional	approach	tend	to	talk	
in	terms	of	the	number	of	users	or	“virtual	users”	that	are	required	to	generate	the	load.	They	also	tend	
to	be	concerned	with	end‐to‐end	response	times	–	the	response	time	experienced	by	an	end	user.	This	
end‐to‐end	performance	testing	is	typically	executed	against	a	functionally	proven,	complete	system.		
SOA performance testers are still interested in response times, but are more interested in the volume of messages sent between components – there is no requirement to wait until the system has been fully assembled or for a front-end GUI to be created. Hence, the SOA performance tester can begin testing much earlier.
When	running	performance	tests,	you	will	normally	be	faced	with	the	following	questions:	
    1. Does	the	system’s	performance	satisfy	the	requirements	or	SLAs?	
    2. At	which	point	will	the	performance	degrade?		
    3. Can	the	system	handle	sudden	increases	in	traffic	without	compromising	response	time,	
       reliability,	and	accuracy?	
    4. Where	are	the	system	bottlenecks?	
    5. What	is	the	system’s	break	point?	
    6. Will	the	system	recover	(and	when)?	
    7. Does	the	system	performance	degrade	if	run	for	an	extended	period	at	relatively	low	levels	of	
       load?		
    8. Are	there	any	capacity	issues	that	come	from	processing	large	amounts	of	data?	


	

3 Performance Test Infrastructure
	

In this chapter, you will:

- Look at the distributed nature of a performance test infrastructure
- See how engines are used to execute actions within a performance test
- Examine how data needs to be recorded from the system under test, and how this can be done with probes
- Learn how performance test licensing is handled
3.1 Introduction	
Before creating performance tests, we need to review how the infrastructure of Rational Integration Tester is put together. As in regular functional tests, we have the Rational Integration Tester GUI and the project database. However, these work slightly differently in the context of a performance test.
While	in	a	regular	functional	test,	the	GUI	and	the	test	are	normally	run	from	the	same	machine,	a	
performance	test	may	be	run	from	another	machine,	or	may	be	distributed	across	a	number	of	other	
machines.	This	means	that	the	Rational	Integration	Tester	software,	as	presented	by	the	GUI,	also	
provides	a	test	controller,	to	manage	any	remote	systems	involved	in	the	performance	test.		
In	addition,	the	project	database,	which	is	optional	for	a	functional	test,	becomes	mandatory	for	
performance	tests.	This	is	due	to	the	higher	volume	of	data	that	is	recorded	during	a	performance	test	
–	it	cannot	be	easily	presented	in	a	simple	console	window,	but	will	need	to	be	summarized,	and	
possibly	manipulated.		
Besides	the	GUI,	test	controller,	and	project	database,	there	are	also	three	new	items	in	our	
infrastructure:	engines,	probes,	and	agents.	They	fit	together	in	a	framework	to	run	tests	and	monitor	
performance	across	a	number	of	different	systems.	




	

3.2 Engines	
An engine is the process that actually runs a test in Rational Integration Tester. When carrying out functional testing using Rational Integration Tester, an engine exists beneath the surface and runs the tests, on the user's machine, on behalf of the controlling instance of Rational Integration Tester (i.e., the instance of Rational Integration Tester that is running the main performance test).
When	performance	testing,	the	engine	is	separated	from	the	controlling	instance	of	Rational	
Integration	Tester.	The	engine	can	be	on	the	same	machine	as	Rational	Integration	Tester,	or	it	can	be	
on	another	machine.	In	fact,	there	may	be	more	than	one	engine,	spread	across	multiple	machines.	
If	there	is	more	than	one	engine,	test	iterations	are	spread	across	the	available	engines.	For	example,	in	
a	performance	test	which	is	executing	40	tests	per	second	with	2	engines,	each	engine	would	be	
running	20	tests	per	second.	The	distribution	of	the	tests	is	handled	by	the	controlling	instance	of	
Rational	Integration	Tester.	
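The even split described above can be pictured with a small Python sketch. This is purely illustrative – the scheduling inside Rational Integration Tester is not exposed, and the function and engine names here are hypothetical:

```python
# Illustrative sketch only: splitting a target iteration rate evenly
# across a pool of engines. Names are hypothetical, not an RIT API.

def distribute_rate(total_per_second, engines):
    """Return a mapping of engine name -> test iterations per second."""
    share, remainder = divmod(total_per_second, len(engines))
    allocation = {}
    for i, engine in enumerate(engines):
        # The first `remainder` engines absorb any leftover iterations.
        allocation[engine] = share + (1 if i < remainder else 0)
    return allocation

# 40 tests per second across 2 engines -> 20 each, as in the example above.
print(distribute_rate(40, ["engine-a", "engine-b"]))
```

With an uneven split (say 41 iterations across 3 engines), the remainder is spread so that the total rate is preserved.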
Using	multiple	engines	lets	us	solve	a	number	of	problems.	Most	simply,	if	one	machine	is	not	capable	
of	generating	a	high	enough	load	for	a	performance	test,	the	load	can	be	split	across	multiple	machines.	
Secondly,	multiple	engines	give	us	the	capability	to	distribute	the	load	across	multiple	endpoints.	For	
example,	if	we	need	to	simulate	requests	arriving	from	different	parts	of	the	world,	or	from	different	
networks,	we	can	set	up	engines	in	such	locations	as	to	satisfy	the	demands	of	the	performance	test.	

	

3.3 Probes	
With such complex and heterogeneous platforms, it can be difficult to understand what to measure apart from transaction response times. It is impossible to measure everything.
Probes	are	the	tools	used	by	Rational	Integration	Tester	to	gather	statistics	from	the	system	under	test.	
There	are	a	variety	of	probes	available	to	the	user:	
- System Statistics Probe
- Windows Performance Monitor Probe
- TIBCO BusinessWorks Probe
- TIBCO Rendezvous Probes
- TIBCO EMS Probe
- Sonic MQ Probe
- webMethods Broker Probe
- webMethods Integration Server Probe
- JMX Probe
The probes are deployed on the systems that you want to measure, and multiple probes can co-exist on the same system.
Recording statistics with Rational Integration Tester's probes gives us access to much more information than just the transaction response times. This can help us determine the cause of poor performance – if response times become too long past a certain number of requests per second, we can use probes to see whether the cause is CPU load, excess memory usage, growing message queues, or something else.
Whichever	probes	you	choose,	they	will	record	statistics	during	the	performance	test,	and	send	those	
statistics	to	the	controller,	for	writing	to	the	project	database.	These	writes	are	set	up	as	a	low‐priority	
task,	so	that	they	cause	as	small	an	impact	as	possible	on	system	performance.	
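Conceptually, a probe behaves like a periodic sampler that batches its measurements for a deferred, low-priority write. The sketch below is an assumption-laden illustration of that pattern, not the actual probe implementation:

```python
# Illustrative sketch of the probe pattern described above:
# sample on an interval, batch results, and flush them later in one
# low-priority pass. All names here are hypothetical, not an RIT API.
import time
from collections import deque

class StatSampler:
    """Collects one sample per interval and queues it for deferred writing."""
    def __init__(self, read_stat, interval=1.0):
        self.read_stat = read_stat   # callable returning one measurement
        self.interval = interval     # e.g. "collect statistics every 1 second"
        self.pending = deque()       # batched samples awaiting a DB write

    def sample_once(self):
        # Record (timestamp, value); in a real probe this runs on a timer.
        self.pending.append((time.time(), self.read_stat()))

    def flush(self, write):
        # Drain the queue in one batch so that writing has as small an
        # impact as possible on the system under test.
        while self.pending:
            write(self.pending.popleft())

sampler = StatSampler(lambda: 42)   # stand-in for a real counter read
sampler.sample_once()
collected = []
sampler.flush(collected.append)
```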

3.4 Agents	
Engines	drive	the	tests,	and	probes	monitor	them.	However,	both	need	a	host	controlling	them,	talking	
to	the	instance	of	Rational	Integration	Tester	controlling	the	performance	test.	This	role	is	played	by	
the	Rational	Integration	Tester	Agent.	
The agent runs on each machine that has an engine or a probe, and handles the communications with Rational Integration Tester. The agent can host an engine, a probe, or both at the same time. In fact, a single agent can also host multiple probes or engines.
The	agent	is	installed	with	the	Rational	Test	Performance	Server	(RTPS),	or	the	Rational	Test	
Virtualization	Server	(RTVS).	It	can	be	run	by	hand,	or	set	up	as	a	service	on	the	system	it’s	running	on.	
Due	to	this,	each	machine	that	requires	an	agent	requires	an	installation	of	RTPS	or	RTVS.	Following	
the	installation	of	RTPS	or	RTVS,	the	agent	will	need	to	be	configured	in	the	Library	Manager.	This	
configuration	follows	the	same	procedure	as	that	of	Rational	Integration	Tester	itself,	and	so	will	not	
be	covered	in	this	training	course.	
	

Note:	If	you	are	running	through	this	training	material	on	a	cloud	instance	or	virtual	machine,	all	
    parts	of	the	system	will	be	on	a	single	machine.	This	is	purely	for	ease	of	configuration,	and	does	
    not	reflect	a	real‐world	scenario.		




	

4 Architecture School
	

In this chapter, you will:

- Configure Rational Integration Tester to connect to the system under test
- Set up an engine to run the performance test, with an agent to host it
- Configure a probe to measure the performance of the system during testing
4.1 Introduction	
Creating a model of the system under test for performance tests is very similar to doing so for functional tests. However, in addition to modeling the system under test, the Architecture School perspective is also used to provide configuration data for the agents, engines, and probes in the system.
Adding	this	information	to	your	Rational	Integration	Tester	project	should	be	done	after	the	normal	
process	of	modeling	the	system	under	test;	configuration	for	the	performance	testing	components	is	
then	carried	out	in	the	Physical	View	of	Architecture	School.	Note	that	as	it	is	configured	on	a	
physical	basis,	you	may	need	to	configure	new	components	when	setting	up	new	environments.	

4.2 Basic	System	Setup	
In	this	example,	we	will	be	testing	a	web	service	–	a	simple	Login	service	that	takes	a	username	and	
password,	and	returns	a	login	token.	We	will	first	start	the	service	on	our	local	machine,	then	
synchronize	with	the	WSDL	provided	by	the	service.	
    1. On	the	Desktop	of	the	cloud	instance,	there	is	a	folder	called	WebServices.	Open	this	folder,	and	
       execute	RunLoginService.bat.	
    2. This	will	pop	up	a	window	–	keep	it	open,	but	minimize	it.	
    3. Open	up	Rational	Integration	Tester,	and	start	a	new	project.	Note	that	you	will	need	to	use	a	
       project	database	–	one	is	already	specified	on	the	cloud	instance	by	default,	so	you	can	keep	
       using	this,	but	use	the	Test	Connection	option	to	make	sure	that	it	is	working	correctly.	If	you	
       are	not	using	a	cloud	instance,	please	ask	your	instructor	for	the	project	database	settings.	
    4. Once	Rational	Integration	Tester	is	open,	switch	to	the	Synchronization	view	within	
       Architecture	School.	
    5. Press the [icon] button in the toolbar at the top of the Synchronization view, and select WSDL.
    6. The	Create	a	new	External	Resource	dialog	will	appear.	Press	the	New…	button.		
    7. The	New	WSDL	dialog	will	appear.	Press	the	Select…	button	to	select	a	new	location.		
	

8. Once	the	Select	Location	dialog	appears,	switch	to	the	URL	tab.	In	order	to	get	the	URL	of	the	
       WSDL,	copy	it	from	the	window	that	popped	up	when	you	ran	the	login	service.	
    9. Press	OK	to	close	the	Select	Location	and	New	WSDL	dialogs.	
    10. Click	Next,	and	run	through	the	rest	of	the	synchronization	process	as	normal.	

4.3 Agent	&	Engine	Setup	
    1. In	some	cases,	the	agent	might	be	executed	manually;	however,	in	this	example,	the	agent	is	
       running	as	a	Windows	service	on	the	localhost.	This	means	that	the	agent	can	be	found	at	
       localhost:4476.		However,	in	order	to	enter	its	details	properly,	we	need	to	know	the	name	of	
       the	local	host.		Execute	the	command	hostname	at	a	command	prompt.	
    2. Switch	back	to	Rational	Integration	Tester,	and	go	to	the	Physical	View	of	Architecture	
       School.	
    3. Press the [icon] button at the left of the Physical View Toolbar, and select the Agent option.
    4. In	the	Host	field,	enter	the	hostname	provided	in	step	1.	For	the	Port	number,	make	sure	that	
       the	default	setting	is	4476.	
    5. An engine called default is automatically attached to the agent – leave this as-is, and press OK to close the dialog and complete the agent configuration.

4.4 Probe	Setup	
We're now going to set up the probe that we want to run on the same machine as the agent. Remember
that	each	probe	will	need	to	be	running	on	an	agent,	or	tests	using	that	probe	will	fail.	Also,	probes	can	
be	set	up	on	individual	hosts,	or	on	services	running	on	those	hosts	–	for	example,	the	System	Statistics	
probe	will	run	on	a	particular	host,	but	most	of	the	technology‐specific	probes	will	need	to	be	attached	
to	a	particular	process	on	that	host.	If	you	need	to	use	those	particular	probes	in	the	future,	they	can	be	
configured	by	editing	the	properties	of	that	physical	component,	similar	to	the	way	we	will	edit	the	
probe	on	the	host	machine	in	this	exercise.	
    1. In	the	Physical	View,	each	physical	component	will	be	visible	in	a	tree	underneath	a	Subnet	
       and	a	Host.	Double	click	on	the	host	(which	should	have	the	hostname	we	used	in	the	previous	
       exercise)	to	bring	up	its	properties.	The	screenshot	below	shows	where	to	find	the	host,	though	
your hostname and IP will be different.

[Screenshot: Physical View tree showing the Subnet and Host.]
    2. Once	the	properties	dialog	for	your	host	has	appeared,	switch	to	the	Probes	tab.	
    3. For	our	tests,	we’re	going	to	use	the	Windows	Performance	Monitor	probe.	Select	it	so	that	it	
       can	be	configured.	

	

4. The	first,	and	most	important,	setting	to	note	is	the	Hosting	Agent	at	the	very	bottom	of	the	
       dialog	–	it	tells	us	which	agent	is	running	this	probe.	Currently,	we	only	have	one	agent	to	deal	
       with,	but	make	sure	that	the	agent	for	this	probe	is	set	to	the	agent	created	in	the	previous	
       exercise.	If	the	agent	is	not	set	here,	then	any	performance	tests	that	attempt	to	use	this	probe	
       will	fail.	

                                                                                     	
    5. As	for	the	other	settings,	the	probe	should	be	set	to	Collect	Statistics	Every	1	second.	
       Following	that,	we	need	to	specify	what	statistics	we	need	to	collect	from	the	Windows	
       Performance	Monitor.	
    6. Press the [icon] button to add a new counter. The Add Counters dialog will appear.
    7. For	our	first	counters,	we’ll	examine	memory	statistics.	To	do	this,	select	Memory	in	the	
       Performance	Object	field.	Under	Counter,	select	Available Mbytes,	then	press	the	Add	
       button.	
    8. Repeat	this	for	the	Page Faults/sec	counter,	and	any	other	data	you	are	interested	in.	
    9. Now	select	Processor	in	the	Performance	Object	field,	and	add	the	% Processor Time	
       counter,	along	with	any	other	counters	that	are	of	interest.	
    10. Press	Done	to	return	to	the	configuration	of	the	probe.	
    11. Select all of the Counters, then press the [icon] button to validate that each one is working. The dialog should now appear like so:

[Screenshot: probe configuration dialog with all counters validated.]
    12. Press	OK	to	close	the	dialog.	Our	system	is	now	set	up	to	gather	statistics	during	performance	
        tests.	
	

5 Creating the Load Generating Test
                                                                                                   	

                               In	this	chapter,	you	will:	
                                            Create	a	functional	test	that	can	be	used	as	a	load	
                                             generating	test	
                                            Encounter	the	test	actions	created	for	use	within	
                                             performance	tests	
                                            Create	a	timed	section	within	a	test	to	capture	timing	and	
                                             status	information
                                                                                                                                                                           	
	

5.1 Re‐using	Functional	Test	Resources	
One	of	the	advantages	in	using	Rational	Integration	Tester	for	SOA	testing	is	that	functional	tests	can	
easily	be	refactored	to	be	run	within	a	performance	scenario.	
This	is	important	because	when	evaluating	the	performance	of	the	system,	it	is	insufficient	to	just	send	
a	request	and	measure	the	time	it	takes	for	a	response	to	arrive.	For	example,	if	a	web	service	
operation	rejects	input	and	sends	back	a	SOAP	Fault	message,	the	time	it	takes	may	be	significantly	
different	from	the	time	it	takes	to	properly	process	a	request	and	return	a	valid	response.	If	a	test	does	
not	truly	validate	the	outcome	of	an	operation,	it	will	provide	an	inaccurate	view	of	the	true	system	
performance.	
In	this	case,	we	will	create	a	simple	functional	test	that	we	will	use	as	the	basis	for	our	performance	
tests,	to	illustrate	how	this	works.	This	functional	test	will	be	used	as	a	load	generating	test	within	our	
main	performance	test.	
When	editing	the	load	generating	test,	there	are	several	new	actions	that	may	be	used.	These	actions	
will	be	ignored	when	running	the	test	as	a	functional	test	–	they	will	only	be	executed	when	it	is	run	as	
part	of	a	performance	test.	
Performance Actions:

- Begin Timed Section: marks the beginning of a timed section for a performance test.
- End Timed Section: marks the end of a timed section for a performance test.
- Log Measurement: logs data to the project database during a performance test.
	

Note:	The	Initialise,	Test	Steps,	and	Tear	Down	sections	become	more	important	in	performance	
    tests.	When	we	are	running	multiple	iterations	of	a	load	generating	test,	the	Initialise	part	of	the	
    test	will	only	be	run	once	at	the	beginning,	and	the	Tear	Down	section	once	at	the	end.	Only	the	
    Test	Steps	will	be	run	for	each	iteration	of	any	load	generating	test	used	in	the	performance	test.	
    This	means	we	can,	for	example,	set	up	and	clean	up	a	database	within	the	Initialise	and	Tear	
    Down	sections	without	impacting	the	data	that	we	are	actually	interested	in.	
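The iteration behaviour described in this note can be summarised with a small sketch. It is illustrative only – the function names are hypothetical and this is not the RIT execution engine:

```python
# Illustrative sketch of the lifecycle described in the note above:
# Initialise once, run Test Steps per iteration, Tear Down once.
def run_load_test(initialise, test_steps, tear_down, iterations):
    initialise()                  # e.g. seed a database, once only
    try:
        for _ in range(iterations):
            test_steps()          # only this part runs per iteration
    finally:
        tear_down()               # e.g. clean the database up, once only

calls = []
run_load_test(lambda: calls.append("init"),
              lambda: calls.append("step"),
              lambda: calls.append("down"),
              3)
# calls now shows one init, three steps, one down.
```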

5.2 Basic	Setup	
    1. Before	we	can	create	a	performance	test,	we	need	to	provide	a	load	generating	test	that	will	
       contain	the	actions	carried	out	during	each	iteration.	To	do	this,	we’ll	create	a	normal	test,	and	
       add	a	timed	section	to	it.	Go	to	Test	Factory,	and	right	click	on	the	Login	operation	to	bring	up	
       the	context	menu.	Select	New	>	Tests	>	Test	Using	MEP.		
    2. The	Create	dialog	will	appear	–	press	the	Options	button	to	bring	up	a	Settings	dialog.	
    3. On	the	Message	Settings	tab,	make	sure	the	option	Include	Optional	Fields	is	selected,	and	
       then	press	OK	to	return	to	the	Create	dialog.	
    4. Call	the	test	loginBase,	and	press	OK.	A	test	will	be	created.	
    5. Open	up	the	Send	Request	message,	and	fill	in	a	username,	password,	and	application	in	the	
       relevant	fields.	The	contents	don’t	matter	for	this	example	–	our	login	service	will	accept	any	
       input	for	these	fields.	
    6. We	now	need	to	set	up	the	validation	for	this	message.	For	our	purposes,	it	will	be	enough	to	
       check	that	a	login	token	is	returned,	and	that	it	contains	hexadecimal	digits	broken	up	with	
       hyphens.	Open	up	the	Receive	Reply	action,	and	find	the	Token	field.	Double	click	on	the	
       (Text)	section	below	that	to	bring	up	the	Field	Editor.	
    7. In	the	top	half	of	the	Field	Editor,	make	sure	that	the	Equality	validation	is	selected,	as	we	will	
       be	replacing	this	validation	with	a	regular	expression:	




                                                                                                                                                            	
    8. Change	the	Action	Type	option	just	below	from	Equality	to	Regex.	
    9. Enter the regular expression ^[a-f0-9-]*$
    10. To test that it is working, enter 44ef-2ab7-573d into the Document field, and press Test. The
        Result field should update to say true (if it doesn't, make sure that you haven't accidentally
        included a space character at the end of the string).


	

11. Now add -y8rr to the end of the Document field, giving 44ef-2ab7-573d-y8rr, and press Test
        again. This should fail.
    12. Press	OK	to	close	the	Field	Editor,	and	OK	again	to	return	to	the	test.	
    13. Save	the	test,	and	then	run	it	in	Test	Lab	to	make	sure	that	it	passes.	If	there	are	any	problems,	
        fix	them	before	moving	on.	
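The token check configured in steps 9–11 can be reproduced outside the tool. A minimal Python sketch of the same regular expression, for illustration only (the `is_valid_token` helper is hypothetical, not part of Rational Integration Tester):

```python
import re

# The same pattern entered in the Field Editor:
# hexadecimal digits broken up with hyphens, anchored at both ends.
TOKEN_PATTERN = re.compile(r"^[a-f0-9-]*$")

def is_valid_token(token: str) -> bool:
    """Return True if the token contains only hex digits and hyphens."""
    return TOKEN_PATTERN.match(token) is not None

print(is_valid_token("44ef-2ab7-573d"))       # True
print(is_valid_token("44ef-2ab7-573d-y8rr"))  # False: 'y' and 'r' are not hex digits
print(is_valid_token("44ef-2ab7-573d "))      # False: trailing space breaks the match
```

This mirrors why the trailing-space warning in step 10 matters: the `$` anchor makes any stray character at the end of the Document field fail the match.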

5.3 Timed	Sections	
Timed	sections,	marked	by	the	Begin	Timed	Section	and	End	Timed	Section	actions,	allow	us	to	time	
the	execution	of	different	parts	of	the	test	being	executed.	A	single	functional	test	can	contain	multiple	
timed	sections,	which	may	overlap,	or	contain	other	timed	sections.		
For	each	timed	section,	Rational	Integration	Tester	will	log	data	into	the	project	database	while	a	
performance	test	is	running.	This	will	include	not	only	the	time	taken	for	the	timed	section	to	execute,	
but	also	the	status	of	the	section	–	whether	it	passed,	failed,	or	timed	out.	
If	no	timed	sections	are	added	to	the	test,	Rational	Integration	Tester	will	still	record	the	length	of	time	
taken	to	execute	each	iteration	of	the	entire	test,	and	the	status	at	the	end	of	that	iteration.	
    1. Start	by	returning	to	the	Test	Factory.	
    2. Add two new actions to the test – a Begin Timed Section, and an End Timed Section.
       The Begin Timed Section should go before the two messaging actions, while the End Timed
       Section should go afterwards.
    3. Open	the	Begin	Timed	Section	action.	
    4. The	timed	section	will	need	a	name	–	call	it	S1.	
    5. Below	the	name	of	the	timed	section,	there	is	an	option	to	determine	how	this	timed	section	will	
       be	recorded	(Pass/Fail/Timeout).	We	can	take	the	status	of	the	test	at	the	end	of	the	section,	or	
       we	can	take	the	status	of	the	test	at	the	end	of	that	iteration	of	the	test.	In	this	particular	case,	
       since	the	timed	section	covers	the	entire	test,	it	will	not	make	any	difference	which	of	the	two	
       options	we	choose.	
    6. Close	the	Begin	Timed	Section	action,	and	open	the	End	Timed	Section	action.	This	has	only	
       one	setting	–	the	name	of	the	timed	section.	Match	it	to	S1,	the	section	we	started	with	the	Begin	
       Timed	Section	action,	and	close	the	dialog.	
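Conceptually, a timed section records the wall-clock duration of the actions it wraps, together with a pass/fail status. The following Python sketch illustrates that idea only – it is not the Rational Integration Tester API, and the `TimedSection` class is invented for illustration:

```python
import time

class TimedSection:
    """Illustrative only: records the duration and status of a named section,
    analogous to the data RIT logs for a Begin/End Timed Section pair."""
    def __init__(self, name):
        self.name = name
        self.duration = None
        self.passed = None

    def __enter__(self):
        self._start = time.perf_counter()
        return self

    def __exit__(self, exc_type, exc, tb):
        self.duration = time.perf_counter() - self._start
        self.passed = exc_type is None  # an escaping error marks the section failed
        return False                    # do not swallow exceptions

# The section wraps the work to be timed, like S1 wraps the messaging actions.
with TimedSection("S1") as s1:
    time.sleep(0.01)  # stands in for the Send Request / Receive Reply actions

print(s1.name, s1.passed)
```

As with RIT, several such sections could be nested or overlapped, each logging its own duration and status.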


	

Our	load	generating	test	is	now	complete.	However,	we	need	to	state	how	many	iterations	we	will	
carry	out,	at	what	rate,	and	so	on.	We	will	do	this	separately,	in	a	performance	test.	




	

6 Creating Performance Tests
                                                                                                   	

                               In	this	chapter,	you	will:	
                                            Create	a	simple	performance	test	
                                            Add	a	load	generating	test	to	a	performance	test	
                                            Configure	a	performance	test	to	use	selected	engines	and	
                                             probes	
                                                                                                                                                                           	
	

6.1 Introduction	
Once	we	have	a	functional	test	that	will	create	a	load	on	the	system,	we	can	start	putting	together	a	
performance	test.	Performance	tests	are	created	in	the	Test	Factory,	and	are	contained	within	the	same	
tree	structure	as	other	test	resources.	
Our	first	performance	test	will	be	fairly	basic,	running	a	single	load	generating	test	at	1	iteration	per	
second.	Later	performance	tests	will	look	at	changing	the	load	on	the	system,	and	varying	it	over	time.	

6.2 Initial	Setup	
    1. In	the	Test	Factory	Tree,	right	click	on	the	Login	operation,	and	select	New	>	Tests	>	
       Performance	Test.	Call	the	test	simplePerformance.	
    2. The	initial	screen	of	the	Performance	Test	Editor	will	appear.	




                                                                                                                                                                                                   	
    3. Click	on	the	text	                                                 	on	the	left	hand	side.	
    4. The	right	hand	side	will	alter	to	show	settings	for	the	performance	test.	Make	sure	that	you	are	
       on	the	Execution	tab.	
	

5. Most	settings	can	be	left	at	their	defaults,	but	change	the	length	of	the	test	phase	to	30	seconds,	
       as	in	the	screenshot	below.	




                                                                                                                                                                        	
6.3 Adding	Tests	
    1. A	performance	test	on	its	own	does	nothing	–	it	requires	load	generating	or	background	tests	in	
       order	to	test	the	performance	of	your	system.	We	will	now	add	the	loginBase	test	as	our	load	
       generating	test.	Right	click	on	the	              	text	on	the	left	hand	side	–	two	options	will	
       appear:	


                                                                                                                                     	
    2. Click	on	Add	Load	Generating	Test.	
    3. The	load	generating	test	will	appear,	and	should	be	selected	on	the	left	hand	side	of	the	editor.	
       Configuration	information	for	the	load	generating	test	will	appear	on	the	right	hand	side.	The	
       first	thing	we	need	to	do	is	to	choose	which	test	will	be	used	for	load	generation.	To	do	this,	
       make	sure	you	are	on	the	Execution	tab,	and	find	the	Test	Path	field.	Press	the	Browse	button	
       next	to	that	field,	and	select	the	loginBase	test	from	the	dialog	that	appears.	
    4. We’ll	leave	the	other	execution	options	at	their	default	settings	for	the	moment,	as	shown	
       below.	




	

5. The load generating test is nearly ready to go – but first, we need to specify which engine (or
       engines) will execute this test.

6.4 Engine	Settings	
Our	test	can	be	run	on	one	or	more	engines.	For	the	purposes	of	this	manual,	we	will	only	be	using	a	
single	engine	running	on	a	single	agent,	but	in	more	complex	tests,	multiple	engines	can	be	set	up	in	
different locations, splitting up the load between different machines. Regardless of how many engines
are being used, remember that the engines are all managed by a single controller – the instance of
Rational Integration Tester that is running the performance test.
    1. Switch	from	the	Execution	tab	to	the	Engines	tab.	
    2. Press	the	Add…	button	at	the	bottom	of	the	screen.	
    3. A	Select	dialog	should	appear.	In	this	case,	there	will	be	only	one	engine	available	–	the	default	
       engine	attached	to	the	agent	we	created	in	Architecture	School.	Select	the	default	engine,	and	
       press	OK.	
    4. If	multiple	engines	were	available,	we	could	select	more	of	them	by	pressing	the	Add…	button	
       again,	and	selecting	other	engines,	but	for	this	example,	our	test	is	now	ready	to	be	run	–	we	just	
       need	to	state	how	we	will	be	monitoring	it.	

6.5 Managing	Probes	
Each	performance	test	can	choose	which	probes	it	wants	to	use	to	gather	data.	Different	tests	may	be	
measuring	different	data.	For	example,	one	test	may	be	gathering	system	data,	while	another	may	
gather	statistics	from	the	middleware	layer.	
Regardless	of	which	probes	are	being	requested	here,	they	must	still	be	set	up	in	Architecture	School.	
For	this	example,	we	will	use	the	Windows	Performance	Monitor	probe,	as	set	up	in	the	previous	
exercises.	
    1. On	the	left	hand	side	of	the	Performance	Test	Editor,	click	on	the	Performance	Test	to	switch	
       back	to	its	settings.	
    2. Click	on	the	Probes	tab.	

	

3. We	can	now	select	from	the	available	probes.	As	we	have	only	set	up	the	Windows	
       Performance	Monitor,	check	the	box	for	that	probe,	and	leave	the	others	blank.	




                                                                                                                                                                                                   	
    4. Save	the	simplePerformance	test.	




	

7 Running Performance Tests and Analyzing Results
                                                                                                   	

                               In	this	chapter,	you	will:	
                                            Execute	a	performance	test	and	view	the	statistics	shown	
                                             at	runtime.	
                                            View	the	results	of	a	performance	test	in	the	Results	
                                             Gallery	
                                            Compare	results	of	multiple	executions	of	a	performance	
                                             test		
                                                                                                                                                                           	
	

7.1 Running	the	Test	
The	procedure	for	running	a	performance	test	is	much	the	same	as	for	a	functional	test	–	simply	use	
the	Run	 	button	in	the	Test	Lab,	or	double	click	on	the	test	in	the	tree.	While	the	performance	test	is	
running,	a	summary	of	the	data	being	gathered	will	be	displayed	in	the	console.	For	full	reporting,	we’ll	
need	to	go	to	the	Results	Gallery,	as	we’ll	see	in	the	following	exercise.	
	
    1. Switch	to	Test	Lab.	
    2. Run	the	simplePerformance	test.	
    3. Watch	the	console	results	–	you	will	notice	that	the	probes	will	be	started	15	seconds	before	the	
       load	generating	tests	are	run,	as	per	the	settings	in	the	performance	test.		
       	




                                                                                                                                                                                                   	
       	
    4. Once	the	load	generating	tests	are	being	run,	you	will	see	counters	for	the	numbers	of	tests	
       started,	passed,	failed,	and	the	number	of	pending	database	writes.	These	are	defined	in	the	
       table	below:	
       	
       	

	

        Started – Iterations started in the report interval (the 'Collect statistics every' setting on the
        performance test's Statistics tab). The default interval is 5 seconds, so if the test is set for
        10 TPS this would show a total of 50 each time.

        Passed – Iterations passed so far during the performance test.

        Timed Out – Iterations where message receivers did not get a response within their configured
        timeout.

        Failed – Iterations failed so far during the performance test.

        Pending DB Writes – Database writes queued on the results database. Large numbers indicate that
        database access is slower than required, and may be the result of a slow network connection. Note
        that the writes are buffered and do not slow down the test rate.
    	
    Note: A performance test may run longer than the specified time while remaining test instances
    complete and database writes are flushed. In this case, you will see the 'Started' figure as zero for
    those intervals, since the given number of iterations has already been started.
	

7.2 Viewing	Results	
    1. The Test Lab doesn't show much in the way of results besides statistics for how many timed
       sections passed, failed, etc. For more detailed information, we need to go to the Results Gallery.
       Switch to that perspective now.
    2. In	the	Results	Gallery,	you	will	see	a	single	line	describing	basic	information	about	your	test	–	
       start/end	times,	number	of	iterations,	etc.	Select	this,	and	then	click	the	Analyse	Results	button	
       at	the	bottom	of	the	screen.	
    3. An	empty	chart	will	now	appear.	We’re	going	to	populate	this	with	some	of	the	data	we	have	
       collected	with	our	probes.	This	is	available	to	the	left	of	the	chart,	sorted	into	different	
       categories.	Expand	this	out	to	navigate	to	Performance Test Sections (Based on Start 
       Times) > Average Pass Section Duration > simplePerformance / S1.		
    4. A	checkbox	should	now	be	available	–	tick	it.	




                                                                                                                                                     	


	

5. A chart should now appear. Experiment with adding other information
       gathered	by	your	probes	to	the	chart.	In	particular,	look	at	the	information	recorded	by	the	
       Windows	Performance	Monitor	Probe.	Note	that	charts	can	be	removed	simply	by	unchecking	
       them	again.	
    6. As	you	experiment,	you	may	find	a	situation	where	the	axes	of	charts	do	not	match.	For	
       example,	if	you	look	at	the	Windows	Performance	Monitor	probe,	and	select	both	pieces	of	data	
       for	the	memory,	you	might	see	something	like	this:	




                                                                                     	
       	
       The	number	of	MB	available	is	changing,	but	the	scale	of	the	chart	for	the	number	of	page	faults	
       is	preventing	us	from	seeing	that	information.	
    7. In	order	to	fix	this,	we	can	edit	the	properties	of	one	of	the	charts	so	that	it	is	displayed	on	a	
       separate	axis.	Go	to	the	left	of	the	chart	display,	where	you	have	selected	your	data,	and	double	
       click	on	one	of	the	coloured	lines	that	has	appeared	next	to	the	checkbox.		
    8. A	Choose	Style	dialog	will	appear.	On	the	Style	tab,	you	can	change	how	the	data	is	displayed	
       (colour	of	the	line,	type	of	chart,	etc.).	Change	any	settings	here	that	are	of	interest.	




	

9. Switch	to	the	Data	tab,	and	set	the	Axis	to	2.	Close	the	dialog,	and	the	charts	should	update:	




                                                                                        	
    10. At this stage, we can edit the chart and give it a name and some notes, in the text fields below
        the chart. We can also save the chart for later reference, or use the export button to export data
        to a CSV file.

7.3 Multiple	Data	Sets	
So	far,	we’ve	just	looked	at	results	for	a	single	performance	test.	It	is	also	possible	to	compare	results	of	
multiple	performance	tests,	run	at	different	times.	This	allows	us	to	see	how	changes	made	to	our	load	
generating	test,	or	changes	made	to	the	system,	have	affected	the	performance	of	the	system.	
    1. Close	the	chart	for	the	moment,	and	return	to	the	Test	Lab.	
    2. Run	the	simplePerformance	test	again.	
    3. Once	it	is	complete,	go	back	to	the	Results	Gallery,	and	choose	Analyse	Results	for	the	most	
       recent	test	run.	
    4. A	chart	will	appear,	as	before.	However,	this	time	another	option	is	open	to	us:	we	can	compare	
       this	test	run	to	any	previous	test	run.	Switch	to	the	Data	Sets	tab,	and	press	the	Add	button.	
    5. Select	your	previous	test	run	here.	




	

6. Return to the Counters tab. For each counter, two charts will now be available, allowing you to
       compare the current results with the previous run.




                                                                                                                                                                                                   	




	

8 Data Driving Performance Tests
                                                                                                   	

                               In	this	chapter,	you	will:	
                                            Learn	the	limitations	of	using	the	iterate	actions	within	a	
                                             load	generating	test.	
                                            Use	the	Input	Mappings	settings	in	the	performance	test	
                                             to	data	drive	a	load	generating	test.	

                                                                                                                                                                           	
                                                                                                   	

8.1 Differences	from	Functional	Tests	
When creating functional tests in Rational Integration Tester, we can simply use the Iterate Test Data
action to run through a data set, testing the system with each record. However, consider a test similar
to the one we have already created, which sends a single message and receives a single reply.
If we used the Iterate Test Data action (or any other Iterate action) with that test to send 10
messages, and our performance test also specified 10 iterations per second, we could end up sending
anywhere between 10 and 100 messages per second, with no precise control over the load on the system.
In addition, test duration and status data would be of limited use – rather than measuring individual
messaging times, we would be measuring the time to execute 10 messages; similarly, we would be
recording the pass/fail status of a group of 10 messages, rather than of a single message.
For	these	reasons,	it	is	generally	advised	not	to	use	the	Iterate	Test	Data	action	within	a	performance	
test.	Instead,	we	can	map	a	data	source	to	a	test	using	the	Input	Mappings	tab	for	each	Load	Generating	
or	Background	Test.	
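The uncertainty described above is easy to quantify: if each iteration of the load generating test sends N messages, and the performance test schedules R iterations per second, the actual message rate can land anywhere between R and N × R per second. A quick sketch, using the figures from the example (the helper function is hypothetical, purely for illustration):

```python
def message_rate_bounds(iterations_per_second, messages_per_iteration):
    """Best and worst case message rates when an Iterate action
    runs inside a load generating test."""
    low = iterations_per_second                            # messages spread across each iteration
    high = iterations_per_second * messages_per_iteration  # all messages sent back-to-back
    return low, high

print(message_rate_bounds(10, 10))  # (10, 100) - no precise control over the load
```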

8.2 Driving	a	Load	Generating	Test	with	External	Data	
    1. Create	a	copy	of	the	loginBase	test,	and	call	it	loginDataDriven.	
    2. Open	the	Send	Request	action	of	the	loginDataDriven	test,	and	go	to	the	Config	tab.	
    3. Quick	Tag	the	UserName	and	Password	fields.	




	

4. Save	the	loginDataDriven	test.	
    5. Now	make	a	copy	of	the	simplePerformance	test,	and	call	it	dataDrivenPerformance.	
    6. Open	the	dataDrivenPerformance	test,	and	go	to	the	settings	for	the	Load	Generating	Test.	
    7. Find	the	Test	Path	setting	–	it	should	currently	be	set	to	the	loginBase	test.	Use	the	Browse	
       button	to	change	this	to	the	loginDataDriven	test.	
    8. We now need a data source. Minimize Rational Integration Tester, and create a brand new CSV file
       or Excel spreadsheet with the following data (or some of your own invention – just remember to
       add a line with headings):
       User,Pass 
       Jim,gr33nhat 
       Steve,ght3st3r 
       Monica,perf0rmance 
       Karen,eng1n3s 
       Ben,pr0b3s 
    9. Save	your	CSV/Excel	file,	and	return	to	Rational	Integration	Tester.	Create	a	File	Data	Source	
       or	Excel	Data	Source	to	connect	to	your	data.	Remember	to	use	the	Refresh	button	to	check	
       that	the	data	has	loaded	properly,	and	Save	the	data	source.	
    10. Go	back	to	the	dataDrivenPerformance	test,	and	go	to	the	Input	Mappings	tab.	Underneath	
        this,	you	will	see	three	new	tabs	appear	–	Config,	Filter,	and	Store.	
    11. In	the	Config	tab,	use	the	Browse	button	to	choose	your	test	data	set.	
    12. If	we	wanted	to	filter	the	incoming	data,	we	could	do	that	on	the	Filter	tab.	In	this	case,	we’ll	use	
        the	entire	data	set	supplied,	so	go	to	the	Store	tab.	
    13. Map	the	tags	in	the	loginDataDriven	test	to	the	columns	in	your	data	source	here,	and	save	the	
        dataDrivenPerformance	test.	
    14. Run	the	test	and	analyze	the	results.	
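The data file from step 8 can also be produced with a short script, which is convenient when generating larger data sets. A sketch using Python's standard csv module (the file name login_data.csv is arbitrary – any path your data source can reach will do):

```python
import csv

# Header row first, then one row per login record, matching the tutorial data.
rows = [
    ("User", "Pass"),
    ("Jim", "gr33nhat"),
    ("Steve", "ght3st3r"),
    ("Monica", "perf0rmance"),
    ("Karen", "eng1n3s"),
    ("Ben", "pr0b3s"),
]

# newline="" prevents the csv module from doubling line endings on Windows.
with open("login_data.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)
```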



	

9 Load Profiles
                                                                                                   	

                               In	this	chapter,	you	will:	
                                            Encounter	standard	performance	testing	scenarios,	
                                             including	load,	stress,	and	soak	tests.	
                                            Use	the	Constant	Growth	settings	in	a	performance	test	
                                             to	provide	an	increase	in	the	load	on	the	system.	
                                            Data	drive	the	load	on	the	system	using	the	Externally	
                                             Driven	settings	in	a	performance	test.
                                                                                                                                                                           	
	

9.1 Performance	Testing	Scenarios	
So	far,	our	tests	have	been	modeling	a	very	simple	scenario,	running	our	load	generating	test	at	1	
iteration	per	second.	However,	there	are	a	number	of	scenarios	where	we	might	like	to	use	more	
complex	performance	tests.	We’ll	look	at	a	few	example	scenarios,	and	how	Rational	Integration	Tester	
can	deal	with	them.	
Load Testing
A load test attempts to represent a period in the working day, and tests how the system responds to a
similar load. For example, it may be anticipated that the greatest risk of poor performance is at the
start of the working day. The scenario would then model the ramp up from the minimum number of users
to the peak login period during the first few hours of business.
Stress Testing
A stress test scenario is used to find the break point of the system. The break point is expressed as a
specified load (and ramp-up), and is used to identify performance weaknesses in the distributed system.
This may be modeled with a simple linear increase in the load on the system.
Stress tests can also be useful in proving the recoverability of a system – how does the system break,
and how gracefully can it recover under extreme loads? In this case, a bell curve could be used to
watch how the system breaks as it approaches the identified break point, and then to allow the system
some room to recover.
Soak Testing
A soak test scenario runs a constant low-level load that may continue for hours or days. This scenario
might model an iteration run once per minute, or even less often. Running the test for an extended
period of time will identify any issues that manifest themselves only over a longer period, such as
memory leaks.
	
	
	

High	Intensity	Scenarios		
High	intensity	scenarios,	whether	they	are	load	tests	or	stress	tests,	will	require	a	lot	of	data.	Some	
basic	math	will	be	required	to	understand	the	overall	data	requirements	in	terms	of	the	data	to	drive	
the	tests	and	the	data	that	is	required	to	be	present	in	the	system	for	reference	and	execution.	If	you	
are	expected	to	run	at	100	transactions	per	second	for	over	3	hours	then	you	will	require	3	x	60	x	60	x	
100	rows	of	data	–	over	1	million	rows	of	data!	Whether	or	not	you	can	use	repeating	data	in	these	
million	rows	of	data	will	be	determined	by	the	nature	of	the	system	under	test.	Similar	issues	may	
arise	in	particularly	lengthy	soak	tests.	
In	addition,	modeling	the	correct	number	of	requests	in	a	high	intensity	scenario	may	not	be	possible	
with	a	single	test	engine.	You	may	need	to	create	multiple	test	engines	in	order	to	be	able	to	create	a	
sufficient	load	on	the	system.	
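The data sizing above is simple multiplication, but it is worth scripting when comparing scenarios. A sketch (the helper name is invented for illustration), assuming every iteration must consume a fresh row:

```python
def rows_required(tps, hours):
    """Unique data rows needed to run at a given transactions-per-second
    rate for a given number of hours, with no repeated data."""
    return tps * hours * 60 * 60

# The example from the text: 100 TPS for 3 hours.
print(rows_required(100, 3))  # 1080000 - 'over 1 million rows of data'
```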

9.2 Constant	Growth	
Moving	beyond	a	simple,	constant	load	on	the	system,	the	simplest	way	to	vary	the	load	in	a	
performance	test	is	to	increase	the	load	over	time.	To	do	this,	we	split	the	performance	test	up	into	
multiple	phases	of	a	given	duration,	increasing	the	number	of	iterations	for	each	new	phase.	This	will	
give	a	simple	demonstration	of	how	the	system	handles	an	increasing	amount	of	load.	
    1. Return	to	the	Test	Factory,	and	create	a	copy	of	the	simplePerformance	test.	
    2. Rename	the	new	test,	and	call	it	threePhaseTest.	
    3. Open	the	threePhaseTest,	and	on	the	Execution	tab,	change	the	Number	of	test	phases	to	3.	
    4. Switch	to	the	Load	Generating	Test	on	the	left	hand	side,	and	go	to	the	Execution	tab.	
    5. Change	the	Initial	target	iterations	setting	to	5	per	second.	
    6. Set the Increment per phase to 5. We will now have 3 phases: 5 iterations per second, then 10
       per second, then 15 per second.
    7. Save	the	performance	test,	and	run	it	from	the	Test	Lab.	You	should	see	each	phase	execute	in	
       the	console;	notice	that	each	phase	is	running	more	and	more	tests	per	second.	
    8. Go	to	the	Results	Gallery	to	view	the	tests,	and	view	the	data	in	a	chart.		
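The phase rates form a simple arithmetic progression: the initial target, plus the increment for each subsequent phase. A sketch using the values from this exercise (the helper function is illustrative, not part of the tool):

```python
def phase_rates(initial, increment, phases):
    """Target iterations per second for each phase of a
    constant-growth load profile."""
    return [initial + increment * p for p in range(phases)]

# Initial target of 5/sec, increment of 5, over 3 phases.
print(phase_rates(5, 5, 3))  # [5, 10, 15]
```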
	

9.3 Externally	Defined	Load	Profiles	
Using	an	external	data	source	gives	us	much	more	control	over	the	amount	of	load	on	the	system.	The	
length	of	each	phase	can	be	varied	as	required.	In	addition,	a	constant	level	of	growth	is	no	longer	
required	–	the	exact	number	of	iterations	per	second/minute/hour	can	be	set	for	each	individual	
phase.	
    1. Minimize	Rational	Integration	Tester,	and	create	a	new	CSV	or	Excel	file	with	the	following	data:	
       	
       Period,Iterations 
       10,10 
       20,30 
       30,10	
	

We	will	use	the	first	column	of	the	data	to	specify	the	length	of	each	phase	that	we	will	run,	
       while	the	second	supplies	the	number	of	iterations	in	each	phase.	As	we	will	specify	our	units	as	
       seconds	in	the	performance	test	(minutes	and	hours	are	also	possible),	we	will	have	10	seconds	
       at	the	beginning	of	the	test	where	we	run	10	iterations	per	second,	20	seconds	where	we	ramp	
       it	up	to	30	iterations	per	second,	and	then	30	seconds	where	we	allow	the	system	to	go	back	to	
       the	original	10	iterations	per	second.	
    2. Return	to	Rational	Integration	Tester,	and	create	a	File	Data	Source	or	Excel	Data	Source	to	
       link	to	your	data.	Remember	to	use	the	Refresh	button	to	check	that	the	data	loads	correctly.	
    3. Save	the	new	data	source.	
    4. Create	a	copy	of	the	simplePerformance	Test,	and	call	it	externalPhaseTest.	
    5. Open	the	externalPhaseTest,	and	go	to	the	Execution	tab	of	the	Performance	Test	settings.	
    6. Change	the	Load	Profile	to	Externally	Defined.	
    7. Next	to	Data	set	for	load	phases,	press	the	Browse	button	to	find	and	select	the	data	source	
       containing	the	phase	data.	
    8. Leave	the	Execute	test	phases	field	blank,	and	make	sure	that	the	Phase	duration	read	from	
       column	setting	is	set	to	Period.	
    9. Switch	to	the	settings	for	the	Load	Generating	Test,	and	go	to	the	Execution	tab.	
    10. Make	sure	that	the	number	of	iterations	is	read	from	the	Iterations	column.	
    11. Save	the	externalPhaseTest,	and	run	it	in	the	Test	Lab.	
    12. Go	to	the	Results	Gallery,	and	view	the	test	results.	You	may	notice	that	the	statistics	for	the	
        minimum,	average,	and	maximum	pass	section	durations	spike	during	the	test,	indicating	that	
        the	load	was	slowing	down	the	performance	of	the	system.	
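Reading the phase table back makes it easy to sanity check an externally defined profile before running it. This sketch totals the duration and iterations implied by the CSV above, assuming seconds as the unit (the script itself is illustrative, not something RIT requires):

```python
import csv
import io

# The same phase data created in step 1.
phase_csv = """Period,Iterations
10,10
20,30
30,10
"""

total_seconds = 0
total_iterations = 0
for row in csv.DictReader(io.StringIO(phase_csv)):
    period = int(row["Period"])       # phase length, in seconds
    rate = int(row["Iterations"])     # iterations per second during this phase
    total_seconds += period
    total_iterations += period * rate

print(total_seconds, total_iterations)  # 60 1000
```

So the whole profile runs for 60 seconds and schedules 1,000 iterations: 100 in the first phase, 600 during the ramp-up, and 300 as the load falls back.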




                                                                                                                                                                     	
	

10 Advanced Topics
                                                                                                   	

                               In	this	chapter,	you	will:	
                                            Encounter	background	tests,	and	discuss	their	uses.	
                                            See	how	the	Log	Measurement	action	works,	and	how	it	
                                             can	be	used.	
                                            Use	the	Log	Measurement	action	in	a	background	test	to	
                                             provide	a	custom	probe.	

                                                                                                                                                                           	
	

10.1 Background	Tests	
So	far,	we	have	used	a	load	generating	test	to	provide	a	pre‐defined	load	upon	the	system.	Multiple	
load	generating	tests	could	be	used,	if	required.	However,	in	some	cases,	you	may	want	to	provide	a	
constant	stimulus	for	the	system	while	using	your	load	generating	test.	There	may	also	be	situations	
where	you	need	to	use	stubs	to	simulate	part	of	the	system	under	test.	Both	of	these	situations	can	be	
handled	by	adding	a	background	test	to	your	performance	test.	
A	background	test	is	a	functional	test	(or	stub)	that	will	be	run	repeatedly	for	the	duration	of	the	
performance	test	(or,	optionally,	until	the	background	test	fails).	This	means	that	it	will	be	run	
concurrently	with	any	load	generating	tests	included	in	the	performance	test.	Each	background	test	
can	have	a	single	iteration	running	at	a	time,	or	may	be	run	multiple	times	in	parallel	–	unlike	load	
generating	tests,	this	is	not	limited	by	the	Rational	Integration	Tester	license	that	is	in	use.	However,	
timing	and	status	information	will	not	be	recorded	for	a	background	test.	
Since background tests are run differently from load generating tests, several things should be kept in
mind. First, the Initialise and Tear Down phases of the test will be run as normal. Second, while the
Begin	Timed	Section	and	End	Timed	Section	actions	can	still	be	included	in	the	functional	test,	they	
will	not	have	any	effect	on	what	is	recorded	into	the	project	database	during	the	performance	test.	

10.2 Log	Measurement	
The	Log	Measurement	action	can	be	used	to	log	custom	data	into	your	database	while	running	a	
performance	test.	This	may	be	useful	in	several	situations.		
First, it can be used to record data from the system under test, acting as a custom probe. This
may be necessary when information is required that is not covered by the standard probes – for
example, when querying proprietary systems for information.
An alternative use exists for systems where a message passes through several processes before a
response is received by Rational Integration Tester. In these cases, it may be desirable to measure the
time taken for a single process to provide a response, rather than measuring the entire round-trip time
between the initial message sent from Rational Integration Tester and the eventual response.
	

In	the	diagram	below,	Rational	Integration	Tester	publishes	a	message	to	a	queue,	and	waits	for	a	
response.	Using	the	data	normally	gathered	by	a	performance	test,	we	would	be	told	how	long	it	took	
for	the	message	to	be	processed	by	operations	A,	B,	and	C.	However,	if	there	were	performance	issues	
as	we	increased	the	load	on	the	system,	we	would	not	know	if	these	could	be	narrowed	down	to	a	
single	service.		




                                                                                                                                                                                                   	
For	example,	we	might	suspect	that	service	B	is	where	most	of	the	delay	is	occurring.	In	order	to	
investigate	this,	we	can	add	timestamps	to	fields	in	the	message	as	it	passes	through	the	system.	
Subtracting	Time	2	from	Time	3	would	then	give	us	the	amount	of	time	that	was	spent	inside	service	B.	
Using	the	log	measurement	action,	this	information	could	be	recorded	in	the	project	database,	and	
analyzed	later	with	respect	to	the	load	on	the	system.	
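As a worked example of the arithmetic, suppose the timestamps stamped into the message are captured as shown below. The values and the ISO format are hypothetical; only the subtraction mirrors the approach described above.

```python
from datetime import datetime

# Hypothetical timestamps added to the message as it passes through the system.
time2 = datetime.fromisoformat("2012-10-23T10:15:02.120")  # message entered service B
time3 = datetime.fromisoformat("2012-10-23T10:15:02.870")  # response left service B

# Time spent inside service B alone, independent of the full round-trip time.
service_b_millis = (time3 - time2).total_seconds() * 1000
print(service_b_millis)  # 750.0
```

A value like this, once computed, is what the Log Measurement action would record in the project database for later analysis against the load on the system.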
When	using	the	Log	Measurement	action,	it	is	important	to	note	that	it	cannot	be	used	within	a	timed	
section.	This	is	because	writing	to	the	project	database	would	alter	the	time	taken	during	the	execution	
of	the	timed	section,	thereby	skewing	the	timing	information.	

10.3 Creating	the	measurement	test	
In	this	example,	we’ll	use	a	background	test	and	the	log	measurement	action	to	act	as	a	custom	probe	
for	the	system	under	test.	This	particular	example	will	be	gathering	data	about	the	bytes	sent	and	
received	by	the	system	–	note	that	this	could	also	be	gathered	by	the	Windows	Performance	Monitor	
probe.		
We’ll be using a background test for two reasons: first, it means we don’t need to change our load
generating test; and second, we don’t want to have our probe constantly running – we’ll gather our
data	every	two	seconds,	rather	than	constantly	polling	the	system	and	possibly	adding	extra	
unintended	load.	
    1. Create	a	new	test	for	the	Login	operation,	and	call	it	byteMonitor.		
    2. Add	a	Run	Command	action	to	the	test.		
	

3. On	the	Config	tab,	enter	the	following	command:	
       netstat ‐e | find "Bytes"	
    4. Make	sure	that	the	Wait	for	command	execution	to	finish	checkbox	is	ticked.	
    5. Press	the	Test	button.	You	should	see	a	single	line	of	data	for	stdout,	similar	to	the	following:	
       Bytes                      12008764        57368668	
    6. Switch	to	the	Store	tab,	so	we	can	store	the	data	into	tags.	
    7. We’ll	need	to	store	the	two	numbers	into	separate	tags,	which	we’ll	be	calling	bytesSent	and	
       bytesReceived.	To	do	this,	right	click	on	the	stdout	field,	and	choose	Contents	>	Edit.	
    8. Make	sure	you’re	looking	at	the	Store	tab	within	the	window	that	appears,	and	then	press	the	
       New	button.	
    9. Details	for	the	data	to	store	will	appear	below.	The	default	action	type	should	be	set	to	Copy	–	
       change	it	to	Regular	Expression.	
    10. Change	the	Tag	to	bytesReceived.	You	can	also	change	the	description	field	to	match.	
    11. In the Expression section, type the regular expression \d+ to match a number. Below that,
        choose	to	Extract	Instance	1,	so	that	we’ll	be	extracting	the	first	number	found	in	the	string.	In	
        the	example	string	given	above,	this	would	store	12008764	into	the bytesReceived tag.	You	
        can	test	this	out	with	an	example	string	to	check	that	it	is	working	correctly.	
    12. We	still	need	to	store	the	number	of	bytes	sent.	Press	New	again	to	generate	a	second	store	
        action	for	the	stdout	field,	and	follow	steps	9‐11	again,	but	this	time	set	the	Tag	name	to	
        bytesSent,	and	Extract	Instance	2.	Similar	to	the	first	action,	if	you	were	to	test	this	out	using	
        the	example	above,	you	should	get	a	Result	of	57368668.	
    13. Once	you’re	done,	the	two	tags	should	be	configured	as	seen	below:	




	

14. Press	OK	to	close	the	Field	Editor,	and	then	OK	again	to	close	the	test	action.	
    15. Before	we	add	the	Log	Measurement	action,	we’ll	check	that	this	is	working	as	we	expect.	Add	
        a	normal	Log	action,	and	log	the	values	captured	in	the	bytesReceived	and	bytesSent	tags	to	
        the	console.	
    16. Run	the	test,	and	check	that	it	works	at	the	moment.	If	it	doesn’t,	check	the	preceding	steps	to	
        make	sure	that	everything	has	been	entered	correctly.	
    17. Return	to	the	Test	Factory,	and	delete	or	disable	the	Log	action.	
    18. Add	a	new	Log	Measurement	action	after	the	Run	Command.	
    19. Set	up	the	Log	Measurement	action	as	shown	in	the	image	below.	This	will	graph	the	number	
        of	bytes	sent	and	received	by	looking	up	the	values	captured	earlier.	The	attributes	section	
        allows	us	to	graph	multiple	sets	of	data	–	in	this	case	we	only	have	one,	but	at	least	one	attribute	
        is	required	in	order	for	the	Log	Measurement	action	to	run.	
       	




	

20. Press	OK	to	close	the	action.	
    21. Add	a	Sleep	action	to	the	end	of	the	test.	As	we’ll	be	running	this	as	a	background	test,	and	
        background	tests	run	continuously,	we’ll	want	to	pace	this	test	so	it	doesn’t	interfere	with	
        system	results.	Set	the	Sleep	action	to	have	a	fixed	duration	of	2000ms.	
    22. Save	the	byteMonitor	Test.	
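Outside the tool, the behaviour of the byteMonitor test can be sketched as a short script. This is a conceptual equivalent only – in a real run the Test Lab drives the background test and the Log Measurement action writes to the project database. The command, the regular expression, and the 2000 ms pacing match the steps above; the function names are illustrative.

```python
import re
import subprocess

def parse_byte_counts(stdout_line):
    """Steps 9-12: extract Instance 1 (received) and Instance 2 (sent) of \\d+."""
    received, sent = re.findall(r"\d+", stdout_line)[:2]
    return int(received), int(sent)

def poll_byte_counts():
    """Steps 2-5: run the command and parse its single line of output (Windows only)."""
    out = subprocess.run('netstat -e | find "Bytes"', shell=True,
                         capture_output=True, text=True).stdout
    return parse_byte_counts(out)

# Step 5's example output parses as follows:
print(parse_byte_counts("Bytes       12008764        57368668"))
# (12008764, 57368668)

# Conceptual pacing loop – the Test Lab drives this for a real background test:
#     while performance_test_running:
#         received, sent = poll_byte_counts()  # Log Measurement stand-in
#         time.sleep(2.0)                      # the Sleep action's 2000 ms pacing
```

The 2000 ms sleep is what keeps the probe from polling continuously and adding unintended load, as discussed at the start of this section.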

10.4 Adding	the	Measurements	to	a	Performance	Test	
    1. Create	a	copy	of	the	externalPhaseTest,	and	call	it	customLogging.	
    2. Edit	the	customLogging	Test,	and	right	click	on	the	Performance	Test	label	on	the	left	hand	
       side	of	the	editor	–	you’ll	see	the	options	for	adding	load	generating	and	background	tests.	Add	
       a	background	test.	
    3. On	the	Execution	tab	for	the	Background	Test,	select	byteMonitor	for	the	Test	Path	field.	
    4. Make	sure	that	Terminate	on	failure	is	not	checked.	
    5. Switch	to	the	Engines	tab,	and	click	Add.	There	will	only	be	one	engine	available,	as	before	–	
       select	it.	
    6. Save	the	customLogging	test,	and	run	it	in	the	Test	Lab.	
    7. Once	it	has	run,	you	should	be	able	to	view	the	results	in	the	Results	Gallery.	The	counter	will	
       be	found	in	the	Log Values section.	
	

11 Legal Notices
       The	following	paragraph	does	not	apply	to	the	United	Kingdom	or	any	other	country	where	
        such	provisions	are	inconsistent	with	local	law:	INTERNATIONAL	BUSINESS	MACHINES	
        CORPORATION	PROVIDES	THIS	PUBLICATION	"AS	IS"	WITHOUT	WARRANTY	OF	ANY	KIND,	
        EITHER	EXPRESS	OR	IMPLIED,	INCLUDING,	BUT	NOT	LIMITED	TO,	THE	IMPLIED	
        WARRANTIES	OF	NON‐INFRINGEMENT,	MERCHANTABILITY	OR	FITNESS	FOR	A	PARTICULAR	
        PURPOSE.	Some	states	do	not	allow	disclaimer	of	express	or	implied	warranties	in	certain	
        transactions,	therefore,	this	statement	may	not	apply	to	you.	
       This	information	could	include	technical	inaccuracies	or	typographical	errors.	Changes	are	
        periodically	made	to	the	information	herein;	these	changes	will	be	incorporated	in	new	editions	
        of	the	publication.	IBM	may	make	improvements	and/or	changes	in	the	product(s)	and/or	the	
        program(s)	described	in	this	publication	at	any	time	without	notice.	
       If	you	are	viewing	this	information	in	softcopy,	the	photographs	and	color	illustrations	may	not	
        appear.	
       Any	references	in	this	information	to	non‐IBM	websites	are	provided	for	convenience	only	and	
        do	not	in	any	manner	serve	as	an	endorsement	of	those	websites.	The	materials	at	those	
        websites	are	not	part	of	the	materials	for	this	IBM	product	and	use	of	those	websites	is	at	your	
        own	risk.	
       Any	performance	data	contained	herein	was	determined	in	a	controlled	environment.	
        Therefore,	the	results	obtained	in	other	operating	environments	may	vary	significantly.	Some	
        measurements	may	have	been	made	on	development‐level	systems	and	there	is	no	guarantee	
        that	these	measurements	will	be	the	same	on	generally	available	systems.	Furthermore,	some	
        measurements	may	have	been	estimated	through	extrapolation.	Actual	results	may	vary.	Users	
        of	this	document	should	verify	the	applicable	data	for	their	specific	environment.	
       Information	concerning	non‐IBM	products	was	obtained	from	the	suppliers	of	those	products,	
        their	published	announcements	or	other	publicly	available	sources.	IBM	has	not	tested	those	
        products	and	cannot	confirm	the	accuracy	of	performance,	compatibility	or	any	other	claims	
        related	to	non‐IBM	products.	Questions	on	the	capabilities	of	non‐IBM	products	should	be	
        addressed	to	the	suppliers	of	those	products.	
       All	statements	regarding	IBM's	future	direction	or	intent	are	subject	to	change	or	withdrawal	
        without	notice,	and	represent	goals	and	objectives	only.	
       This	information	contains	examples	of	data	and	reports	used	in	daily	business	operations.	To	
        illustrate	them	as	completely	as	possible,	the	examples	include	the	names	of	individuals,	
        companies,	brands,	and	products.	All	of	these	names	are	fictitious	and	any	similarity	to	the	
        names	and	addresses	used	by	an	actual	business	enterprise	is	entirely	coincidental.	
       This	information	contains	sample	application	programs	in	source	language,	which	illustrate	
        programming	techniques	on	various	operating	platforms.	You	may	copy,	modify,	and	distribute	
        these	sample	programs	in	any	form	without	payment	to	IBM,	for	the	purposes	of	developing,	
        using,	marketing	or	distributing	application	programs	conforming	to	the	application	
        programming	interface	for	the	operating	platform	for	which	the	sample	programs	are	written.	
        These	examples	have	not	been	thoroughly	tested	under	all	conditions.	IBM,	therefore,	cannot	
        guarantee	or	imply	reliability,	serviceability,	or	function	of	these	programs.	The	sample	

	

programs	are	provided	"AS	IS",	without	warranty	of	any	kind.	IBM	shall	not	be	liable	for	any	
        damages	arising	out	of	your	use	of	the	sample	programs.		
	
Trademarks	and	service	marks	
       IBM,	the	IBM	logo,	and	ibm.com	are	trademarks	or	registered	trademarks	of	International	
        Business	Machines	Corp.,	registered	in	many	jurisdictions	worldwide.	Other	product	and	
        service	names	might	be	trademarks	of	IBM	or	other	companies.	A	current	list	of	IBM	
        trademarks	is	available	on	the	web	at	www.ibm.com/legal/copytrade.shtml.		
       Microsoft	and	Windows	are	trademarks	of	Microsoft	Corporation	in	the	United	States,	other	
        countries,	or	both.	
       Java	and	all	Java‐based	trademarks	and	logos	are	trademarks	or	registered	trademarks	of	
        Oracle and/or its affiliates.
       Other	company,	product,	or	service	names	may	be	trademarks	or	service	marks	of	others.	




	


More Related Content

Viewers also liked

Rit 8.5.0 performance testing training student's guide
Rit 8.5.0 performance testing training student's guideRit 8.5.0 performance testing training student's guide
Rit 8.5.0 performance testing training student's guide
Darrel Rader
 
2012 10 23_2649_rational_integration_tester_vi
2012 10 23_2649_rational_integration_tester_vi2012 10 23_2649_rational_integration_tester_vi
2012 10 23_2649_rational_integration_tester_vi
Darrel Rader
 
A tour of the rational lab services community
A tour of the rational lab services communityA tour of the rational lab services community
A tour of the rational lab services community
Darrel Rader
 
Doorsng po t_core_workbook_sse_imagev3.3.1_v6moda_final_letter
Doorsng po t_core_workbook_sse_imagev3.3.1_v6moda_final_letterDoorsng po t_core_workbook_sse_imagev3.3.1_v6moda_final_letter
Doorsng po t_core_workbook_sse_imagev3.3.1_v6moda_final_letter
Darrel Rader
 
Rit 8.5.0 virtualization training student's guide
Rit 8.5.0 virtualization training student's guideRit 8.5.0 virtualization training student's guide
Rit 8.5.0 virtualization training student's guide
Darrel Rader
 
Rit 8.5.0 virtualization training slides
Rit 8.5.0 virtualization training slidesRit 8.5.0 virtualization training slides
Rit 8.5.0 virtualization training slides
Darrel Rader
 
Rit 8.5.0 integration testing training student's guide
Rit 8.5.0 integration testing training student's guideRit 8.5.0 integration testing training student's guide
Rit 8.5.0 integration testing training student's guide
Darrel Rader
 

Viewers also liked (7)

Rit 8.5.0 performance testing training student's guide
Rit 8.5.0 performance testing training student's guideRit 8.5.0 performance testing training student's guide
Rit 8.5.0 performance testing training student's guide
 
2012 10 23_2649_rational_integration_tester_vi
2012 10 23_2649_rational_integration_tester_vi2012 10 23_2649_rational_integration_tester_vi
2012 10 23_2649_rational_integration_tester_vi
 
A tour of the rational lab services community
A tour of the rational lab services communityA tour of the rational lab services community
A tour of the rational lab services community
 
Doorsng po t_core_workbook_sse_imagev3.3.1_v6moda_final_letter
Doorsng po t_core_workbook_sse_imagev3.3.1_v6moda_final_letterDoorsng po t_core_workbook_sse_imagev3.3.1_v6moda_final_letter
Doorsng po t_core_workbook_sse_imagev3.3.1_v6moda_final_letter
 
Rit 8.5.0 virtualization training student's guide
Rit 8.5.0 virtualization training student's guideRit 8.5.0 virtualization training student's guide
Rit 8.5.0 virtualization training student's guide
 
Rit 8.5.0 virtualization training slides
Rit 8.5.0 virtualization training slidesRit 8.5.0 virtualization training slides
Rit 8.5.0 virtualization training slides
 
Rit 8.5.0 integration testing training student's guide
Rit 8.5.0 integration testing training student's guideRit 8.5.0 integration testing training student's guide
Rit 8.5.0 integration testing training student's guide
 

Similar to 2012 10 23_3013_rational_integration_tester_fo

Komatsu SA6D140E-3 Diesel Engine Service Repair Manual.pdf
Komatsu SA6D140E-3 Diesel Engine Service Repair Manual.pdfKomatsu SA6D140E-3 Diesel Engine Service Repair Manual.pdf
Komatsu SA6D140E-3 Diesel Engine Service Repair Manual.pdf
djkkskmmmdm
 
Komatsu SAA6D140E-3 Diesel Engine Service Repair Manual.pdf
Komatsu SAA6D140E-3 Diesel Engine Service Repair Manual.pdfKomatsu SAA6D140E-3 Diesel Engine Service Repair Manual.pdf
Komatsu SAA6D140E-3 Diesel Engine Service Repair Manual.pdf
djkkskmmmdm
 
Komatsu SDA6D140E-3 Diesel Engine Service Repair Manual.pdf
Komatsu SDA6D140E-3 Diesel Engine Service Repair Manual.pdfKomatsu SDA6D140E-3 Diesel Engine Service Repair Manual.pdf
Komatsu SDA6D140E-3 Diesel Engine Service Repair Manual.pdf
djkkskmmmdm
 
Hitachi EH3000 Rigid Frame Truck Service Repair Manual.pdf
Hitachi EH3000 Rigid Frame Truck Service Repair Manual.pdfHitachi EH3000 Rigid Frame Truck Service Repair Manual.pdf
Hitachi EH3000 Rigid Frame Truck Service Repair Manual.pdf
fusekdmdm
 
Hitachi EH3000 Rigid Frame Truck Service Repair Manual.pdf
Hitachi EH3000 Rigid Frame Truck Service Repair Manual.pdfHitachi EH3000 Rigid Frame Truck Service Repair Manual.pdf
Hitachi EH3000 Rigid Frame Truck Service Repair Manual.pdf
fujsekdmdmd
 

Similar to 2012 10 23_3013_rational_integration_tester_fo (20)

Trainer Guide
Trainer GuideTrainer Guide
Trainer Guide
 
Open Source Search Applications
Open Source Search ApplicationsOpen Source Search Applications
Open Source Search Applications
 
Komatsu SA6D140E-3 Diesel Engine Service Repair Manual.pdf
Komatsu SA6D140E-3 Diesel Engine Service Repair Manual.pdfKomatsu SA6D140E-3 Diesel Engine Service Repair Manual.pdf
Komatsu SA6D140E-3 Diesel Engine Service Repair Manual.pdf
 
Komatsu SAA6D140E-3 Diesel Engine Service Repair Manual.pdf
Komatsu SAA6D140E-3 Diesel Engine Service Repair Manual.pdfKomatsu SAA6D140E-3 Diesel Engine Service Repair Manual.pdf
Komatsu SAA6D140E-3 Diesel Engine Service Repair Manual.pdf
 
Komatsu SDA6D140E-3 Diesel Engine Service Repair Manual.pdf
Komatsu SDA6D140E-3 Diesel Engine Service Repair Manual.pdfKomatsu SDA6D140E-3 Diesel Engine Service Repair Manual.pdf
Komatsu SDA6D140E-3 Diesel Engine Service Repair Manual.pdf
 
Komatsu saa6 d140e 3 diesel engine service repair manual
Komatsu saa6 d140e 3 diesel engine service repair manualKomatsu saa6 d140e 3 diesel engine service repair manual
Komatsu saa6 d140e 3 diesel engine service repair manual
 
Komatsu sa6 d140e 3 diesel engine service repair manual
Komatsu sa6 d140e 3 diesel engine service repair manualKomatsu sa6 d140e 3 diesel engine service repair manual
Komatsu sa6 d140e 3 diesel engine service repair manual
 
Komatsu sda6 d140e 3 diesel engine service repair manual
Komatsu sda6 d140e 3 diesel engine service repair manualKomatsu sda6 d140e 3 diesel engine service repair manual
Komatsu sda6 d140e 3 diesel engine service repair manual
 
Komatsu sda6 d140e 3 diesel engine service repair manual
Komatsu sda6 d140e 3 diesel engine service repair manualKomatsu sda6 d140e 3 diesel engine service repair manual
Komatsu sda6 d140e 3 diesel engine service repair manual
 
Komatsu SA6D140E-3 Diesel Engine Service Repair Manual.pdf
Komatsu SA6D140E-3 Diesel Engine Service Repair Manual.pdfKomatsu SA6D140E-3 Diesel Engine Service Repair Manual.pdf
Komatsu SA6D140E-3 Diesel Engine Service Repair Manual.pdf
 
Komatsu SAA6D140E-3 Diesel Engine Service Repair Manual.pdf
Komatsu SAA6D140E-3 Diesel Engine Service Repair Manual.pdfKomatsu SAA6D140E-3 Diesel Engine Service Repair Manual.pdf
Komatsu SAA6D140E-3 Diesel Engine Service Repair Manual.pdf
 
Komatsu SDA6D140E-3 Diesel Engine Service Repair Manual.pdf
Komatsu SDA6D140E-3 Diesel Engine Service Repair Manual.pdfKomatsu SDA6D140E-3 Diesel Engine Service Repair Manual.pdf
Komatsu SDA6D140E-3 Diesel Engine Service Repair Manual.pdf
 
Komatsu sa6 d140e 3 diesel engine service repair manual
Komatsu sa6 d140e 3 diesel engine service repair manualKomatsu sa6 d140e 3 diesel engine service repair manual
Komatsu sa6 d140e 3 diesel engine service repair manual
 
Komatsu saa6 d140e 3 diesel engine service repair manual
Komatsu saa6 d140e 3 diesel engine service repair manualKomatsu saa6 d140e 3 diesel engine service repair manual
Komatsu saa6 d140e 3 diesel engine service repair manual
 
Komatsu sa6 d140e 3 diesel engine service repair manual
Komatsu sa6 d140e 3 diesel engine service repair manualKomatsu sa6 d140e 3 diesel engine service repair manual
Komatsu sa6 d140e 3 diesel engine service repair manual
 
Komatsu saa6 d140e 3 diesel engine service repair manual
Komatsu saa6 d140e 3 diesel engine service repair manualKomatsu saa6 d140e 3 diesel engine service repair manual
Komatsu saa6 d140e 3 diesel engine service repair manual
 
Komatsu sda6 d140e 3 diesel engine service repair manual
Komatsu sda6 d140e 3 diesel engine service repair manualKomatsu sda6 d140e 3 diesel engine service repair manual
Komatsu sda6 d140e 3 diesel engine service repair manual
 
Hitachi EH3000 Rigid Frame Truck Service Repair Manual.pdf
Hitachi EH3000 Rigid Frame Truck Service Repair Manual.pdfHitachi EH3000 Rigid Frame Truck Service Repair Manual.pdf
Hitachi EH3000 Rigid Frame Truck Service Repair Manual.pdf
 
Hitachi EH3000 Rigid Frame Truck Service Repair Manual.pdf
Hitachi EH3000 Rigid Frame Truck Service Repair Manual.pdfHitachi EH3000 Rigid Frame Truck Service Repair Manual.pdf
Hitachi EH3000 Rigid Frame Truck Service Repair Manual.pdf
 
Business And It Value
Business And It ValueBusiness And It Value
Business And It Value
 

More from Darrel Rader (7)

DevOps Community Blueprint
DevOps Community BlueprintDevOps Community Blueprint
DevOps Community Blueprint
 
Rit 8.5.0 platform training slides
Rit 8.5.0 platform training slidesRit 8.5.0 platform training slides
Rit 8.5.0 platform training slides
 
dW Sharing your Profile
dW Sharing your ProfiledW Sharing your Profile
dW Sharing your Profile
 
Steps for creating an engagement activity
Steps for creating an engagement activitySteps for creating an engagement activity
Steps for creating an engagement activity
 
Steps for creating a personal learning roadmap
Steps for creating a personal learning roadmapSteps for creating a personal learning roadmap
Steps for creating a personal learning roadmap
 
Joe’s upskilling story
Joe’s upskilling storyJoe’s upskilling story
Joe’s upskilling story
 
Making your Overview Page Look Lke a Whiteboard
Making your Overview Page Look Lke a WhiteboardMaking your Overview Page Look Lke a Whiteboard
Making your Overview Page Look Lke a Whiteboard
 

2012 10 23_3013_rational_integration_tester_fo

  • 1. Performance Testing with IBM Rational Integration Tester
  • 2. Note Before using this information and the product it supports, read the information in “Legal Notices” on page 35. © Copyright IBM Corporation 2001, 2012.
  • 3. INTRODUCTION ............................................................................................................................. 3  2  BACKGROUND .............................................................................................................................. 4  3  PERFORMANCE TEST INFRASTRUCTURE ......................................................................................... 5  3.1  INTRODUCTION ....................................................................................................................... 5  3.2  ENGINES................................................................................................................................ 6  3.3  PROBES ................................................................................................................................ 7  3.4  AGENTS ................................................................................................................................. 7  4  ARCHITECTURE SCHOOL ............................................................................................................... 9  4.1  INTRODUCTION ....................................................................................................................... 9  4.2  BASIC SYSTEM SETUP ............................................................................................................ 9  4.3  AGENT & ENGINE SETUP....................................................................................................... 10  4.4  PROBE SETUP ...................................................................................................................... 10  5  CREATING THE LOAD GENERATING TEST ...................................................................................... 12  5.1  RE-USING FUNCTIONAL TEST RESOURCES ............................................................................. 
12  5.2  BASIC SETUP ....................................................................................................................... 13  5.3  TIMED SECTIONS .................................................................................................................. 14  6  CREATING PERFORMANCE TESTS ................................................................................................ 16  6.1  INTRODUCTION ..................................................................................................................... 16  6.2  INITIAL SETUP ...................................................................................................................... 16  6.3  ADDING TESTS ..................................................................................................................... 17  6.4  ENGINE SETTINGS ................................................................................................................ 18  6.5  MANAGING PROBES .............................................................................................................. 18  7  RUNNING PERFORMANCE TESTS AND ANALYZING RESULTS........................................................... 20  7.1  RUNNING THE TEST .............................................................................................................. 20  7.2  VIEWING RESULTS ................................................................................................................ 21  7.3  MULTIPLE DATA SETS ........................................................................................................... 23  8  DATA DRIVING PERFORMANCE TESTS .......................................................................................... 25  8.1  DIFFERENCES FROM FUNCTIONAL TESTS ............................................................................... 25  8.2  DRIVING A LOAD GENERATING TEST WITH EXTERNAL DATA ..................................................... 
25  9  LOAD PROFILES ......................................................................................................................... 27  9.1  PERFORMANCE TESTING SCENARIOS ..................................................................................... 27  9.2  CONSTANT GROWTH ............................................................................................................ 28  9.3  EXTERNALLY DEFINED LOAD PROFILES .................................................................................. 28  10  ADVANCED TOPICS .................................................................................................................. 30  Page 1 of 36 © IBM Corporation 2001, 2012
  • 4. 10.1  BACKGROUND TESTS ........................................................................................................ 30  10.2  LOG MEASUREMENT .......................................................................................................... 30  10.3  CREATING THE MEASUREMENT TEST ................................................................................... 31  10.4  ADDING THE MEASUREMENTS TO A PERFORMANCE TEST ..................................................... 34  11  LEGAL NOTICES ...................................................................................................................... 35  Page 2 of 36 © IBM Corporation 2001, 2012
  • 5. 1 Introduction This document serves as a training manual to help familiarize the user with the performance testing capabilities available in IBM® Rational® Integration Tester. It is expected that the reader has already been through the basic Rational Integration Tester training, and understands the workflow of Rational Integration Tester. In this course we will:  Create performance tests  Set up agents, probes, and engines to execute and monitor performance tests  Analyze results of performance tests  Manage the amount of load driven by a performance test over time  Data drive performance tests Page 3 of 36 © IBM Corporation 2001, 2012
  • 6. 2 Background When testing a service oriented architecture (SOA), there will be times when simply verifying functional requirements will not be enough. Many systems will need to come with service level agreements (SLAs) that will state a minimum level of performance that must be satisfied. This level of performance may have a number of components. In particular, system uptimes and message response times will be important. However, it will not be enough to test the system to check that it can respond to a single message within a given amount of time – the system will need to hold up under a certain amount of load as well. This load may take the form of a large number of messages, extreme message rates, or large amounts of data. In addition, accurately modeling the load on the system may require us to generate message requests from a number of different sources. For experienced performance testers, this will all be fairly familiar. However, SOA environments bring challenges on top of the traditional client‐server model. For example, services are often shared among several applications and failure can occur anywhere along the transaction path. Considering both the number of services in place and the many points at which they intersect—any one of which may not be performant—how can we ensure that performance levels satisfy the nonfunctional requirements? In addition, there is a fundamental difference between SOA performance testing and a traditional, client‐server approach. Performance testers who are familiar with the traditional approach tend to talk in terms of the number of users or “virtual users” that are required to generate the load. They also tend to be concerned with end‐to‐end response times – the response time experienced by an end user. This end‐to‐end performance testing is typically executed against a functionally proven, complete system. 
SOA performance testers are still interested in response times, but are more interested in the volume of messages sent between components – there is no requirement to wait until the system has completed assembly or for a front-end GUI to be created. Hence, the SOA performance tester can begin testing much earlier.

When running performance tests, you will normally be faced with the following questions:

1. Does the system's performance satisfy the requirements or SLAs?
2. At which point will the performance degrade?
3. Can the system handle sudden increases in traffic without compromising response time, reliability, and accuracy?
4. Where are the system bottlenecks?
5. What is the system's break point?
6. Will the system recover (and when)?
7. Does the system performance degrade if run for an extended period at relatively low levels of load?
8. Are there any capacity issues that come from processing large amounts of data?
3 Performance Test Infrastructure

In this chapter, you will:

- Look at the distributed nature of a performance test infrastructure
- See how engines are used to execute actions within a performance test
- Examine how data needs to be recorded from the system under test, and how this can be done with probes
- Learn how performance test licensing is handled

3.1 Introduction

Before creating performance tests, we will need to review how we create the infrastructure of Rational Integration Tester. As in regular functional tests, we have the Rational Integration Tester GUI and the project database. However, these work slightly differently in the context of a performance test.

While a regular functional test is normally run from the same machine as the GUI, a performance test may be run from another machine, or may be distributed across a number of other machines. This means that the Rational Integration Tester software, as presented by the GUI, also provides a test controller to manage any remote systems involved in the performance test.

In addition, the project database, which is optional for a functional test, becomes mandatory for performance tests. This is due to the higher volume of data that is recorded during a performance test – it cannot be easily presented in a simple console window, but will need to be summarized, and possibly manipulated.

Besides the GUI, test controller, and project database, there are also three new items in our infrastructure: engines, probes, and agents. They fit together in a framework to run tests and monitor performance across a number of different systems.
3.2 Engines

An engine is the process that actually runs a test in Rational Integration Tester. When carrying out functional testing, an engine exists beneath the surface on the user's machine, running tests on behalf of the controlling instance of Rational Integration Tester (i.e., the instance that is running the main performance test).

When performance testing, the engine is separated from the controlling instance of Rational Integration Tester. The engine can be on the same machine as Rational Integration Tester, or it can be on another machine. In fact, there may be more than one engine, spread across multiple machines. If there is more than one engine, test iterations are spread across the available engines. For example, in a performance test executing 40 tests per second with 2 engines, each engine would run 20 tests per second. The distribution of the tests is handled by the controlling instance of Rational Integration Tester.

Using multiple engines lets us solve a number of problems. Most simply, if one machine is not capable of generating a high enough load for a performance test, the load can be split across multiple machines. Secondly, multiple engines give us the capability to distribute the load across multiple endpoints. For example, if we need to simulate requests arriving from different parts of the world, or from different networks, we can set up engines in locations that satisfy the demands of the performance test.
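The even split described above can be sketched in a few lines. This is illustrative only – the function and engine names are invented for the sketch, not part of the Rational Integration Tester API.

```python
# Hypothetical sketch: how a controller might split a target iteration
# rate evenly across the available engines.
def split_load(total_tps: float, engines: list[str]) -> dict[str, float]:
    """Return the per-engine iteration rate for a target total rate."""
    per_engine = total_tps / len(engines)
    return {engine: per_engine for engine in engines}

# 40 tests per second across 2 engines -> 20 each, as in the example above.
rates = split_load(40, ["engine-a", "engine-b"])
print(rates)  # {'engine-a': 20.0, 'engine-b': 20.0}
```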
3.3 Probes

With such complex and heterogeneous platforms, it can be difficult to understand what to measure apart from transaction response times; it will be impossible to measure everything. Probes are the tools used by Rational Integration Tester to gather statistics from the system under test. There are a variety of probes available to the user:

- System Statistics Probe
- Windows Performance Monitor Probe
- TIBCO BusinessWorks Probe
- TIBCO Rendezvous Probes
- TIBCO EMS Probe
- Sonic MQ Probe
- webMethods Broker Probe
- webMethods Integration Server Probe
- JMX Probe

The probes are deployed on the systems that you want to measure, and multiple probes can coexist on the one system. Recording statistics with Rational Integration Tester's probes gives us access to much more information than just the transaction response times. This can help us determine the cause of poor performance – if response times become too long after passing a certain number of requests per second, we can use probes to see whether it is due to load on the CPU, excess memory usage, message queues growing larger, or another cause.

Whichever probes you choose, they will record statistics during the performance test and send those statistics to the controller for writing to the project database. These writes are set up as a low-priority task, so that they have as small an impact as possible on system performance.

3.4 Agents

Engines drive the tests, and probes monitor them. However, both need a host controlling them and talking to the instance of Rational Integration Tester controlling the performance test. This role is played by the Rational Integration Tester Agent. The agent runs on each machine that has an engine or a probe, and handles the communications with Rational Integration Tester. The agent can host an engine, a probe, or both at the same time. In fact, it can also handle multiple probes or engines within the one agent.
The agent is installed with the Rational Test Performance Server (RTPS), or the Rational Test Virtualization Server (RTVS). It can be run by hand, or set up as a service on the system it's running on. Due to this, each machine that requires an agent requires an installation of RTPS or RTVS. Following the installation of RTPS or RTVS, the agent will need to be configured in the Library Manager. This configuration follows the same procedure as that of Rational Integration Tester itself, and so will not be covered in this training course.
Note: If you are running through this training material on a cloud instance or virtual machine, all parts of the system will be on a single machine. This is purely for ease of configuration, and does not reflect a real-world scenario.
4 Architecture School

In this chapter, you will:

- Configure Rational Integration Tester to connect to the system under test
- Set up an engine to run the performance test, with an agent to host it
- Configure a probe to measure the performance of the system during testing

4.1 Introduction

Creating a model of the system under test is very similar for performance tests and functional tests. However, in addition to modeling the system under test, the Architecture School perspective will also be used to provide configuration data for the agents, engines, and probes in the system. Adding this information to your Rational Integration Tester project should be done after the normal process of modeling the system under test; configuration for the performance testing components is then carried out in the Physical View of Architecture School. Note that as it is configured on a physical basis, you may need to configure new components when setting up new environments.

4.2 Basic System Setup

In this example, we will be testing a web service – a simple Login service that takes a username and password, and returns a login token. We will first start the service on our local machine, then synchronize with the WSDL provided by the service.

1. On the Desktop of the cloud instance, there is a folder called WebServices. Open this folder, and execute RunLoginService.bat.
2. This will pop up a window – keep it open, but minimize it.
3. Open up Rational Integration Tester, and start a new project. Note that you will need to use a project database – one is already specified on the cloud instance by default, so you can keep using this, but use the Test Connection option to make sure that it is working correctly. If you are not using a cloud instance, please ask your instructor for the project database settings.
4. Once Rational Integration Tester is open, switch to the Synchronization view within Architecture School.
5. Press the button in the toolbar at the top of the Synchronization view, and select WSDL.
6. The Create a new External Resource dialog will appear. Press the New… button.
7. The New WSDL dialog will appear. Press the Select… button to select a new location.
8. Once the Select Location dialog appears, switch to the URL tab. To get the URL of the WSDL, copy it from the window that popped up when you ran the login service.
9. Press OK to close the Select Location and New WSDL dialogs.
10. Click Next, and run through the rest of the synchronization process as normal.

4.3 Agent & Engine Setup

1. In some cases, the agent might be executed manually; however, in this example, the agent is running as a Windows service on the localhost. This means that the agent can be found at localhost:4476. However, in order to enter its details properly, we need to know the name of the local host. Execute the command hostname at a command prompt.
2. Switch back to Rational Integration Tester, and go to the Physical View of Architecture School.
3. Press the button at the left of the Physical View toolbar, and select the Agent option.
4. In the Host field, enter the hostname provided in step 1. For the Port number, make sure that the default setting is 4476.
5. An engine called default is automatically attached to the agent – leave this as-is, and press OK to close the dialog and complete the agent configuration.

4.4 Probe Setup

We're now going to set up the probe that we want to run on the same machine as the agent. Remember that each probe will need to be running on an agent, or tests using that probe will fail. Also, probes can be set up on individual hosts, or on services running on those hosts – for example, the System Statistics probe will run on a particular host, but most of the technology-specific probes will need to be attached to a particular process on that host. If you need to use those particular probes in the future, they can be configured by editing the properties of that physical component, similar to the way we will edit the probe on the host machine in this exercise.

1. In the Physical View, each physical component will be visible in a tree underneath a Subnet and a Host.
Double click on the host (which should have the hostname we used in the previous exercise) to bring up its properties. The screenshot below shows where to find the host, though your hostname and IP will be different.
2. Once the properties dialog for your host has appeared, switch to the Probes tab.
3. For our tests, we're going to use the Windows Performance Monitor probe. Select it so that it can be configured.
4. The first, and most important, setting to note is the Hosting Agent at the very bottom of the dialog – it tells us which agent is running this probe. Currently, we only have one agent to deal with, but make sure that the agent for this probe is set to the agent created in the previous exercise. If the agent is not set here, then any performance tests that attempt to use this probe will fail.
5. As for the other settings, the probe should be set to Collect Statistics Every 1 second. Following that, we need to specify what statistics we need to collect from the Windows Performance Monitor.
6. Press the button to add a new counter. The Add Counters dialog will appear.
7. For our first counters, we'll examine memory statistics. To do this, select Memory in the Performance Object field. Under Counter, select Available Mbytes, then press the Add button.
8. Repeat this for the Page Faults/sec counter, and any other data you are interested in.
9. Now select Processor in the Performance Object field, and add the % Processor Time counter, along with any other counters that are of interest.
10. Press Done to return to the configuration of the probe.
11. Select all of the Counters, then press the button to validate that each one is working. The dialog should now appear like so:
12. Press OK to close the dialog. Our system is now set up to gather statistics during performance tests.
5 Creating the Load Generating Test

In this chapter, you will:

- Create a functional test that can be used as a load generating test
- Encounter the test actions created for use within performance tests
- Create a timed section within a test to capture timing and status information

5.1 Re-using Functional Test Resources

One of the advantages of using Rational Integration Tester for SOA testing is that functional tests can easily be refactored to run within a performance scenario. This is important because, when evaluating the performance of the system, it is not sufficient to just send a request and measure the time it takes for a response to arrive. For example, if a web service operation rejects input and sends back a SOAP Fault message, the time it takes may be significantly different from the time it takes to properly process a request and return a valid response. If a test does not truly validate the outcome of an operation, it will provide an inaccurate view of the true system performance.

In this case, we will create a simple functional test that we will use as the basis for our performance tests, to illustrate how this works. This functional test will be used as a load generating test within our main performance test. When editing the load generating test, there are several new actions that may be used. These actions are ignored when running the test as a functional test – they are only executed when the test is run as part of a performance test.

Performance Actions

- Begin Timed Section: Mark the beginning of a timed section for a performance test.
- End Timed Section: Mark the end of a timed section for a performance test.
- Log Measurement: Log data to the project database during a performance test.
Note: The Initialise, Test Steps, and Tear Down sections become more important in performance tests. When we are running multiple iterations of a load generating test, the Initialise part of the test is run only once at the beginning, and the Tear Down section once at the end. Only the Test Steps are run for each iteration of any load generating test used in the performance test. This means we can, for example, set up and clean up a database within the Initialise and Tear Down sections without impacting the data that we are actually interested in.

5.2 Basic Setup

1. Before we can create a performance test, we need to provide a load generating test that will contain the actions carried out during each iteration. To do this, we'll create a normal test, and add a timed section to it. Go to Test Factory, and right click on the Login operation to bring up the context menu. Select New > Tests > Test Using MEP.
2. The Create dialog will appear – press the Options button to bring up a Settings dialog.
3. On the Message Settings tab, make sure the option Include Optional Fields is selected, and then press OK to return to the Create dialog.
4. Call the test loginBase, and press OK. A test will be created.
5. Open up the Send Request message, and fill in a username, password, and application in the relevant fields. The contents don't matter for this example – our login service will accept any input for these fields.
6. We now need to set up the validation for this message. For our purposes, it will be enough to check that a login token is returned, and that it contains hexadecimal digits broken up with hyphens. Open up the Receive Reply action, and find the Token field. Double click on the (Text) section below that to bring up the Field Editor.
7. In the top half of the Field Editor, make sure that the Equality validation is selected, as we will be replacing this validation with a regular expression.
8. Change the Action Type option just below from Equality to Regex.
9. Enter the regular expression ^[a-f0-9-]*$
10. To test that it is working, enter 44ef-2ab7-573d into the Document field, and press Test. The Result field should update to say true (if it doesn't, make sure that you haven't accidentally included a space character at the end of the string).
11. Now add -y8rr to the end of the Document field, giving 44ef-2ab7-573d-y8rr, and press Test again. This should fail.
12. Press OK to close the Field Editor, and OK again to return to the test.
13. Save the test, and then run it in Test Lab to make sure that it passes. If there are any problems, fix them before moving on.

5.3 Timed Sections

Timed sections, marked by the Begin Timed Section and End Timed Section actions, allow us to time the execution of different parts of the test being executed. A single functional test can contain multiple timed sections, which may overlap, or contain other timed sections. For each timed section, Rational Integration Tester will log data into the project database while a performance test is running. This will include not only the time taken for the timed section to execute, but also the status of the section – whether it passed, failed, or timed out. If no timed sections are added to the test, Rational Integration Tester will still record the length of time taken to execute each iteration of the entire test, and the status at the end of that iteration.

1. Start by returning to the Test Factory.
2. Add two new actions to the test – a Begin Timed Section, and an End Timed Section. The Begin Timed Section should go before the two messaging actions, while the End Timed Section should go afterwards.
3. Open the Begin Timed Section action.
4. The timed section will need a name – call it S1.
5. Below the name of the timed section, there is an option to determine how this timed section will be recorded (Pass/Fail/Timeout). We can take the status of the test at the end of the section, or we can take the status of the test at the end of that iteration of the test. In this particular case, since the timed section covers the entire test, it will not make any difference which of the two options we choose.
6. Close the Begin Timed Section action, and open the End Timed Section action. This has only one setting – the name of the timed section. Match it to S1, the section we started with the Begin Timed Section action, and close the dialog.
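The token check used in the exercise above can be sketched with Python's re module. Rational Integration Tester performs this validation itself through its Regex action type; the snippet below only demonstrates what the pattern accepts and rejects.

```python
import re

# Hex digits (a-f, 0-9) and hyphens only, from start to end of the value.
pattern = re.compile(r"^[a-f0-9-]*$")

print(bool(pattern.match("44ef-2ab7-573d")))       # True: hex digits and hyphens
print(bool(pattern.match("44ef-2ab7-573d-y8rr")))  # False: 'y' and 'r' are not in [a-f0-9-]
```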
6 Creating Performance Tests

In this chapter, you will:

- Create a simple performance test
- Add a load generating test to a performance test
- Configure a performance test to use selected engines and probes

6.1 Introduction

Once we have a functional test that will create a load on the system, we can start putting together a performance test. Performance tests are created in the Test Factory, and are contained within the same tree structure as other test resources. Our first performance test will be fairly basic, running a single load generating test at 1 iteration per second. Later performance tests will look at changing the load on the system, and varying it over time.

6.2 Initial Setup

1. In the Test Factory tree, right click on the Login operation, and select New > Tests > Performance Test. Call the test simplePerformance.
2. The initial screen of the Performance Test Editor will appear.
3. Click on the text on the left hand side.
4. The right hand side will alter to show settings for the performance test. Make sure that you are on the Execution tab.
5. Most settings can be left at their defaults, but change the length of the test phase to 30 seconds, as in the screenshot below.

6.3 Adding Tests

1. A performance test on its own does nothing – it requires load generating or background tests in order to test the performance of your system. We will now add the loginBase test as our load generating test. Right click on the text on the left hand side – two options will appear.
2. Click on Add Load Generating Test.
3. The load generating test will appear, and should be selected on the left hand side of the editor. Configuration information for the load generating test will appear on the right hand side. The first thing we need to do is to choose which test will be used for load generation. To do this, make sure you are on the Execution tab, and find the Test Path field. Press the Browse button next to that field, and select the loginBase test from the dialog that appears.
4. We'll leave the other execution options at their default settings for the moment, as shown below.
5. The load generating test is nearly ready to go – but first, we will need to say which engine (or engines) will be executing this test.

6.4 Engine Settings

Our test can be run on one or more engines. For the purposes of this manual, we will only be using a single engine running on a single agent, but in more complex tests, multiple engines can be set up in different locations, splitting the load between different machines. Regardless of how many engines are being used, remember that the engines are all managed by a single controller – the instance of Rational Integration Tester that is running the performance test.

1. Switch from the Execution tab to the Engines tab.
2. Press the Add… button at the bottom of the screen.
3. A Select dialog should appear. In this case, there will be only one engine available – the default engine attached to the agent we created in Architecture School. Select the default engine, and press OK.
4. If multiple engines were available, we could select more of them by pressing the Add… button again, and selecting other engines. For this example, our test is now ready to be run – we just need to state how we will be monitoring it.

6.5 Managing Probes

Each performance test can choose which probes it wants to use to gather data. Different tests may be measuring different data. For example, one test may be gathering system data, while another may gather statistics from the middleware layer. Regardless of which probes are being requested here, they must still be set up in Architecture School. For this example, we will use the Windows Performance Monitor probe, as set up in the previous exercises.

1. On the left hand side of the Performance Test Editor, click on the Performance Test to switch back to its settings.
2. Click on the Probes tab.
3. We can now select from the available probes. As we have only set up the Windows Performance Monitor, check the box for that probe, and leave the others blank.
4. Save the simplePerformance test.
7 Running Performance Tests and Analyzing Results

In this chapter, you will:

- Execute a performance test and view the statistics shown at runtime
- View the results of a performance test in the Results Gallery
- Compare results of multiple executions of a performance test

7.1 Running the Test

The procedure for running a performance test is much the same as for a functional test – simply use the Run button in the Test Lab, or double click on the test in the tree. While the performance test is running, a summary of the data being gathered will be displayed in the console. For full reporting, we'll need to go to the Results Gallery, as we'll see in the following exercise.

1. Switch to Test Lab.
2. Run the simplePerformance test.
3. Watch the console results – you will notice that the probes are started 15 seconds before the load generating tests are run, as per the settings in the performance test.
4. Once the load generating tests are being run, you will see counters for the numbers of tests started, passed, failed, and the number of pending database writes. These are defined in the table below:
Started: Iterations started in the report interval (the 'Collect statistics every' setting on the performance test's Statistics tab). The default interval is 5 seconds, so if the test is set for 10 TPS this would show a total of 50 each time.

Passed: Iterations passed so far during the performance test.

Timed Out: Iterations where message receivers did not get a response within their configured timeout.

Failed: Iterations failed so far during the performance test.

Pending DB Writes: Database writes queued on the results database. Large numbers indicate that database access is slower than required, and may be a result of a slow network connection. Note that the writes are buffered and do not slow down the test rate.

Note: A performance test may run on longer than the specified time while remaining test instances complete and database writes are flushed. In this case you will see the 'Started' figure as zero for those intervals, since the given number of iterations has already been started.

7.2 Viewing Results

1. The Test Lab doesn't show much in the way of results besides statistics for how many timed sections passed/failed, etc. To get this information, we need to go to the Results Gallery. Switch to that perspective now.
2. In the Results Gallery, you will see a single line describing basic information about your test – start/end times, number of iterations, etc. Select this, and then click the Analyse Results button at the bottom of the screen.
3. An empty chart will now appear. We're going to populate this with some of the data we have collected with our probes. This is available to the left of the chart, sorted into different categories. Expand this out to navigate to Performance Test Sections (Based on Start Times) > Average Pass Section Duration > simplePerformance / S1.
4. A checkbox should now be available – tick it.
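The 'Started' figure is simple arithmetic on the reporting interval. A minimal sketch (not product code) of how that counter relates to the configured rate:

```python
# Illustrative arithmetic only: a steady test rate maps to
# rate * interval iterations begun per reporting interval.
def started_per_interval(tps, interval_s):
    """Iterations started per reporting interval at a steady rate."""
    return tps * interval_s

# 10 TPS with the default 5-second interval, as in the description above.
print(started_per_interval(10, 5))  # 50
```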
5. A chart should now appear on the left hand side. Experiment with adding other information gathered by your probes to the chart. In particular, look at the information recorded by the Windows Performance Monitor probe. Note that charts can be removed simply by unchecking them again.
6. As you experiment, you may find a situation where the axes of charts do not match. For example, if you look at the Windows Performance Monitor probe, and select both pieces of data for the memory, you might see something like this: the number of MB available is changing, but the scale of the chart for the number of page faults is preventing us from seeing that information.
7. In order to fix this, we can edit the properties of one of the charts so that it is displayed on a separate axis. Go to the left of the chart display, where you have selected your data, and double click on one of the coloured lines that has appeared next to the checkbox.
8. A Choose Style dialog will appear. On the Style tab, you can change how the data is displayed (colour of the line, type of chart, etc.). Change any settings here that are of interest.
9. Switch to the Data tab, and set the Axis to 2. Close the dialog, and the charts should update.
10. At this stage, we can edit the chart and give it a name and some notes, in the text fields below the chart. We can also save the chart for later reference, or use the button to export data to a CSV file.

7.3 Multiple Data Sets

So far, we've just looked at results for a single performance test. It is also possible to compare results of multiple performance tests, run at different times. This allows us to see how changes made to our load generating test, or changes made to the system, have affected the performance of the system.

1. Close the chart for the moment, and return to the Test Lab.
2. Run the simplePerformance test again.
3. Once it is complete, go back to the Results Gallery, and choose Analyse Results for the most recent test run.
4. A chart will appear, as before. However, this time another option is open to us: we can compare this test run to any previous test run. Switch to the Data Sets tab, and press the Add button.
5. Select your previous test run here.
6. Return to the Counters tab. For each counter, two charts will now be available, and you can use this to compare the current results to the previous results.
8 Data Driving Performance Tests

In this chapter, you will:

- Learn the limitations of using the iterate actions within a load generating test
- Use the Input Mappings settings in the performance test to data drive a load generating test

8.1 Differences from Functional Tests

When creating functional tests in Rational Integration Tester, we can simply use the Iterate Test Data action to run through a data set, and test the system with that particular data. However, imagine that we had a test similar to the one we have created already, which sends a single message and receives a single message. If we were to use the Iterate Test Data action (or any other Iterate test action) with that test to send 10 messages, and our performance test had also specified 10 iterations per second, we could end up sending anywhere between 10 and 100 messages per second, with no precise control over the load on the system.

In addition, test duration and status data will be limited in their usefulness – rather than measuring the messaging times, we would be measuring the time to execute 10 messages; similarly, we would be recording the pass/fail status of a group of 10 messages, rather than a single message.

For these reasons, it is generally advised not to use the Iterate Test Data action within a performance test. Instead, we can map a data source to a test using the Input Mappings tab for each Load Generating or Background Test.

8.2 Driving a Load Generating Test with External Data

1. Create a copy of the loginBase test, and call it loginDataDriven.
2. Open the Send Request action of the loginDataDriven test, and go to the Config tab.
3. Quick Tag the UserName and Password fields.
4. Save the loginDataDriven test.
5. Now make a copy of the simplePerformance test, and call it dataDrivenPerformance.
6. Open the dataDrivenPerformance test, and go to the settings for the Load Generating Test.
7. Find the Test Path setting – it should currently be set to the loginBase test. Use the Browse button to change this to the loginDataDriven test.
8. We now need a data source, so leave Rational Integration Tester for a moment to create one. Create a brand new CSV file or Excel spreadsheet with the following data (or some of your own invention – just remember to add a line with headings):

   User,Pass
   Jim,gr33nhat
   Steve,ght3st3r
   Monica,perf0rmance
   Karen,eng1n3s
   Ben,pr0b3s

9. Save your CSV/Excel file, and return to Rational Integration Tester. Create a File Data Source or Excel Data Source to connect to your data. Remember to use the Refresh button to check that the data has loaded properly, and save the data source.
10. Go back to the dataDrivenPerformance test, and go to the Input Mappings tab. Underneath this, you will see three new tabs appear – Config, Filter, and Store.
11. In the Config tab, use the Browse button to choose your test data set.
12. If we wanted to filter the incoming data, we could do that on the Filter tab. In this case, we'll use the entire data set supplied, so go to the Store tab.
13. Map the tags in the loginDataDriven test to the columns in your data source here, and save the dataDrivenPerformance test.
14. Run the test and analyze the results.
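The column-to-tag mapping set up in the Store tab can be pictured with a short sketch. This is not Rational Integration Tester code – it only illustrates how each CSV row supplies the tag values for one test iteration.

```python
import csv
import io

# The same sample data as in step 8 above (embedded here for a self-contained example).
data = """User,Pass
Jim,gr33nhat
Steve,ght3st3r
Monica,perf0rmance
"""

rows = list(csv.DictReader(io.StringIO(data)))

# Each row drives one iteration: the User column feeds the UserName tag,
# and the Pass column feeds the Password tag.
first_iteration = {"UserName": rows[0]["User"], "Password": rows[0]["Pass"]}
print(first_iteration)  # {'UserName': 'Jim', 'Password': 'gr33nhat'}
```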
9 Load Profiles

In this chapter, you will:
- Encounter standard performance testing scenarios, including load, stress, and soak tests.
- Use the Constant Growth settings in a performance test to provide an increasing load on the system.
- Data drive the load on the system using the Externally Defined settings in a performance test.

9.1 Performance Testing Scenarios

So far, our tests have modeled a very simple scenario, running our load generating test at 1 iteration per second. However, there are a number of scenarios where we might want more complex performance tests. We’ll look at a few example scenarios, and at how Rational Integration Tester can handle them.

Load Testing

A load test attempts to represent a period in the working day, and tests how the system responds to a similar load. For example, it may be anticipated that the greatest risk of poor performance is at the start of the working day. The scenario would then model the ramp up from the minimum number of users to the peak login period during the first few hours of business.

Stress Testing

A stress test scenario is used to identify the break point of the system. The break point is identified as a specified load (and ramp‐up) and is used to identify performance weaknesses in the distributed system. This may be modeled with a simple linear increase in the load on the system. Stress tests can also be useful in proving the recoverability of a system – how does the system break, and how gracefully can it recover under extreme loads? In this case, a bell curve could be used to observe how the system behaves as it approaches an identified break point, and then to allow the system some room to recover.

Soak Testing

A soak test scenario is run against a constant low‐level load that may run for hours or days. This scenario could model an iteration run once per minute, or less frequently.
Running the test for an extended period of time will identify any issues that may manifest themselves over a longer period, such as memory leaks.
High Intensity Scenarios

High intensity scenarios, whether load tests or stress tests, will require a lot of data. Some basic math is required to understand the overall data requirements, both for the data that drives the tests and for the data that must be present in the system for reference and execution. If you are expected to run at 100 transactions per second for 3 hours, you will require 3 x 60 x 60 x 100 = 1,080,000 rows of data – over a million rows! Whether or not you can reuse data within those rows will be determined by the nature of the system under test. Similar issues may arise in particularly lengthy soak tests. In addition, modeling the correct number of requests in a high intensity scenario may not be possible with a single test engine. You may need multiple test engines in order to create a sufficient load on the system.

9.2 Constant Growth

Moving beyond a simple, constant load on the system, the simplest way to vary the load in a performance test is to increase it over time. To do this, we split the performance test into multiple phases of a given duration, increasing the number of iterations for each new phase. This gives a simple demonstration of how the system handles an increasing load.

1. Return to the Test Factory, and create a copy of the simplePerformance test.
2. Rename the new test, and call it threePhaseTest.
3. Open the threePhaseTest, and on the Execution tab, change the Number of test phases to 3.
4. Switch to the Load Generating Test on the left hand side, and go to the Execution tab.
5. Change the Initial target iterations setting to 5 per second.
6. Set the Increment per phase to 5. We will now have 3 phases: 5 iterations per second, 10 iterations per second, and 15 iterations per second.
7. Save the performance test, and run it from the Test Lab.
You should see each phase execute in the console; notice that each phase runs more tests per second than the last.

8. Go to the Results Gallery to view the tests, and view the data in a chart.

9.3 Externally Defined Load Profiles

Using an external data source gives us much more control over the load on the system. The length of each phase can be varied as required. In addition, a constant level of growth is no longer required – the exact number of iterations per second/minute/hour can be set for each individual phase.

1. Minimize Rational Integration Tester, and create a new CSV or Excel file with the following data:

   Period,Iterations
   10,10
   20,30
   30,10
We will use the first column of the data to specify the length of each phase, while the second supplies the number of iterations per second in each phase. As we will specify our units as seconds in the performance test (minutes and hours are also possible), we will have 10 seconds at the beginning of the test where we run 10 iterations per second, 20 seconds where we ramp up to 30 iterations per second, and then 30 seconds where we allow the system to drop back to the original 10 iterations per second.

2. Return to Rational Integration Tester, and create a File Data Source or Excel Data Source to link to your data. Remember to use the Refresh button to check that the data loads correctly.
3. Save the new data source.
4. Create a copy of the simplePerformance test, and call it externalPhaseTest.
5. Open the externalPhaseTest, and go to the Execution tab of the Performance Test settings.
6. Change the Load Profile to Externally Defined.
7. Next to Data set for load phases, press the Browse button to find and select the data source containing the phase data.
8. Leave the Execute test phases field blank, and make sure that the Phase duration read from column setting is set to Period.
9. Switch to the settings for the Load Generating Test, and go to the Execution tab.
10. Make sure that the number of iterations is read from the Iterations column.
11. Save the externalPhaseTest, and run it in the Test Lab.
12. Go to the Results Gallery, and view the test results. You may notice that the statistics for the minimum, average, and maximum pass section durations spike during the test, indicating that the load was slowing down the performance of the system.
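The arithmetic behind both load profiles is easy to check with a short script. The sketch below is plain Python, not part of Rational Integration Tester: it expands a Constant Growth profile into per-phase rates, parses the external phase CSV above, and sums duration times rate – the same sum you would use to size the data set for a high intensity scenario.

```python
import csv
import io

# The phase data created in step 1 above.
PHASES_CSV = """Period,Iterations
10,10
20,30
30,10
"""

def constant_growth(initial, increment, phases):
    """Per-phase iteration rates for a Constant Growth profile (section 9.2)."""
    return [initial + increment * i for i in range(phases)]

def external_profile(csv_text):
    """(duration, iterations-per-unit) pairs from an external phase data set."""
    return [(int(row["Period"]), int(row["Iterations"]))
            for row in csv.DictReader(io.StringIO(csv_text))]

def total_iterations(profile):
    """Total iterations implied by a profile -- useful for sizing test data."""
    return sum(duration * rate for duration, rate in profile)

print(constant_growth(5, 5, 3))        # [5, 10, 15] -- the threePhaseTest rates
profile = external_profile(PHASES_CSV)
print(profile)                         # [(10, 10), (20, 30), (30, 10)]
print(total_iterations(profile))       # 10*10 + 20*30 + 30*10 = 1000
```

For the external profile above, a data-driven load generating test would therefore need 1,000 rows of unique data (or permission to reuse rows) to back a single run.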
10 Advanced Topics

In this chapter, you will:
- Encounter background tests, and discuss their uses.
- See how the Log Measurement action works, and how it can be used.
- Use the Log Measurement action in a background test to provide a custom probe.

10.1 Background Tests

So far, we have used a load generating test to provide a pre‐defined load on the system. Multiple load generating tests could be used, if required. However, in some cases you may want to provide a constant stimulus for the system alongside your load generating test. There may also be situations where you need to use stubs to simulate part of the system under test. Both of these situations can be handled by adding a background test to your performance test. A background test is a functional test (or stub) that is run repeatedly for the duration of the performance test (or, optionally, until the background test fails). This means that it runs concurrently with any load generating tests included in the performance test. Each background test can have a single iteration running at a time, or may be run multiple times in parallel – unlike load generating tests, this is not limited by the Rational Integration Tester license in use. However, timing and status information is not recorded for a background test. Since background tests are run differently from load generating tests, several things should be kept in mind. First, the Initialise and Tear Down phases of the test are run as normal. Second, while the Begin Timed Section and End Timed Section actions can still be included in the functional test, they will have no effect on what is recorded into the project database during the performance test.

10.2 Log Measurement

The Log Measurement action can be used to log custom data into your database while running a performance test. This may be useful in several situations.
First, it may be used when recording data from the system under test, acting as a custom probe. This may be necessary when information is required that is not covered by the standard probes – for example, when querying proprietary systems for information. An alternative use exists for systems where a message goes through several processes before a response is received by Rational Integration Tester. In these cases, it may be desirable to measure the time taken for a single process to provide a response, rather than measuring the entire round‐trip time between the initial message sent from Rational Integration Tester and the eventual response.
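The single-process timing idea can be sketched numerically. Assuming the system stamps entry and exit times into fields of the message (the field names and values below are invented purely for illustration), the per-service latency is just the difference between two timestamps – the value a Log Measurement action could then record:

```python
from datetime import datetime

# Hypothetical message fields: "time2" is stamped on entry to a middle
# service and "time3" on exit (names and values invented for illustration).
message = {
    "time2": datetime(2012, 6, 1, 9, 0, 0, 150000),
    "time3": datetime(2012, 6, 1, 9, 0, 0, 410000),
}

def service_millis(msg, entry="time2", exit_="time3"):
    """Milliseconds spent inside the service bounded by the two stamps."""
    return (msg[exit_] - msg[entry]).total_seconds() * 1000.0

print(service_millis(message))  # 260.0 ms inside the middle service
```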
In the diagram below, Rational Integration Tester publishes a message to a queue, and waits for a response. Using the data normally gathered by a performance test, we would be told how long it took for the message to be processed by operations A, B, and C. However, if there were performance issues as we increased the load on the system, we would not know whether these could be narrowed down to a single service. For example, we might suspect that service B is where most of the delay occurs. To investigate this, we can add timestamps to fields in the message as it passes through the system. Subtracting Time 2 from Time 3 would then give us the amount of time spent inside service B. Using the Log Measurement action, this information can be recorded in the project database and analyzed later with respect to the load on the system. When using the Log Measurement action, it is important to note that it cannot be used within a timed section: writing to the project database would alter the time taken during the execution of the timed section, thereby skewing the timing information.

10.3 Creating the measurement test

In this example, we’ll use a background test and the Log Measurement action to act as a custom probe for the system under test. This particular example gathers data about the bytes sent and received by the system – note that this could also be gathered by the Windows Performance Monitor probe. We’ll use a background test for two reasons: first, it means we don’t need to change our load generating test; and second, we don’t want our probe constantly running – we’ll gather our data every two seconds, rather than constantly polling the system and possibly adding extra, unintended load.

1. Create a new test for the Login operation, and call it byteMonitor.
2. Add a Run Command action to the test.
3. On the Config tab, enter the following command:

   netstat -e | find "Bytes"

4. Make sure that the Wait for command execution to finish checkbox is ticked.
5. Press the Test button. You should see a single line of data for stdout, similar to the following:

   Bytes                      12008764        57368668

6. Switch to the Store tab, so we can store the data into tags.
7. We’ll need to store the two numbers into separate tags, which we’ll call bytesReceived and bytesSent. To do this, right click on the stdout field, and choose Contents > Edit.
8. Make sure you’re looking at the Store tab within the window that appears, and then press the New button.
9. Details for the data to store will appear below. The default action type is Copy – change it to Regular Expression.
10. Change the Tag to bytesReceived. You can also change the description field to match.
11. In the Expression section, type the regular expression \d+ to match a number. Below that, choose to Extract Instance 1, so that we extract the first number found in the string. In the example string above, this would store 12008764 into the bytesReceived tag. You can test this with an example string to check that it is working correctly.
12. We still need to store the number of bytes sent. Press New again to generate a second store action for the stdout field, and follow steps 9–11 again, but this time set the Tag name to bytesSent, and Extract Instance 2. As with the first action, if you test this using the example above, you should get a Result of 57368668.
13. Once you’re done, the two tags should be configured as seen below:
14. Press OK to close the Field Editor, and then OK again to close the test action.
15. Before we add the Log Measurement action, we’ll check that this is working as we expect. Add a normal Log action, and log the values captured in the bytesReceived and bytesSent tags to the console.
16. Run the test, and check that it works so far. If it doesn’t, check the preceding steps to make sure that everything has been entered correctly.
17. Return to the Test Factory, and delete or disable the Log action.
18. Add a new Log Measurement action after the Run Command.
19. Set up the Log Measurement action as shown in the image below. This will graph the number of bytes sent and received by looking up the values captured earlier. The attributes section allows us to graph multiple sets of data – in this case we only have one, but at least one attribute is required for the Log Measurement action to run.
20. Press OK to close the action.
21. Add a Sleep action to the end of the test. As we’ll be running this as a background test, and background tests run continuously, we’ll want to pace this test so it doesn’t interfere with system results. Set the Sleep action to have a fixed duration of 2000 ms.
22. Save the byteMonitor test.

10.4 Adding the Measurements to a Performance Test

1. Create a copy of the externalPhaseTest, and call it customLogging.
2. Edit the customLogging test, and right click on the Performance Test label on the left hand side of the editor – you’ll see the options for adding load generating and background tests. Add a background test.
3. On the Execution tab for the Background Test, select byteMonitor for the Test Path field.
4. Make sure that Terminate on failure is not checked.
5. Switch to the Engines tab, and click Add. There will only be one engine available, as before – select it.
6. Save the customLogging test, and run it in the Test Lab.
7. Once it has run, you should be able to view the results in the Results Gallery. The counter will be found in the Log Values section.
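The regular-expression stores configured in steps 11–12 of section 10.3 can be sanity-checked outside the tool. This sketch uses Python's re module rather than the product's own regex engine, but the \d+ pattern and the "extract the Nth match" behaviour are the same:

```python
import re

# Sample stdout from `netstat -e | find "Bytes"` (section 10.3, step 5).
stdout = "Bytes                      12008764        57368668"

# The same \d+ pattern used in the Regular Expression store actions;
# match 1 becomes the bytesReceived tag, match 2 the bytesSent tag.
numbers = re.findall(r"\d+", stdout)
bytes_received = int(numbers[0])  # Extract Instance 1
bytes_sent = int(numbers[1])      # Extract Instance 2

print(bytes_received, bytes_sent)  # 12008764 57368668
```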
11 Legal Notices

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON‐INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions; therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

If you are viewing this information in softcopy, the photographs and color illustrations may not appear.

Any references in this information to non‐IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product and use of those websites is at your own risk.

Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development‐level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurements may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment.
Information concerning non‐IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non‐IBM products. Questions on the capabilities of non‐IBM products should be addressed to the suppliers of those products.

All statements regarding IBM's future direction or intent are subject to change or withdrawal without notice, and represent goals and objectives only.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. The sample
programs are provided "AS IS", without warranty of any kind. IBM shall not be liable for any damages arising out of your use of the sample programs.

Trademarks and service marks

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the web at www.ibm.com/legal/copytrade.shtml.

Microsoft and Windows are trademarks of Microsoft Corporation in the United States, other countries, or both.

Java and all Java‐based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates.

Other company, product, or service names may be trademarks or service marks of others.