Software Quality Assurance of Campus Solutions
By: Vamseedhar Anantula
Under the Guidance of Dr. Richard J. Easton
Submitted in Partial Fulfillment of the Requirements for the Degree of
Master of Science
Department of Mathematics & Computer Science
Indiana State University
January 10, 2005.
First of all, I would like to express my deep gratitude to my project advisor,
Dr. Richard J. Easton, Professor and Chairperson of the Department of
Mathematics and Computer Science, for his expert guidance, inspiration,
and encouragement. Without his guidance and generous support, I can hardly
imagine that this project would have been completed so smoothly. It was a
privilege to work with him as a graduate assistant, and I thank him for his guidance
and support throughout the entire course of my studies at ISU.
I would also like to thank the Committee members, Dr. Henjin Chi,
Professor, Department of Mathematics and Computer Science, and Dr.
Laurence E. Kunes, Professor, Department of Mathematics and Computer
Science for their helpful and valuable guidance in the entire process of
completing this project.
I am very thankful to Mr. Aaron King, Network Administrator of the
Department of Mathematics and Computer Science, for his professional
support to this project.
I would also like to thank Mr. Carlos Garza, Manager, QA Team,
PeopleSoft Inc., and Sockie West, QA Team Lead, PeopleSoft Inc., for
giving me the opportunity to work on this project.
Table of Contents:
1. Introduction
1.1 Introduction
1.2 Objectives & Advantages
1.3 An Overview Of The Project
1.4 Technological Information
2. Related Terminology
3. SQA
3.1 Computer Systems Failure
3.2 Why Does Software Have Bugs
3.3 Introducing SQA into an Organization
3.4 Types of Testing
4. Rational Robot
4.1 Features and Benefits
4.2 Product Functionality
4.3 Product History
5. Test Director
6. Interconnectivity
7. Conclusion
1. Introduction:
1.1 Introduction:
Campus Solutions is a comprehensive suite of software specifically designed for the
changing needs of higher-education institutions. It offers the best end-to-end solutions
for managing the entire student lifecycle. Campus Solutions provides on-line, self-service
tools that enable students, faculty, school staff, and alumni to access and update
academic and administrative data such as student enrollment, financial aid, student
grades, and personal contact information. These tools make school services such as
student admission status, registration, loan application, class scheduling, grading, and
transcripts accessible via the internet. Any product must be checked for errors and
correct behavior before it is released into the market. PeopleSoft Inc. planned to
release a new version, Campus Solutions 8.9, with new functionality. Before releasing
the product into the market, PeopleSoft wanted to make sure Campus Solutions worked
correctly, so it assembled a testing team to check the functionality of the Campus
Solutions product and chose an automated testing tool called Rational Robot for the
testing. I had the opportunity to work on this project as an intern and was fortunate
to be chosen as a member of the testing team at PeopleSoft. My personal role in this
project was to verify and validate Campus Solutions: to check the functionality of the
product and to fix errors when they were found. Software Quality Assurance is the
process of ensuring the functional success of a product, and Rational Robot is an
automated testing tool used in the process of Software Quality Assurance.
1.2 Objectives & Advantages:
Integration improves student access to academic services and reduces the total cost of
maintenance for higher-education institutions; Campus Solutions was adopted in part to
reduce that maintenance cost.
Campus Solutions Advantages
With Campus Solutions, you can:
• Customize the product to meet your institution’s specific goals.
• Improve the efficiency of all of the institution's administrative processes.
• Lower costs and free up resources for more profitable activities.
• Build stronger relationships with all of your constituents.
1.3 An Overview Of The Project:
By extending PeopleSoft’s leadership in providing student management and fund-raising
support solutions, Campus Solutions enables an institution to streamline campus
operations, reduce costs with self-service tools, and better align its IT and
administrative functions so that they coincide with the academic mission of the
institution. Campus Solutions includes a new modular architecture, expands its global
capabilities, adds usability enhancements, and provides significant new functionality.
Campus Solutions also supports the PeopleSoft “person” model, a data model that allows
institutions to manage all of the information for full-time and contingent staff,
students, and faculty using a single database.
Campus Solutions delivers a more flexible and a modular architecture for PeopleSoft’s
Human Resources Management Solutions (HRMS) suite. This modular architecture
creates a more streamlined and consistent process for applications to access the shared
data. Additionally, this architecture improves the ability of educational institutions to add
functionality to specific HRMS or Campus Solutions applications. The new architecture
will provide the following advantages to customers:
• The ability to upgrade to PeopleSoft’s latest HRMS 8.9 release, which contains
specialized features for government and education.
• Utilization of the latest People Tools which has improved programming and
• Improved ability to add new functionality to specific Campus Solutions applications
independent of the installed version of PeopleSoft HRMS.
• Improved integration capabilities with other PeopleSoft products and third-party
applications.
PeopleSoft Campus Solutions 8.9 includes numerous usability enhancements that provide
customers with a superior Total Ownership Experience. Specific enhancements that
improve the maintenance, implementation, and usability of Campus Solutions 8.9 include
improved navigation, streamlined task flow, and an automated error-prevention tool.
Extensive collaboration and testing with PeopleSoft customers led to the following
improvements:
• 20 percent reduction in overall implementation time.
• 23 percent decrease in time needed to complete key tasks.
• 80 percent fewer steps to update applications.
Expanded Global Capabilities:
Campus Solutions 8.9 delivers new Unicode support that will enable customers to
maintain data and user interfaces in virtually any spoken language through a single
database. Previously, Campus Solutions supported only those languages sharing the
Western European character set such as Dutch, English, French, and Spanish. Unicode
support will allow customers to use languages with non-Western European character sets
such as Arabic, Japanese and Thai with their existing database. With support for all
international character sets, higher education customers can now view and maintain
student, faculty and staff data in any language, enabling a truly global implementation.
Additionally, country-specific functionality for higher-education customers in Australia,
New Zealand, and the Netherlands has been incorporated into Campus Solutions 8.9.
This includes tools for centralized admissions, student identification, government
funding, and regulatory compliance.
Campus Solutions 8.9 includes significant new functionality and enhancements, including:
• The Student Center – Provides a single, personalized point of access to all available
on-line transactions such as enrollment in classes and payment of tuition.
• Self-service Student Planner Tool – Simplifies course enrollment with sample
class schedules, class keyword searches, and access to required class enrollment dates.
• LDAP Server Interface – Improves data security with a new LDAP directory server
interface and improves authentication tools that strengthen the security of biographic
and demographic information.
• Equation Engine – Defines rules for calculating tuition and fees, enabling the
institutions to use virtually any variable to create customized results for calculating the
• Prospective Student Tool – Streamlines the process for generating records and
tracking prospective students based on submitted test scores.
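As an illustration of the kind of rule an equation engine like the one above evaluates, the sketch below computes tuition from student variables. The routine, the rates, and the non-resident multiplier are all hypothetical, invented for illustration only; this is not PeopleSoft's actual Equation Engine:

```python
# Hypothetical tuition rule driven by student variables; illustrative only,
# not PeopleSoft's Equation Engine.
def tuition(credits: int, resident: bool,
            per_credit: float = 300.0, tech_fee: float = 150.0) -> float:
    """Charge per credit hour, apply a non-resident multiplier, add a flat fee."""
    base = credits * per_credit
    if not resident:
        base *= 2.5               # non-resident multiplier (made up)
    return base + tech_fee

print(tuition(12, resident=True))    # prints: 3750.0
print(tuition(12, resident=False))   # prints: 9150.0
```

The point of such an engine is that any variable (residency, credits, program) can feed the calculation without changing application code.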
Campus Solutions modules
• Academic Advisement
• Campus Community
• Campus Self Service
• Contributor Relations
• Financial Aid
• Grade Book
• Recruiting and Admissions
• Student Administration
• Student Financials
• Student Records
The main Campus Solutions login page, the Administrator login page, and the Campus Self
Service page look as follows:
1.4 Technological Information:
All PeopleSoft Enterprise products are developed on PeopleSoft’s Pure Internet
Architecture which allows industry-leading integration with other PeopleSoft
applications, legacy applications, and other vendors. PeopleSoft Enterprise tools and
technology transform the way organizations implement, use, and maintain enterprise
software. They offer an automated and highly flexible development environment.
PeopleSoft Enterprise Tools are the PeopleSoft Enterprise runtime architecture and
integrated development environment. PeopleSoft Enterprise Tools offer the only pure
internet, server-centric enterprise architecture. PeopleSoft Enterprise Tools are shown in
the following figure:
The power and flexibility of using PeopleSoft Enterprise Tools to Deploy, Develop,
Maintain and Upgrade your PeopleSoft solutions are illustrated in the following figure:
PeopleSoft Enterprise Tools Advantages:
• Flexible and scalable metadata-driven toolset.
• Embedded performance monitoring and lifecycle management tools.
• Native web services support.
Enterprise Pure Internet Architecture:
PeopleSoft Enterprise Pure Internet Architecture, introduced with People Tools 8, is
completely focused on the internet in order to support real-time business processes.
It provides a powerful functionality for secure, scalable, internet-based access and
integration. This next-generation architecture leverages a number of internet technologies
and concepts to deliver simple, ubiquitous, real-time PeopleSoft access to customers,
employees, and suppliers. PeopleSoft Pure Internet Architecture has many advantages
that contribute to its success and can be illustrated as follows:
Enterprise Pure Internet Architecture Advantages:
• No Code on the Client
• Interactivity Features
• Lower Development Costs
• Minimized Training
No Code on the Client: PeopleSoft Pure Internet Architecture works without a client.
There is no complex, expensive client software installation. The internet device that
accesses the internet architecture already has all the software and configuration it needs.
No Java applets, Windows DLLs, or browser plug-ins are necessary for this “thin” client.
Thus, the client can be a web browser, mobile device, or external system that uses
standard internet technologies such as HTTP, HTML, and XML to communicate with the
PeopleSoft Internet Application Server. This open architecture creates easy,
inexpensive access and collaboration where a web browser and device operating system
cannot be predefined. This is often the case for customer and supplier environments.
Interactivity Features:
• User personalization: Users can control and personalize their own settings for how
they interact with PeopleSoft.
• Mouseless data entry and access-key support: Power users no longer need to
reposition hands constantly to and from the keyboard and mouse. Users can operate
PeopleSoft applications completely from the keyboard.
• Advanced search and drill-down: PeopleSoft 8 relies heavily on search technology
and click-through capability. Users can perform keyword, full-text, and natural-
language searches to find the relevant information quickly.
• PeopleSoft's new “deferred mode” requires no trips to the web server until your
application data is saved, and that means faster data entry for the power user.
Lower Development Costs:
Client/server implementations for a large end-user base are so expensive that
many organizations cannot afford them. With PeopleSoft, you no longer need to install,
configure, and maintain PeopleSoft software on expensive client devices. PeopleSoft has
reduced the hardware and software requirements for client devices by doing away with
costly hardware and operating system upgrades. PeopleSoft's internet architecture
requires less memory and CPU speed when compared to client/server applications.
Sometimes web access occurs via the low-bandwidth connections of dial-up phone lines
or wireless devices. PeopleSoft supports these connections through a server architecture
that requires no applets, proprietary components, or other heavy-footprint client
software. In addition, the network
impact of deploying internet-enabled PeopleSoft applications is minimal; allowing
companies to leverage their existing IT infrastructure.
Minimized Training:
As the industry moved from client/server applications to pure-internet applications, many
organizations faced the challenge of training their application developers in new
internet technologies, such as HTML and XML. PeopleSoft customers need not worry about
this ever-changing technology cycle. Their familiarity with earlier releases of People
Tools allows them to develop robust, secure, platform-independent applications for
the internet. These customers can now focus on managing their business more efficiently.
Training is also minimized for end users because PeopleSoft internet applications look
and feel like popular websites. The PeopleSoft internet architecture uses the same web
browser navigation already familiar to internet users, including back, forward, and refresh
buttons. Users can also navigate through a search engine when they need to locate an
Providing many end users with access to applications should not require an inordinate
number of servers. PeopleSoft's internet architecture scales to support access not only for
full-time users but also for large numbers of occasional and external users. Application
load balancing can also be provided across the server tiers. Both UNIX and Windows
server operating systems are available to fit the needs.
The concept of tiers provides a convenient way to group different classes of architecture.
Basically, if your application is running on a single computer, it has a one-tier
architecture. If your application is running on two computers -- for instance, a typical
Web CGI application that runs on a Web browser (client) and a Web server -- then it has
two tiers. In a two-tier system, you have a client program and a server program. The main
difference between the two is that the server responds to requests from many different
clients, while the clients usually initiate the requests for information from a single server.
A three-tier application adds a third program to the mix, usually a database, in which the
server stores its data. The three-tier application is an incremental improvement to the
two-tier architecture. The flow of information is still essentially as follows: a request
comes from the client to the server; the server requests or stores data in the database; the
database returns information to the server; the server returns information back to the
client.
An n-tier architecture, on the other hand, allows an unlimited number of programs to run
simultaneously, send information to one another, use different protocols to communicate,
and interact concurrently. This allows for a much more powerful application, providing
many different services to many different clients.
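The tier structure described above can be sketched in a few lines. In this illustrative example (the table schema and lookup function are invented, not part of any PeopleSoft product), the client calls a server routine, which in turn queries a database tier; an in-memory SQLite database stands in for a real data store:

```python
import sqlite3

# Database tier: an in-memory SQLite database standing in for the data store.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE students (id INTEGER PRIMARY KEY, name TEXT)")
db.executemany("INSERT INTO students VALUES (?, ?)",
               [(1, "Alice"), (2, "Bob")])

# Server tier: responds to requests from many clients by querying the database.
def lookup_student(student_id: int) -> str:
    row = db.execute("SELECT name FROM students WHERE id = ?",
                     (student_id,)).fetchone()
    return row[0] if row else "not found"

# Client tier: initiates the request and displays the server's response.
print(lookup_student(1))   # prints: Alice
print(lookup_student(99))  # prints: not found
```

Separating the three roles this way is what lets each tier be scaled or replaced independently, which is the property the n-tier PeopleSoft architecture generalizes.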
The PeopleSoft architecture has distinct tiers that make it easy to scale as needed. In fact,
scaling up can be a seamless process that creates little disruption to a production system.
Using Tuxedo from BEA Systems facilitates this process because it is a stateless
application server. The leading distributed-transaction monitor in the marketplace,
Tuxedo has been part of the PeopleSoft architecture for many years and has proven
scalability. Tuxedo also provides built-in failover, load balancing, dynamic spawning,
monitoring, encryption, and compression capabilities. At the web server level, scalability
is simply a matter of adding additional servers by installing and configuring the Java
2. Related Terminology:
This chapter discusses the related terminology used in the entire project. The topics
are as follows:
1. SQA (Software Quality Assurance)
2. Rational Robot
3. Test Director
SQA (Software Quality Assurance):
This section explains the need for quality assurance of software and discusses its
importance and advantages. We discuss why software has bugs, why we need software
testing, and the reasons that make testing so important. In this section we also
learn how to introduce a Software Quality
Assurance process into an existing organization and how it verifies and validates the
software. We also learn how to use automated testing tools and some common
solutions to software development problems.
Rational Robot:
Rational Robot is the automated testing tool used in testing the product functionality of
Campus Solutions. It is a very powerful testing tool, and this section explains the
features and benefits of the tool. We also learn about the product functionality and the
product history.
TestDirector:
TestDirector is a single web-based application for all essential aspects of quality
management. In this section we learn how TestDirector helps in the overall
success of the organization and how each group in the organization can contribute to
the quality process. We learn in detail about specifying requirements and planning
tests, as well as about scheduling and running tests, defect/issue management, and
graphs and reports.
3. SQA (Software Quality Assurance)
SQA stands for Software Quality Assurance.
Software QA involves the entire software development PROCESS - monitoring and
improving the process, making sure that any agreed-upon standards and procedures
are followed, and ensuring that problems are found and dealt with. It is oriented to
'prevention'.
Testing involves the operation of a system or application under controlled conditions
and evaluating the results (eg, 'if the user is in the interface A of the application while
using hardware B, and does C, then D should happen'). The controlled conditions
should include both normal and abnormal conditions. Testing should intentionally
attempt to make things go wrong, to determine whether things happen when they shouldn't
or fail to happen when they should, before the product is released into the market. It is
oriented to 'detection'.
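The "controlled conditions" idea above can be sketched as a small automated check. The enroll routine, its parameters, and the capacity rule below are hypothetical, invented only for illustration; they are not part of Campus Solutions or Rational Robot:

```python
# Hypothetical enrollment routine used only to illustrate testing under
# controlled conditions; it is not part of any PeopleSoft product.
def enroll(roster: list, capacity: int, student: str) -> bool:
    """Add the student if a seat remains; reject the request otherwise."""
    if len(roster) >= capacity:
        return False              # abnormal condition: class already full
    roster.append(student)
    return True                   # normal condition: seat available

# 'If the user does C, then D should happen' - checked for both conditions.
roster = []
assert enroll(roster, 1, "alice") is True     # normal: enrollment succeeds
assert enroll(roster, 1, "bob") is False      # abnormal: enrollment rejected
assert roster == ["alice"]                    # no silent side effects
```

Deliberately exercising the abnormal path (a full class) is the 'detection' orientation: the test tries to make something go wrong and verifies the system's response.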
Organizations vary considerably in how they assign responsibility for QA and testing.
Sometimes they are the combined responsibility of one group or individual. Also
common are project teams that include a mix of testers and developers who work
closely together, with overall QA processes monitored by the project managers. It
usually depends on what best fits an organization's size and business structure.
3.1 Some recent major computer system failures caused by software bugs:
• Media reports in January of 2005 detailed severe problems with a $170 million
high-profile U.S. government IT systems project. Software testing was one of the five
major problem areas according to a report of the commission reviewing the project.
Studies were under way to determine which, if any, portions of the project could be
salvaged.
• In July 2004 newspapers reported that a new government welfare management
system in Canada costing several hundred million dollars was unable to handle a
simple benefits rate increase after being put into live operation. Reportedly the
original contract allowed for only 6 weeks of acceptance testing and the system was
never tested for its ability to handle a rate increase.
• Millions of bank accounts were impacted by errors due to installation of
inadequately tested software code in the transaction processing system of a major
North American bank, according to mid-2004 news reports. Articles about the
incident stated that it took two weeks to fix all the resulting errors, that additional
problems resulted when the incident drew a large number of e-mail phishing attacks
against the bank's customers, and that the total cost of the incident could exceed $100
million.
• A bug in the site management software utilized by companies with a significant
percentage of worldwide web traffic was reported in May of 2004. The bug resulted
in performance problems for many of the sites simultaneously and required disabling
of the software until the bug was fixed.
• According to news reports in April of 2004, a software bug was determined to be
a major contributor to the 2003 Northeast blackout, the worst power system failure in
North American history. The failure involved loss of electrical power to 50 million
customers, forced shutdown of 100 power plants, and economic losses estimated at $6
billion. The bug was reportedly in one of the utility company's vendor-supplied power
monitoring and management systems, which was unable to correctly handle and report
on an unusual confluence of initially localized events. The error was found and
corrected after examining millions of lines of code.
• In early 2004, news reports revealed the intentional use of a software bug as a
counter-espionage tool. According to the report, in the early 1980's one nation
surreptitiously allowed a hostile nation's espionage service to steal a version of
sophisticated industrial software that had intentionally-added flaws. This eventually
resulted in major industrial disruption in the country that used the stolen flawed
software.
• A major U.S. retailer was reportedly hit with a large government fine in October
of 2003 due to web site errors that enabled customers to view one another’s online
orders.
• News stories in the fall of 2003 stated that a manufacturing company recalled all
their transportation products in order to fix a software problem causing instability in
certain circumstances. The company found and reported the bug itself and initiated
the recall procedure in which a software upgrade fixed the problems.
• In August of 2003 a U.S. court ruled that a lawsuit against a large online
brokerage company could proceed; the lawsuit reportedly involved claims that the
company was not fixing system problems that sometimes resulted in failed stock
trades, based on the experiences of 4 plaintiffs during an 8-month period. A previous
lower court's ruling that "...six miscues out of more than 400 trades does not indicate
negligence." was invalidated.
• In April of 2003 it was announced that a large student loan company in the U.S.
made a software error in calculating the monthly payments on 800,000 loans.
Although borrowers were to be notified of an increase in their required payments, the
company will still reportedly lose $8 million in interest. The error was uncovered
when borrowers began reporting inconsistencies in their bills.
• News reports in February of 2003 revealed that the U.S. Treasury Department
mailed 50,000 Social Security checks without any beneficiary names. A spokesperson
indicated that the missing names were due to an error in a software change.
Replacement checks were subsequently mailed out with the problem corrected, and
recipients were then able to cash their Social Security checks.
• In March of 2002 it was reported that software bugs in Britain's national tax
system resulted in more than 100,000 erroneous tax overcharges. The problem was
partly attributed to the difficulty of testing the integration of multiple systems.
• A newspaper columnist reported in July 2001 that a serious flaw was found in off-
the-shelf software that had long been used in systems for tracking certain U.S. nuclear
materials. The same software had been recently donated to another country to be used
in tracking their own nuclear materials, and it was not until scientists in that country
discovered the problem, and shared the information, that U.S. officials became aware
of the problems.
• According to newspaper stories in mid-2001, a major systems development
contractor was fired and sued over problems with a large retirement plan management
system. According to the reports, the client claimed that system deliveries were late,
the software had excessive defects, and it caused other systems to crash.
• In January of 2001 newspapers reported that a major European railroad was hit by
the aftereffects of the Y2K bug. The company found that many of their new trains
would not run due to their inability to recognize the date '31/12/2000'; those trains
were started by altering the control system's date settings.
• News reports in September of 2000 told about a software vendor settling a lawsuit
with a large mortgage lender; the vendor had reportedly delivered an online mortgage
processing system that did not meet specifications, was delivered late, and didn't work.
• In early 2000, major problems were reported with a new computer system in a
large suburban U.S. public school district with 100,000+ students; problems included
10,000 erroneous report cards and students left stranded by failed class registration
systems; the district's CIO was fired. The school district decided to reinstate its
original 25-year-old system for at least a year until the bugs were worked out of the
new system by the software vendors.
• In October of 1999 the $125 million NASA Mars Climate Orbiter spacecraft was
believed to be lost in space due to a simple data conversion error. It was determined
that spacecraft software used certain data in English units that should have been in
metric units. Among other tasks, the orbiter was to serve as a communications relay
for the Mars Polar Lander mission, which itself failed for unknown reasons in December
1999. Several investigating panels were convened to determine the process failures
that allowed the error to go undetected.
• Bugs in the software supporting a large commercial high-speed data network
affected 70,000 business customers over a period of 8 days in August of 1999.
Among those affected was the electronic trading system of the largest U.S. futures
exchange, which was shut down for most of the week as a result of the outages.
• In April of 1999 a software bug caused the failure of a $1.2 billion U.S. military
satellite launch, the costliest unmanned accident in the history of Cape Canaveral
launches. This failure was the latest in a string of launch failures, triggering a
complete military and industry review of U.S. space launch programs, including
software integration and testing processes.
• A small town in Illinois in the U.S. received an unusually large monthly electric
bill of $7 million in March of 1999. This was about 700 times larger than its normal
bill. It turned out to be due to bugs in new software that had been purchased by the
local power company to deal with Y2K software issues.
• In early 1999 a major computer game company recalled all copies of a popular
new product due to software problems. The company made a public apology for
releasing a product before it was ready.
• The computer system of a major online U.S. stock trading service failed during
trading hours several times over a period of days in February of 1999 according to
nationwide news reports. The problem was reportedly due to bugs in the software
upgrade intended to speed online trade confirmations.
• In April of 1998 a major U.S. data communications network failed for 24 hours,
crippling a large part of some U.S. credit card transaction authorization systems as
well as other large U.S. bank, retail, and government data systems. The cause was
eventually traced to be a software bug.
• In January 1998 news reports told of software problems at a major U.S.
telecommunications company that resulted in no charges for long distance calls for a
month for 400,000 customers. The problem went undetected until customers called up
with questions about their bills.
• In November of 1997 the stock of a major health industry company dropped 60%
due to reports of failures in computer billing systems, problems with a large database
conversion, and inadequate software testing. It was reported that more than
$100,000,000 in receivables had to be written off and that multi-million dollar fines
were levied on the company by government agencies.
• A retail store chain filed a suit in August of 1997 against a transaction processing
system vendor (not a credit card company) due to the software's inability to handle
credit cards with year 2000 expiration dates.
• In August of 1997 one of the leading consumer credit reporting companies
reportedly shut down their new public web site after less than two days of operation
due to software problems. The new site allowed web site visitors instant access, for a
small fee, to their personal credit reports. However, a number of initial users ended up
viewing each others' reports instead of their own, resulting in irate customers and
nationwide publicity. The problem was attributed to "...unexpectedly high demand
from consumers and faulty software that routed the files to the wrong computers."
• In November of 1996, newspapers reported that software bugs caused the 411
telephone information system of one of the U.S. RBOC's to fail for most of a day.
Most of the 2000 operators had to search through phone books instead of using their
13,000,000-listing database. The bugs were introduced by new software
modifications and the problem software had been installed on both the production and
backup systems. A spokesman for the software vendor reportedly stated that 'It had
nothing to do with the integrity of the software. It was human error.'
• On June 4 1996 the first flight of the European Space Agency's new Ariane 5
rocket failed shortly after launching, resulting in an estimated uninsured loss of a half
billion dollars. It was reportedly due to the lack of exception handling of a floating-
point error in a conversion from a 64-bit floating-point value to a 16-bit signed
integer.
• Software bugs caused the bank accounts of 823 customers of a major U.S. bank to
be credited with $924,844,208.32 each in May of 1996, according to newspaper
reports. The American Bankers Association claimed it was the largest such error in
banking history. A bank spokesman said the programming errors were corrected and
all funds were recovered.
• Software bugs in a Soviet early-warning monitoring system nearly brought on
nuclear war in 1983, according to news reports in early 1999. The software was
supposed to filter out false missile detections caused by Soviet satellites picking up
sunlight reflections off cloud-tops, but failed to do so. Disaster was averted when a
Soviet commander, based on what he said was a 'funny feeling in my gut', decided
the apparent missile attack was a false alarm. The filtering software code was
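The Ariane 5 failure above turned on an unguarded narrowing conversion. The sketch below (illustrative Python, not the actual Ada flight code) shows how a value that fits easily in a 64-bit float overflows a 16-bit signed integer, and how an explicit guard turns the overflow into a handled error:

```python
# 16-bit signed integers hold only -32768..32767.
INT16_MIN, INT16_MAX = -32768, 32767

def to_int16(value: float) -> int:
    """Narrow to a 16-bit signed integer, raising instead of overflowing."""
    result = int(value)
    if not INT16_MIN <= result <= INT16_MAX:
        raise OverflowError(f"{value} does not fit in 16 signed bits")
    return result

print(to_int16(1024.7))       # in range: prints 1024
try:
    to_int16(65536.0)         # out of range: the guard raises
except OverflowError as err:
    print("caught:", err)
```

Testing that deliberately drives inputs past such boundaries is exactly the kind of abnormal-condition check the failures in this section argue for.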
3.2 Why Does Software Have Bugs?
• Miscommunication or no communication - as to specifics of what an application
should or shouldn't do (the application's requirements). The functional analyst should
explain what he or she expects the application to do, and the scripts must be clearly
understood by the testing team so that it knows what the functional analyst expects to
verify in each particular script.
• Software complexity - the complexity of current software applications can be
difficult to comprehend for anyone without experience in modern-day software
development. Multi-tiered applications, client-server and distributed applications,
data communications, enormous relational databases, and sheer size of applications
have all contributed to the exponential growth in software/system complexity.
• Programming errors - programmers, like anyone else, can make mistakes.
• Changing requirements (whether documented or undocumented) - the end-user
may not understand the effects of changes, or may understand and request them
anyway - redesign, rescheduling of engineers, effects on other projects, work already
completed that may have to be redone or thrown out, hardware requirements that may be
affected, etc. If there are many minor changes or any major changes, known and
unknown dependencies among parts of the project are likely to interact and cause
problems, and the complexity of coordinating changes may result in errors. The
enthusiasm of engineering staff may be affected. In some fast-changing business
environments, continuously modified requirements may be a fact of life. In this case,
management must understand the resulting risks, and the QA and the test engineers must
adapt and plan for continuous extensive testing to keep the inevitable bugs from getting
out of control. Proper documentation helps here: any change made should be recorded in
the documentation so that it is clearly understood both by a new person on the QA team
and by all the test engineers.
• Time pressures - scheduling of software projects is difficult at best, often
requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes
will be made.
• Poorly documented code - it's tough to maintain and modify code that is badly
written or poorly documented; the result is bugs. In many organizations management
provides no incentive for programmers to document their code or write clear,
understandable, maintainable code. In fact, it's usually the opposite: they get points
mostly for quickly turning out code, and there's job security if nobody else can
understand it ('if it was hard to write, it should be hard to read').
• Software development tools - visual tools, class libraries, compilers, scripting
tools, etc. often introduce their own bugs or are poorly documented, resulting in
added bugs.
3.3 Introducing SQA into an organization:
Software QA processes can be introduced in an existing organization as follows:
• A lot depends on the size of the organization and the risks involved; the larger the
organization, the more process information there is to manage and coordinate.
For large organizations with high-risk (in terms of lives or property) projects, serious
management buy-in is required and a formalized QA process is necessary.
• Where the risk is lower, management and organizational buy-in and QA
implementation may be a slower, step-at-a-time process. QA processes
should be balanced with productivity so as to keep the process from getting out of hand.
• For small groups or projects, a more ad-hoc process may be appropriate,
depending on the type of customers and projects. A lot will depend on team leads or
managers, feedback to developers, and ensuring adequate communications among
customers, managers, developers, and testers.
• The most value for effort will often be:
(a) requirements management processes, with a goal of clear, complete, testable
requirement specifications embodied in requirements or design documentation, or in
'agile'-type environments extensive continuous coordination with end-users,
(b) design inspections and code inspections.
Verification and Validation:
Verification typically involves reviews and meetings to evaluate documents, plans,
code, requirements, and specifications. This can be done with checklists, issues lists,
walkthroughs, and inspection meetings. Validation typically involves actual testing
and takes place after verifications are completed. The term 'IV & V' refers to
Independent Verification and Validation. A 'walkthrough' is an informal meeting for
evaluation or informational purposes. Little or no preparation is usually required.
An inspection is more formalized than a 'walkthrough', typically with 3-8 people
including a moderator, reader, and a recorder to take notes. The subject of the
inspection is typically a document such as a requirements spec or a test plan, and the
purpose is to find problems and see what's missing, not to fix anything. Attendees
should prepare for this type of meeting by reading the document; most problems
will be found during this preparation. The result of the inspection meeting should be a
written report. Thorough preparation for inspections is difficult, painstaking work, but
is one of the most cost effective methods of ensuring quality.
3.4 Types of Testing to be considered:
• Black box testing - not based on any knowledge of internal design or code. Tests
are based on requirements and functionality.
• White box testing - based on knowledge of the internal logic of an application's
code. Tests are based on coverage of code statements, branches, paths, conditions.
• Unit testing - the most 'micro' scale of testing; to test particular functions or code
modules. Typically done by the programmer and not by testers, as it requires detailed
knowledge of the internal program design and code. Not always easily done unless
the application has a well-designed architecture with tight code; may require
developing test driver modules or test harnesses.
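As a minimal sketch of the unit-testing idea above, here is a tiny test driver for a hypothetical `apply_discount` function (both the function and its test cases are invented for illustration):

```python
def apply_discount(price, percent):
    """Hypothetical unit under test: reduce a price by a percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

def test_apply_discount():
    # The test driver exercises normal, boundary, and error cases.
    assert apply_discount(200.0, 25) == 150.0
    assert apply_discount(99.99, 0) == 99.99    # zero discount: unchanged
    assert apply_discount(99.99, 100) == 0.0    # full discount: free
    try:
        apply_discount(50.0, 150)
    except ValueError:
        pass
    else:
        raise AssertionError("an invalid percent should be rejected")
    print("all unit tests passed")

test_apply_discount()
```

Because the driver needs to know the function's internal contract (valid percent range, rounding behavior), this kind of test is naturally written by the programmer rather than by an external tester.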
• Incremental integration testing - continuous testing of an application as new
functionality is added; requires that various aspects of an application's functionality
be independent enough to work separately before all parts of the program are
completed, or that test drivers be developed as needed; done by programmers or by
testers.
• Integration testing - testing of combined parts of an application to determine if
they function together correctly. The 'parts' can be code modules, individual
applications, client and server applications on a network, etc. This type of testing is
especially relevant to client/server and distributed systems.
• Functional testing - black-box type testing geared to functional requirements of
an application; this type of testing should be done by testers. This doesn't mean that
the programmers shouldn't check that their code works before releasing it (which of
course applies to any stage of testing.)
• System testing - black-box type testing that is based on overall requirements
specifications; covers all combined parts of a system.
• End-to-end testing - similar to system testing; the 'macro' end of the test scale;
involves testing of a complete application environment in a situation that mimics real-
world use, such as interacting with a database, using network communications, or
interacting with other hardware, applications, or systems if appropriate.
• Sanity testing or smoke testing - typically an initial testing effort to determine if
a new software version is performing well enough to accept it for a major testing
effort. For example, if the new software is crashing systems every 5 minutes, bogging
down systems to a crawl, or corrupting databases, the software may not be in a 'sane'
enough condition to warrant further testing in its current state.
• Regression testing - re-testing after fixes or modifications of the software or its
environment. It can be difficult to determine how much re-testing is needed,
especially near the end of the development cycle. Automated testing tools can be
especially useful for this type of testing.
• Acceptance testing - final testing based on specifications of the end-user or
customer, or based on use by end-users/customers over some limited period of time.
• Load testing - testing an application under heavy loads, such as testing of a web
site under a range of loads to determine at what point the system's response time
degrades or fails.
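The load-testing idea can be sketched as below. The service call is a stub standing in for a real HTTP request (a real load test would hit the site itself), so the timings are only illustrative of the measurement's shape: drive the system with increasing numbers of simulated users and record how long the batch takes.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request():
    """Stub for a real web request; always 'succeeds' after a fixed delay."""
    time.sleep(0.01)
    return 200

def measure(concurrent_users, requests_per_user=5):
    """Fire requests from N simulated users; return elapsed time and results."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        results = list(pool.map(lambda _: fake_request(),
                                range(concurrent_users * requests_per_user)))
    return time.perf_counter() - start, results

for users in (1, 5, 10):
    elapsed, results = measure(users)
    ok = all(r == 200 for r in results)
    print(f"{users:3d} users: {elapsed:.2f}s elapsed, all OK: {ok}")
```

Plotting elapsed time (or per-request latency) against the user count reveals the point at which response time starts to degrade.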
• Stress testing - term often used interchangeably with 'load' and 'performance'
testing. Also used to describe such tests as system functional testing while under
unusually heavy loads, heavy repetition of certain actions or inputs, input of large
numerical values, large complex queries to a database system, etc.
• Performance testing - term often used interchangeably with 'stress' and 'load'
testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in
requirements documentation or QA or Test Plans.
• Usability testing - testing for 'user-friendliness'. Clearly this is subjective, and
will depend on the targeted end-user or customer. User interviews, surveys, video
recording of user sessions, and other techniques can be used. Programmers and testers
are usually not appropriate as usability testers.
• Install/uninstall testing - testing of full, partial, or upgrade install/uninstall
processes.
• Recovery testing - testing how well a system recovers from crashes, hardware
failures, or other catastrophic problems.
• Failover testing - typically used interchangeably with 'recovery testing'.
• Security testing - testing how well the system protects against unauthorized
internal or external access and willful damage.
• Compatibility testing - testing how well software performs in a particular
hardware/software/operating system/network/etc. environment.
• Exploratory testing - often taken to mean a creative, informal software test that
is not based on formal test plans or test cases; testers may be learning the software as
they test it.
• Ad-hoc testing - similar to exploratory testing, but often taken to mean that the
testers have significant understanding of the software before testing it.
• Context-driven testing - testing driven by an understanding of the environment,
culture, and intended use of software. For example, the testing approach for life-
critical medical equipment software would be completely different than that for a
low-cost computer game.
• User acceptance testing - determining if software is satisfactory to an end-user or
customer.
• Comparison testing - comparing software weaknesses and strengths to those of
competing products.
• Alpha testing - testing of an application when development is nearing
completion; minor design changes may still be made as a result of such testing.
Typically done by end-users or others, not by programmers or testers.
• Beta testing - testing when development and testing are essentially completed
and final bugs and problems need to be found before final release. Typically done by
end-users or others, not by programmers or testers.
• Mutation testing - a method for determining if a set of test data or test cases is
useful, by deliberately introducing various code changes ('bugs') and retesting with
the original test data/cases to determine if the 'bugs' are detected. Proper
implementation requires large computational resources.
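The mutation-testing idea above can be sketched in a few lines; the function under test, the injected 'bug', and the test data are all invented for illustration. A deliberately broken copy (a 'mutant') is created, and the existing test data is judged useful only if it detects ('kills') the mutant.

```python
# The code under test, kept as source so we can mutate it textually.
code_under_test = "def biggest(a, b):\n    return a if a > b else b\n"

# Existing test data: ((arguments), expected result)
test_cases = [((3, 7), 7), ((9, 2), 9), ((5, 5), 5)]

def run_tests(source):
    """Compile the source and run every test case against it."""
    namespace = {}
    exec(source, namespace)
    biggest = namespace["biggest"]
    return all(biggest(*args) == expected for args, expected in test_cases)

assert run_tests(code_under_test)           # the original code passes

mutant = code_under_test.replace(">", "<")  # deliberately injected bug
killed = not run_tests(mutant)
print("mutant killed:", killed)             # True: the test data caught it
```

Real mutation-testing tools generate hundreds of such mutants automatically, which is why the technique demands large computational resources.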
5 common solutions to software development problems:
• Solid requirements - clear, complete, detailed, cohesive, attainable, testable
requirements that are agreed to by all players. Use prototypes to help nail down
requirements. In 'agile'-type environments, continuous coordination with
customers/end-users is necessary.
• Realistic schedules - allow adequate time for planning, design, testing, bug
fixing, re-testing, changes, and documentation; personnel should be able to complete
the project without burning out.
• Adequate testing - start testing early on, re-test after fixes or changes, plan for
adequate time for testing and bug-fixing. 'Early' testing ideally includes unit testing
by developers and built-in testing and diagnostic capabilities.
• Stick to initial requirements - be prepared to defend against excessive changes
and additions once development has begun, and be prepared to explain consequences.
If changes are necessary, they should be adequately reflected in related schedule
changes. If possible, work closely with customers/end-users to manage expectations.
This will provide them a higher comfort level with their requirements decisions and
minimize excessive changes later on.
• Communication - require walkthroughs and inspections when appropriate; make
extensive use of group communication tools - e-mail, groupware, networked bug-
tracking tools and change management tools, intranet capabilities, etc.; insure that
information/documentation is available and up-to-date - preferably electronic, not
paper; promote teamwork and cooperation; use prototypes if possible to clarify
customers' expectations.
‘Quality software’ is reasonably bug-free, delivered on time and within
budget, meets requirements and/or expectations, and is maintainable. However,
quality is obviously a subjective term. It will depend on who the 'customer' is and
their overall influence in the scheme of things. A wide-angle view of the 'customers'
of a software development project might include end-users, customer acceptance
testers, customer contract officers, customer management, the development
organization's management/accountants/testers/salespeople, future software
maintenance engineers, stockholders, magazine columnists, etc. Each type of
'customer' will have their own slant on 'quality' - the accounting department might
define quality in terms of profits while an end-user might define quality as user-
friendly and bug-free.
'Good code' is code that works, is bug free, and is readable and maintainable.
Some organizations have coding 'standards' that all developers are supposed to adhere
to, but everyone has different ideas about what's best, or what is too many or too few
rules. There are also various theories and metrics, such as McCabe Complexity
metrics. It should be kept in mind that excessive use of standards and rules can stifle
productivity and creativity. 'Peer reviews', 'buddy checks', code analysis tools, etc. can
be used to check for problems and enforce standards.
'Design' could refer to many things, but often refers to 'functional design' or 'internal
design'. Good internal design is indicated by software code whose overall structure is
clear, understandable, easily modifiable, and maintainable; is robust with sufficient error-
handling and status logging capability; and works correctly when implemented. Good
functional design is indicated by an application whose functionality can be traced back to
customer and end-user requirements. For programs that have a user interface, it's often a
good idea to assume that the end user will have little computer knowledge and may not
read a user manual or even the on-line help; some common rules-of-thumb include:
• The program should act in a way that least surprises the user.
• It should always be evident to the user what can be done next and how to exit.
• The program shouldn't let the users do something stupid without warning them.
The software life cycle begins when an application is first conceived and ends when it
is no longer in use. It includes aspects such as initial concept, requirements analysis,
functional design, internal design, documentation planning, test planning, coding,
document preparation, integration, testing, maintenance, updates, retesting, phase-out,
and other aspects.
Use of automated testing tools to make testing easier:
• For small projects, the time needed to learn and implement them may not be
worth it. For larger projects, or on-going long-term projects they can be valuable.
• A common type of automated tool is the 'record/playback' type. For example, a
tester could click through all combinations of menu choices, dialog box choices,
buttons, etc. in an application GUI and have them 'recorded' and the results logged by
a tool. The 'recording' is typically in the form of text based on a scripting language
that is interpretable by the testing tool. If new buttons are added, or some underlying
code in the application is changed, etc. the application might then be retested by just
'playing back' the 'recorded' actions, and comparing the logging results to check
effects of the changes. The problem with such tools is that if there are continual
changes to the system being tested, the 'recordings' may have to be changed so much
that it becomes very time-consuming to continuously update the scripts. Additionally,
interpretation and analysis of results (screens, data, logs, etc.) can be a difficult task.
Note that there are record/playback tools for text-based interfaces also, and for all
types of platforms.
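The record/playback approach described above can be illustrated with a toy sketch; the 'recording' format, the widget names, and the simulated GUI state are all invented for this example. The recording is plain text that a playback engine interprets line by line, logging each replayed action.

```python
# A toy 'recording': one captured user action per line.
recording = """\
click menu File
click menu Open
type textbox filename report.txt
click button OK
"""

log = []

def playback(script, gui_state):
    """Replay recorded actions against a (here, simulated) application."""
    for line in script.strip().splitlines():
        action, widget, name, *args = line.split()
        if action == "type":
            gui_state[name] = args[0]   # simulate typing into a field
        log.append(f"{action} {widget} {name}: OK")
    return gui_state

state = playback(recording, {})
print(state)  # the text box now holds the typed filename
print(log)    # the results log, one entry per replayed action
```

The maintenance problem noted above is visible even here: rename the `filename` text box in the application, and every recording that mentions it must be edited before it plays back correctly.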
• Another common type of approach for automation of functional testing is 'data-
driven' or 'keyword-driven' automated testing, in which the test drivers are separated
from the data and/or actions utilized in testing (an 'action' would be something like
'enter a value in a text box'). Test drivers can be in the form of automated test tools or
custom-written testing software. The data and actions can be more easily maintained
such as via a spreadsheet - since they are separate from the test drivers. The test
drivers 'read' the data/action information to perform specified tests. This approach can
enable more efficient control, development, documentation, and maintenance of
automated tests/test cases.
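A data-driven/keyword-driven driver can be sketched as follows. The action rows, field names, and the simulated login logic are invented for illustration; the point is the separation of concerns: the rows could live in a spreadsheet maintained by non-programmers, while the driver below stays unchanged.

```python
import csv
import io

# The maintained test data: one (action, target, value) row per step.
TEST_DATA = """\
action,target,value
enter,username,vamsee
enter,password,secret
press,login,
verify,status,logged-in
"""

def run_keyword_tests(rows, app):
    """Generic driver: read action rows and apply them to the application."""
    failures = []
    for row in rows:
        if row["action"] == "enter":
            app[row["target"]] = row["value"]
        elif row["action"] == "press":
            # Simulated application logic standing in for the real system.
            app["status"] = "logged-in" if app.get("password") else "error"
        elif row["action"] == "verify":
            if app.get(row["target"]) != row["value"]:
                failures.append(row)
    return failures

rows = list(csv.DictReader(io.StringIO(TEST_DATA)))
failures = run_keyword_tests(rows, app={})
print("failures:", failures)  # an empty list means every verify step passed
```

Adding a new test case means adding rows of data, not writing new driver code, which is what makes this style easier to control, document, and maintain.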
• Other automated tools can include:
code analyzers - monitor code complexity, adherence to standards, etc.
coverage analyzers - these tools check which parts of the code have been exercised
by a test, and may be oriented to code statement coverage, condition coverage, or path
coverage.
memory analyzers - such as bounds-checkers and leak detectors.
load/performance test tools - for testing client/server and web applications under
various load levels.
web test tools - to check that links are valid, HTML code usage is correct, client-side
and server-side programs work, a web site's interactions are secure.
other tools - for test case management, documentation management, bug reporting,
and configuration management.
4. Rational Robot:
Rational Robot is an automated, GUI-based testing tool from Rational Corporation. It is
lightweight neither in usability nor in price: the tool is very expensive. Rational Robot
is a development environment with its own custom scripting language, SQABasic, which
is based on an early version of Visual Basic; Robot can therefore be easy to use if you
already know Basic.
Rational Robot is a general-purpose test automation tool for QA teams who want to
perform functional testing of client/server applications. It lowers the learning curve for
testers discovering the value of test automation processes, enables experienced test-
automation engineers to uncover more defects by extending their test scripts with
conditional logic to cover more of the application and to define test cases to call external
DLLs or executables. It also provides test cases for common objects such as menus, lists
and bitmaps, and specialized test cases for objects specific to the development
environment. It includes built-in test management, and integrates with the tools in the
IBM Rational Unified Process for defect tracking, change management and requirements
traceability. It supports multiple UI technologies for everything from Java, the Web and all
VS.NET controls to Oracle Forms, Borland Delphi and Sybase PowerBuilder
applications.
4.1 Features and benefits:
Rational Robot automates regression, functional and configuration testing for e-
commerce, client/server and ERP applications. It is used to test applications based upon a
wide variety of user interface technologies, and is integrated with the Rational
TestManager solution to provide desktop management support for all testing activities.
Simplifies configuration testing - Rational Robot can be used to distribute functional
testing among many machines, each one configured differently. The same functional tests
can be run simultaneously, shortening the time needed to identify problems with specific
configurations.
Tests many types of applications - Rational Robot supports a wide range of environments
and languages, including HTML and DHTML, Java, VS.NET, Microsoft Visual Basic
and Visual C++, Oracle Developer/2000, PeopleSoft, Sybase PowerBuilder and Borland
Delphi. It also supports advanced testing of Java, VS.NET and Web-based applications, as
well as of 3270 (zSeries) and 5250 (iSeries) terminal-based applications.
Ensures testing depth - Tests beyond an application's UI to the hundreds of properties of
an application's component objects - such as ActiveX Controls, OCXs, Java applets and
many more with just the click of a mouse.
Tests custom controls and objects - Rational Robot allows you to test each application
component under varying conditions and provides test cases for menus, lists,
alphanumeric characters, bitmaps and many more objects.
Provides an integrated programming environment - Rational Robot generates test scripts
in SQABasic, within an integrated MDI scripting environment that allows you to view and edit
your test script while you are recording.
Helps you analyze problems quickly - Rational Robot automatically logs test results into
the integrated Rational Repository, and color codes them for quick visual analysis. By
double-clicking on an entry, you are brought directly to the corresponding line in your
test script, thereby ensuring fast analysis and correction of test script errors.
Enables reuse - Rational Robot ensures that the same test script, without any
modification, can be reused to test an application running on Microsoft Windows XP,
Windows ME, Windows 2000, Windows 98, or Windows NT.
4.2 Product Functionality:
Robot automatically plays back scripts that emulate user actions interacting with the GUI
of applications under test (AUT). The validity of the AUT is determined by comparators
at Verification Points, where objects of the AUT are compared against a baseline of what
was captured when the script was recorded.
Robot records several types of scripts:
1. SQABasic scripts (using MS-Basic language syntax) capture the commands equivalent
to each user action.
2. RobotJ scripts (using Java language syntax). These are compiled into .class files
containing java bytecode.
3. Virtual User (VU) scripts (using C language syntax) capture entire streams of
conversations in HTML, SQL, Tuxedo, CORBA Inter-ORB (IIOP), Jolt, and raw-socket
protocols sent over the network. These are compiled into dynamic-link library (.dll) files
and linked into a .obj compiled from a .c source file which calls the .dll file.
All of these script types can be initiated from the Test Manager product. VU scripts are
executed from a schedule. A separate IBM product, Rational Suite Performance studio
(LoadTest.exe), plays back Virtual User (VU) script commands to determine an
application's performance speed and to detect contention problems caused by multiple
users performing actions simultaneously.
Captured scripts typically need to be edited in order to:
1. Add for, while, and do-while loops to simplify repetitive actions.
2. Add conditional branching.
3. Modify think time variables.
4. Respond to runtime errors.
5. Store and retrieve test data to/from datapool files.
There are several ways to create Robot scripts. Scripts can read and write to datapools.
As scripts run, log records are generated into log files used to trace script execution.
4.3 Product History:
The product name Robot was created after Rational purchased SQA Team Test v6.1.
Before that the product was named PLA. This is the reason for references to “SQA” and
“PLA” within the product. Thus, Robot scripts are in Rational’s SQABasic language
which is based on an early version of Microsoft’s Visual Basic language.
5. Test Director:
TestDirector allows you to deploy high-quality applications quickly and effectively by
providing a consistent, repeatable process for gathering requirements, planning and
scheduling tests, analyzing results, and managing defects and issues. TestDirector is a
single, Web-based application for all essential aspects of quality management —
Requirements Management, Test Plan, Test Lab, and Defects Management. You can
leverage these core modules either as a standalone solution or integrated within a global
Quality Center of Excellence environment.
TestDirector supports high levels of communication and collaboration among IT teams.
Whether you are coordinating the work of many disparate QA teams, or working with a
large, distributed Center of Excellence, TestDirector helps facilitate information access
across geographical and organization boundaries.
Using TestDirector, multiple groups throughout your organization can contribute to the
quality process:
1. Business analysts define application requirements and testing objectives
2. Test managers and project leads design test plans and develop test cases
3. Test automation engineers create automated scripts and store them in the repository
4. QA testers run manual and automated tests, report execution results, and enter defects
5. Developers review and fix defects logged into the database
6. Project managers create application status reports and manage resource allocation
7. Product managers decide whether an application is ready to be released.
TestDirector supports the entire testing process (requirements management; planning,
building, scheduling, and executing tests; defect management; and project status analysis)
through a single Web-based application. It allows teams to access testing assets anytime,
anywhere via a browser interface. It integrates with the industry's widest range of third-
party applications, preserving your investment in existing solutions and creating an end-to-
end quality-management infrastructure. It also manages manual and automated tests.
TestDirector helps jumpstart automation projects. It accelerates testing cycles by
scheduling and running tests automatically, unattended, 24x7 and the results are stored in
a central repository, creating an accurate audit trail for analysis and enabling consistent
quality processes. It allows teams to analyze application readiness at any point in the
testing process with integrated graphs and reports.
TestDirector streamlines the quality management process from requirements gathering
through planning, scheduling and running tests, to defect/issue tracking and management
in a single browser-based application.
Requirements-based testing keeps the testing effort on track and measures the application
against business-user needs. TestDirector’s Requirements Manager links test cases to
application functional requirements, ensuring traceability throughout the testing process.
Using TestDirector, you can easily see what percentage of the application functional
requirements are covered by tests, how many of these tests have been run, and how many
have passed or failed.
Based on the requirements, testers can start building the test plan and designing the actual
tests. Test plans can be created in TestDirector, or if your organization has been using
Word or Excel – test names, descriptions and expected results can be imported into
TestDirector’s repository. TestDirector supports both manual and automated tests, as well
as the transition from manual tests to automated ones. By maintaining all test planning
information in a central repository, you can easily re-use whole test plans or individual
test cases for future application releases. The testing information can be shared between
multiple projects and is preserved if the tester leaves the team.
Scheduling and Running Tests:
After test design and development issues have been addressed, the testing team is ready
to start running tests. To test the system as a whole, testers need to perform various types
of testing - functional, regression, load, unit and integration – each with its own set of
requirements, schedules and procedures. TestDirector’s Test Lab Manager can schedule
tests to run unattended, overnight or when the system is in least demand for other
resources. But this kind of unattended running of scripts is not recommended. By
defining dependencies between tests, you can realistically emulate real-life business
processes, while making the tests themselves simpler and easier to maintain and reuse.
Analyzing defects and issues is what helps managers make the “go/no-go” decision about
application deployment. By analyzing the defect statistics, you can take a snapshot of the
application under test and tell exactly how many defects you currently have, their status,
severity, priority, age, etc. TestDirector’s defect manager supports the entire defect
lifecycle from initial problem detection through fixing the defect and verifying the fix. By
defining the defect process flow, you can ensure that no defect is overlooked or closed
before it’s been addressed. Before every new defect is submitted, TestDirector will check
the database for similar defects, minimizing duplicate defects and eliminating the need
for manual checking for duplicates.
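The duplicate check described above can be sketched with a simple text-similarity heuristic. TestDirector's actual matching is internal to the product, so the `difflib` ratio, the threshold, and the sample defect summaries below are purely illustrative assumptions.

```python
from difflib import SequenceMatcher

# Hypothetical defects already in the project database.
existing_defects = [
    "Login button unresponsive after session timeout",
    "Grade report prints blank page for transfer students",
]

def find_similar(summary, defects, threshold=0.6):
    """Flag existing defects whose summaries closely match a new one,
    the way a tool would warn before a duplicate is submitted."""
    return [d for d in defects
            if SequenceMatcher(None, summary.lower(), d.lower()).ratio()
            >= threshold]

new_defect = "Login button is unresponsive after a session timeout"
print(find_similar(new_defect, existing_defects))  # flags the first entry
```

Even a crude similarity score like this catches near-identical wordings, which is enough to prompt the submitter to check before logging a duplicate.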
Graphs and Reports:
The testing process generates large amounts of data, and TestDirector's customizable
graphs and reports assist with analyzing it. In a traditional organization, it's not
uncommon to spend 10 or even 20 hours creating a test execution report or a release
status assessment. With TestDirector, all of this information is at your fingertips, so you
can make an up-to-the-minute decision on your application status or team productivity.
Successful Web application implementation and testing can require a combination of
tools from many different vendors. Mercury has established partnerships with many
leading application lifecycle software vendors to seamlessly integrate TestDirector and
Mercury Quality Center applications into their solutions, from component modeling to
requirements tracking to configuration management.
TestDirector's Open API:
TestDirector provides a complete, open-ended test management framework that enables
you to manage and control all phases of application testing. Specifically, TestDirector
provides an application programming interface (API) that enables you to extend
TestDirector applications' functionality to other testing and reporting solutions.
TestDirector can drive other testing tools and return results to the central repository.
Using TestDirector's open API, you can integrate your own configuration management,
defect tracking, and home-grown testing tools with a TestDirector project database.
TestDirector's open API is organized into categories according to its core modules
(Requirements Management, Test Plan, Test Lab, and Defect Tracking) to enable easy
information access at any stage of the testing process. These APIs provide functions that
let you connect to a project database, import information from external applications to a
project database, and export information from a project database to an external
application.
6. Interconnectivity:
In this chapter we will discuss how all the other chapters are interconnected in our
project. In the previous chapters we mainly focused on Software Quality Assurance
(SQA), Rational Robot and TestDirector. The aim of this project is to ensure the Software
Quality Assurance of the Campus Solutions product. In this chapter we learn how
Rational Robot and TestDirector make life easier for testers during the testing phase
and contribute to the success of the project.
To test the scripts and ensure the product was flawless, we used Rational Robot. We
loaded all of the scripts into Rational Robot by specifying the path to their location,
ran them, and verified the results. Whenever an error was encountered, we tried to find
the cause of the bug and then fix it, and the scripts were then retested. This process
continued until we finally had a flawless product. During this testing phase we used
TestDirector to keep track of all the work that was done each week. TestDirector was
updated on a regular basis, usually weekly, to track the progress of the work. All the
people on the Quality Assurance team had access to view the status of the scripts in
TestDirector.
7. Conclusion:
The Software Quality Assurance of Campus Solutions, using Rational Robot, was
successfully completed. This product, developed by PeopleSoft, was successfully tested
using the Rational Robot testing tool. Campus Solutions has helped many universities
take pride in the online services they offer, and made life much easier for students by
providing online access to those services. It also manages recruitment and admissions,
financial aid, and student records in a collaborative, real-time environment, and gives
instructors new learning, grading, and advising tools. It helps universities use their staff
resources more efficiently and reduce the time spent on day-to-day tasks, enabling the
staff to focus more on student needs. Appropriate care was taken during the entire testing
phase to assure the safe and secure performance of this product. This report has covered
the various functions available in the Campus Solutions product.
References:
Rational Robot: http://www.wilsonmar.com/1robot.htm
Campus Solutions: http://www.peoplesoft.com
Test Director: http://www.mercury.com