Software Quality Assurance
(1) A planned and systematic pattern of all actions necessary to provide adequate confidence that an item or
product conforms to established technical requirements.
(2) A set of activities designed to evaluate the process by which products are developed or manufactured.




What's the difference between a client/server and a Web application?
Client/server describes any application architecture where one server application and one or many client
applications are involved, such as a mail server and MS Outlook Express; it can be a web application as
well. A Web application is a kind of client/server application that is hosted on a web server and
accessed over the internet or an intranet. Many things differ between testing the two types
above, more than can be covered in one post, but you can look at the data flow, communication and server-side
variables such as session handling and security, etc.




Software Quality Assurance Activities

        Application of Technical Methods (Employing proper methods and tools for developing software)
        Conduct of Formal Technical Review (FTR)
        Testing of Software
        Enforcement of Standards (Customer imposed standards or management imposed standards)
        Control of Change (Assess the need for change, document the change)
        Measurement (Software Metrics to measure the quality, quantifiable)
        Record Keeping and Reporting (documentation of reviews, change control, etc., i.e. the benefits of
         documentation).




What's the difference between STATIC TESTING and DYNAMIC TESTING?

Answer1:
Dynamic testing: requires the program to be executed. The program is run on some test cases and the results of
the program's performance are examined to check whether the program operated as expected.
Static testing: does not involve program execution, e.g. compiler tasks such as syntax and type checking, as well
as symbolic execution, program proving, data flow analysis and control flow analysis.

Answer2:
Static Testing: Verification performed without executing the system code
Dynamic Testing: Verification and validation performed by executing the system code
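
To make the contrast concrete, here is a small illustrative sketch (not taken from the answers above, written in Python): the same function is examined by a static check without being run, and exercised by a dynamic test that actually executes it.

# Illustrative sketch: the same function viewed by a static check and a dynamic test.

def divide(a: float, b: float) -> float:
    """Return a divided by b."""
    return a / b

# Static testing: a type checker such as mypy, or a linter, examines the source
# above without ever executing it (e.g. it can flag divide("x", 2) as a type error).

# Dynamic testing: the code is actually run against test cases and the observed
# behaviour is compared with the expected result.
def test_divide():
    assert divide(10, 2) == 5
    try:
        divide(1, 0)          # abnormal condition
    except ZeroDivisionError:
        pass                  # expected failure mode
    else:
        raise AssertionError("expected ZeroDivisionError")

if __name__ == "__main__":
    test_divide()
    print("dynamic tests passed")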




Software Testing
Software testing is a critical component of the software engineering process. It is an element of software
quality assurance and can be described as a process of running a program in such a manner as to uncover
any errors. This process, while seen by some as tedious, tiresome and unnecessary, plays a vital role in
software development.

Testing involves operation of a system or application under controlled conditions and evaluating the results
(eg, 'if the user is in interface A of the application while using hardware B, and does C, then D should
happen'). The controlled conditions should include both normal and abnormal conditions. Testing should
intentionally attempt to make things go wrong to determine if things happen when they shouldn't or things
don't happen when they should. It is oriented to 'detection'.

Organizations vary considerably in how they assign responsibility for QA and testing. Sometimes they're
the combined responsibility of one group or individual. Also common are project teams that include a mix
of testers and developers who work closely together, with overall QA processes monitored by project
managers. It will depend on what best fits an organization's size and business structure.




What's the difference between QA and testing?
The quality assurance
process is a process for providing adequate assurance that the software products and processes in the
product life cycle conform to their specified requirements and adhere to their established plans.
The purpose of Software Quality Assurance is to provide management with appropriate visibility into the
process being used by the software project and into the products being built.




What black box testing types can you tell me about?
Black box testing is functional testing, not based on any knowledge of internal software design or code.
Black box testing is based on requirements and functionality. Functional testing is also a black-box type of
testing geared to functional requirements of an application.
System testing, acceptance testing, closed box testing and integration testing are also black box types of
testing.




What is software testing methodology?
One software testing methodology is the use of a three-step process of...
1. Creating a test strategy;
2. Creating a test plan/design; and
3. Executing tests. This methodology can be used and molded to your organization's needs. Rob Davis
believes that using this methodology is important in the development and ongoing maintenance of his
clients' applications.




What’s the difference between QA and testing?
TESTING means “Quality Control”; and
QUALITY CONTROL measures the quality of a product; while
QUALITY ASSURANCE measures the quality of processes used to create a quality product.




Why Testing CANNOT Ensure Quality
Testing in itself cannot ensure the quality of software. All testing can do is give you a certain level of
assurance (confidence) in the software. On its own, the only thing that testing proves is that under specific
controlled conditions, the software functioned as expected by the test cases executed.



How to find all the bugs during the first round of testing?

Answer1:
I understand the problems you are facing. I was involved with a web-based HR system that was
encountering the same problems. What I ended up doing was going back over a few release cycles and
analyzing the types of defects found and when (in the release cycle including the various testing cycles)
they were found. I started to notice a distinct trend in certain areas.
For each defect type, I started looking into the possibility if it could have been caught in the prior phase
(lots of things were being found in the Systems test phase that should have been caught earlier). If so, why
wasn't it caught? Could it have been caught even earlier (say via a peer review)? If so, why not? This led
me to start examining the various processes and found a definite problem with peer reviews (not very
thorough IF they were even being done) and with the testing process (not rigorous enough). We worked
with the customer and folks doing the testing to start educating them and improving the processes. The
result was the number of defects found in the latter test stages (System test for example) were cut by over
half! It was getting harder to find problems with the product as they were discovering them earlier in the
process -- saving time & money!

Answer2:
There could be several reasons for not catching a showstopper in the first or second build/rev. A found
defect can either functionally or psychologically mask a second or third defect. Functionally, the thread or
path to the second defect could have been broken or rerouted to another path; psychologically, the tester
who found the first defect knows the app must go back and be rewritten, so he/she proceeds halfheartedly
and misses the second one. I've seen both cases. It is difficult to keep testing a known defective app.
The testers seem to lose interest, knowing that the effort they put in to test it will have to be redone on the
next iteration. This will test your mettle as a lead: get them to follow through and maintain a professional
attitude.

Answer3:
The best way is to prevent bugs in the first place. Also testing doesn't fix or prevent bugs. It just provides
information. Applying this information to your situation is the important part.
The other thing that you may be encountering is that testing tends to be exploratory in nature. You have
stated that these are existing bugs, but not stated whether tests already existed for these bugs.
Bugs in early cycles inhibit exploration. Additionally, a tester's understanding of the application and its
relationships and interactions will improve with time and thus more 'interesting' bugs tend to be found in
later iterations as testers expand their exploration (i.e. think of new tests).
No matter how much time you have to read through the documents and inspect artefacts, seeing the actual
application is going to trigger new thoughts, and thus introduce previously unthought of tests. Exposure to
the application will trigger new thoughts as well, thus the longer your testing goes, the more new tests (and
potential bugs) are going to be found. Iterative development is a good way to counter this, as testers get to
see something physical earlier, but this issue will always exist to some degree as the passing of time, and
exploration of the application allow new tests to be thought of at inconvenient moments.

Is regression testing performed manually?
The answer to this question depends on the initial testing approach. If the initial testing approach was
manual testing, then the regression testing is usually performed manually. Conversely, if the initial testing
approach was automated testing, then the regression testing is usually performed by automated testing.




How do you choose which defects to fix when there are 1,000,000 of them? (It would take too many resources
to remove them all.)


Answer1:
Are you the programmer who has to fix them, the project manager who has to supervise the programmers,
the change control team that decides which areas are too high risk to impact, the stakeholder-user whose
organization pays for the damage caused by the defects or the tester?
The tester does not choose which defects to fix.
The tester helps ensure that the people who do choose, make a well-informed choice.
Testers should provide data to indicate the *severity* of bugs, but the project manager or the development
team do the prioritization.
When I say "indicate the severity", I don't just mean writing S3 on a piece of paper. Test groups often do
follow-up tests to assess how serious a failure is and how broad the range of failure-triggering conditions is.
Priority depends on a wide range of factors, including code-change risk, difficulty/time to complete the
change, which stakeholders are affected by the bug, the other commitments being handled by the person
most knowledgeable about fixing a certain bug, etc. Many of these factors are not within the knowledge of
most test groups.

Answer2:
As testers we don't fix the defects, but we surely can prioritize them once they are detected. In our org we assign a
severity level to each defect depending upon its influence on other parts of the product. If a defect doesn't
allow you to go ahead and test the product, it is a critical one, so it has to be fixed ASAP. We have 5
levels:
1-critical
2-High
3-Medium
4-Low
5-Cosmetic

Dev can group all the critical ones and take them to fix before any other defect.
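
As an illustration of the idea above, here is a hypothetical Python sketch that groups reported defects by these five severity levels so the critical ones surface first (the defect records are invented for the example).

# Hypothetical sketch: sorting reported defects by the five severity levels
# described above so critical ones come first.
SEVERITY_ORDER = {"critical": 1, "high": 2, "medium": 3, "low": 4, "cosmetic": 5}

defects = [
    {"id": "D-101", "summary": "App crashes on login", "severity": "critical"},
    {"id": "D-102", "summary": "Typo on help page", "severity": "cosmetic"},
    {"id": "D-103", "summary": "Report totals are wrong", "severity": "high"},
]

# Sort so that severity 1 (critical) is handled before anything else.
for defect in sorted(defects, key=lambda d: SEVERITY_ORDER[d["severity"]]):
    print(SEVERITY_ORDER[defect["severity"]], defect["id"], defect["summary"])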

Answer3:
Defects are generally classified in a priority/severity grid:

Priority/Severity    P1    P2    P3
S1
S2
S3

Every organization / software has some target for fixing the bugs.
Example -
P1S1 -> 90% of the bugs reported should be fixed.
P3S3 -> 5% of the bugs reported may be fixed. The rest are taken up in later service packs or versions.

Thus the organization should decide its target and act accordingly.
Basically, bug-free software is not possible.

Answer4:
Ideally, the customer should assign priorities to their requirements. They tend to resist this. On a large,
multi-year project I just completed, I would often (in the absence of customer guidelines) rely on my
knowledge of the application and the potential downstream impacts in the modeled business process to
prioritize defects.
If the customer doesn't, then I feel the test organization should, based on risk or other similar considerations.




What is software quality?
The quality of the software varies widely from system to system. Some common quality attributes are
stability, usability, reliability, portability, and maintainability.



What are the five dimensions of risk?
Schedule: Unrealistic schedules, exclusion of certain activities when chalking out a schedule etc. could be
deterrents to project delivery on time. An unstable communication link can be considered a probable risk if
testing is carried out from a remote location.
Client: Ambiguous requirements definition, clarifications on issues not being readily available, frequent
changes to the requirements etc. could cause chaos during project execution.
Human Resources: Non-availability of sufficient resources with the skill level expected in the project;
attrition of resources - appropriate training schedules must be planned for resources to
balance the knowledge level to be at par with resources quitting. Underestimating the training effort may
have an impact on the project delivery.
System Resources: Non-availability of, or delay in procuring, critical computer resources - whether hardware,
software tools or licenses - will have an adverse impact.
Quality: Compound factors like lack of resources along with a tight delivery schedule and frequent changes
to requirements will have an impact on the quality of the product tested.




What is good code?
Good code is code that works, is free of bugs, and is readable and maintainable. Organizations usually
have coding standards all developers should adhere to, but every programmer and software engineer has
different ideas about what is best and what are too many or too few rules. We need to keep in mind that
excessive use of rules can stifle both productivity and creativity. Peer reviews and code analysis tools can
be used to check for problems and enforce standards.




How do you perform integration testing?
To perform integration testing, first, all unit testing has to be completed. Upon completion of unit testing,
integration testing begins. Integration testing is black box testing. The purpose of integration testing is to
ensure that distinct components of the application still work in accordance with customer requirements. Test cases
are developed with the express purpose of exercising the interfaces between the components. This activity
is carried out by the test team.
Integration testing is considered complete, when actual results and expected results are either in line or
differences are explainable, or acceptable, based on client input.
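
A minimal sketch of what such an interface-focused test might look like, using two hypothetical components (the class and method names are invented for illustration).

# Minimal sketch (hypothetical components): an integration test exercises the
# interface between two units that were each unit tested in isolation.

class InventoryService:
    def __init__(self):
        self._stock = {"SKU-1": 5}

    def reserve(self, sku: str, qty: int) -> bool:
        if self._stock.get(sku, 0) >= qty:
            self._stock[sku] -= qty
            return True
        return False


class OrderService:
    def __init__(self, inventory: InventoryService):
        self.inventory = inventory

    def place_order(self, sku: str, qty: int) -> str:
        # The interface under test: OrderService depends on InventoryService.
        return "CONFIRMED" if self.inventory.reserve(sku, qty) else "REJECTED"


def test_order_and_inventory_work_together():
    inventory = InventoryService()
    orders = OrderService(inventory)
    assert orders.place_order("SKU-1", 3) == "CONFIRMED"
    assert orders.place_order("SKU-1", 5) == "REJECTED"   # only 2 units left


if __name__ == "__main__":
    test_order_and_inventory_work_together()
    print("integration test passed")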

Why is back-end testing required if we are going to check the front end? What errors/bugs would we be
missing out on by not doing back-end testing?
Why do we need to do unit testing if all the features are being tested in system testing? What extra things
are tested in unit testing which cannot be tested in system testing?


Answer1:
Assume that you're talking client-server or web. If you test the application on the front end only, you can
see whether the data was stored and retrieved correctly. You can't see if the servers are in an error state or not.
Many server processes are monitored by another process. If they crash, they are restarted. You can't see that
without looking at it.
The data may not be stored correctly either, but the front end may have cached data lying around and it will
use that instead. The least you should be doing is verifying the data as stored in the database.
It is easier to test data being transferred on the boundaries and see the results of those transactions when
you can set the data in a driver.
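
As a rough illustration of verifying the data as stored in the database rather than trusting the front end, here is a hedged sketch using Python's built-in sqlite3 module; the table, columns and values are hypothetical stand-ins for the application's real schema.

# Sketch of a back-end check (table and column names are hypothetical): after
# driving the front end, verify what was actually stored in the database
# instead of trusting what the UI displays.
import sqlite3

def fetch_booking(conn: sqlite3.Connection, booking_id: int):
    return conn.execute(
        "SELECT destination, travel_time FROM bookings WHERE id = ?",
        (booking_id,),
    ).fetchone()

# In a real test this connection would point at the application's database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bookings (id INTEGER PRIMARY KEY, destination TEXT, travel_time TEXT)")
conn.execute("INSERT INTO bookings VALUES (1, 'Paris', '09:30')")  # stand-in for the front-end action

stored = fetch_booking(conn, 1)
assert stored == ("Paris", "09:30"), f"back-end data mismatch: {stored}"
print("back-end verification passed")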

Answer2:
Back-end testing: basically the need for this testing depends on your project. Say your project is a
ticket booking system. On the front end you are provided with an interface where you can book a ticket by
giving the appropriate details (like the place to go and the time you want to travel). It will have a data
storage system (a database, spreadsheet, etc.), which is the back end for storing the details entered by the user.
After submitting the details you might be given a correct acknowledgement, but in the back end the
details might not be updated correctly in the database because of wrong logic in the code. That would cause a
major problem.
Regarding unit-level testing and system testing: unit-level testing performs the basic checks of whether
the application works with the basic requirements; this is done by developers before
delivering to QA. In system testing, in addition to the unit checks, you perform all the checks
(all possible integrated checks required). Basically this is carried out by testers.

Answer3:
Ever heard of the divide and conquer tactic? It is the same method applied to back-end and front-end testing.
A good back-end test will help minimize the burden of the front-end test.
Another point is that you can test the back end while the front end is being developed, so true parallelism can be
achieved.
Back-end testing has another problem which must be addressed before the front end can use it: concurrency.
Building a scenario to test concurrency is a formidable task.
A complex thing is hard to test. Creating such scenarios will leave you unsure which tests you have already done
and which you haven't. What we need is an effective method to test our application. The simplest method I
know is divide and conquer.

Answer4:
A wide range of errors are hard to see if you don't see the code. For example, there are many optimizations
in programs that treat special cases. If you don't see the special case, you don't test the optimization. Also, a
substantial portion of most programs is error handling. Most programmers anticipate more errors than most
testers.
Programmers find and fix the vast majority of their own bugs. This is cheaper, because there is no
communication overhead, faster because there is no delay from tester-reporter to programmer, and more
effective because the programmer is likely to fix what she finds, and she is likely to know the cause of the
problems she sees. Also, the rapid feedback gives the programmer information about the weaknesses in her
programming that can help her write better code.
Many tests -- most boundary tests -- are done at the system level primarily because we don't trust that they
were done at the unit level. They are wasteful and tedious at the system level. I'd rather see them properly
done and properly automated in a suite of programmer tests.




What is Software “Quality”?
Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or
expectations, and is maintainable.
However, quality is a subjective term. It will depend on who the ‘customer’ is and their overall influence in
the scheme of things. A wide-angle view of the ‘customers’ of a software development project might
include end-users, customer acceptance testers, customer contract officers, customer management, the
development organisation’s management/accountants/testers/salespeople, future software maintenance
engineers, stockholders, magazine reviewers, etc. Each type of ‘customer’ will have their own view on
‘quality’ - the accounting department might define quality in terms of profits while an end-user might
define quality as user-friendly and bug-free.




What is retesting?


Answer1:
Retesting is usually equated with regression testing (see above) but it is different in that it follows a
specific fix - such as a bug fix - and is very narrow in focus (as opposed to testing the entire application again in
a regression test). A product should never be released after a change has been applied to the code with
only retesting of the bug fix and without a regression test.

Answer2:
1. Re-testing is testing for a specific bug after it has been fixed (the one given by your definition).
2. Re-testing can be what is done for a bug which was raised by QA but could not be found or
confirmed by Development and has been rejected. QA does a re-test to make sure the bug still exists and
again assigns it back to them.
When the entire project has been tested and the client has some doubts about the quality of testing, re-testing
can be called for. It can also be testing the same application again for better quality.

Answer3:
Regression Testing is the selective retesting of a system that has been modified, to ensure that any bugs
have been fixed, that no other previously working functions have failed as a result of the repairs,
and that newly added features have not created problems with previous versions of the software. It is also
referred to as verification testing.
It is important to determine whether, in a given set of circumstances, a particular series of tests has been
failed. The supplier may want to submit the software for re-testing. The contract should deal with the
parameters for retests, including (1) will test programs which are doomed to failure be allowed to finish
early, or must they be completed in their entirety? (2) when can, or must, the supplier submit his software
for retesting? and (3) how many times can the supplier fail tests and submit software for retesting - is this
based on time spent, or the number of attempts? A well-drawn contract will grant the customer options in
the event of failure of acceptance tests, and these options may vary depending on how many attempts the
supplier has made to achieve acceptance.
So the conclusion is that retesting is more or less regression testing. More appropriately, retesting is a part of
regression testing.

Answer4:
Re-testing is simply executing the test plan another time. The client may request a re-test for any reason -
most likely is that the testers did not properly execute the scripts, poor documentation of test results, or the
client may not be comfortable with the results.
I've performed re-tests when the developer inserted unauthorized code changes, or did not document
changes.
Regression testing is the execution of test cases "not impacted" by the specific project. I am currently
working on testing of a system with poor system documentation (and no user documentation), so our
regression testing must be extensive.

Answer5:
* QA gets a bug fix, and has to verify that the bug is fixed. You might want to check a few things that are a
“gut feel” if you want to and get away by calling it retesting, but not the entire function / module / product.
* Development Refuses a bug on the basis of it being “Non Reproducible”, then retesting, preferably in the
presence of the Developer, is needed.

How to establish a QA process in an organization?
1.CURRENT SITUATION
The first thing you should do is put what you currently do on a piece of paper in some sort of flowchart
diagram. This will allow you to analyze what is currently being done.
2.DEVELOPMENT PROCESS STAGE
Once you have the "big picture", you have to be aware of the current status of your development project or
projects. The processes you select will vary depending on whether you are in the early stages of developing a new
application (i.e. developing a version 1.0) or maintaining an existing application (i.e. working on release
6.7.1).
3. PRIORITIES
The next thing you need to do is identify the priorities of your project, for example: compliance with
industry standards, validation of new functionality (new GUIs, etc.), security, capacity planning (see
"Effective Methods for Software Testing" for more info). Make a list of the priorities, and then
assign them values of (H)igh, (M)edium and (L)ow.
4. TESTING TYPES
Once you are aware of the priorities, focus on the High first, then Medium, and finally evaluate whether the
Low ones need immediate attention.
Based on this, you need to select those Testing Types that will provide coverage for your priorities.
Example of testing types:
- Functional Testing
- Integration Testing
- System Testing
- System-to-System Testing (for testing interfaces)
- Regression Testing
- Load Testing
- Performance Testing
- Stress Testing
Etc.

5. WRITE A TEST PLAN
Once you have determined your needs, the simplest way to document and implement your process is to
elaborate a "Test Plan" for every effort that you are engaged in (i.e. for every release).
For this you can use generic Test Plan templates available on the web that will help you brainstorm and
define the scope of your testing:
- Scope of Testing (defects, functionality, and what will be and will not be tested).
- Testing Types (Functional, Regression, etc).
- Responsible people
- Requirements traceability matrix (match test cases with requirements to ensure coverage)
- Defect tracking
- Test Cases
DURING AND POST-TESTING ACTIVITIES
Make sure you keep track of the completion of your testing activities, the defects found, and that you
comply with the exit criteria prior to moving to the next stage in testing (i.e. User Acceptance Testing, then
Production Release).
Make sure you have a mechanism for:
- Reporting
- Test tracking
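
The requirements traceability matrix mentioned in step 5 can start out very simply; the sketch below is a hypothetical illustration (requirement and test case IDs are invented) of mapping requirements to test cases and spotting coverage gaps.

# Hypothetical sketch of a requirements traceability matrix: each requirement
# is mapped to the test cases that cover it, so uncovered requirements stand out.
traceability = {
    "REQ-001 Login with valid credentials":   ["TC-01", "TC-02"],
    "REQ-002 Lock account after 3 failures":  ["TC-03"],
    "REQ-003 Password reset by email":        [],        # gap: no coverage yet
}

uncovered = [req for req, tests in traceability.items() if not tests]
for req in uncovered:
    print("No test coverage for:", req)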




What is software testing?
1) Software testing is a process that identifies the correctness, completeness, and quality of software.
Actually, testing cannot establish the correctness of software. It can find defects, but cannot prove there are
no defects.
2) It is a systematic analysis of the software to see whether it has performed to specified requirements.
What software testing does is uncover errors; however, it does not tell us that errors are not still present.




Any recommendation for estimation how many bugs the customer will find till gold release?


Answer1:
If you take the total number of bugs in the application and subtract the number of bugs you found, the
difference will be the maximum number of bugs the customer can find.
Seriously, I doubt you will find any sort of calculation or formula that can answer your question with
much accuracy. If you could reference a previous application release, it might give you a rough idea. The
best thing to do is ensure your test coverage is as good as you can make it, then hope you've found the ones
the customer might find.
Remember: software testing is risk management!

Answer2:
For doing estimation:
1.) Find out the coverage during testing of your software and then estimate, keeping in mind the 80-20 principle.
2.) You can also look at the depth of your test cases, e.g. how much unit-level testing and how much life-cycle
testing you have performed (most of the bugs reported by customers come from real life-cycle use of the software).
3.) You can also refer to the defect density from earlier releases of the same product line.
By doing these evaluations you can find the probability of bugs at an approximately optimum estimation.


Answer3:
You can look at the mapping of customer issues from a previous release (if you have the same product line) to
the current release; this is the best way of estimating for the gold release of a migration of any
product. Secondly, up to the gold release most of the issues come from various combinations of installation
testing, like cross-platform, i18n issues, customization, upgrade and migration.
So these can be taken as parameters, and then the estimation can be completed.
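
A hypothetical worked example of the defect-density suggestion above (all numbers are invented for illustration).

# Hypothetical worked example: use the defect density of an earlier release to
# estimate how many defects may remain for the customer to find.
previous_release_defects = 240      # all defects eventually found, incl. field reports
previous_release_size_kloc = 80     # size in thousands of lines of code
defect_density = previous_release_defects / previous_release_size_kloc   # 3.0 per KLOC

current_release_size_kloc = 100
expected_total = defect_density * current_release_size_kloc              # ~300
found_so_far = 260

estimated_remaining = max(expected_total - found_so_far, 0)
print(f"Rough estimate of defects the customer may still find: {estimated_remaining:.0f}")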

When the build comes to the QA team, what parameters should be considered in order to
reject the build upfront without committing to testing?


Answer1:
Agree with R&D on a set of tests such that if one fails you can reject the build. I usually have some build
verification tests that just make sure the build is stable and the major functionality is working.
Then if one test fails you can reject the build.

Answer2:
The only way to legitimately reject a build is if the entrance criteria have not been met. That means that the
entrance criteria to the test phase have been defined and agreed upon up front. This should be standard for
all builds for all products. Entrance criteria could include:
- Turn-over documentation is complete
- All unit testing has been successfully completed and U/T cases are documented in turn-over
- All expected software components have been turned-over (staged)
- All walkthroughs and inspections are complete
- Change requests have been updated to correct status
- Configuration Management and build information is provided, and correct, in turn-over
The only way we could really reject a build without any testing, would be a failure of the turn-over
procedure. There may, but shouldn't be, politics involved. The only way the test phase can proceed is for
the test team to have all components required to perform successful testing. You will have to define
entrance (and exit) criteria for each phase of the SDLC. This is an effort to be taken together by the whole
development team. Development's entrance criteria would include signed requirements, the HLD doc, etc.
Having these criteria pre-established sets everyone up for success.

Answer3:
The primary reason to reject a build is that it is untestable, or if the testing would be considered invalid.
For example, suppose someone gave you a "bad build" in which several of the wrong files had been loaded.
Once you know it contains the wrong versions, most groups think there is no point continuing testing of
that build.
Every reason for rejecting a build beyond this is reached by agreement. For example, if you set a build
verification test and the program fails it, the agreement in your company might be to reject the program
from testing. Some BVTs are designed to include relatively few tests, and those of core functionality.
Failure of any of these tests might reflect fundamental instability. However, several test groups include a
lot of additional tests, and failure of these might not be grounds for rejecting a build.
In some companies, there are firm entry criteria to testing. Many companies pay lip service to entry criteria
but start testing the code whether the entry criteria are met or not. Neither of these is right or wrong--it's the
culture of the company. Be sure of your corporate culture before rejecting a build.

Answer4:
Generally a company will have set some sort of minimum goals/criteria that a build needs to satisfy; if it
satisfies them it can be accepted, otherwise it has to be rejected.
For example:
- Nil high-priority bugs
- No more than 2 medium-priority bugs
- The sanity test (minimum/basic acceptance test) should pass
- The reason for the new build - say a change to a specific case - should pass
- The build must be testable (nothing blocking you from proceeding), or other criteria relating to the new
build or the product
If the above criteria are not met, the build can be rejected.
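
A minimal sketch of a build verification (smoke) suite along the lines of Answer1 and Answer4; the individual checks are hypothetical placeholders for "the build installs, starts, and the core workflow works".

# Minimal build verification (smoke) test sketch: a short list of checks that
# must all pass before the build is accepted for full testing. The checks are
# hypothetical placeholders.

def build_installs_cleanly() -> bool:
    return True          # placeholder: e.g. run the installer silently

def application_starts() -> bool:
    return True          # placeholder: e.g. launch the app, wait for the main window

def core_workflow_works() -> bool:
    return True          # placeholder: e.g. log in and open one record

BVT_CHECKS = [build_installs_cleanly, application_starts, core_workflow_works]

def accept_build() -> bool:
    for check in BVT_CHECKS:
        if not check():
            print(f"REJECT build: {check.__name__} failed")
            return False
    print("Build accepted into the test phase")
    return True

if __name__ == "__main__":
    accept_build()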




What is software testing?

Software testing is more than just error detection;
Testing software is operating the software under controlled conditions, to (1) verify that it behaves “as
specified”; (2) to detect errors, and (3) to validate that what has been specified is what the user actually
wanted.
Verification is the checking or testing of items, including software, for conformance and consistency by
evaluating the results against pre-specified requirements. [Verification: Are we building the system right?]
Error Detection: Testing should intentionally attempt to make things go wrong to determine if things
happen when they shouldn’t or things don’t happen when they should.
Validation looks at the system correctness – i.e. is the process of checking that what has been specified is
what the user actually wanted. [Validation: Are we building the right system?]
In other words, validation checks to see if we are building what the customer wants/needs, and verification
checks to see if we are building that system correctly. Both verification and validation are necessary, but
different components of any testing activity.

The definition of testing according to the ANSI/IEEE 1059 standard is that testing is the process of
analysing a software item to detect the differences between existing and required conditions (that is
defects/errors/bugs) and to evaluate the features of the software item.

Remember: The purpose of testing is verification, validation and error detection in order to find problems –
and the purpose of finding those problems is to get them fixed.




What is the testing lifecycle?
There is no standard, but it consists of:
Test Planning (Test Strategy, Test Plan(s), Test Bed Creation)
Test Development (Test Procedures, Test Scenarios, Test Cases)
Test Execution
Result Analysis (compare Expected to Actual results)
Defect Tracking
Reporting




How to validate data?
I assume that you are doing ETL (extract, transform, load) and cleaning. If my assumption is correct, then
1. you are building a data warehouse / doing data mining, and
2. you are asking the right question in the wrong place.




What is quality?
Quality software is software that is reasonably bug-free, delivered on time and within budget, meets
requirements and expectations and is maintainable. However, quality is a subjective term. Quality depends
on who the customer is and their overall influence in the scheme of things. Customers of a software
development project include end-users, customer acceptance test engineers, customer contract
officers, customer management, the development organization's management, test engineers,
salespeople, software engineers, stockholders and accountants. Each type of customer will have his or her
own slant on quality. The accounting department might define quality in terms of profits, while an end-user
might define quality as user friendly and bug free.

What is a benchmark?
How is it linked with the SDLC (Software Development Life Cycle),
or are the SDLC and benchmarks two unrelated things?
What are the components of a benchmark?
Where does a benchmark fit into software testing?
A Benchmark is a standard to measure against. If you benchmark an application, all future application
changes will be tested and compared against the benchmarked application.




Which of the following statements about generating test cases is false?
1. Test cases may contain multiple valid conditions
2. Test cases may contain multiple invalid conditions
3. Test cases may contain both valid and invalid conditions
4. Test cases may contain more than 1 step.
5. test cases should contain Expected results.


Answer1:
All the conditions mentioned are valid and not a single one can be stated as false.
Here I think "condition" means the input type or situation (some may call it valid or invalid, positive
or negative).
Also, a single test case can contain both input types, and then the final result can be verified (it obviously
should not bring the required result, as one of the input conditions is invalid, when the test case is
executed); this usually happens while writing scenario-based test cases.
For example, consider a web-based registration form in which the input data type for some fields is positive and for
some fields it is negative (in a scenario-based test case).
The screen can be tested by generating various scenarios and combinations. The final result can be
verified against the actual result, and the registration should not be carried out successfully (as one or some input
types are invalid) when this test case is executed.
Writing test cases also depends upon the number of descriptive fields the tester has in the test case
template. The more elaborate the test case template, the greater the ease of writing test cases and generating
scenarios. So writing test cases depends entirely on the in-depth thinking of the tester, and there are no
predefined or hard-coded norms for writing test cases.
This is according to my understanding of testing and test case writing (for many applications
I have written many positive and negative conditions in a single test case and verified different scenarios
by generating such test cases).

Answer2:
The answer to this question is 3: test cases may contain both valid and invalid conditions.
There is no restriction on a test case having multiple steps or more than one valid or invalid
condition. But a test case - whether it is a feature, unit-level or end-to-end test case - cannot contain both a
valid and an invalid condition at the unit test level, because if it did, the concept of one test case producing one
result would be diluted and hence have no meaning.




What is “Quality Assurance”?
“Quality Assurance” measures the quality of processes used to create a quality product.
Software Quality Assurance (‘SQA’ or ‘QA’) is the process of monitoring and improving all activities
associated with software development, from requirements gathering, design and reviews to coding, testing
and implementation.
It involves the entire software development process - monitoring and improving the process, making sure
that any agreed-upon standards and procedures are followed, and ensuring that problems are found and
dealt with, at the earliest possible stage. Unlike testing, which is mainly a ‘detection’ process, QA is
‘preventative’ in that it aims to ensure quality in the methods & processes – and therefore reduce the
prevalence of errors in the software.
Organisations vary considerably in how they assign responsibility for QA and testing. Sometimes they’re
the combined responsibility of one group or individual. Also common are project teams that include a mix
of testers and developers who work closely together, with overall QA processes monitored by project
managers or quality managers.




Quality Assurance and Software Development
Quality Assurance and development of a product are parallel activities. Complete QA includes reviews of
the development methods and standards, reviews of all the documentation (not just for standardisation but
for verification and clarity of the contents also). Overall Quality Assurance processes also include code
validation.
A note about quality assurance: The role of quality assurance is a superset of testing. Its mission is to help
minimise the risk of project failure. QA people aim to understand the causes of project failure (which
includes software errors as an aspect) and help the team prevent, detect, and correct the problems. Often
test teams are referred to as QA Teams, perhaps acknowledging that testers should consider broader QA
issues as well as testing.

What things should be considered when testing a mobile application using the black box technique?


Answer1:
Not sure how your device/server is to operate, so mold these ideas to fit your app. Some highlights are:
Range testing: ensure that you can reconnect when leaving and returning into range.
Port/IP/firewall testing: change ports and IPs to ensure that you can connect and disconnect; modify the
firewall to shut off the connection.
Multiple devices: make sure that a user receives his messages with other devices connected to the same
IP/port. Your app should have a method to determine which device/user sent the message and only return to
it; this should be in the message string sent and received, unless you have conferencing capabilities within the
application.
Cycle the power of the server and watch the mobile unit reconnect automatically.
Have the mobile unit send a message and then power off the unit; when powering back on and reconnecting, ensure
that the message is returned to the mobile unit.

Answer2:
It is not clearly mentioned which area of the mobile application you are testing. Whether it is a simple SMS
application or a WAP application, you need to specify more details. If you are working with WAP, you
can download simulators from the net and start testing on them.




What is the general testing process?
The general testing process is the creation of a test strategy (which sometimes includes the creation of test
cases), creation of a test plan/design (which usually includes test cases and test procedures) and the
execution of tests. Test data are inputs that have been devised to test the system
Test Cases are inputs and outputs specification plus a statement of the function under the test.
Test data can be generated automatically (simulated) or real (live).
The stages in the testing process are as follows:
1. Unit testing: (Code Oriented)
Individual components are tested to ensure that they operate correctly. Each component is tested
independently, without other system components.

2. Module testing:
A module is a collection of dependent components such as an object class, an abstract data type or some
looser collection of procedures and functions. A module encapsulates related components so it can be
tested without other system modules.

3. Sub-system testing: (Integration Testing) (Design Oriented)
This phase involves testing collections of modules, which have been integrated into sub-systems. Sub-
systems may be independently designed and implemented. The most common problems which arise in
large software systems are sub-system interface mismatches. The sub-system test process should therefore
concentrate on the detection of interface errors by rigorously exercising these interfaces.

4. System testing:
The sub-systems are integrated to make up the entire system. The testing process is concerned with finding
errors that result from unanticipated interactions between sub-systems and system components. It is also
concerned with validating that the system meets its functional and non-functional requirements.

5. Acceptance testing:
This is the final stage in the testing process before the system is accepted for operational use. The system is
tested with data supplied by the system client rather than simulated test data. Acceptance testing may reveal
errors and omissions in the system requirements definition (user-oriented), because real data exercises the
system in different ways from the test data. Acceptance testing may also reveal requirement problems
where the system facilities do not really meet the users needs (functional) or the system performance (non-
functional) is unacceptable.

Acceptance testing is sometimes called alpha testing. Bespoke systems are developed for a single client.
The alpha testing process continues until the system developer and the client agrees that the delivered
system is an acceptable implementation of the system requirements.
When a system is to be marketed as a software product, a testing process called beta testing is often used.

Beta testing involves delivering a system to a number of potential customers who agree to use that system.
They report problems to the system developers. This exposes the product to real use and detects errors that
may not have been anticipated by the system builders. After this feedback, the system is modified and
either released for further beta testing or for general sale.




What are the normal practices of QA specialists with respect to software?
These are the normal practices of QA specialists with respect to software
[note: these are all QC activities, not QA activities.]
1. Design review meetings with the system analyst; if possible, QA should be part of requirements
gathering.
2. Analysing the requirements and the design, and tracing the design with respect to the requirements.
3. Test planning.
4. Test case identification using different techniques (with respect to web-based applications and
desktop applications).
5. Test case writing (this part is to be assigned to the testing engineers).
6. Test case execution (this part is to be assigned to the testing engineers).
7. Bug reporting (this part is to be assigned to the testing engineers).
8. Bug review and analysis, so that future bugs can be prevented by designing some standards.

from low-level to high level (Testing in Stages)
Except for small programs, systems should not be tested as a single unit. Large systems are built out of sub-
systems, which are built out of modules that are composed of procedures and functions. The testing process
should therefore proceed in stages where testing is carried out incrementally in conjunction with system
implementation.

The most widely used testing process consists of five stages:

Component testing (unit testing and module testing) and integrated testing (sub-system testing and system
testing) belong to verification, which is process oriented and uses white box testing techniques (tests that are
derived from knowledge of the program's structure and implementation).
User testing (acceptance testing) belongs to validation, which is product oriented and uses black box testing
techniques (tests that are derived from the program specification).

However, as defects are discovered at any one stage, they require program modifications to correct them, and
this may require other stages in the testing process to be repeated. Errors in program components, say, may
come to light at a later stage of the testing process. The process is therefore an iterative one, with
information being fed back from later stages to earlier parts of the process.




How do you test and get the difference between two images which are in the same window?


Answer1:
How are you doing your comparison? If you are doing it manually, then you should be able to see any
major differences. If you are using an automated tool, then there is usually a comparison facility in the tool
to do that.

Answer2:
Jasper Software is an open-source utility which can be compiled with C++ and has an imgcmp function
which compares JPEG files in very good detail, as long as they have the same dimensions and number of
components.

Answer3:
Rational has a comparison tool that may be used. I'm sure Mercury has the same tool.

Answer4:
The key question is whether we need a bit-by-bit exact comparison, which the current tools are good at, or
an equivalency comparison. What differences between these images are not differences? Near-match
comparison has been the subject of a lot of research in printer testing, including an M.Sc. thesis at Florida
Tech. It's a tough problem.
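
For the bit-by-bit exact comparison mentioned in Answer4, here is a hedged sketch using the Pillow library's ImageChops.difference; the file names are hypothetical, and near-match (equivalency) comparison remains a much harder problem than this exact check.

# Sketch of a bit-by-bit image comparison using Pillow (assumes the two images
# have been captured to files; file names below are hypothetical).
from PIL import Image, ImageChops

def images_identical(path_a: str, path_b: str) -> bool:
    img_a = Image.open(path_a).convert("RGB")
    img_b = Image.open(path_b).convert("RGB")
    if img_a.size != img_b.size:
        return False
    diff = ImageChops.difference(img_a, img_b)
    # getbbox() returns None when there is no non-zero (differing) pixel.
    return diff.getbbox() is None

# Example usage (hypothetical file names):
# print(images_identical("expected.png", "actual.png"))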




Testing Strategies
Strategy is a general approach rather than a method of devising particular systems for component tests.
Different strategies may be adopted depending on the type of system to be tested and the development
process used. The testing strategies are

Top-Down Testing
Bottom - Up Testing
Thread Testing
Stress Testing
Back- to Back Testing
1. Top-down testing
Where testing starts with the most abstract component and works downwards.

2. Bottom-up testing
Where testing starts with the fundamental components and works upwards.

3. Thread testing
Which is used for systems with multiple processes where the processing of a transaction threads its way
through these processes.

4. Stress testing
Which relies on stressing the system by going beyond its specified limits and hence testing how well the
system can cope with over-load situations.

5. Back-to-back testing
Which is used when versions of a system are available. The systems are tested together and their outputs
are compared.

6. Performance testing
This is used to test the run-time performance of software.

7. Security testing.
This attempts to verify that protection mechanisms built into system will protect it from improper
penetration.

8. Recovery testing.
This forces software to fail in a variety ways and verifies that recovery is properly performed.



Large systems are usually tested using a mixture of these strategies rather than any single approach.
Different strategies may be needed for different parts of the system and at different stages in the testing
process.

Whatever testing strategy is adopted, it is always sensible to adopt an incremental approach to sub-system
and system testing. Rather than integrate all components into a system and then start testing, the system
should be tested incrementally. Each increment should be tested before the next increment is added to the
system. This process should continue until all modules have been incorporated into the system.

When a module is introduced at some stage in this process, tests which were previously unsuccessful may
now detect defects. These defects are probably due to interactions with the new module. The source of the
problem is localized to some extent, thus simplifying defect location and repair.


Debugging
Brute force, backtracking, cause elimination.

Unit testing (coding): focuses on each module and whether it works properly; makes heavy use of white box
testing.
Integration testing (design): centered on making sure that each module works with another module;
comprised of two kinds: top-down and bottom-up integration. Alternatively, it focuses on the design and
construction of the software architecture; makes heavy use of black box testing. (Either answer is acceptable.)
Validation testing (analysis): ensuring conformity with requirements.
Systems testing (systems engineering): making sure that the software product works with the external
environment, e.g., the computer system and other software products.

Driver and stubs

Driver: dummy main program.
Stub: dummy sub-program.
These are needed because the modules are not yet stand-alone programs; therefore drivers and/or stubs have to
be developed to test each unit.
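
A small illustrative sketch of a driver and a stub (the unit under test and its dependency are invented for the example).

# Sketch of a driver and a stub: the stub stands in for a lower-level component
# that is not written yet, and the driver plays the role of the missing main
# program that calls the unit under test.

def tax_rate_stub(country: str) -> float:
    """STUB: dummy sub-program replacing the real tax-rate service."""
    return 0.20                      # canned answer, no real lookup

def compute_total(net: float, country: str, rate_lookup=tax_rate_stub) -> float:
    """The unit under test; depends on a lower-level rate lookup."""
    return round(net * (1 + rate_lookup(country)), 2)

def driver():
    """DRIVER: dummy main program that exercises the unit with test data."""
    assert compute_total(100.0, "UK") == 120.0
    assert compute_total(0.0, "UK") == 0.0
    print("unit tests via driver and stub passed")

if __name__ == "__main__":
    driver()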

When do we prepare a Test Plan?
[Should a Test Plan always be prepared for every new version or release of the product?]

For four or five features at once, a single plan is fine. Write new test cases rather than new test plans. Write
test plans for two very different purposes. Sometimes the test plan is a product; sometimes it's a tool.




What is boundary value analysis?
Boundary value analysis is a technique for test data selection. A test engineer chooses values that lie along
data extremes. Boundary values include maximum, minimum, just inside boundaries, just outside
boundaries, typical values, and error values. The expectation is that, if a systems works correctly for these
extreme or special values, then it will work correctly for all values in between. An effective way to test
code is to exercise it at its natural boundaries.

Boundary Value Analysis is a method of testing that complements equivalence partitioning. In this case,
data input as well as data output are tested. The rationale behind BVA is that errors typically occur at
the boundaries of the data. The boundaries refer to the upper limit and the lower limit of a range of values,
more commonly known as the "edges" of the boundary.
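
A minimal sketch applying boundary value analysis to a hypothetical input field that must accept integers from 1 to 100.

# Minimal boundary value analysis sketch for a hypothetical field whose valid
# range is 1..100 inclusive.

def accept_quantity(value: int) -> bool:
    """The code under test: valid range is 1..100."""
    return 1 <= value <= 100

# Boundary values: minimum, maximum, just inside and just outside each edge.
cases = [
    (0, False),    # just below the lower boundary
    (1, True),     # lower boundary
    (2, True),     # just inside the lower boundary
    (99, True),    # just inside the upper boundary
    (100, True),   # upper boundary
    (101, False),  # just above the upper boundary
]

for value, expected in cases:
    assert accept_quantity(value) == expected, f"failed at boundary value {value}"
print("all boundary value checks passed")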




Describe methods to determine if you are testing an application too much?


Answer1:
While testing, you need to keep in mind the following two things at all times:
-- Percentage of requirements coverage
-- Number of bugs present + rate of fall of bugs
Firstly, there may be a case where requirements are covered quite adequately but the number of bugs does not
fall. This indicates over-testing.
Secondly, there may be a case where parts of the application which are not affected by a change or bug fix are
also being tested. This is again a case of over-testing.
Third is the case you have suggested, with slight modification, i.e. the bug rate has sufficiently dropped off but
testing is still being done at the SAME levels as before.
Methods to determine if an application is being over-tested are:
1. Comparison of 'rate of drop in number of bugs' and 'effort invested in testing' (with all requirements
having been met). That is, if the bug rate is falling (as it generally does in all applications) but the effort
invested in man-hours does not fall, this implies over-testing.
2. Comparison of 'achievement of bug rate threshold' and 'effort invested in testing' (with all requirements
having been met). That is, the bug rate has already reached the value agreed upon with the business and testing
effort is still being invested with no or little reduction.
3. Verifying that the 'impact analysis' for 'change requests' has been done properly and is being implemented
correctly. That is, check and verify that only the components of the AUT which are impacted by the new
change are being tested, and no other unaffected component is being tested unnecessarily. If
unaffected components are being tested, this implies over-testing.

Answer2:
If the bug find rate has dropped off considerably, the test group should shift its testing strategy. One of the
key problems with heavy reliance on regression testing is that the bug find rate drops off even though there
are plenty of bugs not yet found. To find new bugs, you have to run new tests.
Every test technique is stronger for some types of bugs and weaker for others. Many test groups use only a
few techniques. In our consulting, James Bach and I repeatedly worked with companies that relied on only
one or two main techniques.
When one technique, any one test technique, yields few bugs, shifting to new technique(s) is likely to
expose new problems.
At some point, you can use a measure that is only partially statistical -- if your bug find rate is low AND
you can't think of any new testing approaches that look promising, THEN you are at the limit of your
effectiveness and you should ship the product. That still doesn't mean that the application is overtested. It
just means that YOU'RE not going to find many new bugs.

Answer3:
The best way is to monitor the test defects over a period of time.
Refer to William Perry's book, where he mentions the concepts of 'under test' and 'over test'; the
data can be plotted to see the criteria.
One criterion is to monitor the defect rate and see if it is almost zero. A second method would be
to use test coverage, i.e. when it reaches 100% (or 100% requirement coverage).
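
As a rough illustration of the first signal in Answer1 (bug find rate versus effort), here is a hypothetical sketch with invented numbers.

# Hypothetical sketch: bugs found per unit of test effort, iteration by
# iteration. A falling find rate with flat effort suggests the current tests
# are exhausted (shift technique, or consider stopping).
iterations = [
    {"name": "cycle 1", "bugs_found": 48, "effort_hours": 120},
    {"name": "cycle 2", "bugs_found": 21, "effort_hours": 120},
    {"name": "cycle 3", "bugs_found": 4,  "effort_hours": 118},
]

for it in iterations:
    rate = it["bugs_found"] / it["effort_hours"]
    print(f'{it["name"]}: {rate:.2f} bugs found per test hour')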




Procedural Software Testing Issues
Software testing in the traditional sense can miss a large number of errors if used alone. That is why
processes like Software Inspections and Software Quality Assurance (SQA) have been developed.
However, even testing all by itself is very time consuming and very costly. It also ties up resources that
could be used otherwise. When combined with inspections and/or SQA or when formalized, it also
becomes a project of its own requiring analysis, design and implementation and supportive
communications infrastructure. With it interpersonal problems arise and need managing. On the other hand,
when testing is conducted by the developers, it will most likely be very subjective. Another problem is that
developers are trained to avoid errors. As a result they may conduct tests that prove the product is working
as intended (i.e. proving there are no errors) instead of creating test cases that tend to uncover as many
errors as possible.

How do I start with testing?

Think twice (or maybe more) before you choose a career. Are you interested in it, or do you just want to
jump on the bandwagon?
Prerequisites
You can join a software development company as a tester if you can convince the interviewer that:
1. You have a knack for breaking software
2. You are aware of basic quality concepts and believe in them
3. You want to pursue testing as a career, and not just to try it




OO Software Testing Issues
A common way of testing OO software is testing-by-poking-around (Binder, 1995). In this case the
developer's goal is to show that the product can do something useful without crashing. Attempts are made
to "break" the product. If and when it breaks, the errors are fixed and the product is then deemed "tested".
The testing-by-poking-around method of testing OO software is, in my opinion, as unsuccessful as random
testing of procedural code or design. It leaves the finding of errors up to chance.
Another common problem in OO testing is the idea that since a superclass has been tested, any subclasses
inheriting from it don't need to be.
This is not true, because by defining a subclass we define a new context for the inherited attributes. Because
of interaction between objects, we have to design test cases to test each new context and re-test the
superclass as well to ensure proper working order of those objects.
Yet another misconception in OO is that if you do proper analysis and design (using the class interface or
specification), you don't need to test, or you can just perform black-box testing only.
However, function tests only try the "normal" paths or states of the class. In order to test the other paths or
states, we need code instrumentation. Also, it is often difficult to exercise exception and error handling
without examination of the source code.




What is the purpose of black box testing?


Answer1:
The main purpose of black-box testing is to validate that the application works the way the user will
operate it, in the environments of their systems. How else would you do system testing and integration
testing? If you skip it, you may lose not only time and money but also quality and, eventually, customers!

Answer2:
"What is the purpose of black box testing?"
Black-box testing checks that the user interface and user inputs and outputs all work correctly. Part of this
is that error handling must work correctly. It's used in functional and system testing.
"We do everything in white box testing: we check each module's function in the unit testing."
Who is "we"? Are you programmers or quality assurance testers? Usually, unit testing is done by
programmers, and white-box testing would be how they'd do it.
"Once the unit test result is OK, that means the modules work correctly (according to the requirement
documents)."
Not quite. It means that on a stand-alone basis, each module is okay. White-box testing only tests the
internal structure of the program, the code paths. Functional testing is needed to test how the individual
components work together, and this is best done from an external perspective, meaning by using the
software the way an end user would, without reference to the code (which is what black-box testing is).
"If we do testing again in black box, will we lose time and money?"
No, the opposite: you'll lose money from having to repair errors you didn't catch with the white-box testing
if you don't do some black-box testing. It's far more expensive to fix errors after release than to test for
them and fix them early on.
But again, who is "we"? The black-box testers should not be the people who did the programming; they
should be the QA team -- also some end users for the usability testing.
Now that I've said that, good programmers will run some basic black-box tests before handing the
application to QA for testing. This isn't a substitute for having QA do the tests, but it's a lot quicker for the
programmer to find and fix an error right away than to have to go through the whole process of reporting a
bug, then fixing and releasing a new build, then retesting.
How do you create a test plan/design?
Test scenarios and/or cases are prepared by reviewing functional requirements of the release and preparing
logical groups of functions that can be further broken into test procedures. Test procedures define test
conditions, data to be used for testing and expected results, including database updates, file outputs, report
results. Generally speaking...
* Test cases and scenarios are designed to represent both typical and unusual situations that may occur in
the application.
* Test engineers define unit test requirements and unit test cases. Test engineers also execute unit test
cases.
* It is the test team that, with assistance of developers and clients, develops test cases and scenarios for
integration and system testing.
* Test scenarios are executed through the use of test procedures or scripts.
* Test procedures or scripts define a series of steps necessary to perform one or more test scenarios.
* Test procedures or scripts include the specific data that will be used for testing the process or transaction.
* Test procedures or scripts may cover multiple test scenarios.
* Test scripts are mapped back to the requirements and traceability matrices are used to ensure each test is
within scope.
* Test data is captured and baselined prior to testing. This data serves as the foundation for unit and
system testing and is used to exercise system functionality in a controlled environment.
* Some output data is also baselined for future comparison. Baselined data is used to support future
application maintenance via regression testing.
* A pretest meeting is held to assess the readiness of the application and the environment and data to be
tested. A test readiness document is created to indicate the status of the entrance criteria of the release.
Inputs for this process:
* Approved Test Strategy Document.
* Test tools, or automated test tools, if applicable.
* Previously developed scripts, if applicable.
* Test documentation problems uncovered as a result of testing.
* A good understanding of software complexity and module path coverage, derived from general and
detailed design documents, e.g. software design document, source code, and software complexity data.
Outputs for this process:
* Approved documents of test scenarios, test cases, test conditions, and test data.
* Reports of software design issues, given to software developers for correction.

What is the purpose of a test plan?
Reason number 1: We create a test plan because preparing it helps us to think through the efforts needed to
validate the acceptability of a software product.
Reason number 2: We create a test plan because it can and will help people outside the test group to
understand the why and how of product validation.
Reason number 3: We create a test plan because, in regulated environments, we have to have a written test
plan.
Reason number 4: We create a test plan because the general testing process includes the creation of a test
plan.
Reason number 5: We create a test plan because we want a document that describes the objectives, scope,
approach and focus of the software testing effort.
Reason number 6: We create a test plan because it includes test cases, conditions, the test environment, a
list of related tasks, pass/fail criteria, and risk assessment.
Reason number 7: We create test plan because one of the outputs for creating a test strategy is an approved
and signed off test plan document.
Reason number 8: We create a test plan because the software testing methodology is a three-step process, and
one of the steps is the creation of a test plan.
Reason number 9: We create a test plan because we want an opportunity to review the test plan with the
project team.
Reason number 10: We create a test plan document because test plans should be documented, so that they
are repeatable.
Can we prepare Test Plan without SRS?

It is not always mandatory to have an SRS document in order to prepare a test plan. This document
hierarchy is maintained to uphold organizational standards and to give everyone a clear understanding of
the work.
Yes, you can prepare a test plan directly without an SRS, when the requirements are clear with your
clients and when your URD (User Requirement Document) is supportive enough to clarify the issues.
Even without an SRS, clients will provide some information; the SRS mainly contains product
information.
But we will not know the testing effort if we don't have the SRS.
The SRS tells us how many cycles we are testing, which platforms we are testing on, etc.
Actually there won't be any harm in doing so, because ultimately you will send your test plan document to
your client, and only after getting approval do you start testing.
(Note: the SRS is the document you get in the analysis phase of software development. The test plan
is the document that contains the details of the product in terms of test strategy, scope of testing, types
of tests to be conducted, risk management, the automation tool, the bug tracking tool, etc.)




How do test plan templates look like?
The test plan document template helps to generate test plan documents that describe the objectives, scope,
approach and focus of a software testing effort. Test document templates are often in the form of
documents that are divided into sections and subsections. One example of a template is a 4-section
document where section 1 is the description of the "Test Objective", section 2 is the description of the
"Scope of Testing", section 3 is the description of the "Test Approach", and section 4 is the "Focus of
the Testing Effort".
All documents should be written to a certain standard and template. Standards and templates maintain
document uniformity. They also help in learning where information is located, making it easier for a user to
find what they want. With standards and templates, information will not be accidentally omitted from a
document. Once Rob Davis has learned and reviewed your standards and templates, he will use them. He
will also recommend improvements and/or additions.
A software project test plan is a document that describes the objectives, scope, approach and focus of a
software testing effort. The process of preparing a test plan is a useful way to think through the efforts
needed to validate the acceptability of a software product. The completed document will help people
outside the test group understand the why and how of product validation.




How do you test desktop systems?

You will likely have to use a programming or scripting language to interact with the service directly. You
will have more control over the raw information that way.
You will have to determine what the service is supposed to do and how it is supposed to interact with other
applications and services. A data dictionary likely exists. It may not be called that however. What this
document does is explain what commands the service will respond to and what sort of data should be sent.
You will have to use this document to do your testing. Get close to the person or people who created the
document or the service and expect them to keep you in the loop when changes take place (it doesn't help
anyone if you report a defect and it's really only reflecting an expected change in the operation of the
service).
Desktop applications are generally designed to run and quit. You have to be concerned with memory leaks
and system usage.
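
Since the answer above suggests using a scripting language to drive the service directly, here is a minimal
Python sketch under that assumption. The endpoint, command name and payload are entirely hypothetical;
take the real ones from the service's data dictionary.

# A minimal sketch of driving a service directly from a script. The URL path,
# command name and payload fields are hypothetical placeholders.
import json
import urllib.request

def send_command(base_url, command, payload):
    """POST a command to the service and return the decoded JSON response."""
    body = json.dumps({"command": command, "data": payload}).encode("utf-8")
    request = urllib.request.Request(
        base_url + "/api/command",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        return json.loads(response.read().decode("utf-8"))

# Usage (hypothetical service running locally):
# print(send_command("http://localhost:8080", "lookup", {"customer_id": 42}))
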
How do you create a test strategy?
The test strategy is a formal description of how a software product will be tested. A test strategy is
developed for all levels of testing, as required. The test team analyzes the requirements, writes the test
strategy and reviews the plan with the project team. The test plan may include test cases, conditions, the
test environment, a list of related tasks, pass/fail criteria and risk assessment.
Inputs for this process:
* A description of the required hardware and software components, including test tools. This information
comes from the test environment, including test tool data.
* A description of roles and responsibilities of the resources required for the test and schedule constraints.
This information comes from man-hours and schedules.
* Testing methodology. This is based on known standards.
* Functional and technical requirements of the application. This information comes from requirements,
change request, technical and functional design documents.
* Requirements that the system can not provide, e.g. system limitations.
Outputs for this process:
* An approved and signed off test strategy document, test plan, including test cases.
* Testing issues requiring resolution. Usually this requires additional negotiation at the project management
level.

   How do you estimate testing effort?

Time estimation method for the testing process
Note: the following method is based on a use-case-driven specification.
Step 1: Count the number of use cases (NUC) of the system.
Step 2: Set the average number of test cases per use case (ATTC) as per the test plan.
Step 3: Estimate the total number of test cases (NTC).
Total number of test cases = number of use cases x average test cases per use case
Step 4: Set the average execution time (AET) per test case (ideally 15 minutes, depending on your system).
Step 5: Calculate the total execution time (TET).
TET = total number of test cases * AET
Step 6: Calculate the test case creation time (TCCT).
Usually we take 1.5 times TET as TCCT.
TCCT = 1.5 * TET
Step 7: Time for retest case execution (RTCE); this is for retesting.
Usually we take 0.5 times TET.
RTCE = 0.5 * TET
Step 8: Set the report generation time (RGT).
Usually we take 0.2 times TET.
RGT = 0.2 * TET
Step 9: Set the test environment setup time (TEST).
This also depends on the test plan.
Step 10: Total estimated time = TET + TCCT + RTCE + RGT + TEST + some buffer...;)
Example
Total number of use cases (NUC): 227
Average test cases per use case (ATTC): 10
Estimated test cases (NTC): 227 * 10 = 2270
Execution time estimate (TET): 2270/4 = 567.5 hr
Time for creating test cases (TCCT): 567.5 * 4/3 = 756.6 hr (this example uses a factor of roughly 4/3 rather
than the 1.5 suggested in Step 6)
Time for retesting (RTCE): 567.5/2 = 283.75 hr
Report generation (RGT) = 100 hr
Test environment setup time (TEST) = 20 hr
-------------------
Total hours: 1727.85 + buffer
-------------------
Here 4 means the number of test cases executed per hour,
i.e. each test case takes 15 minutes to execute.
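
As a rough illustration of the arithmetic, here is a small Python sketch that applies the multipliers from
Steps 6-8 (1.5, 0.5 and 0.2 x TET). The function name and inputs are illustrative, and the totals differ
slightly from the worked example above, which rounds RGT to a flat 100 hr and uses a smaller creation factor.

# A small sketch of the estimation steps. Inputs mirror the example:
# 227 use cases, 10 test cases per use case, 15 minutes per test case.
def estimate_testing_effort(use_cases, avg_cases_per_use_case, exec_minutes_per_case,
                            env_setup_hours, buffer_hours=0):
    ntc = use_cases * avg_cases_per_use_case      # total test cases
    tet = ntc * exec_minutes_per_case / 60        # total execution time (hr)
    tcct = 1.5 * tet                              # test case creation time
    rtce = 0.5 * tet                              # retest execution time
    rgt = 0.2 * tet                               # report generation time
    total = tet + tcct + rtce + rgt + env_setup_hours + buffer_hours
    return {"NTC": ntc, "TET": tet, "TCCT": tcct, "RTCE": rtce, "RGT": rgt, "Total": total}

print(estimate_testing_effort(227, 10, 15, env_setup_hours=20))
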
What is the purpose of test strategy?
Reason number 1: The number one reason for writing a test strategy document is to "have" a signed, sealed,
and delivered, FDA (or FAA) approved document, where the document includes a written testing
methodology, test plan, and test cases.
Reason number 2: Having a test strategy does satisfy one important step in the software testing process.
Reason number 3: The test strategy document tells us how the software product will be tested.
Reason number 4: The creation of a test strategy document presents an opportunity to review the test plan
with the project team.
Reason number 5: The test strategy document describes the roles, responsibilities, and the resources
required for the test and schedule constraints.
Reason number 6: When we create a test strategy document, we have to put into writing any testing issues
requiring resolution (and usually this means additional negotiation at the project management level).
Reason number 7: The test strategy is decided first, before lower level decisions are made on the test plan,
test design, and other testing issues.




What's Quality Approach document? what should be the contents and things like that...


Answer1:
You should start from your company's business type and, based on it, define the different processes
for your organization, such as procurement, configuration management, etc.
Then think over the different metrics you will calculate for each process, and define them with a formula,
the kind of analysis you will be doing, and when the red flag should be raised.
Decide on your audit policies, frequencies, etc. Think about the change control board in case any process
needs modification.

Answer2:
By defining the process I mean the structured collection of practices that describe the characteristics of the
work and its quality. Writing the process means creating a system within which everyone will work; the
benefits are a common language and a shared vision across the organization, and it provides a framework for
prioritizing actions.
From an implementation point of view, first you need to break the complete life cycle of your product into
different meaningful steps and set the goals for each phase.
You can create document templates that everyone shall follow, define the dependencies among
different groups for each project, and define the risks for each project and the mitigation plan for each risk.
You can read the CMMI model and customize it as per your organization's goals. For a start-up company, in
my personal opinion, it is better to define and reach the Level 3 process first and then go for Level 5.




What does a test strategy document contain?
The test strategy document contains test cases, conditions, the test environment, a list of related tasks,
pass/fail criteria and risk assessment. The test strategy document is a formal description of how a software
product will be tested. What is the test strategy document developed for? It is developed for all levels of
testing, as required. How is it written, and who writes it? It is the test team that analyzes the requirements,
writes the test strategy, and reviews the plan with the project team.




Why should QA not report to development?
Based on research from the Quality Assurance Institute, the percentage of quality groups in each reporting
location is noted below:
50% - reports to the Senior IT Manager. This is the best positioning because it gives the Quality Manager
immediate access to the IT Manager to discuss and promote quality issues. When the Quality Manager
reports elsewhere, quality issues may not be raised to the appropriate level or receive the necessary action.
25% - reports to the Manager of Systems/Programming.
15% - reports to the Manager of Operations.
10% - reports outside the IT function.

    Which of the following statements about regression testing are true?
1---Regression testing must consist of a fixed set of tests to create a base line
2---Regression tests should be used to detect defects in new feature
3---Regression testing can be run on every build
4--- Regression testing should be targeted areas of high risk and known code change
5---Regression testing when automated, is highly effective in preventing defects.


Answer1:
1---Regression testing must consist of a fixed set of tests to create a base line
Don't think this is true as a "must" -- it
depends on whether your regression testing style involves repeating identical tests or redoing testing in
previously tested areas with similar tests or tests that address the same risks. For example, some people do
regression testing with tests whose specific parameters are determined randomly. They broaden the set of
values they test while achieving essentially the same testing. A second example: some regression test suites
include random stringing together of test cases (they include load testing and duration testing in their
regression series, reporting their results as part of the assessment of each build). Depending on your theory
of the _point_ of regression testing, these may or may not be entirely valid regression tests.

2---Regression tests should be used to detect defects in new feature
How do you create new regression tests? Should you design new tests as standalone, or should you develop
a strategy in which the tests you use for bug-hunting are designed to be reusable as regression tests? If the
latter, and I have certainly heard some skilled testers argue that the latter approach worked well in their
situation, then #2 is sometimes true.

3---Regression testing can be run on every build
This is true, though it might be silly and a big waste of time.

4--- Regression testing should be targeted areas of high risk and known code change
Hmmm, there's an area of computer science called program slicing, and one of the objectives of this class of
work is to figure out how to restrict the regression test suite to a smaller number of tests, which test only
those things that might have been impacted by a change. Bob Glass has criticized the results of some of this
work, but if #4 is false, some Ph.D.'s and big research grants should be retracted.

5---Regression testing when automated, is highly effective in preventing defects.
Unit-level automated regression testing is highly effective in preventing defects--read up on test-driven
development.
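
To make the point about unit-level automated regression testing concrete, here is a minimal sketch using
Python's unittest module. The function under test and its expected values are hypothetical; the point is that
once such a test exists, any later change that breaks the behaviour fails the build immediately.

# Illustrative unit-level regression test. apply_discount is a hypothetical
# function; the assertions pin down behaviour so future changes can't silently break it.
import unittest

def apply_discount(price, percent):
    """Return the discounted price, rounded to 2 decimal places."""
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountRegressionTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 15), 170.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

if __name__ == "__main__":
    unittest.main()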

Answer2:
Let me explain why I think 2 & 5 are false
2---Regression tests should be used to detect defects in new feature
Since regression tests only address existing features and functionality, it can't find defects in new features.
It can only find where existing features and functionality have been broken by changes.

5---Regression testing when automated, is highly effective in preventing defects.
Since no tests prevent defects, they only find them, it's impossible to prevent defects with a regression test.
I will add, however, that if a developer can use an automated regression test to test their own code before
submitting it to the code repository (say, in the form of a series of unit tests coupled to a library, etc.), then
you could in some way prevent defects with a regression test.

I also don't like 1 and 4. 1, because a regression test suite grows as the product does, so the tests are
not fixed. 4, because a regression test tests the whole application, not just a targeted area. In the past, I have
used the concept of test depth (level 1 being the basic regression tests; higher numbers reflect additional
functionality), so you could run a level-one regression on the whole program but do level three on the
transport layer "because we've updated the library".

An automated set of tests would be the most likely way to make 3 a possibility. It is unlikely that with daily
builds, as many companies run their build process, anything short of an automated regression test suite
could be run daily with any efficacy. If the builds were weekly, then a manual regression test
would be feasible.

Answer3:
Going by the definition of regression testing and actual practice, if you have to answer this question,
then options 3 & 4 are the best choices among all. The reasoning is:
3---Regression testing can be run on every build. This is a normal phenomenon if the build comes on a
weekly basis or is an RC build. Since nothing is mentioned about daily builds, only that it is
every build, this can be correct.
4---Regression testing should be targeted at areas of high risk and known code change. This is also true in
most situations; it is not universally true, but in certain conditions where there is a code change, only the
related modules are tested in regression automation rather than the whole code.
5 is not true because in regression we detect defects, we do not normally prevent them.




How do you execute tests?
Execution of tests is completed by following the test documents in a methodical manner. As each test
procedure is performed, an entry is recorded in a test execution log to note the execution of the procedure
and whether or not the test procedure uncovered any defects. Checkpoint meetings are held throughout the
execution phase. Checkpoint meetings are held daily, if required, to address and discuss testing issues,
status and activities.
* The output from the execution of test procedures is known as test results. Test results are evaluated by
test engineers to determine whether the expected results have been obtained. All discrepancies/anomalies
are logged and discussed with the software team lead, hardware test lead, programmers, software engineers
and documented for further investigation and resolution. Every company has a different process for logging
and reporting bugs/defects uncovered during testing.
* Pass/fail criteria are used to determine the severity of a problem, and results are recorded in a test
summary report. The severity of a problem found during system testing is defined in accordance with the
customer's risk assessment and recorded in their selected tracking tool.
* Proposed fixes are delivered to the testing environment, based on the severity of the problem. Fixes are
regression tested and flawless fixes are migrated to a new baseline. Following completion of the test,
members of the test team prepare a summary report. The summary report is reviewed by the Project
Manager, Software QA Manager and/or Test Team Lead.
* After a particular level of testing has been certified, it is the responsibility of the Configuration Manager
to coordinate the migration of the release software components to the next test level, as documented in the
Configuration Management Plan. The software is only migrated to the production environment after the
Project Manager's formal acceptance.
* The test team reviews test document problems identified during testing, and updates documents where
appropriate.
Inputs for this process:
* Approved test documents, e.g. Test Plan, Test Cases, Test Procedures.
* Test tools, including automated test tools, if applicable.
* Developed scripts.
* Changes to the design, i.e. Change Request Documents.
* Test data.
* Availability of the test team and project team.
* General and Detailed Design Documents, i.e. Requirements Document, Software Design Document.
* Software that has been migrated to the test environment, i.e. unit-tested code, via the
Configuration/Build Manager.
* Test Readiness Document.
* Document Updates.
Outputs for this process:
* Log and summary of the test results. Usually this is part of the Test Report. This needs to be approved
and signed-off with revised testing deliverables.
* Changes to the code, also known as test fixes.
* Test document problems uncovered as a result of testing. Examples are Requirements document and
Design Document problems.
* Reports on software design issues, given to software developers for correction. Examples are bug reports
on code issues.
* Formal record of test incidents, usually part of problem tracking.
* Base-lined package, also known as tested source and object code, ready for migration to the next level.
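
Since the description above centers on recording each executed procedure in a test execution log, here is a
minimal Python sketch of such a log. The file name and fields are illustrative assumptions, not a prescribed
format.

# Minimal sketch of a test execution log: one entry per executed procedure,
# recording who ran it, whether it passed, and any defect it uncovered.
import csv
from datetime import date

def log_execution(log_path, procedure_id, tester, result, defect_id=""):
    with open(log_path, "a", newline="") as log_file:
        writer = csv.writer(log_file)
        writer.writerow([date.today().isoformat(), procedure_id, tester, result, defect_id])

# Usage (hypothetical procedure and defect ids):
# log_execution("test_execution_log.csv", "TP-014", "jdoe", "FAIL", defect_id="BUG-302")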




What is a requirements test matrix?
The requirements test matrix is a project management tool for tracking and managing testing efforts, based
on requirements, throughout the project's life cycle.
The requirements test matrix is a table, where requirement descriptions are put in the rows of the table, and
the descriptions of testing efforts are put in the column headers of the same table.
The requirements test matrix is similar to the requirements traceability matrix, which is a representation of
user requirements aligned against system functionality. The requirements traceability matrix ensures that all
user requirements are addressed by the system integration team and implemented in the system integration
effort.
The requirements test matrix is a representation of user requirements aligned against system testing.
Similarly to the requirements traceability matrix, the requirements test matrix ensures that all user
requirements are addressed by the system test team and implemented in the system testing effort.




Can you give me a requirements test matrix template?
For a requirements test matrix template, you want to visualize a simple, basic table that you create for
cross-referencing purposes.
Step 1: Find out how many requirements you have.
Step 2: Find out how many test cases you have.
Step 3: Based on these numbers, create a basic table. If you have a list of 90 requirements and 360 test
cases, you want to create a table of 91 rows and 361 columns.
Step 4: Focus on the first column of your table. One by one, copy all your 90 requirement numbers, and
paste them into rows 2 through 91 of the table.
Step 5: Now switch your attention to the first row of the table. One by one, copy all your 360 test case
numbers, and paste them into columns 2 through 361 of the table.
Step 6: Examine each of your 360 test cases, and, one by one, determine which of the 90 requirements they
satisfy. If, for the sake of this example, test case number 64 satisfies requirement number 12, then put a
large "X" into cell 13-65 of your table... and then you have it; you have just created a requirements test
matrix template that you can use for cross-referencing purposes.
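
If you prefer to generate the cross-reference table rather than build it by hand, here is a small Python sketch
under the same 90-requirement / 360-test-case assumption. The identifiers and the single mapping shown are
illustrative.

# Build the matrix described above: rows are requirements, columns are test
# cases, and an "X" marks each requirement a test case satisfies.
import csv

def write_requirements_test_matrix(path, requirements, test_cases, coverage):
    """coverage maps a test case id to the set of requirement ids it satisfies."""
    with open(path, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["Requirement"] + test_cases)
        for req in requirements:
            row = ["X" if req in coverage.get(tc, ()) else "" for tc in test_cases]
            writer.writerow([req] + row)

# Example: test case TC-064 satisfies requirement REQ-012.
write_requirements_test_matrix(
    "requirements_test_matrix.csv",
    requirements=[f"REQ-{n:03d}" for n in range(1, 91)],
    test_cases=[f"TC-{n:03d}" for n in range(1, 361)],
    coverage={"TC-064": {"REQ-012"}},
)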

    What metrics are used for bug tracking?
Metrics that can be used for bug tracking include the following: the total number of bugs, the total number
of bugs that have been fixed, the number of new bugs per week, and the number of fixes per week. Metrics
for bug tracking can be used to determine when to stop testing, for example, when the bug rate falls below a
certain level. You CAN learn to use defect tracking software.
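
Here is a minimal sketch of computing those four metrics from a list of defect records. The field names and
sample data are illustrative; a real defect tracking tool would supply the equivalent data.

# Compute the bug-tracking metrics listed above from simple defect records.
from collections import Counter

defects = [
    {"id": 1, "opened_week": "2024-W01", "fixed_week": "2024-W02"},
    {"id": 2, "opened_week": "2024-W02", "fixed_week": None},
    {"id": 3, "opened_week": "2024-W02", "fixed_week": "2024-W03"},
]

total_bugs = len(defects)
total_fixed = sum(1 for d in defects if d["fixed_week"])
new_bugs_per_week = Counter(d["opened_week"] for d in defects)
fixes_per_week = Counter(d["fixed_week"] for d in defects if d["fixed_week"])

print(total_bugs, total_fixed, new_bugs_per_week, fixes_per_week)
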
1. In a QA team, everyone talks about process. What exactly are they talking about?
2. Are there different types of process?


Answer1:
When you talk about "process" you are generally talking about the actions used to accomplish a task.
Here's an example: How do you solve a jigsaw puzzle?
You start with a box full of oddly shaped pieces. In your mind you come up with a strategy for matching
two pieces together (or no strategy at all and simply grab random pieces until you find a match), and
continue on until the puzzle is completed.
If you were to describe the *way* that you go about solving the puzzle you would be describing the
process.
Some follow-up questions you might think about include things like:
- How much time did it take you to solve the puzzle?
- Do you know of any skills, tricks or practices that might help you solve the puzzle quicker?
- What if you try to solve the puzzle with someone else? Does that help you go faster, or slower? (why or
why not?) Can you have *too* many people on this one task?
- To answer your second question, I'll ask *you* the question: Are there different ways that people can
solve a jigsaw puzzle?
There are many interesting process-related questions, ideas and theories in Quality Assurance. Generally,
the identification of workplace processes leads to questions about improving efficiency and
productivity. The motivation behind that is to try to make the processes as efficient as possible so as to
incur the least amount of time and expense, while providing a general sense of repeatability, visibility and
predictability in the way tasks are performed and completed.
The idea behind this is generally good, but the execution is often flawed. That is what makes QA so
interesting. You see, when you work with people and processes, it is very different than working with the
processes performed by machines. Some people in QA forget that distinction and often become
disillusioned with the whole thing.
If you always remember to approach processes in the workplace with a people-centric view, you should do
fine.


Answer2:
There is:
* Waterfall
* Spiral
* Rapid prototype
* Clean room
* Agile (XP, Scrum, ...)


What metrics are used for test report generation?
Metrics that can be used for test report generation include...
McCabe metrics: cyclomatic complexity metric (v(G)), actual complexity metric (AC), module design
complexity metric (iv(G)), essential complexity metric (ev(G)), pathological complexity metric (pv(G)),
design complexity metric (S0), integration complexity metric (S1), object integration complexity metric
(OS1), global data complexity metric (gdv(G)), data complexity metric (DV), tested data complexity metric
(TDV), data reference metric (DR), tested data reference metric (TDR), maintenance severity metric
(maint_severity), data reference severity metric (DR_severity), data complexity severity metric
(DV_severity), global data severity metric (gdv_severity).
McCabe object-oriented software metrics: encapsulation percent public data (PCTPUB), access to public
data (PUBDATA), polymorphism percent of unoverloaded calls (PCTCALL), number of roots
(ROOTCNT), fan-in (FANIN), quality maximum v(G) (MAXV), maximum ev(G) (MAXEV), and
hierarchy quality (QUAL).
Other object-oriented software metrics: depth (DEPTH), lack of cohesion of methods (LCOM), number of
children (NOC), response for a class (RFC), and weighted methods per class (WMC); Halstead software
metrics: program length, program volume, program level and program difficulty, intelligent content,
programming effort, error estimate, and programming time.
Line count software metrics: lines of code, lines of comment, lines of mixed code and comments, and lines
left blank.
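
The first metric in the list, McCabe's cyclomatic complexity v(G), has a simple formula that is worth seeing
worked out: v(G) = E - N + 2P for a control-flow graph with E edges, N nodes and P connected components
(equivalently, the number of decision points plus one). The graph counts below are illustrative.

# Cyclomatic complexity v(G) from control-flow graph counts: v(G) = E - N + 2P.
def cyclomatic_complexity(edges, nodes, components=1):
    return edges - nodes + 2 * components

# A routine with two decision points (e.g. one if/else plus one loop) typically
# yields v(G) = 3; here modelled as a graph with 9 edges and 8 nodes.
print(cyclomatic_complexity(edges=9, nodes=8))   # -> 3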




What is quality plan?


Answer1:
The test plan is the document created before starting the testing process. It includes the types of testing that
will be performed, the high-level scope of the project, the environmental requirements of the testing process,
what automated testing tools will be used (if available), and the schedule of each test: when it will start and end.


Answer2:
You should not only understand what a Quality Plan is, but you should understand why you're making it. I
don't believe that "because I was told to do so" is a good enough reason. If the person who told you to
create it can't tell you 1) what it is, and 2) how to create it, I don't think that they actually know why it's
needed. That breaks the primary rule of all plans used in testing:
We write quality plans for two very different purposes. Sometimes the quality plan is a product; sometimes
it's a tool. It's too easy, but also too expensive, to confuse these goals.
If it's not being used as a tool, don't waste your time (and your company's money) doing this.




What is the difference between verification and validation?
Verification takes place before validation, and not vice versa.
Verification evaluates documents, plans, code, requirements, and specifications. Validation, on the other
hand, evaluates the product itself.
The inputs of verification are checklists, issues lists, walkthroughs and inspection meetings, reviews and
meetings. The input of validation, on the other hand, is the actual testing of an actual product.
The output of verification is a nearly perfect set of documents, plans, specifications, and requirements
document. The output of validation, on the other hand, is a nearly perfect, actual product.

    What is the difference between efficient and effective?
"Efficient" means having a high ratio of output to input; which means working or producing with a
minimum of waste. For example, "An efficient engine saves gas." Or, "An efficient test engineer saves
time".
"Effective", on the other hand, means producing or capable of producing an intended result, or having a
striking effect. For example, "For rapid long-distance transportation, the jet engine is more effective than a
witch's broomstick". Or, "For developing software test procedures, engineers specializing in software
testing are more effective than engineers who are generalists".




How effectively can we implement Six Sigma principles in a very large software services organization?


Answer1:
For an effective way of implementing Six Sigma, there are quite a few things one needs:
1. Management buy-in
2. A dedicated team, both drivers and adopters
3. Training
4. Culture building - if you have a pro-process culture, life is easy
5. Sustained effort over a period towards transforming people, thoughts and actions. Personally, the technical
content is never a challenge, but adoption is a challenge.

Answer2:
"Six Sigma" is a combination of process recommendations and a mathematical model. The name "six sigma"
reflects the notion of reducing variation so much that errors -- events out of tolerance -- are six standard
deviations from a desired mean. The mathematics is at the core of the process implementation.
The problem is that software is not hardware. Software defects are designed in, not the result of
manufacturing variation.
The other side of six sigma is the drive for continuous improvement. You don't need the six sigma math for
this and the concept has been around long before the six sigma movement.
To improve anything, you need some type of indicator of its current state and a way to tell that it is
improved. Plus determination to improve it. Management support helps.
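
To make the "six standard deviations" idea in Answer2 concrete, here is a small sketch of the underlying
arithmetic, assuming a normal distribution: the one-sided tail probability at z standard deviations is
0.5 * erfc(z / sqrt(2)). With the conventional 1.5-sigma shift used in Six Sigma, the quoted defect level
corresponds to the 4.5-sigma tail, i.e. roughly 3.4 defects per million opportunities.

# Normal-distribution tail probabilities behind the Six Sigma figures.
import math

def tail_probability(z):
    return 0.5 * math.erfc(z / math.sqrt(2))

print(tail_probability(6.0) * 1e6)   # ~0.001 defects per million (no shift)
print(tail_probability(4.5) * 1e6)   # ~3.4 defects per million (with 1.5-sigma shift)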

Answer3:
There are different methodologies adopted in Six Sigma. However, it is most commonly approached from the
variance-based angle. If you try to apply Six Sigma that way to software services, the measurement system
must fundamentally be reliable, and the industry has not reached the maturity level of the manufacturing
industry, where it fits to a T. The differences between software and hardware/manufacturing are
somewhat difficult to address.
There are some areas where you can adopt Six Sigma in its full statistical form (e.g. in-process error rate,
productivity improvements, etc.); some areas are difficult.
The narrower the problem area, the better it works, even in software services, for adopting the
statistical method.
There are methodologies with a bundle of tools, along with statistical techniques, that are used across the full
SDLC.
A generic observation is that Six Sigma helps if we look for a proper fit of the methodology to the purpose;
otherwise, doubts creep in.




What stage of bug fixing is the most cost effective?
Bug prevention techniques (i.e. inspections, peer design reviews, and walk-throughs) are more cost
effective than bug detection.




What is Defect Life Cycle.?


Answer1:
The defect life cycle is the sequence of stages a defect goes through after it is identified:
New (when the defect is identified)
Accepted (when the development team and QA team accept that it is a bug)
In Progress (when a person is working to resolve the defect)
Resolved (once the defect is resolved)
Completed (signed off by someone who can take responsibility, e.g. the team lead)
Closed/Reopened (retested by the test engineer, who updates the status of the bug)

Answer2:
The defect life cycle is nothing but the various phases a bug undergoes after it is raised or reported.
A general interview answer can be given as:
1. New or Opened
2. Assigned
3. Fixed
4. Tested
5. Closed
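
The life cycle from Answer2 can also be pictured as a small state machine. The sketch below is illustrative;
the allowed transitions vary by organization and by bug tracking tool.

# Defect life cycle as a state machine (states from Answer2, plus Reopened).
from enum import Enum

class DefectState(Enum):
    NEW = "New"
    ASSIGNED = "Assigned"
    FIXED = "Fixed"
    TESTED = "Tested"
    CLOSED = "Closed"
    REOPENED = "Reopened"

ALLOWED = {
    DefectState.NEW: {DefectState.ASSIGNED},
    DefectState.ASSIGNED: {DefectState.FIXED},
    DefectState.FIXED: {DefectState.TESTED},
    DefectState.TESTED: {DefectState.CLOSED, DefectState.REOPENED},
    DefectState.REOPENED: {DefectState.ASSIGNED},
    DefectState.CLOSED: set(),
}

def transition(current, target):
    if target not in ALLOWED[current]:
        raise ValueError(f"Invalid transition: {current.value} -> {target.value}")
    return target

# Usage: a bug fails retest, so it is reopened rather than closed.
state = transition(DefectState.TESTED, DefectState.REOPENED)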




What is the difference between a software bug and software defect?
"Software bug" is nonspecific; it means an inexplicable defect, error, flaw, mistake, failure, fault, or
unwanted behavior of a computer program. Other terms, e.g. "software defect" or "software failure", are
more specific.
The word "bug" has been part of engineering jargon for many decades; many decades
ago even Thomas Edison, the great inventor, wrote about a "bug". Today there are many who believe the
word "bug" is a reference to insects that caused malfunctions in early electromechanical computers.

    What is the difference between a software bug and software defect?
In software testing, the difference between "bug" and "defect" is small, and also depends on the end client.
For some clients, bug and defect are synonymous, while others believe bugs are subsets of defects.
Difference number one: In bug reports, the defects are easier to describe.
Difference number two: In my bug reports, it is easier to write descriptions as to how to replicate defects. In
other words, defects tend to require only brief explanations.
Commonality number one: We, software test engineers, discover both bugs and defects, before bugs and
defects damage the reputation of our company.
Commonality number two: We, software QA engineers, use the software much like real users would, to
find both bugs and defects, to find ways to replicate both bugs and defects, to submit bug reports to the
developers, and to provide feedback to the developers, i.e. tell them if they've achieved the desired level of
quality.
Commonality number three: We, software QA engineers, do not differentiate between bugs and defects. In
our reports, we include both bugs and defects that are the results of software testing.




Are developers smarter than testers? Any suggestions about the future prospects and technicalities
involved in the testing job?


Answer1:
QA & testing are thankless jobs. In a software development company the developer is a core person. As you
are a fresh graduate, it would be good for you to work as a developer. From development you can always
move to testing or QA or other admin/support tasks. But from testing or QA it is a little difficult to go back
to development, though not impossible (especially as you are a computer engineering graduate).
Given the job market, it is not possible for every fresher to get into development. But you can keep
searching for it.
Some big companies have separate Verification & Validation groups where only testing projects are
executed. Those teams have TLs and PLs who are testing experts. They earn good salaries, the same as
development people.
In technical projects the testing team does a lot of technical work. You can do certifications to improve your
technical skills & market value.
It all depends on your way of handling things & your interpersonal, communication and leadership skills. If it
is difficult for you to get a job in development, or you really like testing, just go ahead. Try to achieve
excellence as a testing professional. You will never have a job problem. Also, you will always get onsite
opportunities too!! You might have to struggle for the initial few years like all other freshers.

Answer2:
QA and testing are thankless only in some companies.
Testing is part of development. Rather than distinguish between testing and development, distinguish
between testing and programming.
Programming is also thankless in some companies.

 
CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):
 
Story boards and shot lists for my a level piece
Story boards and shot lists for my a level pieceStory boards and shot lists for my a level piece
Story boards and shot lists for my a level piece
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL Certs
 
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
 
Beyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry InnovationBeyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
 
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
 
Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024
 
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?
 

Lesson 7...Question Part 1

What black box testing types can you tell me about?
Black box testing is functional testing, not based on any knowledge of internal software design or code; it is based on requirements and functionality. Black box types of testing include functional testing (geared to the functional requirements of an application), system testing, acceptance testing, closed box testing and integration testing.

What is software testing methodology?
One software testing methodology is a three-step process of:
1. Creating a test strategy;
2. Creating a test plan/design; and
3. Executing tests.
This methodology can be molded to your organization's needs. Rob Davis believes that using this methodology is important in the development and ongoing maintenance of his clients' applications.

What's the difference between QA and testing?
Testing means quality control. Quality control measures the quality of a product, while quality assurance measures the quality of the processes used to create a quality product.

Why Testing CANNOT Ensure Quality
Testing in itself cannot ensure the quality of software. All testing can do is give you a certain level of assurance (confidence) in the software. On its own, the only thing that testing proves is that under specific controlled conditions, the software functioned as expected by the test cases executed.
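To make that last point concrete, here is a minimal, hypothetical sketch (the function and its tests are invented for illustration): every executed test case passes, yet a defect remains in a condition the tests never exercise.

def absolute_difference(a, b):
    # Hypothetical implementation with a latent defect:
    # it silently assumes a >= b, so it is wrong whenever a < b.
    return a - b

def test_absolute_difference():
    # These test cases only cover the a >= b condition,
    # so they all pass even though the defect is still there.
    assert absolute_difference(5, 3) == 2
    assert absolute_difference(7, 7) == 0

if __name__ == "__main__":
    test_absolute_difference()
    print("All executed test cases passed -- which proves only that the code "
          "works for these inputs, not that it is defect-free.")
    # absolute_difference(3, 5) would return -2 instead of 2,
    # but no executed test case covers that condition.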
How to find all the bugs during the first round of testing?

Answer1:
I understand the problems you are facing. I was involved with a web-based HR system that encountered the same problems. What I ended up doing was going back over a few release cycles and analyzing the types of defects found and when they were found in the release cycle (including the various testing cycles). I started to notice a distinct trend in certain areas. For each defect type, I looked into whether it could have been caught in the prior phase (lots of things were being found in the system test phase that should have been caught earlier). If so, why wasn't it caught? Could it have been caught even earlier, say via a peer review? If so, why not? This led me to examine the various processes, and I found a definite problem with peer reviews (not very thorough, if they were being done at all) and with the testing process (not rigorous enough). We worked with the customer and the folks doing the testing to start educating them and improving the processes. The result was that the number of defects found in the later test stages (system test, for example) was cut by more than half. It became harder to find problems with the product because they were being discovered earlier in the process -- saving time and money.

Answer2:
There could be several reasons for not catching a showstopper in the first or second build/revision. A found defect can either functionally or psychologically mask a second or third defect. Functionally, the thread or path to the second defect could have been broken or rerouted to another path; psychologically, the tester who found the first defect knows the application must go back and be rewritten, so he or she proceeds half-heartedly and misses the second one. I've seen both cases. It is difficult to keep testing a known defective application. The testers tend to lose interest, knowing that the effort they put into testing it will have to be redone on the next iteration. This will test your mettle as a lead: keep them following through and maintaining a professional attitude.

Answer3:
The best way is to prevent bugs in the first place. Also, testing doesn't fix or prevent bugs; it just provides information, and applying that information to your situation is the important part. The other thing you may be encountering is that testing tends to be exploratory in nature. You have stated that these are existing bugs, but not whether tests already existed for them. Bugs in early cycles inhibit exploration. Additionally, a tester's understanding of the application and its relationships and interactions improves with time, so more 'interesting' bugs tend to be found in later iterations as testers expand their exploration (i.e. think of new tests). No matter how much time you have to read the documents and inspect artefacts, seeing the actual application is going to trigger new thoughts and thus introduce previously unthought-of tests. Exposure to the application triggers new thoughts as well, so the longer your testing goes, the more new tests (and potential bugs) are going to be found. Iterative development is a good way to counter this, as testers get to see something physical earlier, but the issue will always exist to some degree, because the passing of time and exploration of the application allow new tests to be thought of at inconvenient moments.

Is regression testing performed manually?
The answer to this question depends on the initial testing approach. If the initial testing approach was manual testing, then the regression testing is usually performed manually. Conversely, if the initial testing approach was automated testing, then the regression testing is usually performed by automated testing.
How to choose which defects to fix when there are 1,000,000 defects? (It would take too many resources to remove them all.)

Answer1:
Are you the programmer who has to fix them, the project manager who has to supervise the programmers, the change control team that decides which areas are too high risk to impact, the stakeholder/user whose organization pays for the damage caused by the defects, or the tester? The tester does not choose which defects to fix; the tester helps ensure that the people who do choose make a well-informed choice. Testers should provide data to indicate the severity of bugs, but the project manager or the development team do the prioritization. When I say "indicate the severity", I don't just mean writing S3 on a piece of paper. Test groups often do follow-up tests to assess how serious a failure is and how broad the range of failure-triggering conditions is. Priority depends on a wide range of factors, including code-change risk, the difficulty and time needed to complete the change, which stakeholders are affected by the bug, the other commitments being handled by the person most knowledgeable about fixing a certain bug, etc. Many of these factors are not within the knowledge of most test groups.

Answer2:
As testers we don't fix the defects, but we surely can prioritize them once they are detected. In our organization we assign a severity level to each defect depending on its influence on other parts of the product. If a defect doesn't allow you to go ahead and test the product, it is a critical one and has to be fixed as soon as possible. We have five levels: 1-Critical, 2-High, 3-Medium, 4-Low, 5-Cosmetic. Development can group all the critical ones and fix them before any other defect.

Answer3:
Defects are generally classified in a priority/severity grid, with priority P1-P3 on one axis and severity S1-S3 on the other. Every organization or product has some target for fixing the bugs in each cell, for example: P1S1 -> 90% of the bugs reported should be fixed; P3S3 -> 5% of the bugs reported may be fixed. The rest are taken up in later service packs or versions. The organization should decide its targets and act accordingly. Basically, bug-free software is not possible.

Answer4:
Ideally, the customer should assign priorities to their requirements, but they tend to resist this. On a large, multi-year project I just completed, I would often (in the absence of customer guidelines) rely on my knowledge of the application and the potential downstream impacts on the modeled business process to prioritize defects. If the customer doesn't prioritize, then I feel the test organization should, based on risk or other, similar considerations.
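As a rough illustration of the grid in Answer3, here is a minimal, hypothetical Python sketch (the defect records and the target percentages are invented) that orders a defect list so the P1S1 cells come up for triage first.

# Hypothetical defect records: (id, priority 1-3, severity 1-3)
defects = [
    ("DEF-101", 2, 1),
    ("DEF-102", 1, 3),
    ("DEF-103", 1, 1),
    ("DEF-104", 3, 3),
    ("DEF-105", 2, 2),
]

# Example fix targets per cell, in the spirit of Answer3 (invented numbers).
fix_targets = {(1, 1): 0.90, (3, 3): 0.05}

# Sort so that P1S1 items come first and P3S3 items last.
triage_order = sorted(defects, key=lambda d: (d[1], d[2]))

for defect_id, priority, severity in triage_order:
    target = fix_targets.get((priority, severity))
    note = f" (target fix rate {target:.0%})" if target else ""
    print(f"{defect_id}: P{priority}S{severity}{note}")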
What is software quality?
The quality of software varies widely from system to system. Some common quality attributes are stability, usability, reliability, portability and maintainability.

What are the five dimensions of risk?
Schedule: Unrealistic schedules, or the exclusion of certain activities when chalking out a schedule, can be deterrents to delivering the project on time. An unstable communication link can also be considered a probable risk if testing is carried out from a remote location.
Client: Ambiguous requirements definitions, clarifications on issues not being readily available, frequent changes to the requirements, etc. can cause chaos during project execution.
Human resources: Non-availability of sufficient resources with the skill level expected on the project; attrition of resources. Appropriate training schedules must be planned so that the knowledge level of the remaining resources is on par with those quitting. Underestimating the training effort may have an impact on project delivery.
System resources: Non-availability of, or delay in procuring, critical computer resources (hardware, software tools or licenses) will have an adverse impact.
Quality: Compound factors, such as a lack of resources together with a tight delivery schedule and frequent changes to requirements, will have an impact on the quality of the product tested.

What is good code?
Good code is code that works, is free of bugs and is readable and maintainable. Organizations usually have coding standards all developers should adhere to, but every programmer and software engineer has different ideas about what is best and about what are too many or too few rules. We need to keep in mind that excessive use of rules can stifle both productivity and creativity. Peer reviews and code analysis tools can be used to check for problems and enforce standards.

How do you perform integration testing?
To perform integration testing, all unit testing first has to be completed. Upon completion of unit testing, integration testing begins. Integration testing is black box testing. Its purpose is to ensure that distinct components of the application still work in accordance with customer requirements. Test cases are developed with the express purpose of exercising the interfaces between the components. This activity is carried out by the test team. Integration testing is considered complete when actual results and expected results are either in line, or differences are explainable or acceptable based on client input.

Why is back-end testing required if we are going to check the front end? What errors/bugs do we miss by not doing back-end testing? Why do we need unit testing if all the features are tested in system testing? What extra things are tested in unit testing that cannot be tested in system testing?

Answer1:
Assume that you're thinking client/server or web. If you test the application on the front end only, you can see whether the data appears to be stored and retrieved correctly, but you can't see whether the servers are in an error state. Many server processes are monitored by another process; if they crash, they are restarted, and you can't see that without looking at it. The data may not be stored correctly either, but the front end may have cached data lying around and will use that instead. The least you should be doing is verifying the data as stored in the database. It is also easier to test data being transferred on the boundaries, and to see the results of those transactions, when you can set the data in a driver.

Answer2:
Back-end testing: whether this testing is needed basically depends on your project. Say your project is a ticket booking system. On the front end you are provided with an interface where you can book a ticket by entering the appropriate details (the place to go, the time you want to travel, etc.). There is a data storage system (a database, a spreadsheet, etc.) which is the back end that stores the details entered by the user. After submitting the details you might be given a correct acknowledgement, yet in the back end the details might not be updated correctly in the database because of wrong logic; that will cause a major problem. Regarding unit-level testing versus system testing: unit-level testing covers the basic checks of whether the application works with the basic requirements, and is done by developers before delivering to QA. In system testing, in addition to the unit checks, you perform all checks (all possible integrated checks required); basically this is carried out by testers.

Answer3:
Ever heard about the divide-and-conquer tactic? The same method applies to back-end and front-end testing. A good back-end test will help minimize the burden of the front-end test. Another point is that you can test the back end while the front end is being developed, so true parallelism can be achieved. Back-end testing has another problem which must be addressed before the front end can use it: concurrency. Building a scenario to test concurrency is a formidable task, and a complex thing is hard to test; creating such scenarios will leave you unsure of which tests you have already done and which you haven't. What we need are effective methods to test our application, and the simplest method I know is divide and conquer.

Answer4:
A wide range of errors is hard to see if you don't see the code. For example, there are many optimizations in programs that treat special cases; if you don't see the special case, you don't test the optimization. Also, a substantial portion of most programs is error handling, and most programmers anticipate more errors than most testers. Programmers find and fix the vast majority of their own bugs. This is cheaper, because there is no communication overhead; faster, because there is no delay from tester-reporter to programmer; and more effective, because the programmer is likely to fix what she finds and is likely to know the cause of the problems she sees. The rapid feedback also gives the programmer information about the weaknesses in her programming that can help her write better code. Many tests -- most boundary tests -- are done at the system level primarily because we don't trust that they were done at the unit level. They are wasteful and tedious at the system level. I'd rather see them properly done and properly automated in a suite of programmer tests.
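Following Answers 1 and 2 above, here is a minimal, hypothetical sketch of a back-end check (the ticket-booking table and column names are invented, and sqlite3 merely stands in for whatever database the real system uses): after driving the front end, the test queries the database directly to confirm that the acknowledged booking was really stored.

import sqlite3

def fetch_booking(conn, booking_ref):
    # Read the booking row straight from the back end, bypassing any front-end cache.
    cur = conn.execute(
        "SELECT destination, travel_time FROM bookings WHERE booking_ref = ?",
        (booking_ref,),
    )
    return cur.fetchone()

if __name__ == "__main__":
    # Stand-in for the real back end; in a real test this would connect to the
    # application's database after the front end submitted the booking form.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE bookings (booking_ref TEXT, destination TEXT, travel_time TEXT)")
    # Pretend the front end acknowledged this booking...
    conn.execute("INSERT INTO bookings VALUES ('BK-001', 'Pune', '09:30')")

    # ...and verify the acknowledgement against what was actually stored.
    row = fetch_booking(conn, "BK-001")
    assert row == ("Pune", "09:30"), "Back end does not match what the front end acknowledged"
    print("Back-end record matches the front-end acknowledgement:", row)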
What is Software "Quality"?
Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable. However, quality is a subjective term; it depends on who the 'customer' is and their overall influence in the scheme of things. A wide-angle view of the 'customers' of a software development project might include end-users, customer acceptance testers, customer contract officers, customer management, the development organisation's management, accountants, testers and salespeople, future software maintenance engineers, stockholders, magazine reviewers, etc. Each type of 'customer' will have their own view on 'quality': the accounting department might define quality in terms of profits, while an end-user might define quality as user-friendly and bug-free.

What is retesting?

Answer1:
Retesting is usually equated with regression testing, but it is different in that it follows a specific fix -- such as a bug fix -- and is very narrow in focus (as opposed to testing the entire application again in a regression test). A product should never be released after any change has been applied to the code with only retesting of the bug fix and without a regression test.

Answer2:
1. Re-testing is the testing of a specific bug after it has been fixed (the definition given above).
2. Re-testing can also be done for a bug which was raised by QA but could not be found or confirmed by development and has been rejected; QA re-tests to make sure the bug still exists and assigns it back to development.
When the entire project has been tested and the client has doubts about the quality of testing, re-testing can be called for. It can also mean testing the same application again for better quality.

Answer3:
Regression testing is the selective retesting of a system that has been modified, to ensure that any bugs have been fixed, that no other previously working functions have failed as a result of the repairs, and that newly added features have not created problems with previous versions of the software. It is also referred to as verification testing. It is important to determine whether, in a given set of circumstances, a particular series of tests has been failed. The supplier may want to submit the software for re-testing. The contract should deal with the parameters for retests, including: (1) will test programs which are doomed to failure be allowed to finish early, or must they be completed in their entirety? (2) when can, or must, the supplier submit the software for retesting? and (3) how many times can the supplier fail tests and resubmit software for retesting -- is this based on time spent, or on the number of attempts? A well-drawn contract will grant the customer options in the event of failure of acceptance tests, and these options may vary depending on how many attempts the supplier has made to achieve acceptance. So the conclusion is that retesting is more or less regression testing; more precisely, retesting is a part of regression testing.

Answer4:
Re-testing is simply executing the test plan another time. The client may request a re-test for any reason -- most likely because the testers did not properly execute the scripts, because of poor documentation of test results, or because the client is not comfortable with the results. I've performed re-tests when the developer inserted unauthorized code changes or did not document changes. Regression testing is the execution of test cases "not impacted" by the specific project. I am currently working on testing of a system with poor system documentation (and no user documentation), so our regression testing must be extensive.

Answer5:
* QA gets a bug fix and has to verify that the bug is fixed. You might want to check a few extra things on "gut feel" and still call it retesting, but not the entire function, module or product.
* If development rejects a bug as "non-reproducible", then retesting, preferably in the presence of the developer, is needed.
How to establish a QA process in an organization?

1. CURRENT SITUATION
The first thing you should do is put what you currently do on paper in some sort of flowchart diagram. This will allow you to analyze what is currently being done.

2. DEVELOPMENT PROCESS STAGE
Once you have the "big picture", you have to be aware of the current status of your development project or projects. The processes you select will vary depending on whether you are in the early stages of developing a new application (i.e. developing version 1.0) or maintaining an existing application (i.e. working on release 6.7.1).

3. PRIORITIES
The next thing you need to do is identify the priorities of your project, for example:
- Compliance with industry standards
- Validation of new functionality (new GUIs, etc.)
- Security
- Capacity planning (see "Effective Methods for Software Testing" for more information)
Make a list of the priorities, and then assign them values of (H)igh, (M)edium and (L)ow.

4. TESTING TYPES
Once you are aware of the priorities, focus on the High ones first, then Medium, and finally evaluate whether the Low ones need immediate attention. Based on this, select the testing types that will provide coverage for your priorities. Examples of testing types:
- Functional testing
- Integration testing
- System testing
- System-to-system testing (for testing interfaces)
- Regression testing
- Load testing
- Performance testing
- Stress testing
etc.

5. WRITE A TEST PLAN
Once you have determined your needs, the simplest way to document and implement your process is to elaborate a test plan for every effort you are engaged in (i.e. for every release). For this you can use generic test plan templates available on the web that will help you brainstorm and define the scope of your testing:
- Scope of testing (defects, functionality, and what will and will not be tested)
- Testing types (functional, regression, etc.)
- Responsible people
- Requirements traceability matrix (match test cases with requirements to ensure coverage)
- Defect tracking
- Test cases

DURING AND POST-TESTING ACTIVITIES
Make sure you keep track of the completion of your testing activities and the defects found, and that you comply with exit criteria before moving to the next stage in testing (i.e. user acceptance testing, then production release). Make sure you have a mechanism for:
- Reporting
- Test tracking
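The requirements traceability matrix mentioned in the test plan can be kept as simple data. Below is a minimal, hypothetical Python sketch (the requirement IDs and test case IDs are invented) that maps requirements to test cases and flags any requirement with no coverage before sign-off.

# Hypothetical traceability matrix: requirement ID -> test case IDs covering it.
traceability = {
    "REQ-001": ["TC-001", "TC-002"],
    "REQ-002": ["TC-003"],
    "REQ-003": [],  # no coverage yet -- should be caught before sign-off
}

def uncovered_requirements(matrix):
    # Return requirement IDs that have no test cases mapped to them.
    return [req for req, cases in matrix.items() if not cases]

if __name__ == "__main__":
    gaps = uncovered_requirements(traceability)
    if gaps:
        print("Requirements without test coverage:", ", ".join(gaps))
    else:
        print("Every requirement is covered by at least one test case.")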
What is software testing?
1) Software testing is a process that identifies the correctness, completeness and quality of software. Actually, testing cannot establish the correctness of software; it can find defects, but cannot prove there are no defects.
2) It is a systematic analysis of the software to see whether it has performed to specified requirements. What software testing does is uncover errors; it does not tell us that errors are no longer present.

Any recommendation for estimating how many bugs the customer will find before the gold release?

Answer1:
If you take the total number of bugs in the application and subtract the number of bugs you found, the difference will be the maximum number of bugs the customer can find. Seriously, I doubt you will find any calculation or formula that can answer your question with much accuracy. If you can reference a previous application release, it might give you a rough idea. The best thing to do is ensure your test coverage is as good as you can make it, then hope you've found the ones the customer might find. Remember, software testing is risk management!

Answer2:
For the estimation:
1) Find out the coverage achieved during testing of your software and then estimate, keeping the 80-20 principle in mind.
2) Look at the depth of your test cases, e.g. how much unit-level testing and how much life-cycle testing you have performed (most of the bugs reported by customers come from real life-cycle use of the software).
3) You can also refer to the defect density from earlier releases of the same product line.
By doing these evaluations you can estimate the probability of remaining bugs reasonably well.

Answer3:
You can map customer issues from a previous release (if you have the same product line) to the current release; this is the best way of estimating for the gold release or migration of any product. Secondly, up to the gold release most issues come from various combinations of installation testing, such as cross-platform issues, i18n issues, customization, upgrade and migration. These can be taken as parameters and the estimation completed from there.

When the build comes to the QA team, what parameters should be considered for rejecting the build upfront, without committing to testing?

Answer1:
Agree with R&D on a set of tests such that if one fails you can reject the build. I usually have some build verification tests that just make sure the build is stable and the major functionality is working. Then if one test fails you can reject the build.

Answer2:
The only way to legitimately reject a build is if the entrance criteria have not been met. That means the entrance criteria for the test phase have been defined and agreed upon up front. This should be standard for all builds of all products. Entrance criteria could include:
- Turn-over documentation is complete
- All unit testing has been successfully completed and unit test cases are documented in the turn-over
- All expected software components have been turned over (staged)
- All walkthroughs and inspections are complete
- Change requests have been updated to the correct status
- Configuration management and build information is provided, and correct, in the turn-over
The only way we could really reject a build without any testing would be a failure of the turn-over procedure. There may be, but shouldn't be, politics involved. The only way the test phase can proceed is for the test team to have all components required to perform successful testing. You will have to define entrance (and exit) criteria for each phase of the SDLC. This is an effort to be undertaken together by the whole development team; development's entrance criteria would include signed requirements, a high-level design document, etc. Having these criteria pre-established sets everyone up for success.

Answer3:
The primary reason to reject a build is that it is untestable, or that the testing would be considered invalid. For example, suppose someone gave you a "bad build" in which several of the wrong files had been loaded. Once you know it contains the wrong versions, most groups think there is no point continuing to test that build. Every reason for rejecting a build beyond this is reached by agreement. For example, if you set a build verification test and the program fails it, the agreement in your company might be to reject the program from testing. Some BVTs are designed to include relatively few tests, covering core functionality; failure of any of these tests might reflect fundamental instability. However, several test groups include a lot of additional tests, and failure of those might not be grounds for rejecting a build. In some companies there are firm entry criteria to testing; many companies pay lip service to entry criteria but start testing the code whether the entry criteria are met or not. Neither of these is right or wrong -- it's the culture of the company. Be sure of your corporate culture before rejecting a build.
Answer4:
Generally a company will have set some minimum goals/criteria that a build needs to satisfy; if it satisfies them it can be accepted, otherwise it has to be rejected. For example:
- No high-priority bugs
- At most 2 medium-priority bugs
- The sanity test (minimum acceptance and basic acceptance) passes
- The reasons for the new build -- say a change for a specific case -- pass
- Nothing prevents proceeding (no non-testability), and no other issue related to the new build or the product
If the above criteria are not met, the build can be rejected.
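As a sketch of the build verification idea in Answers 1 and 3, here is a minimal, hypothetical Python smoke suite (the checks and the application functions are invented stand-ins): if any core check fails, the build is rejected before deeper testing begins.

# Hypothetical build verification (smoke) checks. In a real project each
# check would exercise one piece of core functionality of the build under test.
def check_application_starts():
    return True   # stand-in: e.g. launch the application and confirm it responds

def check_login_works():
    return True   # stand-in: e.g. log in with a known test account

def check_main_screen_loads():
    return True   # stand-in: e.g. open the main screen without errors

SMOKE_CHECKS = [check_application_starts, check_login_works, check_main_screen_loads]

def build_verification():
    failures = [check.__name__ for check in SMOKE_CHECKS if not check()]
    if failures:
        print("REJECT build - failed checks:", ", ".join(failures))
        return False
    print("ACCEPT build - all smoke checks passed")
    return True

if __name__ == "__main__":
    build_verification()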
What is software testing?
Software testing is more than just error detection. Testing software is operating the software under controlled conditions to (1) verify that it behaves "as specified", (2) detect errors, and (3) validate that what has been specified is what the user actually wanted.
Verification is the checking or testing of items, including software, for conformance and consistency by evaluating the results against pre-specified requirements. [Verification: are we building the system right?]
Error detection: testing should intentionally attempt to make things go wrong, to determine whether things happen when they shouldn't or things don't happen when they should.
Validation looks at system correctness, i.e. it is the process of checking that what has been specified is what the user actually wanted. [Validation: are we building the right system?]
In other words, validation checks to see if we are building what the customer wants/needs, and verification checks to see if we are building that system correctly. Both verification and validation are necessary, but different, components of any testing activity.
The definition of testing according to the ANSI/IEEE 1059 standard is that testing is the process of analysing a software item to detect the differences between existing and required conditions (that is, defects/errors/bugs) and to evaluate the features of the software item.
Remember: the purpose of testing is verification, validation and error detection in order to find problems -- and the purpose of finding those problems is to get them fixed.

What is the testing lifecycle?
There is no single standard, but it consists of:
Test planning (test strategy, test plan(s), test bed creation)
Test development (test procedures, test scenarios, test cases)
Test execution
Result analysis (compare expected to actual results)
Defect tracking
Reporting

How to validate data?
I assume that you are doing ETL (extract, transform, load) and cleaning. If my assumption is correct, then (1) you are building a data warehouse / data mining solution, and (2) you are asking the right question in the wrong place.

What is quality?
Quality software is software that is reasonably bug-free, delivered on time and within budget, meets requirements and expectations, and is maintainable. However, quality is a subjective term; it depends on who the customer is and their overall influence in the scheme of things. Customers of a software development project include end-users, customer acceptance test engineers, customer contract officers, customer management, the development organization's management, test engineers, testers, salespeople, software engineers, stockholders and accountants. Each type of customer will have his or her own slant on quality: the accounting department might define quality in terms of profits, while an end-user might define quality as user-friendly and bug-free.

What is a benchmark? How is it linked with the SDLC (Software Development Life Cycle), or are the SDLC and benchmarks unrelated? What are the components of a benchmark? Where does a benchmark fit into software testing?
A benchmark is a standard to measure against. If you benchmark an application, all future application changes will be tested and compared against the benchmarked application.

Which of the following statements about generating test cases is false?
1. Test cases may contain multiple valid conditions
2. Test cases may contain multiple invalid conditions
3. Test cases may contain both valid and invalid conditions
4. Test cases may contain more than one step
5. Test cases should contain expected results

Answer1:
All the conditions mentioned are valid, and not a single statement can be called false. Here, "condition" means the input type or situation (some may call it valid or invalid, positive or negative). A single test case can contain both input types, and the final result can then be verified (it obviously should not produce the required result, as one of the input conditions is invalid when the test case is executed); this usually happens while writing scenario-based test cases. For example, consider a web-based registration form in which the input data for some fields is positive and for some fields negative (a scenario-based test case). The screen can be tested by generating various scenarios and combinations, and the final result verified against the actual result: the registration should not complete successfully (as one or more input types are invalid) when this test case is executed. Writing test cases also depends on the number of descriptive fields in the test case template; the more elaborate the template, the easier it is to write test cases and generate scenarios. So writing test cases depends on the in-depth thinking of the tester, and there are no predefined or hard-coded norms for writing a test case. (This is from my own experience: for many applications I have written many positive and negative conditions in a single test case and verified different scenarios by generating such test cases.)

Answer2:
The answer to this question is 3: test cases may contain both valid and invalid conditions. There is no restriction on a test case having multiple steps or more than one valid or invalid condition, but a test case -- whether a feature, unit-level or end-to-end test case -- cannot contain both a valid and an invalid condition in a single unit test case, because then the concept of a test case for a single result would be diluted and would have no meaning.
What is "Quality Assurance"?
"Quality Assurance" measures the quality of the processes used to create a quality product.
Software Quality Assurance ('SQA' or 'QA') is the process of monitoring and improving all activities associated with software development, from requirements gathering, design and reviews to coding, testing and implementation. It involves the entire software development process: monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with at the earliest possible stage. Unlike testing, which is mainly a 'detection' process, QA is 'preventative' in that it aims to ensure quality in the methods and processes, and therefore reduce the prevalence of errors in the software.
Organisations vary considerably in how they assign responsibility for QA and testing. Sometimes they're the combined responsibility of one group or individual. Also common are project teams that include a mix of testers and developers who work closely together, with overall QA processes monitored by project managers or quality managers.

Quality Assurance and Software Development
Quality assurance and development of a product are parallel activities. Complete QA includes reviews of the development methods and standards, and reviews of all the documentation (not just for standardisation but also for verification and clarity of the contents). Overall quality assurance processes also include code validation.
A note about quality assurance: the role of quality assurance is a superset of testing. Its mission is to help minimise the risk of project failure. QA people aim to understand the causes of project failure (which include software errors as one aspect) and help the team prevent, detect and correct the problems. Often test teams are referred to as QA teams, perhaps acknowledging that testers should consider broader QA issues as well as testing.

Which things should be considered when testing a mobile application through the black box technique?

Answer1:
Not sure how your device/server is meant to operate, so mold these ideas to fit your application. Some highlights:
Range testing: ensure that you can reconnect when leaving and returning into range.
Port/IP/firewall testing: change ports and IPs to ensure that you can connect and disconnect; modify the firewall to shut off the connection.
Multiple devices: make sure that a user receives his messages with other devices connected to the same IP/port. Your application should have a way to determine which device/user sent a message and return the reply only to it; this should be in the message string sent and received, unless you have conferencing capabilities within the application.
Cycle the power of the server and watch the mobile unit reconnect automatically.
Have the mobile unit send a message, then power off the unit; when powering back on and reconnecting, ensure that the message is returned to the mobile unit.

Answer2:
It is not clearly mentioned which area of the mobile application you are testing. Whether it is a simple SMS application or a WAP application, you need to specify more details. If you are working with WAP, you can download simulators from the net and start testing on them.

What is the general testing process?
The general testing process is the creation of a test strategy (which sometimes includes the creation of test cases), the creation of a test plan/design (which usually includes test cases and test procedures) and the execution of tests.
Test data are inputs that have been devised to test the system. Test cases are an input and output specification plus a statement of the function under test. Test data can be generated automatically (simulated) or real (live).
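To make the distinction between test data and test cases concrete, here is a minimal, hypothetical sketch (the function under test and the cases are invented): each test case pairs inputs with an expected output for a stated function, while the test data itself can be hand-picked or generated (simulated).

import random

def discount_price(price, percent):
    # Function under test (hypothetical): apply a percentage discount.
    return round(price * (1 - percent / 100), 2)

# Test cases: input and output specification for the function under test.
test_cases = [
    {"inputs": (100.0, 10), "expected": 90.0},
    {"inputs": (59.99, 0),  "expected": 59.99},
    {"inputs": (20.0, 100), "expected": 0.0},
]

# Simulated (generated) test data: the expected value is computed
# independently of the implementation being checked.
for _ in range(3):
    price = round(random.uniform(1, 500), 2)
    test_cases.append({"inputs": (price, 50), "expected": round(price / 2, 2)})

for case in test_cases:
    actual = discount_price(*case["inputs"])
    assert actual == case["expected"], f"{case['inputs']} -> {actual}, expected {case['expected']}"
print(f"{len(test_cases)} test cases passed")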
The stages in the testing process are as follows:

1. Unit testing (code oriented): Individual components are tested to ensure that they operate correctly. Each component is tested independently, without other system components.

2. Module testing: A module is a collection of dependent components, such as an object class, an abstract data type or some looser collection of procedures and functions. A module encapsulates related components, so it can be tested without other system modules.

3. Sub-system testing (integration testing, design oriented): This phase involves testing collections of modules which have been integrated into sub-systems. Sub-systems may be independently designed and implemented. The most common problems that arise in large software systems are sub-system interface mismatches. The sub-system test process should therefore concentrate on the detection of interface errors by rigorously exercising these interfaces.

4. System testing: The sub-systems are integrated to make up the entire system. The testing process is concerned with finding errors that result from unanticipated interactions between sub-systems and system components. It is also concerned with validating that the system meets its functional and non-functional requirements.

5. Acceptance testing: This is the final stage in the testing process before the system is accepted for operational use. The system is tested with data supplied by the system client rather than simulated test data. Acceptance testing may reveal errors and omissions in the system requirements definition (user oriented), because real data exercises the system in different ways from the test data. Acceptance testing may also reveal requirement problems where the system facilities do not really meet the user's needs (functional) or the system performance (non-functional) is unacceptable.

Acceptance testing is sometimes called alpha testing. Bespoke systems are developed for a single client; the alpha testing process continues until the system developer and the client agree that the delivered system is an acceptable implementation of the system requirements.
When a system is to be marketed as a software product, a testing process called beta testing is often used. Beta testing involves delivering a system to a number of potential customers who agree to use that system and report problems to the system developers. This exposes the product to real use and detects errors that may not have been anticipated by the system builders. After this feedback, the system is modified and either released for further beta testing or for general sale.
What are the normal practices of QA specialists with respect to software?
These are the normal practices of QA specialists with respect to software [note: these are all QC activities, not QA activities]:
1. Design review meetings with the system analyst and, if possible, involvement in requirements gathering
2. Analysing the requirements and the design, and tracing the design back to the requirements
3. Test planning
4. Test case identification using different techniques (for web-based applications and desktop applications)
5. Test case writing (this part is assigned to the testing engineers)
6. Test case execution (this part is assigned to the testing engineers)
7. Bug reporting (this part is assigned to the testing engineers)
8. Bug review and analysis, so that future bugs can be avoided by designing standards from low level to high level
Testing in Stages
Except for small programs, systems should not be tested as a single unit. Large systems are built out of sub-systems, which are built out of modules that are composed of procedures and functions. The testing process should therefore proceed in stages, where testing is carried out incrementally in conjunction with system implementation. The most widely used testing process consists of five stages:
- Unit testing, component testing and module testing: verification (process oriented), using white box testing techniques (tests derived from knowledge of the program's structure and implementation).
- Integrated sub-system testing and system testing, followed by user acceptance testing: validation (product oriented), using black box testing techniques (tests derived from the program specification).
However, as defects are discovered at any one stage, they require program modifications to correct them, and this may require other stages in the testing process to be repeated. Errors in program components may come to light at a later stage of the testing process. The process is therefore an iterative one, with information being fed back from later stages to earlier parts of the process.

How to test and get the difference between two images which are in the same window?

Answer1:
How are you doing your comparison? If you are doing it manually, then you should be able to see any major differences. If you are using an automated tool, then there is usually a comparison facility in the tool to do that.

Answer2:
JasPer is an open-source utility which can be compiled into C++ and has an imgcmp function which compares JPEG files in very good detail, as long as they have the same dimensions and number of components.

Answer3:
Rational has a comparison tool that may be used; I'm sure Mercury has a similar tool.

Answer4:
The key question is whether we need a bit-by-bit exact comparison, which the current tools are good at, or an equivalency comparison: which differences between these images are not really differences? Near-match comparison has been the subject of a lot of research in printer testing, including an M.Sc. thesis at Florida Tech. It's a tough problem.
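In the same spirit as the tools mentioned above, here is a minimal sketch assuming the Pillow imaging library is available (the file names are placeholders): it reports whether two same-sized images differ and where, which helps separate exact mismatches from near matches worth inspecting by hand.

from PIL import Image, ImageChops

def image_difference(path_a, path_b):
    # Return the bounding box of the differing region, or None if the images match.
    img_a = Image.open(path_a).convert("RGB")
    img_b = Image.open(path_b).convert("RGB")
    if img_a.size != img_b.size:
        raise ValueError("Images must have the same dimensions to be compared")
    diff = ImageChops.difference(img_a, img_b)
    return diff.getbbox()   # None means every pixel is identical

if __name__ == "__main__":
    # Placeholder file names -- substitute the two screenshots under comparison.
    bbox = image_difference("expected.png", "actual.png")
    if bbox is None:
        print("Images are pixel-for-pixel identical")
    else:
        print("Images differ inside region:", bbox)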
Testing Strategies
A strategy is a general approach rather than a method of devising particular system or component tests. Different strategies may be adopted depending on the type of system to be tested and the development process used. The main testing strategies are top-down testing, bottom-up testing, thread testing, stress testing and back-to-back testing:

1. Top-down testing, where testing starts with the most abstract component and works downwards.
2. Bottom-up testing, where testing starts with the fundamental components and works upwards.
3. Thread testing, which is used for systems with multiple processes, where the processing of a transaction threads its way through these processes.
4. Stress testing, which relies on stressing the system by going beyond its specified limits and hence testing how well the system can cope with overload situations.
5. Back-to-back testing, which is used when versions of a system are available; the systems are tested together and their outputs are compared.
6. Performance testing, which is used to test the run-time performance of software.
7. Security testing, which attempts to verify that protection mechanisms built into the system will protect it from improper penetration.
8. Recovery testing, which forces software to fail in a variety of ways and verifies that recovery is properly performed.

Large systems are usually tested using a mixture of these strategies rather than any single approach. Different strategies may be needed for different parts of the system and at different stages in the testing process. Whatever testing strategy is adopted, it is always sensible to take an incremental approach to sub-system and system testing. Rather than integrating all components into a system and then starting to test, the system should be tested incrementally: each increment should be tested before the next increment is added, and this process should continue until all modules have been incorporated into the system. When a module is introduced at some stage in this process, tests which were previously unsuccessful may now detect defects. These defects are probably due to interactions with the new module, so the source of the problem is localized to some extent, simplifying defect location and repair.

Debugging approaches: brute force, backtracking, cause elimination.

How the testing levels map to the development phases:
- Unit testing (coding phase): focuses on each module and whether it works properly; makes heavy use of white box testing.
- Integration testing (design phase): centered on making sure that each module works with another module, and comprises two kinds, top-down and bottom-up integration; or, focuses on the design and construction of the software architecture and makes heavy use of black box testing (either answer is acceptable).
- Validation testing (analysis phase): ensures conformity with requirements.
- Systems testing (systems engineering phase): makes sure that the software product works with the external environment, e.g. the computer system and other software products.

Drivers and stubs
Driver: a dummy main program. Stub: a dummy sub-program. These are needed because the modules are not yet stand-alone programs, so drivers and/or stubs have to be developed to test each unit.
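Here is a minimal, hypothetical sketch of the driver-and-stub idea (the module and service names are invented): the unit under test normally depends on a tax service that is not yet available, so a stub stands in for it while a small driver exercises the unit in isolation.

# Unit under test: depends on a tax-rate lookup that has not been built yet.
def price_with_tax(net_price, tax_service):
    return round(net_price * (1 + tax_service.tax_rate()), 2)

# Stub: a dummy sub-program standing in for the real tax service.
class TaxServiceStub:
    def tax_rate(self):
        return 0.20   # fixed, predictable value for the test

# Driver: a dummy main program that exercises the unit on its own.
def driver():
    stub = TaxServiceStub()
    assert price_with_tax(100.0, stub) == 120.0
    assert price_with_tax(0.0, stub) == 0.0
    print("Unit behaves correctly against the stubbed dependency")

if __name__ == "__main__":
    driver()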
When do we prepare a test plan? Is a test plan always prepared for every new version or release of the product?
For four or five features at once, a single plan is fine; write new test cases rather than new test plans. Test plans are written for two very different purposes: sometimes the test plan is a product, sometimes it's a tool.

What is boundary value analysis?
Boundary value analysis is a technique for test data selection. A test engineer chooses values that lie along data extremes. Boundary values include the maximum, the minimum, just inside the boundaries, just outside the boundaries, typical values and error values. The expectation is that if a system works correctly for these extreme or special values, then it will work correctly for all values in between. An effective way to test code is to exercise it at its natural boundaries. Boundary value analysis is a method of testing that complements equivalence partitioning; in this case both data input and data output are tested. The rationale behind BVA is that errors typically occur at the boundaries of the data. The boundaries refer to the upper and lower limits of a range of values, more commonly known as the "edges" of the boundary.
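Here is a minimal, hypothetical boundary value analysis sketch (the validation rule and the 1-100 range are invented for illustration): the chosen test values sit at the minimum, the maximum, and just inside and just outside each boundary.

def is_valid_quantity(quantity):
    # Hypothetical rule under test: an order quantity must be between 1 and 100 inclusive.
    return 1 <= quantity <= 100

# Boundary values for the 1..100 range: just outside, on, and just inside each edge.
boundary_cases = [
    (0, False),    # just below the minimum
    (1, True),     # minimum
    (2, True),     # just inside the lower boundary
    (99, True),    # just inside the upper boundary
    (100, True),   # maximum
    (101, False),  # just above the maximum
]

for value, expected in boundary_cases:
    actual = is_valid_quantity(value)
    assert actual == expected, f"quantity={value}: expected {expected}, got {actual}"
print("All boundary value cases passed")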
Describe methods to determine whether you are testing an application too much.

Answer1:
While testing, you always need to keep two things in mind: the percentage of requirements coverage, and the number of bugs present plus the rate at which the bug count falls. Firstly, there may be a case where the requirements are covered quite adequately but the number of bugs does not fall; this indicates over-testing. Secondly, there may be a case where parts of the application that are not affected by a change or bug fix are also being tested; this is again over-testing. Third is the case you suggested, with a slight modification: the bug count has dropped off sufficiently but testing is still being done at the same level as before.
Methods to determine whether an application is being over-tested:
1. Compare the rate of drop in the number of bugs with the effort invested in testing (with all requirements having been met). If the bug rate is falling, as generally happens in all applications, but the effort invested in man-hours does not fall, this implies over-testing.
2. Compare the achievement of the agreed bug-rate threshold with the effort invested in testing (with all requirements having been met). If the bug rate has already reached the value agreed with the business and testing effort is still being invested with little or no reduction, this implies over-testing.
3. Verify that the impact analysis for change requests has been done properly and is being implemented correctly, i.e. check that only the components of the application under test that are impacted by the new change are being tested, and that no other component is being tested unnecessarily. If unaffected components are being tested, this implies over-testing.

Answer2:
If the bug find rate has dropped off considerably, the test group should shift its testing strategy. One of the key problems with heavy reliance on regression testing is that the bug find rate drops off even though there are plenty of bugs not yet found. To find new bugs, you have to run new tests. Every test technique is stronger for some types of bugs and weaker for others, and many test groups use only a few techniques. In our consulting, James Bach and I repeatedly worked with companies that relied on only one or two main techniques. When any one test technique yields few bugs, shifting to new techniques is likely to expose new problems. At some point you can use a measure that is only partially statistical: if your bug find rate is low AND you can't think of any new testing approaches that look promising, THEN you are at the limit of your effectiveness and you should ship the product. That still doesn't mean that the application is over-tested; it just means that YOU'RE not going to find many new bugs.

Answer3:
The best way is to monitor the test defects over a period of time. Refer to William Perry's book, where he discusses the concepts of 'under test' and 'over test'; the data can be plotted to see the criteria. One criterion is to monitor the defect rate and see whether it is almost zero; a second method is to use test coverage, when it reaches 100% (or 100% requirement coverage).

Procedural Software Testing Issues
Software testing in the traditional sense can miss a large number of errors if used alone. That is why processes like software inspections and software quality assurance (SQA) have been developed. However, even testing by itself is very time consuming and very costly, and it ties up resources that could be used otherwise. When combined with inspections and/or SQA, or when formalized, it also becomes a project of its own, requiring analysis, design and implementation and a supportive communications infrastructure. With it, interpersonal problems arise and need managing. On the other hand, when testing is conducted by the developers, it will most likely be very subjective. Another problem is that developers are trained to avoid errors; as a result they may conduct tests that prove the product is working as intended (i.e. proving there are no errors) instead of creating test cases that tend to uncover as many errors as possible.
Procedural Software Testing Issues
Software testing in the traditional sense can miss a large number of errors if used alone. That is why processes like software inspections and Software Quality Assurance (SQA) have been developed. However, even testing all by itself is very time consuming and very costly. It also ties up resources that could be used otherwise. When combined with inspections and/or SQA, or when formalized, it also becomes a project of its own, requiring analysis, design and implementation and a supportive communications infrastructure. With it, interpersonal problems arise and need managing. On the other hand, when testing is conducted by the developers, it will most likely be very subjective. Another problem is that developers are trained to avoid errors. As a result they may conduct tests that prove the product is working as intended (i.e. proving there are no errors) instead of creating test cases that tend to uncover as many errors as possible.


How do I start with testing?
Think twice (or maybe more) before you choose a career. Are you interested in it, or do you just want to jump on the bandwagon?
Prerequisite: you can join a software development company as a tester if you can convince the interviewer that
1. You have a knack for breaking software
2. You are aware of basic quality concepts and believe in them
3. You want to pursue testing as a career and are not just trying it out


OO Software Testing Issues
A common way of testing OO software is testing-by-poking-around (Binder, 1995). In this case the developer's goal is to show that the product can do something useful without crashing. Attempts are made to "break" the product. If and when it breaks, the errors are fixed and the product is then deemed "tested". The testing-by-poking-around method of testing OO software is, in my opinion, as unsuccessful as random testing of procedural code or design. It leaves the finding of errors up to chance. Another common problem in OO testing is the idea that since a superclass has been tested, any subclasses inheriting from it don't need to be. This is not true, because by defining a subclass we define a new context for the inherited attributes. Because of the interaction between objects, we have to design test cases to test each new context and re-test the superclass as well to ensure the proper working order of those objects. Yet another misconception in OO is that if you do proper analysis and design (using the class interface or specification), you don't need to test, or you can perform black-box testing only. However, function tests only try the "normal" paths or states of the class. In order to test the other paths or states, we need code instrumentation. Also, it is often difficult to exercise exception and error handling without examination of the source code.


What is the purpose of black box testing?

Answer1:
The main purpose of black box testing is to validate that the application works as the user will be operating it, and in the environments of their systems. How do you do system testing and integration testing? You may lose time and money, but you may also lose quality and, eventually, customers!

Answer2:
"What is the purpose of black box testing?"
Black-box testing checks that the user interface and user inputs and outputs all work correctly. Part of this is that error handling must work correctly. It is used in functional and system testing.
"We do everything in white box testing: we check each module's function in the unit testing."
Who is "we"? Are you programmers or quality assurance testers? Usually, unit testing is done by programmers, and white-box testing would be how they'd do it.
"Once the unit test result is OK, it means that the modules work correctly (according to the requirement documents)."
Not quite. It means that on a stand-alone basis, each module is okay. White box testing only tests the internal structure of the program, the code paths. Functional testing is needed to test how the individual components work together, and this is best done from an external perspective, meaning by using the software the way an end user would, without reference to the code (which is what black-box testing is).
"If we do testing again in black box, will we lose time and money?"
No, the opposite: you'll lose money from having to repair errors you didn't catch with the white-box testing if you don't do some black-box testing. It is far more expensive to fix errors after release than to test for them and fix them early on. But again, who is "we"? The black box testers should not be the people who did the programming; they should be the QA team, plus some end users for the usability testing.
Now that I've said that, good programmers will run some basic black-box tests before handing the application to QA for testing. This isn't a substitute for having QA do the tests, but it is a lot quicker for the programmer to find and fix an error right away than to have to go through the whole process of reporting a bug, then fixing and releasing a new build, then retesting.
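To make the black-box/white-box contrast above concrete, here is a minimal sketch. The discount function and its rules are hypothetical, invented for illustration; the point is only that the black-box cases check inputs against expected outputs, while the white-box cases are chosen by reading the code so that every branch is exercised.

# Hypothetical module under test: order discount rules (invented example).
def discount(total: float, is_member: bool) -> float:
    if total <= 0:
        return 0.0
    if is_member and total >= 100:
        return 0.15          # branch 1: member with a large order
    if is_member:
        return 0.05          # branch 2: member with a small order
    return 0.0               # branch 3: non-member

# Black-box view: only inputs and expected outputs, no reference to the code.
black_box_cases = [
    ((250.0, True), 0.15),
    ((30.0, True), 0.05),
    ((30.0, False), 0.0),
    ((0.0, True), 0.0),      # error/edge input
]
for args, expected in black_box_cases:
    assert discount(*args) == expected, f"black-box case {args} failed"

# White-box view: the tester reads the code and makes sure every branch
# above (1, 2, 3 and the early return) is executed at least once.
white_box_cases = [(150.0, True), (50.0, True), (50.0, False), (-1.0, False)]
for args in white_box_cases:
    discount(*args)          # executed purely to cover each code path

print("All black-box assertions passed; all branches exercised.")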
How do you create a test plan/design?
Test scenarios and/or cases are prepared by reviewing the functional requirements of the release and preparing logical groups of functions that can be further broken down into test procedures. Test procedures define test conditions, the data to be used for testing, and expected results, including database updates, file outputs and report results. Generally speaking:
* Test cases and scenarios are designed to represent both typical and unusual situations that may occur in the application.
* Test engineers define unit test requirements and unit test cases. Test engineers also execute unit test cases.
* It is the test team that, with the assistance of developers and clients, develops test cases and scenarios for integration and system testing.
* Test scenarios are executed through the use of test procedures or scripts.
* Test procedures or scripts define a series of steps necessary to perform one or more test scenarios.
* Test procedures or scripts include the specific data that will be used for testing the process or transaction.
* Test procedures or scripts may cover multiple test scenarios.
* Test scripts are mapped back to the requirements, and traceability matrices are used to ensure each test is within scope.
* Test data is captured and baselined prior to testing. This data serves as the foundation for unit and system testing and is used to exercise system functionality in a controlled environment.
* Some output data is also baselined for future comparison. Baselined data is used to support future application maintenance via regression testing.
* A pre-test meeting is held to assess the readiness of the application and of the environment and data to be tested. A test readiness document is created to indicate the status of the entrance criteria of the release.
Inputs for this process:
* Approved test strategy document.
* Test tools, or automated test tools, if applicable.
* Previously developed scripts, if applicable.
* Test documentation problems uncovered as a result of testing.
* A good understanding of software complexity and module path coverage, derived from general and detailed design documents, e.g. the software design document, source code, and software complexity data.
Outputs for this process:
* Approved documents of test scenarios, test cases, test conditions, and test data.
* Reports of software design issues, given to software developers for correction.


What is the purpose of a test plan?
Reason number 1: We create a test plan because preparing it helps us to think through the efforts needed to validate the acceptability of a software product.
Reason number 2: We create a test plan because it can and will help people outside the test group to understand the why and how of product validation.
Reason number 3: We create a test plan because, in regulated environments, we have to have a written test plan.
Reason number 4: We create a test plan because the general testing process includes the creation of a test plan.
Reason number 5: We create a test plan because we want a document that describes the objectives, scope, approach and focus of the software testing effort.
Reason number 6: We create a test plan because it includes test cases, conditions, the test environment, a list of related tasks, pass/fail criteria, and risk assessment.
Reason number 7: We create a test plan because one of the outputs of creating a test strategy is an approved and signed-off test plan document.
Reason number 8: We create a test plan because the software testing methodology is a three-step process, and one of the steps is the creation of a test plan.
Reason number 9: We create a test plan because we want an opportunity to review the test plan with the project team.
Reason number 10: We create a test plan document because test plans should be documented, so that they are repeatable.
Can we prepare a Test Plan without an SRS?
It is not always mandatory to have the SRS document in order to prepare a test plan. This hierarchy of documents is maintained to uphold organizational standards and to give a clear understanding of the work. Yes, you can prepare a test plan directly without the SRS when the requirements are clear with your clients and when your URD (User Requirement Document) is supportive enough to clarify the issues. Even though we don't have the SRS, the clients will give some information; the SRS mainly contains product information, but without it we will not know the testing effort, since the SRS indicates how many cycles we are testing, the platforms we are testing on, and so on. In practice there is no harm in doing this, because ultimately you will send your test plan document to your client and start testing only after getting approval.
(Note: the SRS is the document you get in the analysis phase of software development. The test plan is the document that describes the product in terms of test strategy, scope of testing, types of tests to be conducted, risk management, the automation tool to be used, the bug tracking tool, and so on.)


How do test plan templates look?
The test plan document template helps to generate test plan documents that describe the objectives, scope, approach and focus of a software testing effort. Test document templates are often in the form of documents that are divided into sections and subsections. One example of a template is a four-section document where section 1 is the description of the test objective, section 2 is the description of the scope of testing, section 3 is the description of the test approach, and section 4 is the focus of the testing effort. All documents should be written to a certain standard and template. Standards and templates maintain document uniformity. They also help in learning where information is located, making it easier for a user to find what they want. With standards and templates, information will not be accidentally omitted from a document. Once Rob Davis has learned and reviewed your standards and templates, he will use them. He will also recommend improvements and/or additions. A software project test plan is a document that describes the objectives, scope, approach and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the why and how of product validation.


How to test a desktop system?
You will likely have to use a programming or scripting language to interact with the service directly. You will have more control over the raw information that way. You will have to determine what the service is supposed to do and how it is supposed to interact with other applications and services. A data dictionary likely exists, though it may not be called that. What this document does is explain what commands the service will respond to and what sort of data should be sent. You will have to use this document to do your testing. Get close to the person or people who created the document or the service, and expect them to keep you in the loop when changes take place (it doesn't help anyone if you report a defect and it is really only reflecting an expected change in the operation of the service).
Desktop applications are generally designed to run and quit. You have to be concerned with memory leaks and system usage.
How do you create a test strategy?
The test strategy is a formal description of how a software product will be tested. A test strategy is developed for all levels of testing, as required. The test team analyzes the requirements, writes the test strategy and reviews the plan with the project team. The test plan may include test cases, conditions, the test environment, a list of related tasks, pass/fail criteria and risk assessment.
Inputs for this process:
* A description of the required hardware and software components, including test tools. This information comes from the test environment, including test tool data.
* A description of the roles and responsibilities of the resources required for the test, and schedule constraints. This information comes from man-hours and schedules.
* Testing methodology. This is based on known standards.
* Functional and technical requirements of the application. This information comes from requirements, change requests, and technical and functional design documents.
* Requirements that the system cannot provide, e.g. system limitations.
Outputs for this process:
* An approved and signed-off test strategy document and test plan, including test cases.
* Testing issues requiring resolution. Usually this requires additional negotiation at the project management level.


How do you estimate testing effort?
Time estimation method for the testing process (note: the following method is based on a use-case-driven specification).
Step 1: Count the number of use cases (NUC) of the system.
Step 2: Set the average number of test cases per use case (ATTC) as per the test plan.
Step 3: Estimate the total number of test cases (NTC). Total number of test cases = number of use cases x average test cases per use case.
Step 4: Set the average execution time (AET) per test case (ideally 15 minutes, depending on your system).
Step 5: Calculate the total execution time (TET). TET = total number of test cases x AET.
Step 6: Calculate the test case creation time (TCCT). Usually we take 1.5 times TET as TCCT. TCCT = 1.5 x TET.
Step 7: Calculate the time for re-test case execution (RTCE); this is for retesting. Usually we take 0.5 times TET. RTCE = 0.5 x TET.
Step 8: Set the report generation time (RGT). Usually we take 0.2 times TET. RGT = 0.2 x TET.
Step 9: Set the test environment setup time (TEST). This also depends on the test plan.
Step 10: Total estimated time = TET + TCCT + RTCE + RGT + TEST + some buffer.
Example:
Total number of use cases (NUC): 227
Average test cases per use case: 10
Estimated test cases (NTC): 227 x 10 = 2270
Execution time estimate (TET): 2270 / 4 = 567.5 hr
Time for creating test cases (TCCT): 567.5 x 4/3 = 756.6 hr
Time for retesting (RTCE): 567.5 / 2 = 283.75 hr
Report generation (RGT) = 100 hr
Test environment setup time (TEST) = 20 hr
Total: 1727.85 hr + buffer
Here 4 is the number of test cases executed per hour, i.e. 15 minutes for the execution of each test case.
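As a quick sketch of the estimation steps above, here is a small Python calculation using the multipliers from steps 6-8 (1.5, 0.5 and 0.2 times TET). Note that the worked example in the text uses slightly different ratios for test case creation and report generation, so the totals will not match it exactly; the setup time and buffer below are placeholder assumptions.

# Test effort estimation following steps 1-10 above.
NUC = 227            # number of use cases
ATTC = 10            # average test cases per use case
AET_HOURS = 0.25     # average execution time per test case (15 minutes)

NTC = NUC * ATTC                 # step 3: total test cases
TET = NTC * AET_HOURS            # step 5: total execution time
TCCT = 1.5 * TET                 # step 6: test case creation time
RTCE = 0.5 * TET                 # step 7: retest execution time
RGT = 0.2 * TET                  # step 8: report generation time
TEST_SETUP = 20                  # step 9: placeholder environment setup (hours)
BUFFER = 0.1 * TET               # assumed contingency, not from the text

total = TET + TCCT + RTCE + RGT + TEST_SETUP + BUFFER
print(f"Test cases: {NTC}")
print(f"Execution: {TET:.1f} h, creation: {TCCT:.1f} h, "
      f"retest: {RTCE:.1f} h, reports: {RGT:.1f} h")
print(f"Total estimate: {total:.1f} h")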
What is the purpose of a test strategy?
Reason number 1: The number one reason for writing a test strategy document is to have a signed, sealed and delivered, FDA (or FAA) approved document, where the document includes a written testing methodology, test plan, and test cases.
Reason number 2: Having a test strategy satisfies one important step in the software testing process.
Reason number 3: The test strategy document tells us how the software product will be tested.
Reason number 4: The creation of a test strategy document presents an opportunity to review the test plan with the project team.
Reason number 5: The test strategy document describes the roles, responsibilities, and the resources required for the test, and the schedule constraints.
Reason number 6: When we create a test strategy document, we have to put into writing any testing issues requiring resolution (and usually this means additional negotiation at the project management level).
Reason number 7: The test strategy is decided first, before lower-level decisions are made on the test plan, test design, and other testing issues.


What is a Quality Approach document? What should its contents be?

Answer1:
You should start from your company's business type and, according to it, define the different processes for your organization, such as procurement, CM, etc. Then think over the different metrics you will be calculating for each process and define them with formulas, the kind of analysis you will be doing, and when a red flag should be raised. Decide on your audit policies, frequencies, etc. Think about the change control board if any process needs modification.

Answer2:
By defining the process I mean the structured collection of practices that describe the characteristics of the work and its quality. Writing a process means creating a system within which everyone will work. The benefits are a common language and a shared vision across the organization, and it provides a framework for prioritizing actions. From an implementation point of view, first you need to break the complete life cycle of your product into different meaningful steps and set the goals for each phase. You can create document templates which everyone shall follow, define the dependencies among different groups for each project, define the risks for each project and the mitigation plan for each risk, and so on. You can read the CMMI model and customize it as per your organization's goals. For a start-up company, in my personal opinion, it is better to define and reach the process for Level 3 first and then go for Level 5.


What does a test strategy document contain?
The test strategy document contains test cases, conditions, the test environment, a list of related tasks, pass/fail criteria and risk assessment. The test strategy document is a formal description of how a software product will be tested. What is the test strategy document developed for? It is developed for all levels of testing, as required. How is it written, and who writes it? It is the test team that analyzes the requirements, writes the test strategy, and reviews the plan with the project team.


Why should QA not report to development?
Based on research from the Quality Assurance Institute, the percentage of quality groups in each reporting location is as follows. 50% report to a senior IT manager: this is the best positioning because it gives the quality manager immediate access to the IT manager to discuss and promote quality issues; when the quality manager reports elsewhere, quality issues may not be raised to the appropriate level or receive the necessary action. 25% report to a manager of systems/programming, 15% report to a manager of operations, and 10% sit outside the IT function.


Which of the following statements about regression testing are true?
1 -- Regression testing must consist of a fixed set of tests to create a baseline.
2 -- Regression tests should be used to detect defects in new features.
3 -- Regression testing can be run on every build.
4 -- Regression testing should be targeted at areas of high risk and known code change.
5 -- Regression testing, when automated, is highly effective in preventing defects.

Answer1:
1 -- Regression testing must consist of a fixed set of tests to create a baseline. I don't think this is true as a "must"; it depends on whether your regression testing style involves repeating identical tests, or redoing testing in previously tested areas with similar tests or tests that address the same risks. For example, some people do regression testing with tests whose specific parameters are determined randomly. They broaden the set of values they test while achieving essentially the same testing. As a second example, some regression test suites include the random stringing together of test cases (they include load testing and duration testing in their regression series, reporting their results as part of the assessment of each build). Depending on your theory of the point of regression testing, these may or may not be entirely valid regression tests.
2 -- Regression tests should be used to detect defects in new features. How do you create new regression tests? Should you design new tests as standalone, or should you develop a strategy in which the tests you use for bug-hunting are designed to be reusable as regression tests? If the latter (and I have certainly heard some skilled testers argue that the latter approach worked well in their situation), then statement 2 is sometimes true.
3 -- Regression testing can be run on every build. This is true, though it might be silly and a big waste of time.
4 -- Regression testing should be targeted at areas of high risk and known code change. There is an area of computer science called program slicing, and one of the objectives of this class of work is to figure out how to restrict the regression test suite to a smaller number of tests, which test only those things that might have been impacted by a change. Bob Glass has criticized the results of some of this work, but if statement 4 is false, some PhDs and big research grants should be retracted.
5 -- Regression testing, when automated, is highly effective in preventing defects. Unit-level automated regression testing is highly effective in preventing defects; read up on test-driven development.

Answer2:
Let me explain why I think 2 and 5 are false.
2 -- Regression tests should be used to detect defects in new features. Since regression tests only address existing features and functionality, they can't find defects in new features. They can only find where existing features and functionality have been broken by changes.
5 -- Regression testing, when automated, is highly effective in preventing defects.
Since no tests prevent defects, they only find them, it is impossible to prevent defects with a regression test. I will add, however, that if a developer can use an automated regression test to test their own code before submitting it to the code repository (say, in the form of a series of unit tests coupled to a library, etc.), then you
could in some way prevent defects with a regression test. I also don't like statements 1 and 4: statement 1, because a regression test suite grows as the product does, so the tests are not fixed; statement 4, because a regression test tests the whole application, not just a targeted area. In the past, I have used the concept of test depth (level 1 being the basic regression tests, with higher numbers reflecting additional functionality), so you could run a level-one regression on the whole program but do level three on the transport layer "because we've updated the library". An automated set of tests would be the most likely way to make statement 3 a possibility. It is unlikely that with daily builds, as many companies run their build process, anything short of an automated regression test suite could be run daily with any efficacy. If the builds were weekly, then a manual regression test would be feasible.

Answer3:
Going by the definition of regression testing and the way it is actually carried out, if you have to answer this question then options 3 and 4 are the best choices. The reasons are:
3 -- Regression testing can be run on every build. This is normal if the build comes on a weekly basis or is an RC build. Since nothing is said about a daily build, only about every build, this can be correct.
4 -- Regression testing should be targeted at areas of high risk and known code change. This is also true in most situations; it is not universally true, but in conditions where there is a code change, only the related modules are tested in the regression automation rather than the whole code.
Statement 5 is not true because in regression testing we normally detect defects rather than prevent them.
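Statement 4 and the earlier point about impact analysis both come down to selecting only the tests that touch changed code. Here is a minimal sketch of that idea; the module names and the test-to-module mapping are hypothetical, invented purely for illustration.

# Hypothetical mapping of regression tests to the modules they exercise.
test_coverage = {
    "test_login":        {"auth", "session"},
    "test_checkout":     {"cart", "payment"},
    "test_profile_edit": {"auth", "profile"},
    "test_search":       {"catalog"},
}

# Modules touched by the current change request (assumed input).
changed_modules = {"auth"}

# Select only the regression tests impacted by the change.
selected = [name for name, modules in test_coverage.items()
            if modules & changed_modules]

print("Regression tests to run:", selected)
# -> ['test_login', 'test_profile_edit']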
How do you execute tests?
Execution of tests is completed by following the test documents in a methodical manner. As each test procedure is performed, an entry is recorded in a test execution log to note the execution of the procedure and whether or not the test procedure uncovered any defects. Checkpoint meetings are held throughout the execution phase; they are held daily, if required, to address and discuss testing issues, status and activities.
* The output from the execution of test procedures is known as test results. Test results are evaluated by test engineers to determine whether the expected results have been obtained. All discrepancies/anomalies are logged and discussed with the software team lead, hardware test lead, programmers and software engineers, and documented for further investigation and resolution. Every company has a different process for logging and reporting bugs/defects uncovered during testing.
* A pass/fail criterion is used to determine the severity of a problem, and results are recorded in a test summary report. The severity of a problem found during system testing is defined in accordance with the customer's risk assessment and recorded in their selected tracking tool.
* Proposed fixes are delivered to the testing environment based on the severity of the problem. Fixes are regression tested, and flawless fixes are migrated to a new baseline. Following completion of the test, members of the test team prepare a summary report. The summary report is reviewed by the Project Manager, Software QA Manager and/or Test Team Lead.
* After a particular level of testing has been certified, it is the responsibility of the Configuration Manager to coordinate the migration of the release software components to the next test level, as documented in the Configuration Management Plan. The software is only migrated to the production environment after the Project Manager's formal acceptance.
* The test team reviews test document problems identified during testing and updates documents where appropriate.
Inputs for this process:
* Approved test documents, e.g. Test Plan, Test Cases, Test Procedures.
* Test tools, including automated test tools, if applicable.
* Developed scripts.
* Changes to the design, i.e. Change Request Documents.
* Test data.
* Availability of the test team and project team.
* General and Detailed Design Documents, i.e. Requirements Document, Software Design Document.
* Software that has been migrated to the test environment, i.e. unit-tested code, via the Configuration/Build Manager.
* Test Readiness Document.
* Document updates.
Outputs for this process:
* Log and summary of the test results. Usually this is part of the Test Report. This needs to be approved and signed off with revised testing deliverables.
* Changes to the code, also known as test fixes.
* Test document problems uncovered as a result of testing. Examples are Requirements Document and Design Document problems.
* Reports on software design issues, given to software developers for correction. Examples are bug reports on code issues.
* Formal record of test incidents, usually part of problem tracking.
* Baselined package, also known as tested source and object code, ready for migration to the next level.


What is a requirements test matrix?
The requirements test matrix is a project management tool for tracking and managing testing efforts, based on requirements, throughout the project's life cycle. The requirements test matrix is a table where requirement descriptions are put in the rows, and the descriptions of testing efforts are put in the column headers of the same table. The requirements test matrix is similar to the requirements traceability matrix, which is a representation of user requirements aligned against system functionality. The requirements traceability matrix ensures that all user requirements are addressed by the system integration team and implemented in the system integration effort. The requirements test matrix is a representation of user requirements aligned against system testing. Similarly to the requirements traceability matrix, the requirements test matrix ensures that all user requirements are addressed by the system test team and implemented in the system testing effort.


Can you give me a requirements test matrix template?
For a requirements test matrix template, you want to visualize a simple, basic table that you create for cross-referencing purposes.
Step 1: Find out how many requirements you have.
Step 2: Find out how many test cases you have.
Step 3: Based on these numbers, create a basic table. If you have a list of 90 requirements and 360 test cases, you want to create a table of 91 rows and 361 columns.
Step 4: Focus on the first column of your table. One by one, copy all your 90 requirement numbers and paste them into rows 2 through 91 of the table.
Step 5: Now switch your attention to the first row of the table. One by one, copy all your 360 test case numbers and paste them into columns 2 through 361 of the table.
Step 6: Examine each of your 360 test cases and, one by one, determine which of the 90 requirements they satisfy. If, for the sake of this example, test case number 64 satisfies requirement number 12, then put a large "X" into cell 13-65 of your table. And then you have it; you have just created a requirements test matrix that you can use for cross-referencing purposes.
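The six steps above amount to building a cross-reference grid, which is easier to generate than to fill in by hand. Here is a minimal sketch that writes such a matrix to a CSV file; the requirement and test-case counts follow the example above, but the coverage mapping used here is invented for illustration.

import csv

num_requirements = 90
num_test_cases = 360

# Which requirements each test case satisfies (hypothetical sample data).
coverage = {64: [12], 65: [12, 13], 1: [1]}

# Build the 91 x 361 grid: first row = test case IDs, first column = requirement IDs.
header = ["Requirement"] + [f"TC-{t}" for t in range(1, num_test_cases + 1)]
rows = [header]
for r in range(1, num_requirements + 1):
    row = [f"REQ-{r}"]
    for t in range(1, num_test_cases + 1):
        row.append("X" if r in coverage.get(t, []) else "")
    rows.append(row)

with open("requirements_test_matrix.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)

print("Wrote requirements_test_matrix.csv")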
What metrics are used for bug tracking?
Metrics that can be used for bug tracking include the following: the total number of bugs, the total number of bugs that have been fixed, the number of new bugs per week, and the number of fixes per week. Metrics for bug tracking can be used to determine when to stop testing, for example when the bug rate falls below a certain level. You CAN learn to use defect tracking software.


1. In a QA team, everyone talks about process. What exactly are they talking about? 2. Are there different types of process?

Answer1:
When you talk about "process" you are generally talking about the actions used to accomplish a task. Here's an example: how do you solve a jigsaw puzzle? You start with a box full of oddly shaped pieces. In your mind you come up with a strategy for matching two pieces together (or no strategy at all, simply grabbing random pieces until you find a match), and continue on until the puzzle is completed. If you were to describe the way that you go about solving the puzzle, you would be describing the process. Some follow-up questions you might think about include:
- How much time did it take you to solve the puzzle?
- Do you know of any skills, tricks or practices that might help you solve the puzzle quicker?
- What if you try to solve the puzzle with someone else? Does that help you go faster, or slower? (Why or why not?) Can you have too many people on this one task?
- To answer your second question, I'll ask you the question: are there different ways that people can solve a jigsaw puzzle?
There are many interesting process-related questions, ideas and theories in quality assurance. Generally the identification of workplace processes leads to questions of improvement in efficiency and productivity. The motivation behind that is to try to make the processes as efficient as possible so as to incur the least amount of time and expense, while providing a general sense of repeatability, visibility and predictability in the way tasks are performed and completed. The idea behind this is generally good, but the execution is often flawed. That is what makes QA so interesting. When you work with people and processes, it is very different from working with the processes performed by machines. Some people in QA forget that distinction and often become disillusioned with the whole thing. If you always remember to approach processes in the workplace with a people-centric view, you should do fine.

Answer2:
There is:
* Waterfall
* Spiral
* Rapid prototype
* Clean room
* Agile (XP, Scrum, ...)


What metrics are used for test report generation?
Metrics that can be used for test report generation include the following.
McCabe metrics: cyclomatic complexity metric (v(G)), actual complexity metric (AC), module design complexity metric (iv(G)), essential complexity metric (ev(G)), pathological complexity metric (pv(G)), design complexity metric (S0), integration complexity metric (S1), object integration complexity metric (OS1), global data complexity metric (gdv(G)), data complexity metric (DV), tested data complexity metric (TDV), data reference metric (DR), tested data reference metric (TDR), maintenance severity metric (maint_severity), data reference severity metric (DR_severity), data complexity severity metric (DV_severity), global data severity metric (gdv_severity).
McCabe object-oriented software metrics: encapsulation percent public data (PCTPUB), access to public data (PUBDATA), polymorphism percent of unoverloaded calls (PCTCALL), number of roots (ROOTCNT), fan-in (FANIN), quality maximum v(G) (MAXV), maximum ev(G) (MAXEV), and hierarchy quality (QUAL).
Other object-oriented software metrics: depth (DEPTH), lack of cohesion of methods (LOCM), number of children (NOC), response for a class (RFC), weighted methods per class (WMC).
Halstead software metrics: program length, program volume, program level and program difficulty, intelligent content, programming effort, error estimate, and programming time.
Line count software metrics: lines of code, lines of comment, lines of mixed code and comments, and lines left blank.
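Of these, cyclomatic complexity v(G) is the most commonly quoted. It is computed from the control flow graph as v(G) = E - N + 2P, where E is the number of edges, N the number of nodes and P the number of connected components (for a single routine, P = 1). The small sketch below applies that formula to a hypothetical control flow graph invented for illustration.

# Cyclomatic complexity v(G) = E - N + 2P for a control flow graph.
# Hypothetical graph of one routine containing a single if/else.
edges = [
    ("entry", "decision"),
    ("decision", "then_branch"),
    ("decision", "else_branch"),
    ("then_branch", "exit"),
    ("else_branch", "exit"),
]

nodes = {n for edge in edges for n in edge}

E = len(edges)       # number of edges
N = len(nodes)       # number of nodes
P = 1                # one connected component: a single routine

v_of_g = E - N + 2 * P
print(f"E={E}, N={N}, P={P} -> v(G) = {v_of_g}")   # v(G) = 2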
What is a quality plan?

Answer1:
The test plan is the document created before starting the testing process. It includes the types of testing that will be performed, the high-level scope of the project, the environmental requirements of the testing process, the automated testing tools that will be used (if available), and the schedule of each test, when it will start and end.

Answer2:
You should not only understand what a quality plan is, you should understand why you're making it. I don't believe that "because I was told to do so" is a good enough reason. If the person who told you to create it can't tell you 1) what it is, and 2) how to create it, I don't think they actually know why it's needed. That breaks the primary rule of all plans used in testing: we write quality plans for two very different purposes. Sometimes the quality plan is a product; sometimes it's a tool. It's too easy, but also too expensive, to confuse these goals. If it's not being used as a tool, don't waste your time (and your company's money) doing this.


What is the difference between verification and validation?
Verification takes place before validation, and not vice versa. Verification evaluates documents, plans, code, requirements, and specifications. Validation, on the other hand, evaluates the product itself. The inputs of verification are checklists, issues lists, walkthroughs and inspection meetings, reviews and meetings. The input of validation, on the other hand, is the actual testing of an actual product. The output of verification is a nearly perfect set of documents, plans, specifications, and requirements. The output of validation, on the other hand, is a nearly perfect actual product.


What is the difference between efficient and effective?
"Efficient" means having a high ratio of output to input, which means working or producing with a minimum of waste. For example, "An efficient engine saves gas", or "An efficient test engineer saves time". "Effective", on the other hand, means producing or capable of producing an intended result, or having a striking effect. For example, "For rapid long-distance transportation, the jet engine is more effective than a witch's broomstick", or "For developing software test procedures, engineers specializing in software testing are more effective than engineers who are generalists".


How effectively can we implement Six Sigma principles in a very large software services organization?

Answer1:
For an effective implementation of Six Sigma, there are quite a few things one needs:
1. Management buy-in
2. A dedicated team, both drivers and adopters
3. Training
4. Culture building: if you have a pro-process culture, life is easy
5. Sustained effort over a period towards transforming people, thoughts and actions
Personally, I find that technical content is never a challenge, but adoption is.

Answer2:
"Six Sigma" is a combination of process recommendations and a mathematical model. The name "six sigma" reflects the notion of reducing variation so much that errors -- events out of tolerance -- are six standard deviations from a desired mean. The mathematics are at the core of the process implementation. The problem is that software is not hardware. Software defects are designed in, not the result of manufacturing variation. The other side of Six Sigma is the drive for continuous improvement. You don't need the Six Sigma math for this, and the concept has been around since long before the Six Sigma movement. To improve anything, you need some type of indicator of its current state and a way to tell that it has improved, plus the determination to improve it. Management support helps.

Answer3:
There are different methodologies adopted in Six Sigma; however, it is commonly referenced from the variance-based approach. If you look at Six Sigma from that angle, then for software services the measurement system fundamentally has to be reliable, and the industry has not reached the maturity level of the manufacturing industry, where it fits to a T. The differences between the software and hardware/manufacturing industries are slightly difficult to address. There are some areas where you can adopt Six Sigma in its full statistical form (e.g. in-process error rate, productivity improvements, etc.); some areas are difficult. The narrower the problem area is, the better it works, even in software services, to adopt the statistical method. There are methodologies with a bundle of tools, along with statistical techniques, that are used on the full SDLC. A generic observation is that Six Sigma helps if we look for a proper fit of the methodology to the purpose; otherwise doubts creep in.


What stage of bug fixing is the most cost effective?
Bug prevention techniques (i.e. inspections, peer design reviews, and walkthroughs) are more cost effective than bug detection.


What is the Defect Life Cycle?

Answer1:
The defect life cycle is the set of stages a defect goes through after it is identified:
New (when the defect is identified)
Accepted (when the development team and QA team accept that it is a bug)
In Progress (when a person is working to resolve the defect)
Resolved (once the defect is resolved)
Completed (confirmed by someone who can take up the responsibility, e.g. the team lead)
Closed/Reopened (retested by the test engineer, who then updates the status of the bug)

Answer2:
The defect life cycle is nothing but the various phases a bug undergoes after it is raised or reported. A general interview answer can be given as:
1. New or Opened
2. Assigned
3. Fixed
4. Tested
5. Closed
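As a small illustration of the life cycle in Answer2, here is a sketch modelling the statuses and the transitions between them. The exact transition rules are an assumption made for illustration, since real defect trackers differ.

from enum import Enum

class DefectStatus(Enum):
    NEW = "New/Opened"
    ASSIGNED = "Assigned"
    FIXED = "Fixed"
    TESTED = "Tested"
    CLOSED = "Closed"
    REOPENED = "Reopened"

# Assumed allowed transitions between statuses (illustrative only).
TRANSITIONS = {
    DefectStatus.NEW:      {DefectStatus.ASSIGNED},
    DefectStatus.ASSIGNED: {DefectStatus.FIXED},
    DefectStatus.FIXED:    {DefectStatus.TESTED},
    DefectStatus.TESTED:   {DefectStatus.CLOSED, DefectStatus.REOPENED},
    DefectStatus.REOPENED: {DefectStatus.ASSIGNED},
    DefectStatus.CLOSED:   set(),
}

def move(current: DefectStatus, new: DefectStatus) -> DefectStatus:
    if new not in TRANSITIONS[current]:
        raise ValueError(f"Illegal transition: {current.value} -> {new.value}")
    return new

# Example: a bug that is assigned, fixed, retested and closed.
status = DefectStatus.NEW
for next_status in (DefectStatus.ASSIGNED, DefectStatus.FIXED,
                    DefectStatus.TESTED, DefectStatus.CLOSED):
    status = move(status, next_status)
    print("Status:", status.value)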
What is the difference between a software bug and a software defect?
"Software bug" is nonspecific; it means an inexplicable defect, error, flaw, mistake, failure, fault, or unwanted behavior of a computer program. Other terms, e.g. "software defect" or "software failure", are more specific. The word "bug" has been a part of engineering jargon for many decades; many decades ago even Thomas Edison, the great inventor, wrote about a "bug", though today many believe the word "bug" is a reference to insects that caused malfunctions in early electromechanical computers.


What is the difference between a software bug and a software defect?
In software testing, the difference between "bug" and "defect" is small, and also depends on the end client. For some clients, bug and defect are synonymous, while others believe bugs are subsets of defects.
Difference number one: In bug reports, the defects are easier to describe.
Difference number two: In my bug reports, it is easier to write descriptions of how to replicate defects. In other words, defects tend to require only brief explanations.
Commonality number one: We, software test engineers, discover both bugs and defects before bugs and defects damage the reputation of our company.
Commonality number two: We, software QA engineers, use the software much like real users would, to find both bugs and defects, to find ways to replicate both bugs and defects, to submit bug reports to the developers, and to provide feedback to the developers, i.e. tell them if they've achieved the desired level of quality.
Commonality number three: We, software QA engineers, do not differentiate between bugs and defects. In our reports, we include both bugs and defects that are the results of software testing.


Are developers smarter than testers? Any suggestions about the future prospects and technicalities involved in the testing job?

Answer1:
QA and testing are thankless jobs. In a software development company the developer is a core person. As you are a fresh graduate, it would be good for you to work as a developer. From development you can always move to testing or QA or other admin/support tasks, but from testing or QA it is a little difficult to go back to development, though not impossible (as you are a BE in computer engineering). Given the job market, it is not possible for every fresher to get into development, but you can keep searching for it. Some big companies have separate verification and validation groups where only testing projects are executed. Those teams have TLs and PLs who are testing experts, and they earn good salaries, the same as development people. In technical projects the testing team does a lot of technical work. You can do certifications to improve your technical skills and market value. It all depends on your way of handling things and your interpersonal, communication and leadership skills. If it is difficult for you to get a job in development, or you really like testing, just go ahead. Try to achieve excellence as a testing professional. You will never have a job problem, and you will always get onsite opportunities too! You might have to struggle for the initial few years like all other freshers.

Answer2:
QA and testing are thankless only in some companies. Testing is part of development. Rather than distinguish between testing and development, distinguish between testing and programming. Programming is also thankless in some companies.