This document discusses software quality assurance and testing. It defines various types of testing, such as white box, black box, and grey box testing, as well as key terms like verification, validation, and test adequacy, and different testing techniques. Some key points:
- Testing is the process of executing a program to find errors, while verification ensures that requirements are met and validation checks whether the final product satisfies user needs.
- White box testing evaluates internal code and structures while black box testing treats the system as a "black box" without knowledge of internal workings. Grey box testing uses a partial view of internal structures.
- Test adequacy criteria measure how well a test set covers things like statements, branches/decisions, conditions, and
Unit 3 Control Flow Testing contains Concept of CFT, Generate Test Input Data, Activities of Generating Test Input Data, Control Flow Graph, Path Selection Criteria, Techniques for Path Selection statement wise, branch wise, predicate wise etc. and Generating Test Input.
Manual testing interview questions and answers (Testbytes)
Manual tester jobs are plentiful, and the skill is greatly in demand: owing to the rise in the importance of QA/software testing in software development, there will be a sustained demand for the job. When it comes to manual tester jobs, interviews might be happening as you read this. To be part of a prestigious company, you first need to crack the interview, which often has a verbal section where you have to answer manual testing interview questions.
So we have compiled the most probable manual testing interview questions in this blog, so that you can ace your next manual tester interview with ease.
You can find all of them here: https://www.testbytes.net/blog/manual-testing-interview-questions-answers/
A presentation on the basics of software testing, explaining the software development life cycle, the steps involved in it, and details about each step from the testing point of view.
This presentation gives you a walkthrough of CTFL module 01.
It covers in detail:
1. Fundamentals of testing
2. Terminologies in testing
3. Seven testing principles
4. Fundamental test process
QUALITY METRICS OF TEST SUITES IN TEST-DRIVEN DESIGNED APPLICATIONS (ijseajournal)
New techniques for writing and developing software have evolved in recent years. One is Test-Driven
Development (TDD) in which tests are written before code. No code should be written without first having
a test to execute it. Thus, in terms of code coverage, the quality of test suites written using TDD should be
high.
In this work, we analyze applications written using TDD and traditional techniques. Specifically, we
demonstrate the quality of the associated test suites based on two quality metrics: 1) structure-based
criterion, 2) fault-based criterion. We learn that test suites with high branch coverage also have high mutation scores, and we especially observe this in the case of TDD applications. We found that Test-Driven Development is an effective approach that improves the quality of the test suite, covering more of the source code and also revealing more faults.
Types of operating system
Analysis and Design of Algorithms (ADA): An In-depth Exploration
Introduction:
The field of computer science is heavily reliant on algorithms to solve complex problems efficiently. The analysis and design of algorithms (ADA) is a fundamental area of study that focuses on understanding and creating efficient algorithms. This comprehensive overview will delve into the various aspects of ADA, including its importance, key concepts, techniques, and applications.
Importance of ADA:
Efficient algorithms play a critical role in various domains, including software development, data analysis, artificial intelligence, and optimization. ADA provides the tools and techniques necessary to design algorithms that are both correct and efficient. By analyzing the performance characteristics of algorithms, ADA enables computer scientists and engineers to develop solutions that save time, resources, and computational power.
Key Concepts in ADA:
Correctness: ADA emphasizes the importance of designing algorithms that produce correct outputs for all possible inputs. Techniques like mathematical proofs and induction are used to establish the correctness of algorithms.
Complexity Analysis: ADA seeks to analyze the efficiency of algorithms by examining their time and space complexity. Time complexity measures the amount of time required by an algorithm to execute, while space complexity measures the amount of memory consumed.
Asymptotic Notations: ADA employs asymptotic notations, such as Big O, Omega, and Theta, to express the growth rates of functions and classify the efficiency of algorithms. These notations allow for a concise comparison of algorithmic performance.
Algorithm Design Paradigms: ADA explores various design paradigms, including divide and conquer, dynamic programming, greedy algorithms, and backtracking. Each paradigm offers a systematic approach to solving problems efficiently.
Techniques in ADA:
Divide and Conquer: This technique involves breaking down a problem into smaller subproblems, solving them independently, and combining the solutions to obtain the final result. Well-known algorithms like Merge Sort and Quick Sort utilize the divide and conquer approach.
Dynamic Programming: Dynamic programming breaks down a complex problem into a series of overlapping subproblems and solves them in a bottom-up manner. This technique optimizes efficiency by storing and reusing intermediate results. The Fibonacci sequence calculation is a classic example of dynamic programming.
Greedy Algorithms: Greedy algorithms make locally optimal choices at each step, with the hope of achieving a global optimal solution. These algorithms are efficient but may not always yield the best overall solution. The Huffman coding algorithm for data compression is a widely used example of a greedy algorithm.
Backtracking: Backtracking involves searching for a solution to a problem by incrementally building a solution and undoing the choices that lead to dead-ends.
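The dynamic-programming idea described above can be sketched with the classic Fibonacci example (a minimal illustration of our own; the function name is arbitrary):

```python
# Bottom-up dynamic programming: each Fibonacci value is computed
# once from stored intermediate results, giving O(n) time instead of
# the naive recursion's exponential blow-up.
def fib_dp(n: int) -> int:
    if n < 2:
        return n
    prev, curr = 0, 1          # fib(0), fib(1)
    for _ in range(2, n + 1):  # reuse the two most recent results
        prev, curr = curr, prev + curr
    return curr
```

Only the two most recent values are kept, so the sketch also uses constant space.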
In this chapter, we will introduce you to the fundamentals of testing: why testing is needed; its limitations, objectives and purpose; the principles behind testing; the process that testers follow; and some of the psychological factors that testers must consider in their work. By reading this chapter you'll gain an understanding of the fundamentals of testing and be able to describe those fundamentals.
SOFTWARE QUALITY ASSURANCE AND TESTING - SHORT NOTES
UNIT – I
1. What is testing?
Testing is the process of executing a program with the intent of finding errors.
2. What is the need for testing?
The need for testing is to find errors in the application.
The good reasons for testing are:
• Testing makes software more reliable and efficient.
• It assures that the software is defect-free and user-friendly.
• It checks whether the software meets the requirements and satisfies the needs of the customer.
• Testing also helps to improve quality and reduce maintenance cost.
• It verifies and validates the product/application before it goes live in the market.
• Quality assurance.
3. Define successful test case and unsuccessful test case.
According to the testing approach, a test case which does not find hidden errors is called an unsuccessful test case; it produces a result without finding errors.
A test case which finds hidden errors is called a successful test case.
4. What is the impact of under-testing and over-testing?
The testing process is described as "Too little testing is a crime - too much testing is a sin."
The risk of under-testing is an increase in system defects.
The risk of over-testing is the unnecessary use of valuable resources in the testing organization.
5. List the types of testing.
o WHITE BOX TESTING
o BLACK BOX TESTING
o GREY BOX TESTING
6. Define White box testing.
White box testing is also known as Clear Box testing, Open Box testing, or Glass Box testing.
It is used to test internal structures of the software, such as conditional loops and statement coverage. It is mainly done by the developers.
7. Define Black Box testing.
Black Box testing is also known as Skin Box testing or Closed Box testing.
It is used to test the external functionality of the software.
8. Define Grey Box Testing.
Grey Box Testing is a software testing method which is a combination of the Black Box Testing and White Box Testing methods. In Black Box Testing the internal structure of the item being tested is unknown to the tester, and in White Box Testing the internal structure is known. In Grey Box Testing the internal structure is partially known: the tester has access to internal data structures and algorithms for the purpose of designing test cases, but tests at the user, or black-box, level.
9. Define Verification and Validation.
Verification is the process of checking a set of documents, plans, specifications and requirements.
Validation is the actual testing of the actual product.
10. What is SDLC?
SDLC is the Software Development Life Cycle. It is used to develop software systematically. SDLC has 7 main stages:
1. Preliminary Investigation
2. Feasibility Study
3. Analysis
4. Design
5. Coding
6. Testing
7. Maintenance & Review
11. What are the different techniques used in white box testing?
White box testing uses the following techniques,
i. Unit testing
ii. Integration testing
iii. Regression testing
12. What are the advantages and disadvantages of White box testing?
Advantages
i. Very easy to define data for testing
ii. Optimizes the code
iii. Removes extra lines of code
Disadvantages
i. Increases the cost
ii. It can result in failure of the application.
13. What are the advantages and disadvantages of Black box testing?
Advantages of Black Box Testing
1. More effective
2. No knowledge of implementation is required.
3. Tester and programmer are independent of each other.
4. Tests are done from a user’s point of view
5. Exposes any ambiguities
6. Test cases can be designed easily
Disadvantages
1. Unnecessary repetition of test cases
2. Program paths may be untested
14. What are the different techniques used in Black Box testing?
Various types of techniques that are used are,
1. Functional testing
2. Stress testing
3. Load testing
4. Ad-hoc testing
5. Exploratory testing
6. Usability testing
7. Smoke testing
8. Recovery testing
9. Volume testing
10. User acceptance testing
11. Alpha testing
12. Beta testing
15. What is Exhaustive Input Testing?
It is an approach used to find all the errors in a program.
It makes use of every input condition as a test case.
To find errors, the software is tested with not only all valid inputs but all possible inputs.
16. What are the advantages and disadvantages of Grey Box testing?
Advantages
1. Combined benefits
2. Non-intrusive
3. Intelligent test authoring
4. Unbiased testing
Disadvantages
1. Partial code coverage
2. Defect identification
17. List various SDLC methodologies.
Various SDLC methodologies used are,
1. Waterfall model
2. Spiral Model
3. Rapid Prototyping Model
4. Incremental Model
5. Iterative Model
6. Rational Unified Process (RUP)
18. Give the difference between verification and validation.
Verification – intended to show that software correctly implements a specific function;
typically takes place at the end of each phase. Are we building the product right?
Validation – intended to show that software as a whole satisfies the user requirements:
typically uses black-box testing. Are we building the right product?
19. What are the types of verification techniques?
1. Static Verification
2. Dynamic Verification
Static techniques – concerned with the analysis and checking of system representations such as the requirements document, design diagrams, etc.
Static (non-execution-based) techniques include walkthroughs, inspections, formal verification, etc.
Dynamic techniques – applied when a prototype or an executable program is available.
These are execution-based techniques.
20. List Weyuker’s adequacy axioms.
Axiom 1 (Applicability)
For every program, there exists a finite adequate test set.
Axiom 2 (Nonexhaustive Applicability)
There is a program P and a test set T such that P is adequately tested by T, and T
is not an exhaustive test set.
Axiom 3 (Monotonicity)
If T is adequate for P, and T is a subset of T′, then T′ is adequate for P.
Axiom 4 (Inadequate Empty Set)
The empty set is not adequate for any program.
Axiom 5 (Anti-extensibility)
There are programs P and Q such that P is equivalent to Q, T is adequate for P,
but T is not adequate for Q.
Axiom 6 (General Multiple Change)
There are programs P and Q which are the same shape, and a test set T such that T
is adequate for P, but T is not adequate for Q.
Axiom 7 (Anti-decomposition)
There exists a program P and a component Q of P such that T is adequate for P, T′ is the set of vectors of values that variables can assume on entrance to Q for some t in T, and T′ is not adequate for Q.
Axiom 8 (Anti-composition)
There exist programs P and Q such that T is adequate for P and P(T) is adequate for Q, but T is not adequate for P;Q.
UNIT – II
1. Define test adequacy.
Test adequacy is measured for a given test set designed to test P, to determine whether P meets its requirements. The measurement is done against a given criterion C: a test set is considered adequate with respect to criterion C when it satisfies C.
2. Define statement coverage.
Statement coverage is a measure of the percentage of statements that have been executed
by test cases.
The statement coverage of T with respect to (P, R) is computed as |Sc| / ( |Se| - |Si| ) ,
where Sc is the set of statements covered,
Si is the set of unreachable statements, and
Se is the set of statements in the program, that is the coverage domain.
T is considered adequate with respect to the statement coverage criterion if the statement
coverage of T with respect to (P, R) is 1.
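As a hedged numerical sketch of the formula above (the statement labels and helper function are invented for illustration):

```python
# Statement coverage = |Sc| / (|Se| - |Si|), per the definition above.
def statement_coverage(covered, all_statements, unreachable):
    return len(covered) / (len(all_statements) - len(unreachable))

# Hypothetical program with statements s1..s4, all reachable.
se = {"s1", "s2", "s3", "s4"}  # coverage domain Se
si = set()                     # no unreachable statements (Si empty)
sc = {"s1", "s2", "s3", "s4"}  # statements executed by the test set
cov = statement_coverage(sc, se, si)  # 1.0, so T is adequate
```

If the test set had missed one statement, the coverage would be 3 / (4 - 0) = 0.75 and the test set would not be adequate with respect to this criterion.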
3. Define Branch Coverage or Decision Coverage.
Branch coverage is a measure of the percentage of the decision points (Boolean expressions) of the program that have been evaluated as both true and false in test cases.
The decision coverage of T with respect to (P, R) is computed as |Dc| / ( |De| - |Di| ) ,
where Dc is the set of decisions covered,
Di is the set of unreachable decisions, and
De is the set of decisions in the program, that is the coverage domain.
T is considered adequate with respect to the decision coverage criterion if the decision coverage of T with respect to (P, R) is 1.
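A small sketch (our own example function) of why decision coverage is stronger than statement coverage: one test executes every statement, but the decision is only ever evaluated as true; a second test is needed for the false outcome.

```python
def classify(x):
    result = "non-negative"
    if x < 0:                  # decision point
        result = "negative"    # executed only when the decision is true
    return result

# classify(-1) alone executes every statement (statement coverage = 1),
# but the decision is only evaluated as true; adding classify(1) also
# exercises the false outcome, achieving decision coverage.
```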
4. Define condition coverage.
Condition coverage is a measure of the percentage of Boolean sub-expressions of the program that have been evaluated to both true and false outcomes [applies to compound predicates] in test cases.
The condition coverage of T with respect to (P, R) is computed as |Cc| / ( |Ce| - |Ci| ) ,
where Cc is the set of simple conditions covered,
Ci is the set of infeasible simple conditions, and
Ce is the set of simple conditions in the program, that is the coverage domain.
T is considered adequate with respect to the condition coverage criterion if the condition coverage of T with respect to (P, R) is 1.
5. Define Decision / Condition coverage.
Decision/Condition coverage is also known as branch condition coverage. Condition coverage ensures that each simple condition has taken both values true and false, but it does not require each decision to have taken both outcomes; decision/condition coverage requires both.
The Decision/Condition coverage of T with respect to (P, R) is computed as
( |Cc| + |Dc| ) / ( ( |Ce| - |Ci| ) + ( |De| - |Di| ) ) ,
where Cc is the set of simple conditions covered,
Dc is the set of decisions covered,
Ce, De are the sets of simple conditions and decisions respectively.
Ci, Di are the sets of infeasible simple conditions and decisions respectively
T is considered adequate with respect to the decision/condition coverage criterion if the decision/condition coverage of T with respect to (P, R) is 1.
6. Define multiple condition coverage.
Multiple condition coverage is also known as branch condition combination coverage.
Consider a compound condition that contains two or more simple conditions. Using
condition coverage on some compound condition C implies that each simple condition
within C has been evaluated to true and false. It does not imply that all combinations of
the values of the individual simple conditions in C have been evaluated.
The multiple condition coverage of T with respect to (P, R) is computed as
|Cc| / ( |Ce| - |Ci| )
where
Cc is the set of combinations covered,
Ce is the set of all combinations, with |Ce| = 2^k for a compound condition of k simple conditions, and
Ci is the set of infeasible combinations.
T is considered adequate with respect to the multiple condition coverage criterion if the
multiple condition coverage of T with respect to (P, R) is 1.
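A sketch with an invented two-condition predicate (k = 2, so 2^2 = 4 combinations must be exercised for multiple condition coverage):

```python
from itertools import product

# Compound condition with two simple conditions, a and b.
def accept(a: bool, b: bool) -> bool:
    return a and b

# Condition coverage only needs each of a and b to take both values;
# multiple condition coverage needs every one of the 2**2 combinations.
combos = list(product([True, False], repeat=2))
outcomes = [accept(a, b) for a, b in combos]  # [True, False, False, False]
```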
7. List the types of errors that are uncovered by black box testing.
o Incorrect or missing functions
o Interface errors
o External database access
o Performance errors
o Initialization and termination errors.
8. State data flow testing.
In data flow-based testing, the control flow graph is annotated with information about how the program variables are defined and used. Different criteria exercise, with varying degrees of precision, how a value assigned to a variable is used along different control flow paths. The reference notation is the definition-use pair, a triple (d, u, V) such that V is a variable, d is a node in which V is defined, u is a node in which V is used, and there exists a path between d and u along which the definition of V in d reaches the use in u.
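A minimal annotated sketch (an invented function) showing definition-use pairs for a variable x:

```python
def du_example(a):
    x = a * 2         # d1: definition of x
    if x > 10:        # u1: use of x      -> pair (d1, u1, x)
        return x + 1  # u2: use of x      -> pair (d1, u2, x)
    x = 0             # d2: redefinition of x (kills d1)
    return x          # u3: use of x      -> pair (d2, u3, x)
```

A data flow criterion such as all-uses would require test inputs that exercise both the (d1, u2, x) path and the (d2, u3, x) path.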
9. State Security Testing.
Security testing is carried out to find out how well the system can protect itself from unauthorized access, hacking/cracking, code damage, etc., all of which deal with the code of the application. This type of testing needs sophisticated testing techniques.
10. Mutation Testing:
A kind of testing in which small changes (mutants) are deliberately seeded into the program's code and the test suite is run against each modified version; a test suite that detects (kills) the mutants is considered effective. It also helps in finding out which code, and which coding strategy, can help in developing the functionality effectively.
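A hedged sketch of the idea (both functions are invented): a mutant is the original program with one small change, and a test input that makes the two disagree is said to kill the mutant.

```python
# Original function under test.
def is_adult(age):
    return age >= 18

# Mutant: relational operator changed (>= becomes >).
def is_adult_mutant(age):
    return age > 18

# The boundary input 18 kills the mutant: the original returns True,
# the mutant returns False, so the test suite detects the seeded fault.
killed = is_adult(18) != is_adult_mutant(18)  # True
```

A test suite that never probes the boundary (say, only ages 10 and 30) would leave this mutant alive, exposing a gap in the tests.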
11. What is boundary value analysis?
Boundary value analysis is a selection technique in which test data are chosen to lie along
the boundaries of the input domain (or output range) classes, data structures, procedure
parameters, etc. Choices include maximum, minimum, and values just inside and outside
each boundary, since errors tend to cluster at boundaries.
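Boundary selection can be sketched mechanically; the 1..100 range below is an assumed example, not taken from the text.

```python
# Hedged sketch of boundary value analysis for a field specified as an
# integer in [lo, hi].

def boundary_values(lo, hi):
    """Test inputs at and just beyond the boundaries of [lo, hi]."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

print(boundary_values(1, 100))  # [0, 1, 2, 99, 100, 101]
```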
12. What is robustness testing ?
Robustness is the degree to which a software component functions correctly in the
presence of exceptional inputs or stressful environmental conditions.
8. In Robustness testing, clues come from requirements. The goal is to test a program under
scenarios not stipulated in the requirements.
13. What is equivalence partitioning ?
In equivalence partitioning, a test case is designed to uncover a group or class of
errors, which limits the number of test cases that would otherwise need to be developed.
Here input domain is divided into classes or group of data. These classes are known as
equivalence classes and the process of making equivalence classes is called equivalence
partitioning. Equivalence classes represent a set of valid or invalid states for input
condition.
14. What are rules for creating equivalence classes ? or How is this partitioning performed
while testing ?
o If an input condition specifies a range, one valid and two invalid equivalence
classes are defined.
o If an input condition requires a specific value, one valid and two invalid
equivalence classes are defined.
o If an input condition specifies a member of a set, one valid and one invalid
equivalence class are defined.
o If an input condition is Boolean, one valid and one invalid equivalence class
are defined.
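The range rule can be sketched as picking one representative per class; the 1..100 range is an assumed example.

```python
# Sketch: for an input specified as an integer in [lo, hi], the range rule
# yields one valid and two invalid equivalence classes, each represented
# here by a single test value.

def representatives_for_range(lo, hi):
    return {
        "valid (lo <= x <= hi)": (lo + hi) // 2,
        "invalid (x < lo)": lo - 1,
        "invalid (x > hi)": hi + 1,
    }

print(representatives_for_range(1, 100))
```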
16. What is code coverage analysis ?
Code coverage analysis tools measure how much of the software was executed. Tools
are used to measure test coverage and it can measure by statement, block/path, or
function/procedure. Coverage analysis usually requires instrumenting the application (the
coverage tool is compiled or linked in with the application software) and produces a
summary file of the coverage results. There is usually a 10-15% performance hit for
block coverage.
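The mechanics behind a statement-coverage tool can be sketched with Python's tracing hook; `absolute` is a made-up function for illustration, not a product tool.

```python
# Minimal sketch of statement-coverage measurement using sys.settrace to
# record which lines of `absolute` actually execute.
import sys

executed = set()

def tracer(frame, event, arg):
    if event == "line" and frame.f_code.co_name == "absolute":
        executed.add(frame.f_lineno)
    return tracer

def absolute(x):
    if x < 0:
        x = -x
    return x

sys.settrace(tracer)
absolute(5)                 # the `x = -x` statement is not executed
sys.settrace(None)
covered_by_positive = set(executed)

sys.settrace(tracer)
absolute(-5)                # now every statement runs
sys.settrace(None)

# Exactly one extra line (`x = -x`) is covered only by the negative input:
print(len(executed - covered_by_positive))  # 1
```

Real tools (e.g., coverage.py) work on the same principle but persist per-line hit counts to a summary file.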
16. What is test automation ? What are the different steps in test automation ?
“A good manual testing regime is a firm foundation on which to build test automation.”
STEPS IN TEST AUTOMATION
• Test execution: Run large numbers of test cases/suites without human intervention.
• Test generation: Produce test cases by processing the specification, code, or model.
• Test management: Log test cases & results; map tests to requirements &
functionality; track test progress & completeness
17. What are the advantages of test automation?
• More testing can be accomplished in less time.
• Manual testing is repetitive, tedious, and error-prone; automation relieves this burden.
• Test cases are valuable - once they are created, they can and should be used again,
particularly during regression testing.
18. What are the disadvantages of test automation?
• Automated tests are more expensive to create and maintain (estimates of 3-30 times).
• Automated tests can lose relevancy, particularly when the system under test changes.
• Use of tools requires that testers learn how to use them, cope with their problems, and
understand what they can and can’t do.
19. What are the different levels of testing?
Unit testing
Integration testing
System testing
20. What is Unit testing ?
Unit Testing is the process of testing the individual subprograms, subroutines or
procedures in a program. That is, rather than testing the program as a whole, testing is
focused on the smaller building blocks of the program.
21. What is the need of unit testing ?
o Unit testing is a way of managing the combined elements of testing, since
attention is focused on the smaller units of the program.
o Unit testing eases the task of debugging: when an error is found, it is known to
exist in a particular module.
o It introduces parallelism into the program testing process by presenting the
opportunity to test multiple modules simultaneously.
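A unit test for one small building block can look like the following sketch, using the stdlib unittest framework (one common choice); the `is_leap` function is an illustrative assumption.

```python
# Minimal unit test: the unit under test is a single function, tested in
# isolation from the rest of the program.
import unittest

def is_leap(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class TestIsLeap(unittest.TestCase):
    def test_typical_leap_year(self):
        self.assertTrue(is_leap(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(is_leap(1900))

    def test_quadricentennial_is_leap(self):
        self.assertTrue(is_leap(2000))

if __name__ == "__main__":
    unittest.main(argv=["is_leap_tests"], exit=False)
```

When a test here fails, the defect is known to be in this one unit, which is exactly the debugging benefit described above.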
22. How do we proceed to determine the test cases?
a. Design an algorithm for the GCD function.
b. Analyze the algorithm using basis path analysis.
c. Determine appropriate equivalence classes for the input data.
d. Determine the boundaries of the equivalence classes.
e. Then choose test cases that include the basis path set, data from each
equivalence class, and data at and near the boundaries.
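The steps above can be worked through for the GCD example; the implementation and the concrete test data below are assumptions made for illustration.

```python
# Worked sketch: a candidate GCD implementation plus test inputs drawn
# from basis paths, equivalence classes, and boundary values.

def gcd(a, b):
    while b != 0:
        a, b = b, a % b
    return a

# basis paths: loop body skipped (b == 0) vs. executed at least once
assert gcd(5, 0) == 5
assert gcd(12, 8) == 4
# equivalence classes: coprime pair vs. pair with a common factor
assert gcd(9, 4) == 1
assert gcd(18, 12) == 6
# boundary-flavored data: equal values, and the value 1
assert gcd(7, 7) == 7
assert gcd(1, 99) == 1
print("all GCD test cases pass")
```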
23. What are the benefits of unit testing ?
Developers can work in a predictable way of developing code
Programmers write their own unit tests
Get rapid response for testing small changes
Build many highly-cohesive loosely-coupled modules to make unit testing easier
24. What is integration testing?
• Integration Testing is testing interfaces between components
• It is the first step after Unit Testing
• Components may work alone but fail when put together
• Defect may exist in one module but manifest in another
25. State top down integration testing.
The top-down method begins by testing the main program unit (the root of the tree) with
one lower-level node. Any other lower-level units that it calls are created as stubs. The
only module tested in isolation is the module at the highest level. After a module is
tested, the modules directly called by that module are merged with the already tested
module and the combination is tested.
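The stub idea can be sketched as follows; the invoice/tax names are illustrative, not from the text.

```python
# Top-down integration sketch: the top-level unit is tested first, and a
# not-yet-integrated callee is replaced by a stub returning a canned value.

def tax_stub(amount):
    """Stub standing in for the real, lower-level tax module."""
    return 10.0                       # canned answer

def compute_invoice(amount, tax_fn=tax_stub):
    """Top-level unit under test; it calls a lower-level module."""
    return amount + tax_fn(amount)

print(compute_invoice(100.0))  # 110.0, before the real tax module exists
```

When the real tax module is ready, it replaces `tax_stub` and the combination is retested.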
26. What is the advantage of using top down integration testing?
The major advantage of this top-down method is the fact that a system prototype can be
developed early on in the project process.
This is a very attractive property of this integration technique because the client of a
software company usually has little or no software engineering knowledge; if the client
cannot see something tangible, it is easy for the client to assume that little or no work is
being completed.
27. What is the disadvantage of using top down integration testing ?
One disadvantage of this method is that the programmers have to produce a large number
of stubs; this means extra work on top of producing the actual system, since these units
will later be thrown away.
28. What is bottom up integration testing?
The bottom-up method begins by testing one of the leaves of the program together with
its parent node. Any higher-level nodes that are directly connected to the nodes being
tested are constructed as drivers.
• Only terminal modules (i.e., the modules that do not call other modules) are tested in
isolation
• Modules at lower levels are tested using the previously tested higher level modules
• Non-terminal modules are not tested in isolation
• Requires a module driver for each module to feed the test case input to the interface
of the module being tested
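The driver idea can be sketched as follows; the `discount` leaf module and its test data are illustrative assumptions.

```python
# Bottom-up integration sketch: a leaf module is tested first via a small
# driver that feeds test inputs to its interface.

def discount(amount):
    """Leaf module under test (calls no other modules)."""
    return amount * 0.25 if amount > 100 else 0.0

def driver():
    """Module driver standing in for the not-yet-tested caller."""
    cases = [(50, 0.0), (200, 50.0)]
    return all(discount(a) == expected for a, expected in cases)

print(driver())  # True
```

Once the leaf passes, its real caller is integrated on top of it and the driver is discarded.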
29. What is the advantage of bottom up integration testing ?
The major advantage of this method of integration testing is that the program
itself is fully functional at every stage.
This is in contrast to the top-down method, where a prototype can be
created at an early stage but will have little or no functionality.
30. What is the disadvantage of bottom up integration testing ?
Inability to create an early prototype for the client. This could easily lead to problems
down the line: the client may question the progress of the project, or when the prototype
is finally produced the client might decide on changes that are difficult to implement at
that late stage of the project.
31. What is system testing ?
System test is concerned with testing the entire system which may be composed of many
very large “sub-systems,” which may themselves be large enough to be considered a
system.
32. What are the different types of system testing ?
Functionality test
Performance test
Acceptance test
Installation test
33. What is functionality test ?
This test looks at the interactions among the sub-systems, delivering joint
functionality that is more than just individual, simple functions.
34. List out the various performance tests.
Stress and Volume test.
Configuration, environment test
Timing test
Security Test
Recovery Test
Documentation test
Human factors and usability test
Quality test
35. What is Acceptance test ?
This is the testing of the system mainly by users and customers (sometimes with the help
of testers and developers) to ensure that the system is what the users need which may not
be exactly what the users said in the requirements document.
36. How is the installation test performed ?
a. All the different versions of operating systems are tested
b. All the different versions of databases are tested
c. The network and different interfaces are tested
d. The default data and initialization for different combination of applications
are tested (e.g. HR with Financial)
UNIT – III
1. What is Unit Testing ?
Unit testing is a method by which individual units of source code are tested to determine if
they are fit for use. A unit is the smallest testable part of an application. In procedural
programming a unit may be an individual function or procedure. Unit tests are created by
programmers or occasionally by white box testers.
2. Write the difference between class testing and conventional testing ?
Conventional testing focuses on input-process-output, whereas class testing focuses on each
method, then designing sequences of methods to exercise states of a class
3. What is integration testing?
Integration testing (sometimes called Integration and Testing, abbreviated "I&T") is the
phase in software testing in which individual software modules are combined and tested as a
group. It occurs after unit testing and before system testing. Integration testing takes as its input
modules that have been unit tested, groups them in larger aggregates, applies tests defined in an
integration test plan to those aggregates, and delivers as its output the integrated system ready for
system testing.
4. In what two ways is integration performed?
Integration is performed in two ways, called Pre-test and Pro-test.
1. Pre-test: the testing performed in the module development area is called Pre-test. Pre-test is
required only if the development is done in the module development area.
2. Pro-test: the integration testing performed in the baseline is called Pro-test. The development
of a release will be scheduled so that the customer can break it down into smaller internal releases.
5. What are the different types of integration testing?
1. Top-Down Integration
2. Bottom-Up Integration
3. Bidirectional (Sandwich Testing)
4. System Integration (Big Bang Testing)
6. Define Top-down and bottom-up integration testing?
Bottom Up Testing is an approach to integrated testing where the lowest level components are
tested first, then used to facilitate the testing of higher level components. The process is repeated
until the component at the top of the hierarchy is tested.
All the bottom or low-level modules, procedures or functions are integrated and then tested.
After the integration testing of lower level integrated modules, the next level of modules will be
formed and can be used for integration testing. This approach is helpful only when all or most of
the modules of the same development level are ready. This method also helps to determine the
levels of software developed and makes it easier to report testing progress in the form of a
percentage.
Top Down Testing is an approach to integrated testing where the top integrated modules are
tested and the branch of the module is tested step by step until the end of the related module.
7. What is sandwich testing?
Sandwich Testing is an approach to combine top down testing with bottom up testing.
The main advantage of the Bottom-Up approach is that bugs are more easily found. With Top-
Down, it is easier to find a missing branch link.
8. What is big-bang testing?
In this approach, all or most of the developed modules are coupled together to form a
complete software system or major part of the system and then used for integration testing. The
Big Bang method is very effective for saving time in the integration testing process.
A type of Big Bang Integration testing is called Usage Model testing. Usage Model
testing can be used in both software and hardware integration testing. The basis behind this type
of integration testing is to run user-like workloads in integrated user-like environments. In doing
the testing in this manner, the environment is proofed, while the individual components are
proofed indirectly through their use.
9. What are the different factors of software component dependability ?
Component understandability
Component observability
Component traceability
Component controllability
Component testing support capability
10. What are the different maturity levels of testability?
Evaluating the maturity levels of a test process concerning testability:
Level #1- Initial – At this level, component developers and testers use an ad hoc approach
to enhance component testability in a component development process.
Level #2- Standardized – At this level, component testability requirements, design
methods, implementation mechanisms, and verification criteria are defined as standards.
Level #3- Systematic – At this level, a well-defined component development and test
process and systematic solutions are used to increase component testability at all engineering
phases.
Level #4- Measurable – At this level, component testability can be evaluated and measured
using systematic solutions and tools in all component development phases.
11. What is the testable bean or testable component?
A testable bean is a testable software component that is not only deployable and executable, but
is also testable with the support of standardized components test facilities.
12. What is the need of testable component ?
The major goal of introducing testable components is to find a new way to develop software
components that are easy to observe, trace, test, deploy, and execute.
13. What is observability?
Observability is the ability to view the reactions of software components to the inputs that are fed
in and also being able to watch the changes to the internal states of the software.
14. What are the advantages and disadvantages of big-bang?
Advantages:
This approach is simple.
Disadvantages:
It is hard to debug.
It is not easy to isolate errors while testing.
In this approach it is not easy to validate test results.
Testing cannot begin until all the modules are developed and integrated.
15. What are the benefits of smoke testing?
Integration risk is minimized.
The quality of the end-product is improved.
Error diagnosis and correction are simplified.
Progress of the program is easier to assess.
16. What conditions exist after performing validation testing?
After performing validation testing, one of two conditions exists.
The function or performance characteristics are according to the specifications and are
accepted.
The requirement specifications are derived and the deficiency list is created. The
deficiencies then can be resolved by establishing the proper communication with the
customer.
17. Distinguish between alpha and beta testing.
Alpha and beta testing are the types of acceptance testing.
Alpha test: Alpha testing is testing in which a version of the complete software is tested by
the customer under the supervision of the developer. This testing is performed at the developer’s site.
Beta test: Beta testing is testing in which a version of the software is tested by the
customer without the developer being present. This testing is performed at the customer’s site.
18. What are the various types of system testing?
1. Recovery testing – is intended to check the system’s ability to recover
from failures.
2. Security testing – verifies that system protection mechanisms prevent
improper penetration or data alteration.
3. Stress testing – determines the breaking point of a system to establish
the maximum service level.
4. Performance testing – evaluates the run time performance of the
software, especially real-time software.
UNIT- IV
1. What are the various testing challenges?
The various challenges are,
(i) Test development
(ii) Availability
(iii) Bug hazards of OO programming
(iv) Test design for classes is difficult
(v) Class invariants
(vi) Class associations
(vii) Interface for data flow models
(viii) Choosing an integration strategy
(ix) Achieving comprehensive system testing
(x) Choosing a regression test approach
(xi) Automation of testing
2. What is operability in software testing?
Operability in Software Testing:
1. The better the software works, the more efficiently it can be tested.
2. The system has few bugs (bugs add analysis and reporting overhead to the test
process)
3. No bugs block the execution of tests.
4. The product evolves in functional stages (allows simultaneous development &
testing)
3. What is observability in software testing?
Observability in Software Testing:
1. What is seen is what is tested
2. Distinct output is generated for each input
3. System states and variables are visible or queryable during execution
4. Past system states and variables are visible or queryable (e.g., transaction logs)
5. All factors affecting the output are visible
6. Incorrect output is easily identified
7. Incorrect input is easily identified
8. Internal errors are automatically detected through self-testing mechanisms
9. Internal errors are automatically reported
10. Source code is accessible
4. What is Controllability in Software testing?
Controllability in Software Testing:
1. The better the software is controlled, the more the testing can be automated and
optimized.
2. All possible outputs can be generated through some combination of input in
Software Testing
3. All code is executable through some combination of input in Software Testing
4. Software and hardware states can be controlled directly by testing
5. Input and output formats are consistent and structured in Software Testing
6. Tests can be conveniently specified, automated, and reproduced.
5. What is decomposability in software testing?
Decomposability in Software Testing:
1. By controlling the scope of testing, problems can be isolated quickly, and smarter
testing can be performed.
2. The software system is built from independent modules
3. Software modules can be tested independently in Software Testing
6. What is simplicity in software testing?
Simplicity of software testing
1. The less there is to test, the more quickly it can be tested in Software Testing
2. Functional simplicity
3. Structural simplicity
4. Code simplicity
7. What is stability in software testing?
Stability in software testing:
The fewer the changes, the fewer the disruptions of testing
Changes of the software are infrequent
Changes in the software are controlled in software testing
Changes to the software do not invalidate existing tests in software testing
The software recovers from failures in software testing
8. What is understandability in software testing?
Understandability in Software testing :
1. The more information we have, the smarter we will test
2. The design is well understood in software testing
3. Dependencies between the internal, external, and shared components are well
understood.
4. Changes to the design are communicated.
5. Technical documentation is accurate
9. What is built-in testing?
Built-in testing builds test support into the software itself, which eases the testing
process and makes the software readily testable.
10. What are built-in tests at the class level ?
(i) Highly testable
(ii) Test reusable
(iii) Maintenance and self testable
11. What is built-in testing at the system level ?
(i) Subsystems are tested first.
(ii) There are many sub-methods in each class.
(iii) Each method in a class is called as a traditional method.
(iv) Class and system levels can be activated by calling the test cases at the
corresponding levels as member functions.
(v) Object name:: Test case N
(vi) Each test case N consists of a test driver N and some test cases for specific
object.
(vii) Results are automatically reported by the built in test driver.
12. What are the categories of Regression testing ?
Categories
(i)Targeted tests: exercises important affected requirement attributes
(ii)Safety tests: coverage oriented and risk oriented
13. What are the benefits of regression testing?
(i) Most effective
(ii) Can be implemented as tools
(iii) Sensitive and risky
(iv) Can be applied to state diagrams
(v) Used to develop graph diagrams
(vi) Powerful tool
(vii) Used for functional tests
(viii) It is not subjective
(ix) It is very systematic in functionality
(x) Requires only straightforward calculations
UNIT - V
1. What are the different types of testing in client/server testing?
Application function tests
Server tests
Database tests
Transaction tests
2. What are application function tests?
The functionality of the client application is tested
in a standalone fashion.
3. What are server tests?
The data management functions of the server are tested.
Overall response time and data throughput are considered.
4. What are database tests?
The accuracy and integrity of data stored by the server are tested.
Archiving is also tested.
5. What are transaction tests?
Ensure that each class of transaction is processed according to requirements
Verify the communication among the nodes
Verify message-passing transactions and the network traffic that occurs
6. Is it necessary to have client/server tests? Why?
Yes.
Different types of users interoperate with client/server systems,
so a “pattern of usage” is provided to design tests and execute them.
7. How many levels of testing occur in C/S testing? What are they?
Testing occurs in 3 different levels namely,
o Individual client application testing
o Testing client software with associated server application
o Testing complete client server architecture including network
operation and performance
8. What is Client-Server Testing?
Client-server software requires specific forms of testing to prevent or predict catastrophic
errors. Servers go down, records lock, and I/O (input/output) errors and lost messages can
really cut into the benefits of adopting this network technology. Testing addresses system
performance and scalability by understanding how systems respond to increased
workloads and what causes them to fail.
9. What is GUI testing?
GUI software testing is the process of testing a product that uses a graphical user
interface, to ensure it meets its written specifications.
10. What are the three forms of capture/playback ?
The three forms of capture/playback are:
o Native or software intrusive
o Native or hardware intrusive
o Non-intrusive
11. What is TestComplete?
TestComplete is an automated testing tool that lets you create, manage and run tests for
any Windows, Web or Rich Client software. It makes it easy for anyone to create automated
tests. And automated tests run faster, increase test coverage and lower costs.
12. What is functional testing?
Functional testing involves making sure the features that most affect user interactions work
properly. These include:
forms
searches
pop-up windows
shopping carts
online payments
13. What is Usability testing ?
Usability testing assesses the website’s user-friendliness and suitability by gathering information
about how users interact with the site. The key to usability testing is to study what a user actually
does.
14. What are the steps involved in Usability testing?
The main steps to usability testing are:
• Identify the website’s purpose
• Identify the intended users
• Define tests and conduct the usability testing
• Analyze the acquired information
15. Why do we need navigation testing?
Good navigation is an essential part of a website, especially for sites that are complex and
provide a lot of information.
16. What are the key issues of navigation testing?
Key issues with navigation testing include:
• Moving to and from pages
• Scrolling through pages
• Clicking on all images and their thumbnails to ensure they work
• Testing all links for validity and correctness
• Ensuring no broken links exist
• Viewing tables and forms to verify proper layout, which can vary with different
browsers
• Verifying that windows with multiple frames are processed as if each were a
single-page frame
• Measuring load time of every web page
• Ensuring compatibility and consistent usage of buttons, keyboard shortcuts, and
mouse actions.
17. What is Form testing?
Websites that use forms need tests to ensure that each field works properly and that the
form posts all data as intended by the designers.
18. What does form testing include?
Testing of forms includes:
• Using the tab key to verify that the form traverses fields in the proper order, both
forwards and backwards
• Testing boundary values
• Checking that forms trap invalid data correctly
• Verifying that the form updates information correctly
19. What is Page content testing?
Each web page must be tested for correct content from the user perspective.
20. What are the different categories of page content testing?
There are 2 categories:
1) ensuring that each component functions correctly;
2) ensuring that the content of each component is correct
21. What does page content testing include?
• All images and graphics display correctly across various browsers
• All content is present per requirements
• Page instructions are consistent across browsers
• Critical pages maintain the same content from version to version
• All parts of a table or form are present and in the right place
• Links to relevant content inside and outside of the site are correct