SOME COMMONLY ASKED QUESTIONS FOR SOFTWARE TESTING

Black-Box Testing
In this strategy, the tester views the program as a black box and does not see the code of the program. Techniques: equivalence partitioning, boundary-value analysis, error guessing.

White-Box Testing
In this strategy, the tester examines the internal structure of the program. Techniques: statement coverage, decision coverage, condition coverage, decision/condition coverage, multiple-condition coverage.

Grey-Box Testing
In this strategy, black-box testing is combined with knowledge of the internals, such as using SQL for database queries, adding/loading data sets to exercise functions, and querying the database to confirm the expected results.

Test Script
A type of test file: a set of instructions run automatically by a software or hardware test tool.

Test Suite
A collection of test cases or test scripts.

Stress testing is subjecting a system to an unreasonable load while denying it the resources (e.g., RAM, disk, MIPS, interrupts) needed to process that load. The idea is to stress a system to the breaking point in order to find bugs that will make that break potentially harmful. The system is not expected to process the overload without adequate resources, but to behave (e.g., fail) in a decent manner (e.g., without corrupting or losing data). Bugs and failure modes discovered under stress testing may or may not be repaired, depending on the application, the failure mode, the consequences, etc. The load (incoming transaction stream) in stress testing is often deliberately distorted so as to force the system into resource depletion.

Load testing is subjecting a system to a (usually) statistically representative load. The two main reasons for using such loads are software reliability testing and performance testing. The term "load testing" by itself is too vague and imprecise to warrant use: do you mean a representative load, etc.? In performance testing, load is varied from a minimum (zero) to the maximum level the system can sustain without running out of resources or having transactions suffer (application-specific) excessive delay. A third use of the term is as a test whose objective is to determine the maximum sustainable load the system can handle; in this usage, "load testing" is merely testing at the highest transaction arrival rate in performance testing.

Smoke Test: When a build is received, a smoke test is run to ascertain whether the build is stable and can be considered for further testing. Smoke testing can be done to check the stability of any interim build, and can also be executed for platform qualification tests.

Sanity Testing: Once a new build is obtained with minor revisions, instead of doing a thorough regression, a sanity test is performed to ascertain that the build has indeed rectified the issues and that no further issues have been introduced by the fixes. It is generally a subset of regression testing: a group of test cases related to the changes made to the application is executed. When multiple cycles of testing are executed, sanity testing may be done during the later cycles, after thorough regression cycles.

CMM (Capability Maturity Model)
An industry-standard model for defining and measuring the "maturity" of a software company's development process, and for providing direction on what it can do to improve its software quality. It was developed by the software development community along with the Software Engineering Institute (SEI).

CMM Software Maturity Levels:
Level 1: Initial: The software development processes at this level are ad hoc and often chaotic. The project's success depends on heroes and luck. There are no general practices for planning, monitoring, or controlling the process. It is impossible to predict the time and cost to develop the software. The test process is just as ad hoc as the rest of the process.
Level 2: Repeatable: This maturity level is best described as project-level thinking. Basic project management processes are in place to track the cost, schedule, functionality, and quality of the product. Lessons learned from previous similar projects are applied. There is a sense of discipline. Basic software testing practices, such as test plans and test cases, are used.
Level 3: Defined: Organizational, not just project-specific, thinking comes into play at this level. Common management and engineering activities are standardized and documented. These standards are adapted and approved for use on different projects. The rules are not thrown out when things get stressful. Test documents and plans are reviewed and approved before testing begins. The test group is independent from the developers. The test results are used to determine when the software is ready.
Level 4: Managed: At this maturity level, the organization's process is under statistical control. Product quality is specified quantitatively beforehand (for example, "this product won't release until it has fewer than 0.5 defects per 1,000 lines of code") and the software isn't released until that goal is met.
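The Level 4 release gate just described can be expressed as a simple quantitative check. A minimal sketch in Python (the threshold and defect counts are the illustrative figures from the text, not a real project's data):

```python
# Quantitative release gate, in the spirit of CMM Level 4:
# release only when defect density falls below a pre-specified goal.

def defect_density(defects: int, lines_of_code: int) -> float:
    """Defects per 1,000 lines of code (KLOC)."""
    return defects / (lines_of_code / 1000)

def may_release(defects: int, lines_of_code: int, threshold: float = 0.5) -> bool:
    """True when density is below the quantitative quality goal."""
    return defect_density(defects, lines_of_code) < threshold

# 30 open defects in 50,000 lines -> 0.6 defects/KLOC -> hold the release.
print(may_release(30, 50_000))  # False
# 20 open defects in 50,000 lines -> 0.4 defects/KLOC -> goal met.
print(may_release(20, 50_000))  # True
```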
Details of the development process and the software quality are collected over the project's development, and adjustments are made to correct deviations and to keep the project on plan.
Level 5: Optimizing: This level is called "optimizing" because it is continually improving from Level 4. New technologies and processes are attempted, the results are measured, and both incremental and revolutionary changes are instituted to achieve even better quality levels. Just when everyone thinks the best has been obtained, the crank is turned one more time, and the next level of improvement is obtained.

What are SDLC and STLC?
Software Development Life Cycle (SDLC):
1. Requirements phase
2. Design phase (HLD, DLD (program spec))
3. Coding
4. Testing
5. Release
6. Maintenance
Software Test Life Cycle (STLC):
1. System study
2. Test planning
3. Writing test cases or scripts
4. Reviewing the test cases
5. Executing the test cases
6. Bug tracking
7. Reporting the defects

Why is regression testing necessary, and at what stage?
Regression testing is a type of testing in which already-tested functionality is tested once again. It is necessary to make sure the developed product remains defect free after changes. When build #1 is released to the testing department, a test engineer tests the defect's functionality as well as the related functionality. After the developer rectifies the issue, the next build is released to the testing department with the new changes as build #2; the test engineer then re-tests the already-tested functionality affected by the new changes, using a selected subset of test cases rather than the whole suite.

Test methodology means the different methods carried out in order to test an application, such as the black-box, white-box, and grey-box techniques described above.

What is pair-wise testing?
Pair-wise (a.k.a. all-pairs) testing is an efficient test case generation technique based on the observation that most faults are caused by interactions of at most two factors. Pair-wise-generated test suites cover all combinations of two factor values and are therefore much smaller than exhaustive suites, yet still very effective at finding defects.
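The pair-wise idea described above can be made concrete with a small sketch. The factor names below are illustrative, and the four-test suite is hand-picked for this simple case; real all-pairs tools compute such suites automatically for larger models:

```python
from itertools import combinations, product

# Three configuration factors, two values each: exhaustive testing needs
# 2 * 2 * 2 = 8 cases, but 4 well-chosen cases cover every PAIR of values.
factors = {"os":      ["linux", "windows"],
           "browser": ["firefox", "chrome"],
           "db":      ["mysql", "oracle"]}

# A hand-picked pair-wise suite for these three two-value factors.
suite = [("linux",   "firefox", "mysql"),
         ("linux",   "chrome",  "oracle"),
         ("windows", "firefox", "oracle"),
         ("windows", "chrome",  "mysql")]

def covers_all_pairs(suite, values):
    """Check that every value pair of every two factors appears in some test."""
    for i, j in combinations(range(len(values)), 2):
        needed = set(product(values[i], values[j]))
        seen = {(test[i], test[j]) for test in suite}
        if needed - seen:
            return False
    return True

values = list(factors.values())
print(len(list(product(*values))))                   # 8 exhaustive cases
print(len(suite), covers_all_pairs(suite, values))   # 4 True
```

Four tests instead of eight here; the savings grow dramatically as the number of factors and values increases.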
What are the criteria/inputs we take for system testing?
Generally, system testing is done after completion of unit testing and integration testing. It is done by a test engineer in a user-specified environment in order to check the performance and functionality of the application. The Business Design Document (BDD) and Use Case Document (UCD) are the obvious inputs to this type of testing.

What are the key skills a tester should possess?
A tester should have the following key skills:
1. A quality-oriented mindset
2. An active, perceptive mind
3. A constant focus on finding bugs and maintaining quality and standards

What are the key challenges of testing?
The following are some challenges in testing software:
1. Requirements are not frozen.
2. The application is not testable.
3. Ego problems.
4. Defects in the defect tracking system.
5. Miscommunication or no communication.
6. Bugs in the software development tools.
7. Time pressure.

What is the role of a bug tracking system?
A bug tracking system is used to report all errors in one central place for easy access and retrieval by both developers and testers. Since all the bugs are filed centrally, it becomes easy to update their status. The tracking can be done across multiple projects, and these details can be used by a QA manager for metrics.

What issues come up in test automation, and how do you manage them?
The main issue is frequent change requests. If there are frequent changes in the system, as automation engineers we need to keep track of the changing objects and functionalities and update the scripts accordingly.

What's the difference between load and stress testing?
The two terms are often used interchangeably, with the consequence that the system is neither properly load tested nor subjected to a meaningful stress test. As described above: stress testing subjects a system to an unreasonable load while denying it the resources needed to process that load, in order to find bugs that could make the resulting breakage harmful; the system is expected to fail in a decent manner (e.g., without corrupting or losing data). Load testing subjects a system to a (usually) statistically representative load, in support of software reliability testing and performance testing.

Difference between system testing and functional testing?
Functional testing is a technique used within system testing. It is a black-box testing type: it checks whether the system functions properly as per the requirements. System testing is the phase after the integration testing phase; after all modules are integrated and tested for integration, the system is tested as a whole.

What is the difference between a test case and a use case?
A use case is a technique for capturing the functional requirements of systems and systems-of-systems. Each use case provides one or more scenarios that convey how the system should interact with its users (called actors) to achieve a specific business goal or function.
A test case is a set of conditions or variables under which a tester will determine whether a requirement or use case of an application is partially or fully satisfied.

What is the exact difference between CMM and CMMI?
The Capability Maturity Model (CMM) was the first capability maturity model: a way to develop and refine an organization's processes, created originally for software development processes. There were initially 18 Key Process Areas (KPAs) in the CMM model. The Capability Maturity Model for Software has since been retired, and CMMI replaces it: the SEI no longer maintains the SW-CMM model, its associated appraisal methods, or training materials, nor does it offer SW-CMM training. Capability Maturity Model Integration (CMMI) is a process improvement approach that provides organizations with the essential elements of effective processes. It can be used to guide process improvement across a project, a division, or an entire organization.
The capability or maturity level of the company depends on the type of representation being followed.

Comparison of capability and maturity levels:
Continuous representation (capability levels):
Level 0 - Incomplete
Level 1 - Performed
Level 2 - Managed
Level 3 - Defined
Level 4 - Quantitatively Managed
Level 5 - Optimizing
Staged representation (maturity levels):
Level 0 - N/A
Level 1 - Initial
Level 2 - Managed
Level 3 - Defined
Level 4 - Quantitatively Managed
Level 5 - Optimizing
As per the new version, CMMI v1.2, released by the SEI, there are 22 process areas altogether across the different levels.

What is the difference between a smoke test and a sanity test?
1. Smoke testing originated in the hardware practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch fire and smoke. In the software industry, smoke testing is a shallow and wide approach: all areas of the application are tested without going too deep. A sanity test is a narrow regression test that focuses on one or a few areas of functionality; sanity testing is usually narrow and deep.
2. A smoke test is scripted, either using a written set of tests or an automated test. A sanity test is usually unscripted.
3. A smoke test is designed to touch every part of the application in a cursory way; it is shallow and wide. A sanity test is used to determine that a small section of the application is still working after a minor change.
4. Smoke testing is conducted to ensure that the most crucial functions of a program work, without bothering with finer details (such as build verification). Sanity testing is cursory testing, performed whenever cursory testing is sufficient to prove that the application is functioning according to the specifications; this level of testing is a subset of regression testing.
5. Smoke testing is a normal health check-up on a build of an application before taking it into in-depth testing. Sanity testing verifies whether the requirements are met, checking all features breadth-first.

What is the difference between validation and verification?
Validation: Am I building the right product? Verification: Am I building the product right?
Validation determines whether the system complies with the requirements, performs the functions for which it is intended, and meets the organization's goals and user needs; it is traditional and is performed at the end of the project. Verification is the review of interim work steps and interim deliverables during a project to ensure they are acceptable: determining whether the system is consistent, adheres to standards, uses reliable techniques and prudent practices, and performs the selected functions in the correct manner.
Validation asks: Am I accessing the right data (in terms of the data required to satisfy the requirement)? Verification asks: Am I accessing the data right (in the right place, in the right way)?
Validation is a high-level activity; verification is a low-level activity.
Validation is performed after a work product is produced, against established criteria, ensuring that the product integrates correctly into the environment. Verification is performed during development on key artifacts, through walkthroughs, reviews and inspections, mentor feedback, training, checklists, and standards.
Validation determines the correctness of the final software product with respect to the user needs and requirements. Verification demonstrates the consistency, completeness, and correctness of the software at each stage, and between each stage, of the development life cycle.

What is the difference between QA and QC?
Quality Assurance (QA) is a planned and systematic set of activities necessary to provide adequate confidence that requirements are properly established and that products or services conform to the specified requirements. Quality Control (QC) is the process by which product quality is compared with applicable standards, and action is taken when nonconformance is detected.
QA is an activity that establishes and evaluates the processes used to produce the products; QC is an activity that verifies whether the product meets predefined standards.
QA helps establish processes; QC implements the process.
QA sets up measurement programs to evaluate processes; QC verifies whether specific attributes are present in a specific product or service.
QA identifies weaknesses in processes and improves them; QC identifies defects for the primary purpose of correcting them.
QA is the responsibility of the entire team; QC is the responsibility of the tester.
QA prevents the introduction of issues or defects; QC detects, reports, and corrects defects.
QA evaluates whether quality control is working, for the primary purpose of determining whether there is a weakness in the process; QC evaluates whether the application is working, for the primary purpose of determining whether there is a flaw or defect in the functionality.
QA improves the process, which applies to every product that will ever be produced by that process; QC improves the development of a specific product or service.
QA personnel should not perform quality control, except to validate that quality control is working; QC personnel may perform quality assurance tasks if and when required.

What are testing techniques? What are the different types of testing techniques (also asked as: which testing techniques are used while writing test cases)?
Testing techniques refer to the different ways of testing particular features of a computer program, system, or product. Broadly, there are three types of testing techniques:

1. Black-box testing technique:
Black-box testing treats the system as a "black box", so it does not explicitly use knowledge of the internal structure or code. In other words, the test engineer need not know the internal workings of the "black box", i.e. the application.
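For instance, equivalence partitioning and boundary-value analysis can be applied to an input field purely from its specification, without seeing the code. A minimal sketch (the 18-60 age range and the accepts_age function are illustrative stand-ins for the system under test):

```python
# Black-box test design for a hypothetical "age" field that must accept 18-60:
# equivalence partitioning picks one representative value per class, and
# boundary-value analysis adds the values at and just around each boundary.

LOW, HIGH = 18, 60  # illustrative valid range from the (imagined) spec

def accepts_age(age: int) -> bool:
    """Stand-in for the system under test; only its behavior is observed."""
    return LOW <= age <= HIGH

# One representative per equivalence class: below range, inside, above.
partition_cases = {10: False, 35: True, 70: False}

# Boundary values: just outside, on, and just inside each edge.
boundary_cases = {17: False, 18: True, 19: True, 59: True, 60: True, 61: False}

for age, expected in {**partition_cases, **boundary_cases}.items():
    assert accepts_age(age) == expected, f"age {age} misjudged"
print("all black-box cases pass")
```

Nine targeted values replace testing every possible age, yet they exercise each class and each edge where off-by-one defects typically hide.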
The main focus in black-box testing is on the functionality of the system as a whole.
Different types of black-box testing techniques:
* Error guessing
* Boundary-value analysis
* Equivalence partitioning
* Comparison testing

2. Grey-box testing technique:
Grey-box testing is a combination of black-box and white-box testing. The intention of this testing is to find defects related to bad design or bad implementation of the system. In grey-box testing, the test engineer is equipped with knowledge of the system and designs test cases or test data based on that knowledge.

3. White-box testing technique:
White-box testing involves looking at the structure of the code. When you know the internal structure of a product, tests can be conducted to ensure that the internal operations are performed according to the specification and that all internal components have been adequately exercised.
Different types of white-box testing techniques:
* Basis path testing
* Flow graph notation
* Cyclomatic complexity
* Graph matrices
* Control structure testing
* Loop testing
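As a small illustration of the white-box techniques above, consider a function whose visible branches drive the test selection. A minimal sketch (the classify function is a made-up example): its cyclomatic complexity tells us how many independent paths a basis-path test set must cover.

```python
# White-box test design: with the code visible, count the decisions and
# pick inputs so that every branch outcome is exercised.

def classify(score: int) -> str:
    # Two decisions -> cyclomatic complexity V(G) = 2 + 1 = 3,
    # so a basis-path test set needs three independent paths.
    if score < 0:
        return "invalid"
    if score >= 50:
        return "pass"
    return "fail"

# One test per basis path gives full statement and decision coverage.
basis_path_cases = {-1: "invalid", 75: "pass", 10: "fail"}

for score, expected in basis_path_cases.items():
    assert classify(score) == expected, f"path for {score} broken"
print("all basis paths covered")
```

A black-box tester could not know that three cases suffice here; it is visibility of the two decisions that makes the minimal path set apparent.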