The document is an internship report that describes work done on quality assurance of virtual labs. It discusses manual testing including developing a test plan, test cases, and reports. It also covers automated testing using Python scripts to check links and spelling on pages. The report provides details on testing objectives, requirements, tools used, and the structure of test cases, reports, and defect management. It aims to help deliver high quality, open-source virtual labs.
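The report's automated checks (broken links and spelling on lab pages) could be sketched as below. This is a minimal illustration using only the Python standard library; the sample HTML, the word list, and the function names are assumptions for the sketch, not the report's actual scripts.

```python
from html.parser import HTMLParser
import re

class PageChecker(HTMLParser):
    """Collects links and visible text from an HTML page."""
    def __init__(self):
        super().__init__()
        self.links = []
        self.text_parts = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

    def handle_data(self, data):
        self.text_parts.append(data)

def check_page(html, dictionary):
    """Return (links, misspelled_words) for one page."""
    checker = PageChecker()
    checker.feed(html)
    words = re.findall(r"[a-z]+", " ".join(checker.text_parts).lower())
    misspelled = sorted({w for w in words if w not in dictionary})
    return checker.links, misspelled

# Illustrative page and word list (assumptions for the sketch).
sample = '<p>Welcom to the lab.</p><a href="http://example.org/exp1">Experiment 1</a>'
dictionary = {"to", "the", "lab", "experiment", "welcome"}
links, typos = check_page(sample, dictionary)
print(links)   # hrefs found on the page
print(typos)   # words not in the dictionary
```

A real script would additionally fetch each collected href and flag non-200 responses; that network step is omitted here to keep the sketch self-contained.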
Software testing: an introduction - 2017 (XavierDevroey)
Software testing involves dynamically verifying that a program behaves as expected on a finite set of test cases. This is done because exhaustively testing every possible case is not feasible. Unit testing involves testing individual program units such as classes through automated tests that make assertions about the output. JUnit is a unit testing framework for Java that uses annotations to identify test methods and make assertions about the results.
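The unit-testing pattern the summary describes for JUnit can be sketched in Python's `unittest` as an analogous technique (where JUnit marks test methods with `@Test` annotations, `unittest` uses the `test_` naming convention). The `divide` function is a hypothetical unit under test, not from the source.

```python
import unittest

def divide(a, b):
    """Unit under test: plain division with a guard."""
    if b == 0:
        raise ValueError("division by zero")
    return a / b

class DivideTest(unittest.TestCase):
    # JUnit identifies tests via @Test annotations; unittest
    # discovers methods whose names start with "test_".
    def test_quotient(self):
        self.assertEqual(divide(10, 4), 2.5)

    def test_zero_divisor_rejected(self):
        with self.assertRaises(ValueError):
            divide(1, 0)

# Run the suite programmatically and report the overall verdict.
suite = unittest.TestLoader().loadTestsFromTestCase(DivideTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True when both assertions hold
```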
This document discusses testing capabilities in Visual Studio 2010, including test case management, lab management, exploratory testing, and coded UI testing. It highlights how Visual Studio 2010 aims to align testing with the development lifecycle and enable tighter collaboration between developers and testers. Key capabilities like test case management allow tracking test cases as work items in Team Foundation Server, while lab management helps simplify environment setup and improves test hardware utilization.
Foundation level sample_exam_v2.3_answers_and_justification (Venera Romanova)
The document is a sample exam for the ISTQB Foundation Level certification. It includes 40 multiple choice questions across 6 sections covering various testing topics aligned to the ISTQB Foundation Level syllabus such as fundamentals, testing throughout the lifecycle, static techniques, test design techniques, test management, and test tools. The questions are intended to help ISTQB member boards in writing exams and for individuals preparing to take the certification.
This document provides the syllabus for the International Software Testing Qualifications Board's Certified Tester Advanced Level certification. It outlines the learning objectives for test managers, test analysts, and technical test analysts. The syllabus covers topics such as testing in the software lifecycle, specific system types like systems of systems and safety critical systems, testing processes, test management, risk-based testing, and more. It is intended to guide curriculum and training for the advanced level certification. The syllabus was last updated in 2007 by the Advanced Level Working Party committee members.
This document discusses test driven development for mobile applications. It compares the traditional development cycle to a test driven development cycle. It also discusses how using the Robolectric framework allows testing Android applications outside of an emulator, improving the speed of test driven development. Key benefits of test driven development mentioned include delivering functionality faster, improving code quality and confidence, and allowing more time for cleaning code and learning new tools.
The document outlines an automation testing syllabus covering software development lifecycles, the role of testers, types of testing, test techniques, test cases, test plans, bugs, Java concepts, production tools, load testing, test management tools, and real-world manual testing projects. Key topics include waterfall, agile, and scrum models; unit, integration, and regression testing; black box and white box techniques; test plans; bug tracking; Java fundamentals; and tools like JUnit, Selenium, JIRA, LoadRunner, and QTP. The syllabus aims to equip students with the skills needed for both manual and automation testing.
This is a brief tutorial, with a practical use-case, on how to use Maveryx testing tool for automating Android(TM) apps. It is a step-by-step guide both for novice and expert testers. For more info http://www.maveryx.com/en/support/learn-more/user-documentation.html
This is a brief tutorial, with a practical use-case, on how to use Maveryx testing tool for automating Java(TM) applications. It is a step-by-step guide both for novice and expert testers.
Testing and Mocking Object - The Art of Mocking (Deepak Singhvi)
The document provides an overview of mocking objects for unit testing. It discusses the problems with testing, such as dependencies on external objects. Mocking objects allows creating test doubles that simulate real objects' behavior for testing in isolation. The document outlines best practices for mocking, such as mocking interfaces rather than concrete classes and verifying expectations. It provides examples of using EasyMock to define mock objects and expected behavior.
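The mocking workflow the summary attributes to EasyMock (create a test double, program its behavior, exercise the unit in isolation, verify expectations) can be sketched with Python's `unittest.mock` as an analogous technique. The `ReportService` class and repository method are hypothetical, introduced only for illustration.

```python
from unittest.mock import Mock

class ReportService:
    """Unit under test: depends on an external order repository."""
    def __init__(self, repository):
        self.repository = repository

    def summary(self, user_id):
        orders = self.repository.orders_for(user_id)
        return f"{len(orders)} orders"

# Test double standing in for the real repository (e.g. a database),
# so the service can be tested in isolation.
repo = Mock()
repo.orders_for.return_value = ["o1", "o2", "o3"]

service = ReportService(repo)
summary_text = service.summary(42)
print(summary_text)                          # 3 orders
repo.orders_for.assert_called_once_with(42)  # verify the expectation
```

Note the best practice from the summary: the service depends on an abstract collaborator (anything with an `orders_for` method), not a concrete database class, which is what makes substituting the mock possible.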
HP Quick Test Professional is automated testing software that performs functional and regression testing through a user interface. It identifies objects in an application and performs operations like clicks and keyboard inputs. Tests are created using VBScript to specify test procedures and manipulate application objects. An automation framework provides support for automated testing by standardizing assumptions, concepts and tools to reduce maintenance costs when tests need to be updated.
1. The document discusses various tools and frameworks for unit testing Java, Android, and C/C++ code including JUnit, EasyMock, Google Test, and Google Mock.
2. It provides information on setting up and writing tests for each framework including creating test projects, including libraries, and using mocking functionality.
3. Code examples and references are given for writing and running unit tests with each testing framework.
The document discusses various topics related to software testing including:
1) An overview of software testing, its goals of finding bugs and evaluating quality.
2) The need for testing plans to define scope, resources, schedules and quality standards.
3) Types of testing like functional, non-functional, unit, integration and acceptance.
4) Black box and white box testing techniques.
Introduction to testing with MSTest, Visual Studio, and Team Foundation Serve... (Thomas Weller)
Intro to the MSTest framework (aka. Visual Studio Unit Testing), some additional tools (e.g. Moq, Moles, White), how this is supported in Visual Studio, and how it integrates into the broader context of the TFS environment.
Software testing quiz questions and answers (RajendraG)
This document contains a software testing quiz with 77 multiple choice questions covering various topics in software testing. The questions assess knowledge in areas such as test documentation, test types, quality management, testing levels, metrics, risks, and the software development life cycle. Correct answers are provided at the end. The quiz is intended to help individuals learn and evaluate their understanding of key concepts in software testing.
This foreword describes the author's initial uncertainty about software testing, rooted in the differences between academic descriptions of testing and his own experience testing software as a developer. He recounts going through phases: first thinking he needed to radically change his approach, then seeing how other approaches could work without adopting them fully, and finally deciding those approaches would not work for him. He concludes that experience from multiple projects over time leads one to trust one's own judgment and preferences for prioritizing and approaching testing, through an ongoing process of learning from ideas, discussions, and trial and error.
Prior research concludes that testing plays a vital role in the development of a software product. Since software testing is the primary means of assuring software quality, most development effort is devoted to it. Testing is, however, an expensive and time-consuming process, so it should start as early as possible in development to control cost and schedule. Ideally, testing is performed at every step of the software development life cycle (SDLC), the structured approach used to develop a software product. Software testing is a trade-off between budget, time, and quality. Today, testing has become a critical activity in terms of exposure, security, performance, and usability, and it therefore faces a collection of challenges.
This document contains multiple choice questions related to software testing concepts and processes. Key topics covered include: types of testing (e.g. functional testing, regression testing, integration testing), testing levels (e.g. unit, integration, system, acceptance), beta testing, impact analysis, load testing, and definitions of quality assurance terms. The questions assess understanding of when and how different test types are used within the software development and maintenance lifecycles.
PROPOSING AUTOMATED REGRESSION SUITE USING OPEN SOURCE TOOLS FOR A HEALTH CAR... (ijseajournal)
Regression testing is very important for delivering a high-quality product. It re-runs a suite of critical test cases periodically to identify whether new features or source-code changes have adversely affected existing quality or functionality; as a result, it cannot be omitted from the software testing life cycle (STLC). Regression testing yields its full benefit only when it is accompanied by automation. An automated regression suite saves time and cost by re-running test scripts repeatedly, provides confidence that all critical test cases have been covered, and improves the ability to meet schedules. It can exercise the whole application every day with little manual effort. The software under study undergoes continuous development, which requires repeated testing to check whether new feature implementations have affected existing functionality; it also faces issues validating installations at client sites, which currently requires testers to check the critical functionality manually. This paper proposes an automated regression suite for the software, and the research provides guidelines to future researchers on how to create an automated regression suite for any web application using open-source tools.
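The core idea of an automated regression suite — a recorded set of critical input/expected-output pairs replayed after every change — can be sketched as below. The `discount` function and its cases are illustrative assumptions standing in for the web application's logic, not the paper's actual suite or tools.

```python
# The unit whose behavior must not regress (illustrative stand-in
# for the application logic under regression test).
def discount(total):
    if total >= 100:
        return round(total * 0.9, 2)
    return total

# Critical cases recorded from a known-good release; re-running them
# after every change flags regressions automatically.
REGRESSION_SUITE = [
    (50, 50),
    (99.99, 99.99),
    (100, 90.0),
    (250, 225.0),
]

def run_regression(fn, suite):
    """Return (input, actual, expected) for every failing case."""
    return [(arg, fn(arg), expected)
            for arg, expected in suite
            if fn(arg) != expected]

failures = run_regression(discount, REGRESSION_SUITE)
print("PASS" if not failures else failures)
```

In practice the same pattern is driven by an open-source runner (pytest, Selenium, a CI server) so the suite executes on every commit without manual effort.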
This is a draft of a presentation for a course on Visual Studio 2010 unit testing. I uploaded it mainly because I tried to create a Metro-style presentation; if you like it, feel free to use it as the basis for your own.
My experiences on Unit Testing in the Android environment. I hope they are useful to you too.
A brief tour about how to make Android Studio run your unit tests (logical and instrumentation) and how to start creating tests for your app.
The document discusses Application Lifecycle Management (ALM) and HP Quality Center. It provides information on what Quality Center is, its modules, how to map requirements to test cases, generate tests from requirements, and use filters. It also answers various questions related to Quality Center features, usage, libraries, databases, and more.
Hopper's approach to QA is described in this case study. At Hopper, we believe that QA starts at the very beginning of the product life cycle, which helps reduce risk and deliver quality products. We combine all aspects of QA: black-box testing, performance testing, load testing, regression testing, QA automation, and more. We also design QA systems where existing frameworks may not work.
Tools for Software Verification and Validation (aliraza786)
The document discusses two tools for software verification and validation (V&V): NUnit and Mercury Quality Center (MQC).
NUnit is an open source unit testing framework for .NET applications. It allows developers to write unit tests to verify code meets design conditions. NUnit supports IDE integration, assertions, attributes, configurations and multiple assembly testing. It is used during implementation to facilitate code verification.
MQC is a web-based test management tool for organizing testing projects. It allows requirements management, test planning, case authoring, execution, and defect tracking. Various roles can access modules for requirements, tests, execution, and defects. Reports can be generated on results. It integrates with other tools and facilitates
- Prasanth Kumar Pendam has over 9.5 years of experience in manual testing including team leading activities. He has strong experience in test planning, test case design, and working with tools like CA Clarity PPM and HP Service Manager.
- He is proficient in various phases of the software development life cycle including requirements analysis, testing, documentation, and deployment testing.
- He has worked on several projects as a technology lead and senior test engineer, with responsibilities including requirement gathering, test case preparation, automation, and issue tracking.
Software test automation tools are available in several categories, such as commercial, free software, and open-source software. This paper discusses open-source software testing tools.
Open-source test automation tools may be practical alternatives to popular closed-source commercial applications, and some open-source tools offer features or performance benefits that exceed their commercial counterparts. The source code is openly published for use and/or modification from its original design, free of charge, and is usually available under a license defined by the Open Source Initiative.
Automation Testing of Web based Application with Selenium and HP UFT (QTP) (IRJET Journal)
This document compares two automation testing tools for web applications: Selenium and HP UFT (also known as QTP). It provides background on software testing and the benefits of automation over manual testing, states the aims of the study, reviews related work on automation testing frameworks, and gives an overview of the Selenium tool and its major components in order to facilitate the comparison.
This document discusses using VectorCAST/Manage and Jenkins together to achieve requirement-based testing according to ISO 26262. VectorCAST/Manage is a tool for automating unit, integration, and system testing that identifies code coverage. Jenkins is an open-source continuous integration server that compiles code and runs tests after each change. Integrating VectorCAST/Manage with Jenkins provides a continuous integration testing platform that runs automated tests frequently to identify defects early. This approach reduces testing time and ensures software reliability through repeated testing with minimal human intervention. Requirement-based testing in VectorCAST/Manage includes techniques like equivalence partitioning and boundary value analysis to generate test cases. Coverage analysis is also performed to meet the different coverage levels required by ISO 26262.
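The two test-design techniques named above can be sketched generically: boundary value analysis picks values at and adjacent to each limit of a valid range, while equivalence partitioning picks one representative from each class of inputs. The speed-limit requirement here is a hypothetical example, not from the VectorCAST document.

```python
def boundary_values(low, high):
    """Classic BVA: values at and adjacent to each boundary."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

def equivalence_partitions(low, high):
    """One representative per partition: below, inside, above."""
    return [low - 10, (low + high) // 2, high + 10]

# Hypothetical requirement: valid speed range is 0..130 km/h.
LOW, HIGH = 0, 130
cases = boundary_values(LOW, HIGH) + equivalence_partitions(LOW, HIGH)
print(sorted(set(cases)))  # candidate test inputs for the requirement
```

Each generated input then becomes a test case paired with the expected valid/invalid outcome from the requirement, which is what ties the tests back to ISO 26262's requirement-based-testing demand.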
The document describes the software quality assurance process used by a company. It involves initial project planning, requirements analysis, development, testing of individual modules by developers and testers, integration testing, testing for compatibility, load, and system testing, and finally release after test report approval. Testing of existing vendor products includes peer reviews, validation, data-driven, load, compatibility, and usability testing. Testing new systems developed from scratch includes requirements, test strategy, traceability, cases, risks, tools, resources, schedule, deliverables, defect tracking, and approval processes.
This document outlines the test approach, scope, objectives, assumptions, and methodology for testing applications. It describes unit, integration, system, regression, and user acceptance testing. The primary objective is to ensure all requirements are met and the system functions as intended. The secondary objective is to identify and address all issues before release. Test deliverables include documents like the test approach, plan, and specifications as well as test cases, bug reports, and status reports.
The document discusses two software verification and validation tools: NUnit and HP LoadRunner. NUnit is a unit testing framework for .NET that provides features like assertions, setup/teardown functionality, and test runners. It is used to verify code meets specifications. HP LoadRunner performs load and performance testing to validate systems can meet service level agreements under certain user loads. It generates virtual users, runs tests via a controller, and provides analysis to identify bottlenecks. Both tools are used in later software development phases like implementation and verification to help ensure quality.
Exploratory testing is a hands-on approach where testers are involved in minimal planning and maximum test execution. The planning involves creating a test charter and objectives, while test design and execution are done in parallel without formally documenting test conditions, cases, or scripts. Some notes are taken during testing to produce a report afterwards. Use case testing identifies and executes the functional requirements of an application from start to finish using use cases. SDLC deals with software development/coding while STLC deals with validation and verification of software. A traceability matrix shows the relationship between test cases and requirements.
The document provides an overview of topics related to software quality assurance including software testing strategies, project management, risk management, and maintenance. It discusses software quality assurance and defines verification and validation. It describes different testing types like unit testing, integration testing, system testing, and validation testing. It also covers ISO standards for testing, SQA plans, testing goals and attributes. Finally, it discusses testing approaches, strategies for validation testing, and the goals of system testing.
Mindtree’s upstream testing enables effective and early testing, constantly increasing the coverage during the development phase. It empowers developers to boost their productivity and allows the QA team to focus on integration and system testing.
One of the best software Training & Placement center in Nagercoil. In our Sector we are providing specialized training for Engineering & diploma students
1) The document discusses software testing principles, lifecycles, limitations and methods. It describes the different phases of software testing like requirements study, test case design, test execution, test closure and test process analysis.
2) It also discusses different levels of testing including unit testing, integration testing, system testing and acceptance testing. Unit testing checks individual program modules, integration testing verifies interface connections, system testing checks full application functionality, and acceptance testing gets customer approval.
3) The document provides objectives and features of good test cases and objectives of a software tester. It also outlines principles of testing like testing for failures, starting early, defining test plans, and testing for valid and invalid conditions.
The document provides an overview of software testing. It defines software and describes different types, including system software, programming software, and application software. It then discusses objectives of testing like ensuring requirements are met and finding defects. Testing types include black box, white box, and interface testing. The software testing life cycle is also explained as a sequence of requirement analysis, test planning, case development, execution, and closure.
Test driven development and unit testing with examples in C++Hong Le Van
Test-driven development (TDD) relies on short development cycles of writing a failing test case, producing code to pass that test, and refactoring the code. Unit testing tests individual units of code by isolating each part and showing they work correctly. Boost.Test is a popular C++ unit testing framework that allows organizing tests into suites and fixtures, and provides assertions and output of results. A minimal Boost.Test example defines a test case using BOOST_AUTO_TEST_CASE that contains an assertion like BOOST_CHECK_EQUAL to test that 2+2 equals 4.
The document discusses various concepts related to software testing such as testing types (unit testing, integration testing, etc.), test case design techniques (equivalence partitioning, boundary value analysis, etc.), test documentation (test plan, test cases, test procedures, etc.), software quality models (CMM, ISO), and the software development life cycle (waterfall model, iterative model, etc.). It provides definitions and explanations of key terms to understand software testing processes and methodologies.
Software Test Automation - Best PracticesArul Selvan
The document provides best practices for software test automation. It recommends treating test automation like a software development project by focusing on design, documentation, and bug tracking. It also stresses setting measurable goals, choosing the right testing tool and framework to meet automation needs, ensuring high quality test data, training a dedicated team, conducting early and frequent testing, and writing independent test cases.
The document provides guidelines for software testing at United Finance Limited. It outlines the scope, purpose, types of testing including unit, integration, functional, system and acceptance testing. It describes the testing methods of automated and manual testing and the testing approaches of white box and black box testing. The document also discusses testing documentation including test plans, specifications, incident reports and progress reports. General testing principles and complementary reviews are provided.
Internship Report
IIT GUWAHATI
Project report on Quality Assurance of Virtual Labs
Submitted by
Hrishikesh Malakar
B.Tech, Computer Science and Engineering,
Tezpur University
Mentored By
Dr. Santosh Biswas
Associate Professor, Computer Science and Engineering,
IIT Guwahati
Duration: 1st June - 15th July 2016
CERTIFICATE
This is to certify that the work contained in this project entitled “Quality Assurance of Virtual
Labs" is a bonafide work of Hrishikesh Malakar, carried out in the Department of Computer
Science and Engineering, Indian Institute of Technology Guwahati under my supervision and
that it has not been submitted elsewhere for a degree.
Supervisor: Dr. Santosh Biswas
Associate Professor,
Department of Computer Science & Engineering,
Indian Institute of Technology Guwahati, Assam.
July, 2016
ACKNOWLEDGEMENTS
I would like to express a deep sense of thanks and gratitude to my supervisor Dr. SANTOSH BISWAS for giving me the opportunity to do an internship under his guidance. During the project I got a chance to improve my practical skills beyond the limits of laboratories. I learned a lot of concepts of computer science and this project was really helpful for me.
I would also like to thank Mr. HRISHIKESH BARUAH and Mr. BIJU DAS, who helped me towards a better understanding of the project and supported me when I faced challenges. This work would not have been possible without their support and valuable suggestions.
I also wish to thank everyone in the Computer Science and Engineering Department who created such a lively atmosphere that it was always exciting to go to the department.
I have no words to express my sincere gratitude to my parents, who have shown me this world and have supported me in every way.
Finally, I would like to express my sincere thanks to all my friends and others who helped me directly or indirectly during this project work.
CONTENTS
1 INTRODUCTION 1
1.1 Virtual Labs 1
1.2 Manual Testing of Virtual Labs 1
1.3 Automated Testing of Virtual Labs 1
1.4 Hosting Process of Virtual Labs 1
2 Quality Assurance of Virtual Labs 2
2.1 Manual Testing - Test Plan 2
2.1.1 Testing Objectives 2
2.1.2 Test Requirements 3
2.1.2.1 System Testing 3
2.1.2.2 Integration Testing 3
2.1.2.3 User Interface Testing 3
2.1.3 Tools Used 4
2.1.4 Development of test cases 4
2.1.5 Definition of test cases 4
2.1.6 Structure of test cases 4
2.1.7 Description of various fields of test cases 6
2.1.8 Location of test cases 6
2.1.9 Test reports 6
2.1.10 Structure of test reports 7
3 Automated Testing of Virtual Labs 9
3.1 About the modules 9
3.2 Description of the script 10
3.2.1 Link testing 10
3.2.2 Spell checking 11
3.3 Snapshot of working instance of the script 12
4 Conclusion 13
5 References 14
Chapter 1
INTRODUCTION
1.1 Virtual Labs
Virtual Labs is a mission mode project initiated by the Ministry of Human
Resource Development (MHRD). The objective of this project is to provide a
laboratory learning experience to students who do not have access to adequate
laboratory infrastructure. Currently there are around 150 labs which have been
developed by various institutes. A streamlined software development life cycle
process followed for the development of these labs ensures high-quality labs. The
integration process of Virtual Labs described here defines the development, quality
assurance and hosting practices followed by the developers (the open source
community) of the Virtual Labs project. It aims at delivering responsive, open-source
and device-independent labs, helping us in our pursuit of excellence.
1.2 Manual Testing of Virtual Labs
Manual testing is a testing process that is carried out manually in order to find
defects without the use of tools or automation scripting. A test plan document is
prepared that acts as a guide to the testing process, in order to achieve complete
test coverage.
Tools used: Emacs 24.4.2, GitHub 2.1.4, org and html format
1.3 Automated Testing of Virtual Labs:
Automated testing is a testing process that is carried out by running a script,
first to track down whether any link is up or down, and then to check for spelling
errors on each page.
Tools used: Python 3.5.2, Selenium 2.53.6
1.4 Hosting Process of Virtual Labs:
A virtual lab is hosted by the VLEAD team once the lab is 'Approved' by
the IIIT-H QA team; the release engineer then carries out the hosting process using
the Auto Deployment Service (ADS).
Chapter 2
Quality Assurance of Virtual Labs
This section captures the Quality Assurance (QA) process to be carried out by the
respective lab developers and the IIIT-H QA team. Every QA process starts with a
test plan followed by the creation of test cases.
2.1 Manual Testing-Test Plan
This section describes the plan that would be followed by the IIIT-H QA team for
testing of the Virtual Labs. The test plan would cover all the requirements of the
Virtual Labs; its objectives and scope are detailed in the subsections below.
2.1.1 Testing Objectives
It supports the following objectives:
- Identification of the existing project information and the software components to be tested.
- Specifying the recommended high-level testing requirements.
- Recommendation and description of the testing strategies to be employed.
- Specifying the deliverable elements of the test activities.
This test plan would apply to the integration and system tests that would be
conducted on the Virtual Labs releases. Testing would be conducted as per
black box testing techniques.
2.1.2 Test Requirements
The list below identifies the different levels (functional requirements) of the testing
that would be performed.
2.1.2.1 System Testing
The goal of system testing would be to verify that Virtual Labs works as per user
expectations. This type of testing is based upon black box techniques, that is,
verifying the application by interacting with it and analyzing the output (results).
Identified below is an outline of the testing process:
- Test Objectives: Verification of the working of the Virtual Labs home page and links to the participating institutes.
- Techniques: Using positive and negative data, the following would be verified:
  1. Occurrence of the expected results when positive data is used.
  2. Display of the appropriate error/warning messages when negative data is used.
- Completion Criteria:
  1. All planned tests should be executed.
  2. All identified defects should be addressed.
2.1.2.2 Integration Testing
The goal of integration testing would be to verify that Virtual Labs fulfills the end
users' expectations from the look and feel point of view. A detailed description of
what would be tested in this category is listed below:
- Different labs and experiments would be verified for simulator, theory, reference and usability.
- Usability is defined as the extent to which an application is understood, easy to operate and attractive to the users under specified conditions.
2.1.2.3 User Interface Testing
User Interface testing verifies a user's interaction with the software. The goal is
to verify the details of the functioning of all the labs and experiments hosted
under the Virtual-Labs organisation.
2.1.3 Tools Used
The following tools have been used for fulfilling manual testing:
Test Design - Emacs 24.4.1
Defect Tracking - Github 2.1.4
Functional Testing - Manual
Test Report and Statistics - org & html format
Project Management - Microsoft Project, Microsoft Word, Microsoft Excel
2.1.4 Development of test cases
This section describes the overall development of the test cases, including their
definition, structure, owner, type and location.
2.1.5 Definition of test cases
A test case is a set of conditions under which a test engineer will determine
whether an application, software system or one of its features is working as it was
originally established to do. A test case is usually a single step, or occasionally a
sequence of steps, to test the correct behaviour/functionality and features of an
application. For Virtual Labs, a test case would be a file listing all the steps to be
carried out by the test engineers. Every test case should follow a defined structure
encapsulating all the testing conditions necessary for the QA process.
2.1.6 Structure of test cases
The structure of the test cases would be the same across all the testing levels and the
labs. The naming convention to be followed for the test case file would be -
experimentname_XX_feature_priority.org
(For example: NumericalRepresentation_01_Usability_smk.org)
- Experiment Name : This part of the test case filename should represent the name of the experiment.
- XX : This part of the test case filename should be the serial number of the test case.
- Feature : This part of the test case filename should represent the name of the tested feature.
- Priority : This part of the test case filename should represent the level of (business) importance assigned to an item. The priority assigned to a test case file could be of different types, as given below:
  o p1 : These would be the test cases with the highest level of business importance. They would be executed first in each build, identified and assigned by the IIIT-H QA team in conjunction with the domain level testing team.
  o p2 : These would be next to p1 test cases in terms of business importance.
  o smoke test (smk) : These would be a subset of all defined/planned test cases that cover the main functionality of a component or system. They would ascertain that the most crucial functions of a program work, without bothering with finer details.
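As a sketch, this naming convention can be checked mechanically. The regular expression and function below are illustrative only, not part of the Virtual Labs toolchain; they assume the priority is one of p1, p2 or smk, as described above:

```python
import re

# Illustrative pattern for experimentname_XX_feature_priority.org
TEST_CASE_NAME = re.compile(
    r"^(?P<experiment>[A-Za-z]+)_"
    r"(?P<serial>\d{2})_"
    r"(?P<feature>[A-Za-z]+)_"
    r"(?P<priority>p1|p2|smk)\.org$"
)

def parse_test_case_name(filename):
    """Return the parts of a test case filename, or None if it does not match."""
    m = TEST_CASE_NAME.match(filename)
    return m.groupdict() if m else None
```

For the example above, parse_test_case_name("NumericalRepresentation_01_Usability_smk.org") would yield the experiment name, serial number, feature and priority as separate fields.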
Fig 0.1: A sample of test case
2.1.7 Description of Various Fields of Test Cases
- Author : This field should be the name of the author. It could be from the IIIT-H QA team or from the development team.
- Date Created : This field should be the date of creation of a test case by the test engineer/developer.
- Environment : This field would describe the environmental setup under which the testing of a lab would be performed.
- Objective : This field would define the objective of the created test case.
- Preconditions : This field would list the conditions that should be satisfied before a test case is executed by the test engineer.
- Postconditions : This field would generally represent the state obtained after a test case is executed successfully. In some special cases it would list the steps to be performed to get the system back to its initial state.
- Test Steps : This field would list the steps to be carried out to execute a test case.
- Expected Result : This field would detail the ideal result expected by the end user.
- Reviews/Comments : This field would express the comments of the reviewer of the test cases.
2.1.8 Location of test cases
Every lab has its own repository in GitHub under the Virtual-Labs organisation.
The test cases would also be located in the same repository. The test cases
directory would be at the same level as that of README.txt, src, scripts and
release-notes.
2.1.9 Test Reports
Test reports are generated at the end of the testing process of each lab. A report would
contain a consolidated list of the executed test cases, a boolean result and links to
the defects raised against them. Two important details to be noted here are:
- A test case is said to pass only when all of its test steps pass.
- A composite test, which consists of a set of test cases, is said to pass only when all of its individual test cases pass.
2.1.10 Structure of a Test Report
This section describes the structure of a test report.
Fig 0.2: Structure of a Test Report.
Various Fields of a Test Report
- Lab Name : This would be the name of the tested lab repository.
- GitHub URL : This would be the GitHub URL for the lab repository, which would hold the test cases and the filed defects.
- Commit id : This would be the commit id against which the testing happened.
- Experiment Name : This would be the name of the experiment of the tested lab.
- Test Case : This would be the name of the test case created and tested for the experiment, as shown in the table above.
- Pass/Fail : This field would depict whether the test case for the tested experiment passed or failed.
- Severity : This field would indicate the severity of the defect.
- Defect Link : This field in the table would be the hyperlink to the corresponding issue in GitHub for a failed test case.
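Since the reports are kept in org format, a single report entry could be modelled and rendered as an org-mode table row. The field names below follow the list above; the helper itself (and the sample defect link) is a hypothetical sketch, not the actual reporting tool:

```python
from collections import namedtuple

# Fields follow the test report structure described above.
ReportRow = namedtuple(
    "ReportRow",
    ["experiment", "test_case", "result", "severity", "defect_link"],
)

def to_org_row(row):
    """Render one report entry as an org-mode table row."""
    return "| {} | {} | {} | {} | {} |".format(
        row.experiment,
        row.test_case,
        "Pass" if row.result else "Fail",
        row.severity or "-",
        row.defect_link or "-",
    )
```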
Severity of defects
At any given time a defect should only have one of the severity levels as described
below. To change the severity of a defect, the existing label of the defect should be
unchecked and the new severity label should be checked.
- S1 : This label indicates that the defect affects critical functionality or critical data, with no workaround to reach that functionality. Examples: prevention of user interaction, corruption of the database, unfaithfulness to the semantics of interaction and redirection to an error page.
- S2 : This label indicates that the defect affects major functionality or major data. It could have a workaround, but one that is not obvious and is difficult; examples are broken links and a field view being inconsistent with its specifications, such as a form field that is meant to be editable but does not allow the user to edit it.
- S3 : This label indicates that the defect affects minor functionality or non-critical data. It could have an easy workaround. Examples: visual imperfections like spelling and grammar, alignment, inconsistent terminology, colour, shapes and fonts (CSS properties).
Chapter 3
Automated Testing of Virtual Labs
This section describes how the script works in testing the lab "Creative Design,
Prototyping & Experiential Simulation In Human Computer Interaction (HCI)" of
IIT Guwahati. Automated testing of the lab is done in two steps:
- Link Testing: Here we test the working of the links using Selenium 2.53.6 with Python 3.5.2.
- Spell Checking: Here we check the correctness of the spelling on each page of the lab with the help of the PyEnchant library.
3.1 About the Modules:
Selenium
Selenium is a set of different software tools, each with a different approach to
supporting test automation. Most Selenium QA engineers focus on the one or
two tools that best meet the needs of their project; however, learning all the
tools gives many different options for approaching different test automation
problems. The entire suite of tools results in a rich set of testing functions
specifically geared to the needs of testing web applications of all types.
These operations are highly flexible, allowing many options for locating UI
elements and comparing expected test results against actual application
behaviour. One of Selenium’s key features is the support for executing one’s tests
on multiple browser platforms.
For our testing purpose we have used Selenium with Python 3.5.2.
PyEnchant
PyEnchant is a spellchecking library for Python, based on the excellent
Enchant library. PyEnchant combines all the functionality of the underlying
Enchant library with the flexibility of Python and a nice "Pythonic"
object-oriented interface. It also aims to provide some higher-level
functionality than is available in the C API.
By default PyEnchant comes with various dictionaries: en_GB (British
English), en_US (American English), de_DE (German) and fr_FR (French).
3.2 Description of the Script
Setting up the environment: the setUpClass() hook of the unittest framework does
this. It initializes a driver object with the functionality of the Chrome webdriver.
3.2.1 Link Testing
fig 0.3: Code Snapshot
Writing test cases:
There are two test cases in the script - test_title() and test_link().
Usage of functions:
- driver.get(url) : redirects us to the specified URL.
- assertIn("Virtual Lab", driver.title) : this assertion checks that the string "Virtual Lab" occurs in the page title and fails the test if it does not.
fig 0.4: Code Snapshot
Some functions of the selenium-python module used to interact with the web browser:
- element = driver.find_element_by_css_selector('body')
  Selects the content of the 'body' element, which it locates using a CSS selector.
- element = driver.find_element_by_link_text("SomeText")
  This is a common way of visiting a link, by identifying it through its text. We can then call element.click() to click on the link found by its text.
- element = driver.find_element_by_xpath("//xpath")
  This is the most popular way of getting a link, using the XPath query language.
In figure 0.4 we have maintained a 2-D list of the links of the lab, which we
iterate over, applying the functions of the selenium-python module to test each
link.
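The iteration over the 2-D list of links can be sketched as follows. To keep the sketch self-contained, the page check is passed in as a callable; in the actual script this would wrap Selenium calls such as driver.get(). The function and parameter names here are illustrative:

```python
def check_links(link_table, fetch):
    """Iterate over a 2-D list of (text, url) pairs and collect broken links.

    link_table: list of rows, each row a list of (link_text, url) tuples.
    fetch: callable taking a URL and returning True if the page is up
           (e.g. a wrapper around Selenium's driver.get()).
    """
    broken = []
    for row in link_table:
        for link_text, url in row:
            if not fetch(url):
                broken.append((link_text, url))
    return broken
```

With a stub fetcher this reports exactly the links that fail, which is the behaviour the script's test_link() case asserts on.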
3.2.2 Spelling Checking
fig 0.5: Code Snapshot.
Here we get the text of the page with a CSS selector and then pass it to the function
check(webtext) of the spellCheck module. A snapshot of the code of the spellCheck
module is given on the next page.
my_dict is a dictionary object that contains an English dictionary along with
a user-defined word list, "mywords.txt".
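The check(webtext) routine can be sketched as below. To keep the sketch self-contained, a plain Python set stands in for the PyEnchant dictionary object; in the actual script my_dict would be a PyEnchant Dict (for instance one built with a personal word list such as "mywords.txt"), whose check(word) method plays the role of the set lookup here:

```python
import re

# Stand-in for the PyEnchant dictionary: a small set of accepted words.
# In the real script, membership would be tested with my_dict.check(word).
KNOWN_WORDS = {"virtual", "labs", "provide", "experience", "students"}

def check(webtext, known=KNOWN_WORDS):
    """Return the list of words in webtext not found in the dictionary."""
    words = re.findall(r"[A-Za-z]+", webtext)
    return [w for w in words if w.lower() not in known]
```

The words returned are the candidate misspellings the script reports for each page.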
3.3 Snapshot of working instance of the script.
Chapter 4
Conclusion
During our project "Quality Assurance of Virtual Labs" we learned about the various
processes of manual and automated testing. We got a chance to become familiar with
tools like GitHub and Emacs, and we also got to try our hand at advanced automation
tools like Selenium and spell checking tools like PyEnchant. Moreover, this project
also taught us the importance of virtual labs across our nation.