Technical Report: My Container
1. Introducing MyContainer
In-Container Testing for gCube Services
I am going to motivate and present tools to test service code in two classic scenarios:
• manual testing: i.e. in the tight implement<->debug cycle of coding sessions
• automated testing: i.e. during local or remote build processes, whether pre-commit builds, nightly builds, or the continuous integration builds of a hoped-for future
In particular, I will illustrate the advantages that these tools offer over existing practices.
Context
My overall impression is that we currently pay lip service to testing. It seems to me that we do so:
• across the entire spectrum of grains, scopes, and forms that may be associated with the concept
• in spite of a decade of pragmatic and analytic evidence that testing is in fact highly beneficial, not just to functional correctness but also to design quality.
To start at the beginning, unit testing seems to be scarcely practiced in the project. This seems true even if we stick with the classic view of testing (validating existing implementations) and set aside more "agile" perspectives (designing and implementing to pass an existing test, as in TDD).
Some unit tests do exist buried in our components, but I believe these are not executed in our
nightly builds, which is the only form of automation we have adopted so far for code integration.
This alone sets us quite a long way off the practice of continuous integration, which takes testing
to be an integral part of any build process.
As far as development practices go, one may say that we are at least 10 years behind the state of
the art.
Some Reasons
The blame is partly on our technology stack, including gCore, which does not promote component
isolation and thus inhibits standard testing techniques (e.g. mocking).
We can overcome these problems through careful design and isolate the stack as a piece of legacy
technology. However, doing so is not always easy even when it is possible. Most importantly, I
believe it has not been done so far.
This tells us that, as a project, we are still to acknowledge the importance of systematic,
reproducible, and automated testing. I am living proof that pursuing testing in design, practices, and tools requires "education": I say this not to emphasise that I am educated, but to admit that I was utterly
ignorant until recently.
In-Container Testing
There are gCube components in which unit testing is much easier to practice, such as libraries and
plugins. I believe that we should pursue it aggressively in these cases, as we have started doing
for Content Management for a year or so.
Yet, the bulk of our code runs inside a container and has remote clients. While enabling unit testing of service code would give us precious feedback, we have an even stronger need for integration testing.
The notion of integration testing covers a wide spectrum beyond unit testing, from integration of
service components to integration of gCube services over a wide-area network.
Our need is strongest just one step beyond the scope of unit testing. Given our choice of
technologies, this extra step brings us to what people often refer to as in-container testing, where
tests exercise the functional and non-functional features of a target service which is deployed
within a target container.
In our case, in-container testing is the first chance we get to test the integration between the
individual components of a service (if we used other technologies, such as Spring, there would
be earlier opportunities, but we do not use them and those opportunities do not materialise). This
is also where we test integration between service components and client components and, most
crucially, between service components and container components, particularly gCore
components. Are interfaces supportive of functional requirements? Do client requests serialise and
deserialise correctly? Does the service manage its state correctly? Does it produce expected
outputs? Does it publish and retrieve information from the Information System as expected? Does
it work correctly in multiple scopes? How does it behave under concurrent load and with large
payloads? etc...
System Testing
The questions above are among the first ones, both functional and non-functional, that we try to
answer in the process of developing new services or modifying existing services. Sometimes these
are also the only questions.
Often, however, our services have runtime dependencies that:
• cannot be satisfied within a single container (external dependencies)
• are not the ubiquitous ones towards the Information System (publication and discovery).
In this case, integration testing may take a broader scope and a coarser grain, requiring the
staging of multiple services in multiple containers. This is system testing and we approach it in
cooperatively managed development infrastructures, such as devNext.
Unsurprisingly, it has been noted that this solution has proven inadequate in ensuring a stable and
reliable testing environment. It has been noted that a development infrastructure, at times more than others, approximates wilderness. It has also been noted that improving over this solution is of
key importance for our future.
Yet I believe that improving in-container testing is even more crucial, for two reasons.
• if we can discover bugs or design inefficiencies in the first testing environment that allows us to observe them (sure, earlier for some services than for others), then fewer bugs will need to be found in the wilderness; releases will be faster and less painfully staged. Overall, fewer bugs will risk making it to production.
• most importantly, I believe improving in-container testing is a necessary step towards better
solutions for system testing.
I will not speculate today on what these solutions may be, as it would be premature. However,
some ideas - not entirely new in fact - are starting to emerge precisely as generalisations of early
experience with in-container testing.
Status Quo
So, what is to improve upon when it comes to current practices for in-container testing? What do
we do today to test services that run inside containers?
Practice and mileage may vary, but I think it is safe to assume that most of us rely on so-called test
clients.
We first build and deploy the service code in some target container, then we launch the clients and
observe the outcomes. We then correct/evolve the clients in response to failures/changing
requirements, and relaunch the tests (one hopes!).
That is pretty much it.
Looking at the problems
Though quickly described, this approach to testing is complex and inherently manual.
In particular, it results in tests that:
• are not repeatable
• are hard to share within a team
• execute more slowly and less frequently than they should
• are never executed within build processes.
It seems to me that the root problem here is that tests and container have different lifetimes and
are managed in different environments.
They have different lifetimes in that, to retain some sanity, we tend to use containers that are
dedicated neither to the test/test-suite nor to the service targeted by those tests. This means that
the state of the container may change across runs of the same test; the environment in which it
runs may change and so may its configuration. With our containers in particular, libraries
may come and go freely at the rhythm of deployments and un-deployments. As a result, a test that
works today may fail tomorrow on the same machine without any intervening change to the service
code or the test.
Within teams, these reproducibility problems can be observed even more across different
machines, i.e. across space as well as over time. And sharing an installation of the container
creates its own problems, to do with distributed management and poorer and slower working
environments.
Sharing the tests is complicated in itself. These depend on the physical location of the container, its endpoint, and the various environment variables or property files that one normally uses to push these contextual dependencies outside test code. Often undocumented, these contextual dependencies result in test clients which are understood, and thus executed, only by their author. This means that they are executed too late with respect to code changes that have been applied, and are best understood, by other team members.
Reproducibility and sharing issues aside, the separation between tests and container makes for
containers that contain more deployments than the tests actually need (local services, Globus notification services, etc.). Startup and execution times become longer and coding sessions
slower.
A lot of our time goes also in managing the container's lifetime, i.e. starting and stopping the
container before and after the test. In most cases, we do this from the console, in an environment
other than the IDE in which we author test and production code. If we do manage the container's
lifetime from within the IDE, we end up creating the same kind of synchronisation problems for the
team which we have already discussed for the test clients.
What is probably most time-consuming for us is having to go through build-and-deploy cycles at
each and every change in the code. Testing a one-line change in service code tends to take tens of
seconds rather than milliseconds.
All these inefficiencies push developers to seek testing feedback less incrementally than they
should. The later the feedback the harder it is to pin down problems and sort them out.
Notice that all the problems above become worse if containers are fully CONNECTED to a
development infrastructure, regardless of actual test requirements. This effectively means that we
do system testing even when we could do in-container testing, i.e. in a scope which is considerably
more complex to control. I suspect not many containers join infrastructures in STANDALONE mode
(this is a “stealth mode”: the Information System can be queried but the service leaves no visible
traces in the infrastructure; the container can come up and go down without causing disruption,
and many runtime activities of the container are avoided, which makes for quicker startup and test
execution).
Finally, how are we to automate this approach to integration testing? I do not know how to answer
that question, but I suspect that it is very difficult if not impossible. This means that we cannot test
the code as part of our local or remote builds, which decreases our chances of catching regression errors. Our confidence in changes then diminishes and design enters a state of paralysis.
Having optimised our development practices over the years may make these problems occur more occasionally than they otherwise would. This does not mean that they do not occur, or that they do not occur when there is less time to handle them, typically close to a release. Most of all, it does not mean that we should not put our time into more creative implementation and design activities!
Requirements
It seems to me that improving over the status quo calls for tighter integration between the tests and
the container in which we deploy the service under testing.
To address problems of test reproducibility and test performance, we need a container which is
entirely dedicated to the service and its tests. Only then will we get the guarantee that, at each test
run, the container is configured with no less and no more deployments than are required for that
run.
To make this viable and to address problems of development efficiency, test share-ability, and test
automation, we need container and tests to share the same execution environment. In other
words, we need a container that can be embedded in the tests, i.e. can be configured, started,
stopped, and used from within the tests.
If container and tests run in the same JVM, they will “see” the same classpath resources,
including service code. This means that changes applied in a coding session from the IDE will be
immediately “live” within the container, i.e. we will not need to explicitly build and deploy them
before test execution (manually or not, from within the same or other development environment,
we just won't).
We will still have requirements for explicit deployment and undeployment but these will be limited
to resources which should not or cannot be on the classpath, such as:
• WSDL interfaces, scripts, and various forms of configuration files that may have changed since
the last test execution
• libraries and GARs of other services that the test requires to be co-deployed with the target
service.
Like the container, we need to embed these light-weight deployments and un-deployments in the
tests, as pre-conditions and post-conditions to test execution.
As an important side-effect of the single JVM assumption, the tests will be able to obtain
references to service and gCore components as these run in the container, including port-type
implementations, service contexts, resource homes, the GHNContext, resource serialisations,
etc. This means that we will be able to exercise not only client-driven tests but also service-side tests. We will be able to make assertions on the state of those components and on the state of
the container within the tests.
This potential will lead to service designs that are more testable than they are now. We will be encouraged to design our service components so that mock dependencies can be injected into them during the execution of the tests. In other words, we will be able to lift well-known unit testing techniques into the context of integration testing, circumventing the obstacles to unit testing that I have discussed above.
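To make the point concrete, here is a purely hypothetical sketch of such a design (none of these names belong to gCore or to our services): a port-type holds its collaborator behind an interface and exposes a package-visible setter, so that an in-container test can swap in a mock before exercising the service.

interface Publisher { void publish(String event); }     // hypothetical collaborator

class ISPublisher implements Publisher {                 // production implementation
    public void publish(String event) { /* publish to the Information System */ }
}

class HarvestingPortType {

    private Publisher publisher = new ISPublisher();     // production default

    void setPublisher(Publisher p) { publisher = p; }    // injection point for tests

    public void harvest() {
        // ...service logic..., then publish through the collaborator
        publisher.publish("harvest-completed");
    }
}

A test would obtain the running port-type from the container, inject a mock Publisher, invoke harvest(), and assert that the expected event was published.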
My Container
Over the past couple of months we have worked towards meeting the requirements above against
our current technologies.
The idea was to produce tools that supported an embedding of Globus and simplified its use for
in-container testing. Since we wanted to ultimately deliver a dedicated and friendly container, we
initially code-named the project Pasqualino. Eventually, we settled on a slightly more neutral
name, my-container.
The first difficulty for us was that, true to its age, the Globus container wasn't born to be easily embedded. One way in which Globus customises Axis is by hard-wiring the choice of its file-based initialisation mode: the container comes up and seeks evidence of service deployments in distinguished folders on the file system. This means that an embedded Globus container nonetheless needs a physical installation, i.e. it cannot exist purely in memory.
The best that we could do was to target a minimal installation of the container: my-container is distributed as a tarball of about 50KB and expands into 0.5MB of disk space. It does not contain a single script, pre-deployed service, or, in fact, library. It contains the necessary support for embedded deployment (see later), and configuration to start up on localhost:9999, in DISCONNECTED mode for a number of known infrastructures, devNext by default. Differently from standard container distributions, it also embeds the storage of service state, for convenience of post-test inspection during coding sessions.
With this distribution, my-container discourages deployments and startups which are not
defined in code, and it requires explicit control of the classpath.
[Figure: the my-container installation, empty, with embedded storage and build support]
The distribution of my-container is built in Etics, where it is available for manual download. Underneath, however, it uses Maven as the build system and is published at least every night in our
Nexus repository at http://maven.research-infrastructures.eu/nexus
We can manually download my-container and install it as a test resource of our components,
excluding it from version control. Even better, we can automate the download and installation of
my-container during the build of our components. As I will show later, we can achieve this
automation for our standard, Ant-based components. We can equally achieve it for the new breed
of Maven-based components that we are slowly integrating within gCube in a parallel line of
work.
Runtime Library
The distribution of my-container satisfies Globus requirements for a physical installation. Next,
we needed to offer support to control it from within the tests. This support is provided by a
dedicated library, which we refer to as the runtime library of my-container. Like the distribution,
the library is built every night in Etics and, as a Maven-based component, is available in our
Nexus repository.
We can download the library and embed it in our components as a test library. We can submit it to
version control, or keep it outside our projects and explicitly depend on it for Etics builds. While
no decision has been made yet, it is possible that future versions of gCore may embed it, as much
as it now embeds Ant and JUnit libraries.
The runtime library supports two modes of interaction with my-container:
• a low-level mode whereby we interact directly with my-container through an API
• a high-level mode whereby we use annotations and JUnit4 extensions to delegate interactions
with my-container
The high-level mode is recommended, as it makes test code simpler to write and read. The low-
level mode can be used for use cases which are not covered by the high-level mode. The choice
does not need to be exclusive, as the two modes can be combined within a single test or test suite.
We start from the beginning, looking at the low-level mode first.
The low-level API
The basic facility that we find in the runtime library is MyContainer, an abstraction over the local
installation of my-container which we use to interact with the container from within our tests.
The standard usage pattern is as follows:
• create an instance of MyContainer
• invoke the method start() on it
• write the test code proper, interacting with the instance if required by the test
• invoke the method stop() on it
In the first step we identify the local installation of my-container and get a chance to configure
the container for the test, including the service or services that we wish to deploy in it for testing
purposes.
In the second step we block until the container reaches:
• the state CERTIFIED
• the state DOWN or FAILED
• none of the states above within a configurable timeout
In the first case start() returns successfully and the test can progress further, in the latter two
start() raises an exception that fails the test.
In the third step, we write test code that will run “in” the container, i.e. in the same runtime that we
expect for service code. We can access the GHNContext to inspect its state, deploy some plugins,
register some listeners, etc. We can also access the port-types, contexts, homes, etc. of the
services that we have deployed in the container. We can then get to the usual testing business, i.e.
make assertions about the state of these components and verify the occurrence of expected
interactions.
With the final step, we stop the container and perform some required cleanup.
Consider the simplest of examples:
MyContainer container = new MyContainer();
container.start();
container.stop();
Since we instantiate MyContainer without any configuration, the container will start up with
defaults:
• the installation will be expected in a directory my-container under the working directory
• the container will start on port 9999
• no service will be deployed in it
• the startup timeout will be 10 seconds
Since we also specify no testing code proper, the container will be stopped as soon as it reaches
an operational state (just READY in this case, as there are no deployments that require
certification).
Deployments
While the three lines above may serve as a “smoke test” for gCore itself, they are of little use for
service testing. To test a service, we need to be able to deploy it into my-container before startup. And to do so from within test code, we need to model the deployment unit of Globus, the
Grid Archive.
The runtime library includes the Gar class for this purpose. We can create a Gar instance and
point it to all the project resources that we wish to deploy in my-container, from Wsdls and supporting XML Schemas, to configuration files (JNDI, WSDD, profile.xml, registration.xml, ...), to libraries. Assuming our standard project layout, for example, we can
assemble a Gar of the service under development as follows:
Gar myGar = new Gar("my-service").addInterfaces("schema").addConfigurations("etc");
Here we use the builder API of the Gar class to mimic in code what we normally do with dedicated
Ant targets during the build of our service. Differently from those targets, however, the
programmatic construction of the archive is independent of any project layout or underlying build
system. We add project resources by providing their paths relative to the project root (e.g. schema
and etc). With a project layout that aligns with Maven conventions, for example, we may write
instead:
Gar myGar = new Gar("my-service").addInterfaces("src/main/wsdl").
addConfigurations("src/main/resources/META-INF");
Before we get to actually deploying the Gar, there are few things to notice:
• we provide a name for the Gar under which its resources will be deployed (e.g. my-service).
As usual, this name must be consistent with relative paths to deployed Wsdls that we specify in
the deployment descriptor of the service. Our standard Ant buildfiles use package names for the purpose and our deployments reflect this convention. We would then create Gar instances accordingly, e.g. new Gar("org.acme.sample")...
• we provided relative paths to whole directories of resources. This is a convenient way to add
resources en-masse to the Gar, which matches well our current project layouts. If need arises,
however, we can point to individual resources using methods such as addInterface() and
addConfiguration(), which expect relative paths to individual files (e.g. addInterface("config/profile.xml")). This supports non-conventional project layouts. Equally, it allows us to “override” some of the standard resources, for exploratory programming (e.g. to test the service with a non-standard profile). In these use-cases, we can first add directories of standard resources and then add dedicated test resources that override some of the standard ones (see the sketch after this list).
• we have not added libraries to the Gar. This is because the service code is expected to be
already on the classpath, including generated stub classes (as usual, these are placed on the
classpath after previous building steps). As we discussed above, this code is immediately “live” in my-container. In some cases, however, the tests may require the deployment of libraries that
are not on the classpath (e.g. service plugin implementations). In these use-cases, we can use
the methods addLibrary() and addLibraries() to add these runtime libraries to the Gar.
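As a sketch of the override use-case above (paths and file names are illustrative), we would add the directory of standard resources first and then the overriding test resource:

Gar myGar = new Gar("my-service")
    .addInterfaces("schema")
    .addConfigurations("etc")                   // standard resources first
    .addConfiguration("test/profile.xml");      // test copy overrides the standard profile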
Once we have assembled the Gar for deployment, we can pass it to the constructor of
MyContainer. When we invoke start(), the Gar is deployed in my-container before the
container is actually started. For example:
Gar myGar = new Gar("my-service").addInterfaces("schema").addConfigurations("etc");
MyContainer container = new MyContainer(myGar);
container.start(); //gar deployed at this point
Deployment goes through the steps we normally observe during a build. Underneath, in fact,
MyContainer invokes programmatically the same Ant buildfiles which are normally found in full-
blown container installations and which are retained in my-container (this ensures consistency
with external build processes). Wsdls will be extended with Globus providers and binding details,
deployment descriptors and JNDI files will be re-named and post-processed for expression filtering
(e.g. @config.dir), resources will be placed in the container in the usual places (e.g. share/
schema/my-service, lib, etc/my-service), undeployment scripts will be generated
(undeploy.xml), ... Accordingly, we get a first form of feedback about the service under
development, even before we’ve exercised any piece of service functionality. If the container starts
up then our service has deployed correctly, otherwise we have made some mistake in the
configuration of the service which we rectify straight away.
Notice that we can deploy many Gars at once and there is no minimum requirement for what we
put in each individual Gar. This may be useful when we need to deploy auxiliary libraries, as we
can assemble the standard Gar for the service, a separate Gar for the auxiliary libraries, and then
deploy the two Gars together, e.g:
Gar myGar = new Gar("my-service").addInterfaces("schema").addConfigurations("etc");
Gar auxGar = new Gar("my-plugin").addLibrary("test/my-plugin.jar");
container = new MyContainer(myGar,auxGar);
container.start();
We can also construct a Gar instance from an existing archive:
Gar existingGar = new Gar("test/somegar.gar");
This is useful if we wish to place our tests outside the service under testing, in a separate module
that assumes that the archive of the service has been previously built and is available as a test
resource. It is also useful if the test requires the service to be deployed in my-container along
with other services, e.g.:
Gar myGar = new Gar("my-service").addInterfaces("schema").addConfigurations("etc");
Gar codeployedGar = new Gar("test/other-service.gar");
container = new MyContainer(myGar, codeployedGar);
container.start();
Port-types and Endpoint References
After deployment and container startup, the test proper can begin. For example, a smoke test that
should be part of the test-suites of all our services is the following:
Gar myGar = ...
container = new MyContainer(myGar);
container.start();
assert(ServiceContext.getContext().getStatus() == GCUBEServiceContext.Status.READIED);
container.stop();
This test inspects the state of the service to ensure that it has come up correctly in my-
container. Next, we will want to test the methods of the service API. We can do this in either one
of two ways:
• by invoking directly the methods of some port-type implementation (internal tests)
• using the service stubs, as a client would normally do (external tests)
For internal testing, we need access to the implementation of the port-type. As this is instantiated
and controlled by Globus, we cannot access it from other service components but need to ask
MyContainer for it, e.g.:
Stateless pt = container.portType("acme/sample/stateless", Stateless.class);
...pt.about(...)...
Here we pass the name of the port-type to the container ("acme/sample/stateless", as it would be specified in the deployment descriptor of the service), along with the class of the instance that we expect back (Stateless). We then directly invoke a method on the port-type that we wish to test (about()).
For external testing, we need an endpoint reference to the port-type. Again, we ask MyContainer
for one:
EndpointReference epr = container.endpoint("acme/sample/stateless");
StatelessPortType pt = new StatelessServiceAddressingLocator().getStatelessPortTypePort(epr);
...pt.about(...)...
Clearly, external tests give more feedback than internal ones, at the cost of slightly slower
execution times (but remember, we are using localhost!). They flag any problems we may have
with input and output serialisations. If we experience any such problem during development, we
may want to temporarily enable org.apache.axis.utils.tcpmon on a port other than the
container’s, so as to inspect the serialisations directly. In this case, we need an endpoint reference
configured for the monitored port, e.g. 9000:
EndpointReference epr = container.endpoint("acme/sample/stateless", 9000);
Once we have sorted the problem out, we can revert the code to use endpoint references that point to the container’s port, as we will not have or want the TCP monitor running when the tests are executed non-interactively during build processes.
Based on these basic facilities, the precise actions that we take in our tests depend on how we
designed the service and on our ingenuity. The possibilities are actually endless. If we need to, we
can obtain from MyContainer access to key locations in the container. For example:
• configLocation() gives us access to the configuration directory of my-container. This
allows us to override key configuration files before we start the container (e.g. add a
ServiceMap, deploy a custom GHNConfig.xml file, enable security, etc..)
• storageLocation() gives us access to the storage directory of my-container, where we
can find the serialisations of any stateful resources that we may have created during the test
(e.g. to confirm the creation of such serialisations)
Other key locations are also available through MyContainer (location(), libLocation(),
deploymentsLocation()), though these are used primarily by MyContainer itself and are
unlikely to be targeted by our tests.
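As a sketch of the first use-case (this assumes that configLocation() returns a java.io.File; the file names are illustrative), we could deploy a custom GHN configuration before starting the container:

import java.io.File;
import java.nio.file.Files;
import java.nio.file.StandardCopyOption;

MyContainer container = new MyContainer(myGar);

// copy a test-specific GHN configuration into the container before startup
File target = new File(container.configLocation(), "GHNConfig.xml");
Files.copy(new File("test/GHNConfig.xml").toPath(), target.toPath(),
           StandardCopyOption.REPLACE_EXISTING);

container.start();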
Overall, we can access in principle any gCore component and service component that may enable
us to exercise the intended behaviour of the service under testing.
Logging
One immediate advantage of running tests and container in the same JVM is that the logs emitted
by either are merged in a single log. This gives us a full picture of the execution, a picture that can
be delivered to the console of our IDE.
Building on this potential, the distribution of my-container includes a log4j.properties
configuration file, which is loaded up dynamically as soon as we instantiate MyContainer. In it,
the loggers used by Globus, Axis, and gCore are configured to append to the console (only
warnings in the first two cases). So, we do not need to take any action to find the container’s logs in, say, our Eclipse console.
As a further convenience, log4j.properties in my-container also includes configuration for loggers called test. This gives us a configuration-free way to log from within our test code. For example, using such a logger in test code as exemplified below:
private static Logger logger = Logger.getLogger("test");
...
@Test
public void someTest() throws Exception {
    ...
    logger.info("in test!");
    ...
}
would result in logs like the following:
[TEST] 14:17:50,086 INFO test [main,main:549] in test
Of course, this leaves out all the logs of the service under testing. To include them, we need to
place our own log4j.properties on the test classpath and follow standard Log4j configuration patterns. For example, if the service uses loggers called org.acme...., then the configuration could look like the following:
log4j.appender.ROOT=org.apache.log4j.ConsoleAppender
log4j.appender.ROOT.layout=org.apache.log4j.PatternLayout
log4j.appender.ROOT.layout.ConversionPattern=%d{HH:mm:ss,SSS} %-5p %c{2} [%t,%M:%L] %m%n
log4j.rootLogger=WARN,ROOT
log4j.appender.ACME=org.apache.log4j.ConsoleAppender
log4j.appender.ACME.layout=org.apache.log4j.PatternLayout
log4j.appender.ACME.layout.ConversionPattern=[ACME] %d{HH:mm:ss,SSS} %-5p %c{2} [%t,%M:%L] %m%n
log4j.category.org.acme=TRACE,ACME
log4j.additivity.org.acme=false
The end result is that the console will merge logs from my-container, logs from the tests, and
logs from the service under testing, while still showing the provenance clearly. My personal
experience is that this merging proves extremely useful during debugging.
Test Isolation and Execution Performance
An important role of MyContainer is to promote the isolation of our tests. To this end,
MyContainer takes a number of actions on container startup, all of which are geared to wipe out
any form of state that my-container may have accumulated in previous tests. This discourages
us from basing some tests on the outcome of other tests, even when we can exert control over the
test order. In particular, MyContainer will:
• restore the default configuration of the container;
• clean the storage directory of any stateful resource serialisation;
• undeploy any Gar which is not required by the current test.
Notice that these actions are taken before the tests start, rather than once they have completed. There are at least two important justifications for this timing choice.
Firstly, during coding sessions, it allows us to inspect the state of the container as left at the end of the
tests. In particular, we can confirm our expectations as to the deployed resources and the stateful
resources that may have been created. Since my-container is installed within the service
project, we can easily do so from within our own IDE.
Secondly, MyContainer can optimise container start-up by avoiding unnecessary deployments. If
the resources in a Gar required by the test have not changed since their last deployment, re-
deploying the Gar is happily avoided, as shown in the logs:
[CONTAINER] ... INFO mycontainer.MyContainer ... skipping deployment of sample-service because it is unchanged
The optimisation is significant, as deployments are easily the most time-consuming operations
during the execution of a test, especially when services have multiple port-types and a large
number of operations. Without any deployments to perform, my-container will start in less than 3 seconds, true to the
promise that an embedded container will make for very efficient interactive testing during coding
sessions.
To detect change, Gar instances keep track of the time of last modification of their resources.
Whenever we add a resource or a directory of resources to the Gar, the resource which has most recently changed provides the time of last modification of the whole Gar. MyContainer then
compares this time with the time in which a Gar with the same name was last deployed, which is
the time of last modification of the undeploy.xml file for that Gar.
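In code, the change-detection rule amounts to something like the following sketch (the helper names are hypothetical, not the actual my-container implementation):

import java.io.File;
import java.util.List;

// redeploy a Gar only if some of its resources changed after its last deployment
static boolean needsRedeployment(List<File> garResources, File deployDir, String garName) {
    long garLastModified = 0;
    for (File resource : garResources)            // latest change among the Gar's resources
        garLastModified = Math.max(garLastModified, resource.lastModified());
    // the time of last deployment is the last modification of the Gar's undeploy.xml
    File undeployScript = new File(deployDir, garName + "/undeploy.xml");
    return !undeployScript.exists() || garLastModified > undeployScript.lastModified();
}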
For all MyContainer’s help with test isolation and performance, the actual degree of test isolation remains our responsibility. For maximum isolation, we could use a different instance of MyContainer in each test. This, however, comes with its own drawbacks. First, there is a performance
issue. While my-container starts quickly, especially when deployments are optimised away, we
are nonetheless talking seconds rather than milliseconds. Second, Globus and gCore make
heavy use of static variables, and this may reintroduce issues of test isolation, which we wanted to
reduce in the first place.
I believe we can obtain a good compromise between test isolation and test performance by sharing
a single instance of MyContainer across a suite of strictly related tests (e.g. create tests, read
tests, write tests, and so on). All the tests we place in such a suite share the same instantiation and configuration of the container. We pay the startup price once, and then execute each test in
the suite in milliseconds, i.e. in timings that we’ve come to associate with unit testing (and even if
we test service operations externally, through stubs).
JUnit Embedding
Where are we going to place our test code? We could put it in the main() method of a test client,
of course, but the recommended approach is to embed it in a more suitable testing framework,
such as JUnit. By doing so, we get a clear structure, proper integration with IDE and build tools,
and a host of testing facilities which are de facto standards.
One mapping of our testing pattern onto JUnit is the following:
public class MyTestSuite {

    static MyContainer container;

    @BeforeClass
    public static void startup() {
        Gar myGar = ...
        container = new MyContainer(myGar);
        container.start();
        ...
    }

    @Test
    public void someTest() throws Exception {...}

    @Test
    public void anotherTest() throws Exception {...}

    ...

    @AfterClass
    public static void shutdown() {
        container.stop();
        ...
    }
}
Here, the instance of MyContainer is shared across the tests of a suite, as per the approach
recommended above. The static methods annotated with JUnit’s @BeforeClass and @AfterClass are used to start and stop the container, respectively. Methods annotated
with JUnit‘s @Test are the individual tests of the suite.
Annotation-driven Tests
The JUnit skeleton above can be taken as boilerplate code for our test suites with my-
container. The runtime library builds on the extension facilities provided by JUnit to spare us
this boilerplate and, more generally, to avoid most of the interactions with MyContainer that we have presented so far (creation, deployment, start/stop, obtaining port-type implementations and endpoint references, ...). This is the high-level mode supported by the runtime library.
When we work in this mode, we simply annotate the test-suite as follows:
@RunWith(MyContainerTestRunner.class)
public class MyTestSuite {...}
MyContainerTestRunner is a JUnit 4 test runner which replaces the default one to:
• create, configure, and start an instance of MyContainer before any other code in the test suite
is executed by JUnit
• inject into the test-suite any port-type implementation or endpoint reference which we may need
• clearly name the output of any test with the name of the test itself
• stop the underlying instance of MyContainer after any other code in the test suite is executed
by JUnit
For example, our skeleton now takes this simpler form:
@RunWith(MyContainerTestRunner.class)
public class MyTestSuite {

    @Test
    public void someTest() throws Exception {...}

    @Test
    public void anotherTest() throws Exception {...}

    ...
}
This does not mean that we cannot have @BeforeClass and @AfterClass methods, only that we no longer need them just to start and stop the container.
Of course, we still need to be able to provide our Gar/s to the underlying MyContainer. However,
we can do so indirectly now, by exposing static fields appropriately typed and annotated. Our test
runner will recognise these fields and pass the information they provide on to the instance of
MyContainer that the runner handles on our behalf, e.g.:
@RunWith(MyContainerTestRunner.class)
public class MyTestSuite {

    @Deployment
    static Gar myGar = new Gar("my-service").addInterfaces("schema").addConfigurations("etc");

    @Test
    public void someTest() throws Exception {...}

    @Test
    public void anotherTest() throws Exception {...}

    ...
}
Here, we have used the @Deployment annotation to flag a static field of Gar type to the runner. The
runner will use it when it creates the instance of MyContainer. Since we can deploy as many
Gars in my-container as we need to, we can have multiple fields annotated with @Deployment
and of type Gar in our test-suite.
Similarly, we may define static fields for port-types and endpoint references and have the runner
set their values for us, e.g:
@RunWith(MyContainerTestRunner.class)
public class MyTestSuite {

    @Deployment
    static Gar myGar = new Gar("my-service").addInterfaces("schema").addConfigurations("etc");

    @Named("acme/sample/stateless")
    static Stateless pt;

    @Named("acme/sample/stateless")
    static EndpointReference epr;

    @Test
    public void someTest() throws Exception {
        ...pt.about(...)...
    }

    @Test
    public void anotherTest() throws Exception {
        StatelessPortType ptStub =
            new StatelessServiceAddressingLocator().getStatelessPortTypePort(epr);
        ...ptStub.about()...
    }

    ...
}
Here we have reused @Named from the JSR-330 standard to require the injection of a given port-type implementation and of an endpoint reference for it. The runner will pick up on these annotations and set the fields’ values accordingly, well before the suite uses them in its test methods.
Through the same means, and if so required, the runner can also inject the underlying instance of
MyContainer in the test suite, e.g.:
@Inject
static MyContainer container;
where @Inject is also borrowed from JSR-330 to flag requests for (unqualified) value injections.
Having the instance of MyContainer within the test suite allows us to combine low-level and high-
level modes of interaction within the same test-suite. In particular, we can fall back to the API of MyContainer when our tests need more staging flexibility and sophistication than annotations can achieve.
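A minimal sketch of such a mixed suite (assuming, as before, that storageLocation() returns a java.io.File):

@RunWith(MyContainerTestRunner.class)
public class MixedModeSuite {

    @Deployment
    static Gar myGar = new Gar("my-service").addInterfaces("schema").addConfigurations("etc");

    @Inject
    static MyContainer container;                // injected by the runner

    @Test
    public void stateIsSerialised() throws Exception {
        // fall back to the low-level API to assert on the container's storage
        org.junit.Assert.assertTrue(container.storageLocation().isDirectory());
    }
}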
Non-Default Configuration
In all the examples above, we’ve relied on defaults for the location of my-container, the port on
which it listens for requests, and the startup timeout. However, we may wish to override some of
these defaults to have more control on the install location, or to make for shorter or longer startup
times or, less commonly, to target a different port (for proxying issues or other regulations).
To do this, we can use the other constructors of MyContainer:
• MyContainer(String, Gar... gars) is dedicated to non-default locations, which is the most common scenario for overriding defaults. Note that the input paths are resolved with respect to the working directory, so as to discourage absolute paths, which compromise the reproducibility of tests (e.g. new MyContainer("src/main/test/resources", ...));
• MyContainer(Properties, Gar... gars) is the most generic of all constructors and allows us to configure all the available properties in a Properties object, or only those we care to override. Use the constants in the Utils class to name the properties to be overridden (e.g. Utils.STARTUP_TIMEOUT_PROPERTY).
Finally, note that all MyContainer constructors, including the no-arg constructor, will try to
complement the configuration properties that are implicitly or explicitly provided in code with those
that may be found on the classpath in a file called my-container.properties.
For obvious reasons, pushing non-default configuration in one such file is preferred over hard-coding it in test code. This is particularly the case when we work with the annotations discussed above, as the test runner will always create a MyContainer instance through its no-arg constructor. The property file thus allows us to override the defaults without renouncing the high-level mode of interaction with the runtime library.
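For illustration only, such a file might look like the following; the property names here are hypothetical, and the actual ones should be read off the constants in the Utils class:

# my-container.properties (hypothetical property names, check the Utils constants)
container.port=9000
startup.timeout=30000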
Test Automation
Controlling my-container and the deployment process along the lines illustrated so far satisfies
the requirement for an efficient test and debug model during interactive coding sessions, typically
from within the IDE. Equally, it delivers on the promise for increased share-ability and
reproducibility of tests. In turn, this creates the basis for test automation, i.e. the possibility of
executing our tests during local or remote build processes. As we have already emphasised, test
automation is key to the development process and is one of the main goals behind the work on
my-container.
Given the facilities of the runtime library, automating the tests is a matter of build configuration. As such, it is rather sensitive to the build system that we use, be it Ant, Maven, or other. In all cases,
however, we are after the possibility to:
• automatically download the distribution of my-container from a remote repository, and install it in
the project prior to launching the tests. Since MyContainer gives us good test isolation, we
want this to happen only if previous builds have not done it already;
• trigger test compilation and execution straight after the compilation of service code, including generated stub classes, with the implication that the build ought to fail whenever a test does not pass.
Ant Automation
Let’s first see how we may achieve this automation within our standard Ant buildfiles. Our default
buildfiles have roughly the following target structure (up to target names):
[Diagram: default target structure, with targets init, processWSDLs, compile, deploy, package, gar on one path, and generateStubs, compileStubs, deployStubs, stubs on another]
This structure focuses on the independent generation of two types of build artifacts:
• a Gar archive which packages service binaries, configuration, and Wsdl interfaces
• a Jar archive with binaries of stub code generated from Wsdl interfaces
Since we do not need to test generated code, we introduce testing only in the process of generating the Gar archive. (As usual, an up-to-date stubs Jar must be on the test classpath for both internal and external testing.) One way of doing this leads to the following modified target structure:
[Diagram: modified target structure, adding the targets initTest, compileTests, download-my-container, install-my-container, uninstall-my-container, and test]
We have interposed test execution (test) between the compilation and packaging of service code
(existing targets package, compile), i.e. as soon as possible. Executing the tests requires the
compilation of the tests (compileTests) and the installation and download of my-container
(install-my-container, download-my-container). Of course, compiling the suites
requires compiling the service code first (existing target compile). Finally, the installation of my-container can be removed at any point (uninstall-my-container) and the configuration of most test-related targets is centralised in initTest.
An XML serialisation of this structure may look as follows:
<!-- run test suites -->
<target name="test" depends="compileTests,install-my-container" unless="test.skip">...</target>
<!-- compile test suites -->
<target name="compileTests" depends="compile,initTest" unless="test.skip">...</target>
<!-- install my-container -->
<target name="install-my-container" depends="initTest" unless="test.skip">...</target>
<!-- download my-container if not installed -->
<target name="download-my-container" depends="initTest" unless="my-container.installed">...</target>
<!-- uninstall my-container -->
<target name="uninstall-my-container" depends="initTest">...</target>
<!-- package service code -->
<target name="package" depends="test">...</target>
Notice that target dependencies are organised in such a way as to minimise build time in case of failures; e.g. when the service fails to compile the tests are not compiled, and when the tests fail to compile, my-container is not downloaded or installed.
Notice also that we can disable all the test-related targets on demand, by setting the test.skip
variable:
.../sample-service> ant -Dtest.skip=true
We could have taken the opposite route here and decided to enable test-related targets on
demand, using something like if="test.do" on the test-related targets in place of unless="test.skip". The choice depends pretty much on the discipline that we want to
impose upon ourselves.
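For illustration, the opt-in variant would guard each test-related target like this, so that tests run only when we pass -Dtest.do=true on the command line:

<target name="test" depends="compileTests,install-my-container" if="test.do">...</target>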
With the target structure in place, let us look at the individual targets, in order of their execution:
<target name="initTest" unless="test.skip">
<!-- my-container installation and download directories -->
<property name="my-container.install.dir" value="${basedir}" />
<property name="my-container.download.dir"
value="${my-container.install.dir}/.my-container" />
<property name="my-container.dir" value="${my-container.install.dir}/my-container" />
<!-- test source directory -->
<property name="test.src.dir" value="test" />
<!-- test library directory -->
<property name="test.lib.dir" vdoalue="test-lib" />
<!-- test binary directory -->
<property name="build.tests.class.dir" location="${build.dir}/test-classes" />
<!-- test reports -->
<property name="test.reports.dir" value="${build.dir}/test-reports" />
</target>
In initTest we specify the key locations for testing:
• where my-container should be downloaded and where it should be installed. For installation,
we choose the project root, where it will be automatically discovered by MyContainer without
the immediate need to define my-container.properties or to pass installation paths to
MyContainer constructors. Keeping the installation outside build.dir saves us from re-downloading my-container after each cleanup. For similar reasons we also download my-container under the project root, but we choose a directory that stays hidden in IDEs. Notice that the install and download directories should be added to the svn:ignore list at commit time;
• where the test sources and the test libraries are;
• where the test classes and test reports ought to be written out. Since these outputs are transient, we place them under build.dir, so as to have them removed at each cleanup.
Next, we move to the management of my-container:
<target name="install-my-container" depends="initTest" unless="test.skip">
<available file="${my-container.dir}" property="my-container.installed" />
<antcall target="install-my-container" />
</target>
<target name="download-my-container" depends="initTest" unless="my-container.installed">
<mkdir dir="${my-container.download.dir}" />
<get src="http://maven.research-infrastructures.eu/nexus/service/
local/artifact/maven/redirect?r=gcube-releases&g=org.gcube.tools&a=my-
container&v=RELEASE&e=tar.gz&c=distro"
dest="${my-container.download.dir}/my-container.tar.gz"
usetimestamp="true" />
<gunzip src="${my-container.download.dir}/my-container.tar.gz"
dest="${my-container.download.dir}" />
<untar src="${my-container.download.dir}/my-container.tar" dest="${basedir}" />
<target name="uninstall-my-container" depends="initTest">
<delete dir="${my-container.dir}" />
<delete dir="${my-container.download.dir}" />
</target>
In install-my-container we delegate to download-my-container, indicating whether an installation already exists or not. If it does not exist already, download-my-container fetches the latest release of my-container from our Nexus repository and unpacks it. uninstall-my-container cleans up both installation and download.
Now we move to compiling the tests:
<target name="compileTests" depends="compile,initTest" unless="test.skip">
<mkdir dir="${build.tests.class.dir}" />
<path id="test.classpath">
<path refid="service.classpath" />
<fileset dir="${test.lib.dir}">
<include name="*.jar" />
</fileset>
<pathelement location="${build.class.dir}" />
<pathelement location="${build.tests.class.dir}" />
</path>
<javac srcdir="${test.src.dir}" destdir="${build.tests.class.dir}"
classpathref="test.classpath"
includeantruntime="false" />
</target>
Compilation occurs against a classpath that adds the test libraries, the service binaries, and the test
binaries to the classpath already used to compile service code. Here we use a reference to
another path (service.classpath), though existing buildfiles may not name the service
classpath explicitly (use copy and paste then!).
What test libraries should be available? At the very least, a version of the runtime library of my-container. Since we will want to run JUnit 4 tests, we will also need ant-junit.jar, as it is included in any installation of Ant from 1.7.1 onwards (older versions will not work). On the other hand, we do not need to worry about JUnit binaries, which are bundled in a full distribution of the
container. Of course, any other test utility, framework (e.g. mock libraries), or dependency that we
may be using in the tests goes in test.lib.dir.
Finally we get to test execution:
<target name="test" depends="compileTests,install-my-container" unless="test.skip">
<mkdir dir="${test.reports.dir}" />
<junit printsummary="yes" haltonfailure="true" fork="yes"
dir="${basedir}" includeantruntime="false">
<classpath>
<pathelement location="${test.src.dir}" />
<path refid="test.classpath" />
</classpath>
<formatter type="brief"/> <!-- usefile="false" to get logs in console -->
<batchtest toDir="${test.reports.dir}">
<fileset dir="${test.src.dir}">
<include name="**/*Test.java" />
<include name="**/*Tests.java" />
</fileset>
</batchtest>
</junit>
</target>
We execute the tests in a separate JVM and against a classpath entirely under our control. In particular, we do not use the local Ant runtime (which may vary) and prefer instead the Ant support included in our standard container distribution. We add the test sources here, so as to pick up all the resources that may have been placed there to be loaded by the tests (including my-container.properties, log4j.properties, ...).
And that’s it. Launching this buildfile from the console or from within the IDE will show us that, whenever we do not explicitly disable it, the execution of our test suites has become an integral part of our builds. This will help us confirm that we have not introduced regression errors as we re-factor the code, before we commit the changes and Etics integrates them into gCube every night.