This document discusses strategies for dealing with increasing amounts of verification data from testing Java implementations. It describes the scope of testing across multiple Jenkins servers for projects like AdoptOpenJDK. A Test Result Summary Service (TRSS) was created to monitor and filter results. Initial deep learning experiments were able to classify test outputs with 98.8% accuracy. Plans going forward include change-based testing, bug prediction services, enhancing TRSS with analytics, and using AI techniques like fuzz testing and test generation.
Agile Open Source Performance Test Workshop for Developers, Testers, IT Ops (Clever Moe)
Training for Selenium, soapUI, Sahi, and TestMaker performance testing. Slide deck from the free webinar titled "Technical Training On The Agile Open Source Way To Load Test, Scalability Test, and Stress Test." Learn the Agile Open Source Testing way to load and performance test your web applications, Rich Internet Applications (RIAs built with Ajax, Flex, Flash, Oracle Forms, and applets), and SOAP and REST web services. This free webinar delivers a testing methodology, tools, and best (and worst) practices.
QA Fest 2015. Vladimir Primakov. Automating non-standard reporting from JI... (QAFest)
This document discusses using Google Sheets to create reports from Jira data by connecting to the Jira REST API. It outlines some weaknesses in Jira's native reporting and why Google Sheets is a good alternative. It then describes the key technologies used, including functions for configuring Jira parameters, fetching data via API calls, and parsing the JSON responses. Examples are provided of quality reports, time trends, pivots, and audit reports that can be created in this way in Google Sheets.
BenchFlow: A Platform for End-to-end Automation of Performance Testing and An... (Vincenzo Ferme)
BenchFlow is an open-source expert system providing a complete platform for automating performance tests and performance analysis. Not all developers are performance experts, but in today's agile environments they need to deal with performance testing and analysis every day. In BenchFlow, users define objective-driven performance tests using an expressive, SUT-aware DSL implemented in YAML. BenchFlow then automates the end-to-end process of executing the tests and delivering performance insights: deploying the system under test with Docker technologies, distributing the simulated user load across servers, handling errors, collecting performance data, and computing performance metrics and insights.
My talk for SPEC Research Group DevOps (https://research.spec.org/devopswg) about BenchFlow. Discover BenchFlow: https://github.com/benchflow
This document discusses test driven development for mobile applications. It compares the traditional development cycle to a test driven development cycle. It also discusses how using the Robolectric framework allows testing Android applications outside of an emulator, improving the speed of test driven development. Key benefits of test driven development mentioned include delivering functionality faster, improving code quality and confidence, and allowing more time for cleaning code and learning new tools.
The document describes the history and development of the "little Jenkinsfile" used to automate testing of the AdoptOpenJDK, OpenJ9, and Eclipse OMR projects. It outlines the principles of keeping the Jenkinsfile simple and learning from past lessons. Key aspects include testing across multiple platforms and versions, using open-source tools when possible, and whole-team involvement. The ecosystem now includes over 250,000 tests running on multiple servers.
When assertthat(you).understandUnitTesting() fails (Martin Skurla)
This document summarizes a presentation on advanced testing techniques. It discusses refactoring test code for improved readability, including using logical grouping and custom assertions. It also covers best practices for test naming conventions and avoiding common pitfalls. The presentation shows how integrating testing frameworks can enable features like data-driven and concurrent testing. Results from refactoring showed reductions in test code size and execution time, along with resolved issues.
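The custom-assertion idea above can be sketched in plain Java with no test framework; `Order` and `assertValidOrder` are illustrative names I am assuming for the example, not taken from the presentation:

```java
import java.util.List;

public class OrderAssertions {
    record Order(String id, List<String> items, double total) {}

    // One intention-revealing assertion replaces three scattered checks,
    // so every test reads at the domain level instead of the field level.
    static void assertValidOrder(Order order) {
        if (order.id() == null || order.id().isBlank())
            throw new AssertionError("order id must be set");
        if (order.items().isEmpty())
            throw new AssertionError("order must contain at least one item");
        if (order.total() < 0)
            throw new AssertionError("order total must be non-negative, was " + order.total());
    }

    public static void main(String[] args) {
        assertValidOrder(new Order("A-1", List.of("book"), 12.50)); // passes silently
        try {
            assertValidOrder(new Order("A-2", List.of(), 0.0));
            System.out.println("unexpected: empty order accepted");
        } catch (AssertionError expected) {
            System.out.println("empty order rejected: " + expected.getMessage());
        }
    }
}
```

The same grouping works inside JUnit or TestNG by throwing their assertion types instead of `AssertionError` directly.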
The document discusses progressive refinement in quality assurance (QA), from initial broad testing to more targeted testing. It provides examples of assessing existing QA capabilities, prioritizing what to test, designing a QA approach, prototyping tests, and refining tests. Specifically, it summarizes the AdoptOpenJDK Quality Assurance (AQA) project, which aims to test OpenJDK builds across platforms and versions using an automated three-layer testing framework that is continuously refined.
This document discusses various tools used for automated testing including Bugzilla, Testopia, and Jenkins. It provides overviews of each tool's features and how they integrate together in an automated testing environment. Key steps are outlined for setting up projects in each tool and configuring them to work together. Specifically, it describes how test cases can be retrieved from Testopia, executed via scripts on Jenkins, and results reported back to Testopia.
PuppetConf 2017: Test First Approach for Puppet on Windows - Miro Sommer, Hiscox (Puppet)
How do you apply Test Driven Development (TDD) and Behavior Driven Development (BDD) techniques to Puppet modules? Usually the module code comes first and the tests afterwards, but sometimes there are tests without value, written just for the sake of testing, or there are no tests at all. This causes problems when someone unfamiliar with the code has to make changes and has no confidence that the code will still work. We can prevent these issues with a test-first approach, using TDD or BDD to ensure that the code is always tested and that we write the right tests. This session will give you practical steps for writing tests first, as well as an overview and the benefits of TDD and BDD techniques. We'll talk about how to write RSpec unit tests before any Puppet code exists. We'll also look into Test Kitchen and how to write integration tests based purely on the acceptance criteria of a work task, with a focus on testing Puppet modules on Windows using the latest and greatest PowerShell BDD testing library, Pester.
Efficient JavaScript Unit Testing, March 2013 (Hazem Saleh)
This material on efficient JavaScript unit testing was presented by Hazem Saleh at the Egyptian Java Developer Conference held on 09 March 2013.
Automated Application Tests For Lotus Notes, UKLUG 2009 (maxistar)
Automated function and regression tests for Lotus Notes applications:
* Workflow tests: Test complete workflows, including automated switching of user IDs and integrated handling of e-mails and doclinks
* Load tests: Measure how your application will scale before you roll it out
* Performance measuring: Collect hard data on performance issues by sending executable test scripts to your users
* Cross-client testing: Check how your current applications will work in new Notes clients or Domino versions
* Powerful scripting language: Based on an enhanced VBScript dialect
* Log files: Record the results of test runs and store them back into your QA databases
OSGi Applications Testing - André Elia Assad, System Engineer, Cesar (mfrancis)
This document discusses testing OSGi applications. It introduces the OSGi test harness, which includes a test director bundle and target test bundles. The test director manages test suites and targets, while targets run tests on devices under test. The document also covers how the OSGi test framework relates to JUnit, guidelines for writing OSGi test cases focusing on robustness and performance, and common testing patterns like unit testing, bundle testing, and service contract testing. In conclusion, it notes that compliance programs cannot completely cover specifications and that OSGi testing focuses on functionality over robustness and usability.
The document discusses various techniques for testing Groovy and Java code using Groovy's built-in testing framework and mocking capabilities. It covers unit testing with GroovyTestCase, testing exceptions, and different approaches for mocking collaborators including using mockFor/stubFor, categories, ExpandoMetaClass, and maps. It also addresses integration testing Grails actions. The conclusion compares techniques for mocking Groovy versus Java classes and mentions emerging frameworks like Gmock.
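Groovy's map-based mocks have a rough plain-Java analogue: stubbing a collaborator interface with a lambda or anonymous class. The `PriceService` and `Checkout` names below are assumptions for illustration, not from the talk:

```java
// The collaborator we want to replace in tests.
interface PriceService {
    double priceOf(String sku);
}

// Class under test: depends only on the interface, never on a concrete service.
class Checkout {
    private final PriceService prices;

    Checkout(PriceService prices) { this.prices = prices; }

    double total(String... skus) {
        double sum = 0;
        for (String sku : skus) sum += prices.priceOf(sku);
        return sum;
    }
}

public class CheckoutTest {
    public static void main(String[] args) {
        // Stub the collaborator with a lambda: every SKU costs 2.0,
        // no real pricing backend is needed.
        PriceService stub = sku -> 2.0;
        Checkout checkout = new Checkout(stub);

        double total = checkout.total("a", "b", "c");
        if (total != 6.0) throw new AssertionError("expected 6.0, got " + total);
        System.out.println("stubbed total = " + total);
    }
}
```

In Groovy the stub could be a literal map like `[priceOf: { sku -> 2.0 }] as PriceService`; the Java lambda achieves the same isolation for single-method interfaces.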
Google AppEngine (GAE/J) - Introduction and Overview from a Java Guy (Max Völkel)
You know Java, but what is AppEngine? In this session Max will walk you from a brief introduction to Servlets to an overview of the Google AppEngine. We learn about the basics such as the data store (which is quite different from SQL), application versions, back-ends, the scheduler, instances, and logging. Services such as mail, URL fetch, and task queue are also explained. Since we have used GAE/J at Calpano since 2010, we also talk about costs and practical experience.
Stopping the Rot - Putting Legacy C++ Under Test (Seb Rose)
The document discusses introducing unit testing to legacy C++ code. It covers choosing a testing framework, writing initial tests and mocks, and various refactoring techniques like wrapping dependencies, extracting components, and adding non-intrusive C seams to facilitate testing. The goal is to incrementally make the code more testable while maintaining functionality through practices like test-driven development.
QUICK TEST PROFESSIONAL 8.2
Mercury Quick Test Professional 8.2 provides the industry's best solution for functional and regression test automation, addressing every major software application and environment.
You’re finally doing TDD, but your past mistakes are catching up with you. No matter what you do, you can’t get rid of the gaping black holes caused by your legacy code.
In this presentation, we learn about the causes of legacy code and the reasons it is so difficult to work with. Then we discuss various techniques to test untestable code, revive and simplify incomprehensible code, redesign stable yet untested code, and repair that rift we created in the time-space continuum.
QA Meetup at Signavio (Berlin, 06.06.19), Anesthezia
The document discusses establishing the architecture for an end-to-end testing project. It outlines key components like the core test structure following the Arrange-Act-Assert pattern, test data preparation, reporting with Allure, managing properties with Typesafe Config, dependency injection with Guice, executing tests on CI with Jenkins, and deploying test environments with Docker. The presenter will demonstrate establishing backend testing first before expanding to UI testing.
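The Arrange-Act-Assert structure mentioned above can be sketched in plain Java without any framework; `ShoppingCart` is an illustrative name assumed for the example:

```java
import java.util.ArrayList;
import java.util.List;

public class ArrangeActAssertExample {
    static class ShoppingCart {
        private final List<Double> items = new ArrayList<>();
        void add(double price) { items.add(price); }
        double total() { return items.stream().mapToDouble(Double::doubleValue).sum(); }
    }

    static void totalSumsAllItemPrices() {
        // Arrange: build the object under test and its inputs.
        ShoppingCart cart = new ShoppingCart();
        cart.add(3.0);
        cart.add(4.5);

        // Act: perform exactly one behaviour.
        double total = cart.total();

        // Assert: verify the observable outcome.
        if (total != 7.5) throw new AssertionError("expected 7.5, got " + total);
    }

    public static void main(String[] args) {
        totalSumsAllItemPrices();
        System.out.println("AAA test passed");
    }
}
```

Keeping the three phases visually separated, one blank line per boundary, makes each test read as a specification rather than a script.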
Towards Holistic Continuous Software Performance Assessment (Vincenzo Ferme)
In agile, fast, and continuous development lifecycles, software performance analysis is fundamental to confidently releasing continuously improved software versions. Researchers and industry practitioners have identified the importance of integrating performance testing into agile development processes in a timely and efficient way. However, existing techniques are fragmented: they do not account for the heterogeneous skills of users developing polyglot distributed software, or for their need to automate performance practices across the whole lifecycle without breaking its intrinsic velocity. In this paper we present our vision for holistic continuous software performance assessment, which is being implemented in the BenchFlow tool. BenchFlow enables performance testing and analysis practices to be pervasively integrated into continuous development lifecycle activities. Users can specify performance activities (e.g., standard performance tests) with an expressive Domain Specific Language for objective-driven performance analysis. Collected performance knowledge can then be reused to speed up performance activities throughout the entire process.
My talk from The International Workshop on Quality-aware DevOps (QUDOS 2017). Cite us: http://dl.acm.org/citation.cfm?id=3053636
If you had an opportunity to build an application from the ground up, with testability a key design goal, what would you do?
In this presentation, we will look at just such a situation - a major, two year rewrite of a suite of core business systems. We will discuss how a system looks when testability is as important as functionality - and what it looks like when quality concerns are part of the initial design. We will look at the role of test automation and manual test in a modern project, and look at the tools and processes. The session will conclude with a demo of the latest visual test automation tool from MIT and a Q&A.
This document provides rules for applying test-driven development (TDD) to legacy code. It discusses:
- Using a bottom-up (inside-out) approach rather than top-down when working with legacy code.
- Only testing the modified code, not writing tests for all use cases of legacy code.
- Testing requirements of new code rather than all behaviors and use cases.
- Injecting new testable code into legacy code without changing it.
- Breaking hidden dependencies to decrease coupling and increase cohesion.
The document provides examples of applying each rule through code snippets from a sample e-commerce application. It aims to help structure new code for testability while minimizing changes to the existing legacy code.
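The rule about breaking hidden dependencies can be sketched as follows; `InvoiceStamper` and its clock dependency are assumptions for illustration, not snippets from the document's e-commerce application:

```java
// The hidden dependency (the system clock) is extracted behind an interface...
interface Clock {
    long now();
}

class InvoiceStamper {
    private final Clock clock;

    // ...production code keeps its old behaviour via the default constructor...
    InvoiceStamper() { this(System::currentTimeMillis); }

    // ...while tests inject a deterministic clock, decoupling the test
    // from wall-clock time.
    InvoiceStamper(Clock clock) { this.clock = clock; }

    String stamp(String invoiceId) { return invoiceId + "@" + clock.now(); }
}

public class HiddenDependencyExample {
    public static void main(String[] args) {
        InvoiceStamper stamper = new InvoiceStamper(() -> 1000L); // fixed time
        String stamped = stamper.stamp("INV-42");
        if (!stamped.equals("INV-42@1000")) throw new AssertionError(stamped);
        System.out.println("deterministic stamp: " + stamped);
    }
}
```

The default constructor preserves the legacy call sites unchanged, matching the rule of injecting testable code without modifying existing behaviour.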
Performance Test Driven Development with Oracle Coherence (aragozin)
This presentation discusses test driven development with Oracle Coherence. It outlines the philosophy of PTDD and challenges of testing Coherence, including the need for a cluster and sensitivity to network issues. It discusses automating tests using tools like NanoCloud for managing nodes and executing tests remotely. Different types of tests are described like microbenchmarks, performance regression tests, and bottleneck analysis. Common pitfalls of performance testing like fixed users vs fixed request rates are also covered.
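The fixed-users vs fixed-request-rate pitfall mentioned above can be illustrated with a small back-of-the-envelope model; the names and numbers are illustrative assumptions, not from the presentation:

```java
public class WorkloadModels {
    // Closed model: N users issue requests back-to-back, so achieved
    // throughput = users / serviceTime. When the system slows down,
    // the load generator silently slows down with it.
    static double closedThroughput(int users, double serviceTimeSec) {
        return users / serviceTimeSec;
    }

    // Open model: requests arrive at a fixed rate regardless of latency.
    // If the system cannot keep up, the backlog (not the throughput)
    // exposes the regression.
    static double openBacklogPerSec(double arrivalRate, double serviceTimeSec) {
        double capacity = 1.0 / serviceTimeSec;     // requests/sec the server sustains
        return Math.max(0, arrivalRate - capacity); // requests queueing up each second
    }

    public static void main(String[] args) {
        // Suppose a regression doubles service time from 0.1s to 0.2s:
        System.out.println("closed, 10 users: " + closedThroughput(10, 0.1)
                + " -> " + closedThroughput(10, 0.2) + " req/s (load self-throttles)");
        System.out.println("open, 100 req/s: backlog " + openBacklogPerSec(100, 0.1)
                + " -> " + openBacklogPerSec(100, 0.2) + " req/s (regression visible)");
    }
}
```

With fixed users, the halved throughput can be misread as "the test ran fine, just slower"; the open model makes the overload unmistakable.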
With the proliferation of OpenJDK binaries for a business to choose from, one factor in determining the selection is quality. How do you know your choice is up to snuff? AdoptOpenJDK Quality Assurance (AQA) is an open and transparent verification story for OpenJDK binaries. A robust and adaptable test kit that can be utilized by any OpenJDK implementor, and represents the quality bar required by large-scale customers in enterprise environments. We test multiple freely available JDK implementations at AdoptOpenJDK and continue to refine this suite of tests to give the community access to high-quality binaries.
This document discusses using both "sledgehammer" and "fine brush" approaches to quality assurance (QA). It describes how the AdoptOpenJDK Quality Assurance (AQA) project has progressively refined its QA process from initial brute force techniques to more targeted testing. The AQA project aims to ensure quality across the broadest range of OpenJDK platforms through an open, community-driven approach that evolves alongside the JDK.
Java Performance Testing for Everyone - Shelley Lambert (Eclipse Day India)
This document discusses performance testing of Java applications and the tools used at AdoptOpenJDK. It provides an overview of the scope of testing done at AdoptOpenJDK across multiple Java implementations, platforms, and versions. It defines key performance metrics and the basic steps of performance testing. Tools used at AdoptOpenJDK are introduced, including PerfNext for running benchmarks, the Test Results Summary Service for analyzing and comparing results, and BumbleBench for simplifying microbenchmark implementation. The document encourages open collaboration to advance innovation in performance testing.
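The basic steps of performance testing mentioned above, warming up so the JIT compiles the hot path and then timing a fixed number of operations, can be sketched in plain Java. This is an illustrative sketch under those assumptions, not the BumbleBench API:

```java
public class MicrobenchSketch {
    // A stand-in for the code being measured.
    static long workload() {
        long acc = 0;
        for (int i = 0; i < 10_000; i++) acc += Integer.bitCount(i);
        return acc;
    }

    static double opsPerSecond(int warmupIters, int measuredIters) {
        // Warm up: run the workload untimed so the JIT compiles it.
        for (int i = 0; i < warmupIters; i++) workload();

        // Measure: time a fixed number of operations.
        long start = System.nanoTime();
        long sink = 0;
        for (int i = 0; i < measuredIters; i++) sink += workload();
        long elapsed = System.nanoTime() - start;

        // Use the result so the JIT cannot eliminate the loop as dead code.
        if (sink == 42) System.out.println(sink);
        return measuredIters / (elapsed / 1e9);
    }

    public static void main(String[] args) {
        System.out.printf("~%.0f workload ops/sec%n", opsPerSecond(1_000, 5_000));
    }
}
```

Real harnesses such as JMH or BumbleBench add forked JVMs, multiple iterations, and statistical reporting on top of this skeleton; hand-rolled timing like this is only a first approximation.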
The document discusses robustness testing of flight software using fault injection. It describes preparing and executing tests, analyzing logs and documenting results. Tests of VxWorks APIs and directives were conducted on over 300 functions and 70 routines. Automation tools were used to generate test cases and scripts to save time and money. The research aimed to transition robustness testing tools and lessons learned to NASA and contractors.
Performance Testing using Real Browsers with JMeter & Webdriver (BlazeMeter)
Learn how to easily run performance tests with real browsers using Selenium WebDriver.
Ophir Prusak, BlazeMeter’s Chief Evangelist, gives step-by-step instructions on doing this using BlazeMeter and/or JMeter.
Learn how to:
- Correlate actual browser-based user experience with the load tests
- Run multiple Selenium Webdriver tests in parallel at scale by using the power of the cloud
- Do it all without any prior JMeter knowledge or experience!
This document discusses testing software deployed in the cloud. Some things are the same as traditional testing like test plans, cases, and defect management. What's different includes shared test environments, security, performance testing, and determining who owns defects. The project took a risk-based approach with exploratory testing to understand the cloud solution. Harder aspects included isolating components for performance testing and accepting features from cloud vendors. Easier parts included getting buy-in and overlapping test phases. Key success factors included understanding the cloud application, architecture, data, and vendor communications.
Tools of the Trade: Load Testing - Ignite session at WebPerfDays NY 14 (Alexander Podelko)
Tools of the Trade: Load Testing - an Ignite session at WebPerfDays NY 2014. Some considerations about load testing and selecting load testing tools, as much as could be squeezed into 5 minutes / 20 slides.
Tools of the Trade: Load Testing - Ignite session at WebPerfDays NY 14Alexander Podelko
Tools of the Trade: Load Testing - an Ignite session at WebPerfDays NY 2014. Some consideration about load testing and selecting load testing tools - as much as could be squeezed into 5 min / 20 slides.
Automated Testing with Docker on Steroids - nlOUG TechExperience 2018 (Amersf...Lucas Jellema
Automated testing is important. We all know that we should do it. We also know that this can be painful, for many reasons. One of the most agonizing aspects of automated testing is the handling of the data. In order to run even the simplest of tests against the user interface, a service or API or even a PL/SQL unit typically requires that a proper starting point needs to be established in the database with respect to the data. Complex set up steps need to prepare various records to ensure the test can even start and afterwards in similarly complex tear down scripts we have to clean up after the test.
This session demonstrates how this hardship can be a thing of the past. Using snapshots of a test database in a Docker container with a managed test data set that supports all tests, we can create automated tests without any set up or tear down effort. These tests can run very fast, concurrently, and whenever and wherever you like them to run. This way of working opens up much higher test coverage and much increased productivity for developers and testers.
This document summarizes a test automation stack that uses Selenium, Cucumber, and Jenkins for end-to-end testing. It utilizes Cucumber for test design with a Gherkin syntax to make tests human-readable. Selenide is used for test development to enable simple code with clear structure and stable Selenium framework methods. Tests are executed in Jenkins with parameterization to run on different environments and browsers. Test results are evaluated using Report Portal for automated analysis, dashboards, and error correlation. A live demo of the stack is available on GitHub.
This document summarizes a test automation stack that uses Cucumber, Selenium, and Jenkins for end-to-end testing. Cucumber is used for test design with a Gherkin syntax to make tests human-readable. Selenium and the Selenide framework provide test development with reusable, stable tests implemented as page objects. Tests are executed via Jenkins pipelines for continuous integration and parameterization. Test results are evaluated using Report Portal for correlation, visualization of errors, and automated analysis.
The document discusses testing cloud solutions. It describes what the cloud is, how testing in the cloud is both similar and different to traditional testing. Some differences include shared test environments, security considerations, and performance testing. The document outlines the project's testing philosophy and phases, which included exploratory testing and a non-standard approach to performance testing and data migration. Challenges included isolating components for performance testing and determining defect ownership. Successes included leadership buy-in and use of offshore resources. Key factors for success centered around understanding the cloud application and architecture.
This document discusses challenges with automated testing and test data management. It introduces service virtualization as a solution to address problems related to lack of environments, slow integration cycles, and unintended consequences of changes. The document also summarizes CA LISA's test automation and data management capabilities like functional testing, mobile testing, continuous validation, and test data warehousing to help improve testing practices.
Design and development of automated tests for the IoTAxel Rennoch
The document discusses the development of automated tests for IoT. It outlines challenges in IoT testing and standards from Eclipse Foundation and ETSI. It proposes using TTCN-3 as a test description language to formally define test scenarios and develop test suites for protocols like CoAP and MQTT. The document presents a methodology for performance and security testing of IoT systems and provides information on related open source and standards resources.
DMYTRO SOBKO, Lead automation QA engineer @EPAM.
We are well aware of how to test the REST API with N endpoints, with relational and non-relational (NonSQL) databases. Same thing with UI testing. Frameworks like Selenium, Selenide, Selenoid are not a mystery to anyone. Moreover, creating a reliable, extensible and really cool automated test framework for such applications from scratch is not difficult. But what about BigData projects that have no back-end or front-end in the classical sense? How can we test them? What parts should we cover with tests in the first place? And, besides, how do we introduce automation and make it an effective way for such projects?
Dmytro will show you how to create a test framework for Cloud Big Data projects from scratch and to develop it in the most optimal way using the most interesting technologies.
Modernizing Testing as Apps Re-ArchitectDevOps.com
Applications are moving to cloud and containers to boost reliability and speed delivery to production. However, if we use the same old approaches to testing, we'll fail to achieve the benefits of cloud. But what do we really need to change? We know we need to automate tests, but how do we keep our automation assets from becoming obsolete? Automatically provisioning test environments seems close, but some parts of our applications are hard to move to cloud.
Testing Applications—For the Cloud and in the CloudTechWell
As organizations adopt a DevOps approach to software development, they work to shorten test cycles, begin testing earlier, and test continuously. However, one challenge still remains―the unavailability of complete and realistic production-like test environments. Technologies like service virtualization help, but there comes a time when you need additional computing resources to deploy and test the application. Today's cloud technology allows teams to spin up test labs on demand. Join Al Wagner as he describes the various clouds―public, private, and hybrid―and the cloud services available today. By combining the cloud with service virtualization, teams can now test applications end-to-end much earlier in the delivery lifecycle. Learn how teams can use today’s SaaS offerings, deployed on cloud technology, to manage their test effort and drive test execution. Explore how you can use clouds throughout the delivery lifecycle as your organization works to migrate and virtualize legacy applications. Take testing to a new level and test with greater efficiency―in the cloud.
DevOps for Big Data - Data 360 2014 ConferenceGrid Dynamics
This document discusses implementing continuous delivery for big data applications using Hadoop, Vertica, and Tableau. It describes Grid Dynamics' initial state of developing these applications in a single production environment. It then outlines their steps to implement continuous delivery, including using dynamic environments provisioned by Qubell to enable automated testing and deployment. This reduced risks and increased efficiency by allowing experimentation and validation prior to production releases.
Similar to DealingwithVerificationDataOverload (20)
Transform Your Communication with Cloud-Based IVR SolutionsTheSMSPoint
Discover the power of Cloud-Based IVR Solutions to streamline communication processes. Embrace scalability and cost-efficiency while enhancing customer experiences with features like automated call routing and voice recognition. Accessible from anywhere, these solutions integrate seamlessly with existing systems, providing real-time analytics for continuous improvement. Revolutionize your communication strategy today with Cloud-Based IVR Solutions. Learn more at: https://thesmspoint.com/channel/cloud-telephony
SMS API Integration in Saudi Arabia| Best SMS API ServiceYara Milbes
Discover the benefits and implementation of SMS API integration in the UAE and Middle East. This comprehensive guide covers the importance of SMS messaging APIs, the advantages of bulk SMS APIs, and real-world case studies. Learn how CEQUENS, a leader in communication solutions, can help your business enhance customer engagement and streamline operations with innovative CPaaS, reliable SMS APIs, and omnichannel solutions, including WhatsApp Business. Perfect for businesses seeking to optimize their communication strategies in the digital age.
Measures in SQL (SIGMOD 2024, Santiago, Chile)Julian Hyde
SQL has attained widespread adoption, but Business Intelligence tools still use their own higher level languages based upon a multidimensional paradigm. Composable calculations are what is missing from SQL, and we propose a new kind of column, called a measure, that attaches a calculation to a table. Like regular tables, tables with measures are composable and closed when used in queries.
SQL-with-measures has the power, conciseness and reusability of multidimensional languages but retains SQL semantics. Measure invocations can be expanded in place to simple, clear SQL.
To define the evaluation semantics for measures, we introduce context-sensitive expressions (a way to evaluate multidimensional expressions that is consistent with existing SQL semantics), a concept called evaluation context, and several operations for setting and modifying the evaluation context.
A talk at SIGMOD, June 9–15, 2024, Santiago, Chile
Authors: Julian Hyde (Google) and John Fremlin (Google)
https://doi.org/10.1145/3626246.3653374
Artificia Intellicence and XPath Extension FunctionsOctavian Nadolu
The purpose of this presentation is to provide an overview of how you can use AI from XSLT, XQuery, Schematron, or XML Refactoring operations, the potential benefits of using AI, and some of the challenges we face.
Everything You Need to Know About X-Sign: The eSign Functionality of XfilesPr...XfilesPro
Wondering how X-Sign gained popularity in a quick time span? This eSign functionality of XfilesPro DocuPrime has many advancements to offer for Salesforce users. Explore them now!
SOCRadar's Aviation Industry Q1 Incident Report is out now!
The aviation industry has always been a prime target for cybercriminals due to its critical infrastructure and high stakes. In the first quarter of 2024, the sector faced an alarming surge in cybersecurity threats, revealing its vulnerabilities and the relentless sophistication of cyber attackers.
SOCRadar’s Aviation Industry, Quarterly Incident Report, provides an in-depth analysis of these threats, detected and examined through our extensive monitoring of hacker forums, Telegram channels, and dark web platforms.
WWDC 2024 Keynote Review: For CocoaCoders AustinPatrick Weigel
Overview of WWDC 2024 Keynote Address.
Covers: Apple Intelligence, iOS18, macOS Sequoia, iPadOS, watchOS, visionOS, and Apple TV+.
Understandable dialogue on Apple TV+
On-device app controlling AI.
Access to ChatGPT with a guest appearance by Chief Data Thief Sam Altman!
App Locking! iPhone Mirroring! And a Calculator!!
Do you want Software for your Business? Visit Deuglo
Deuglo has top Software Developers in India. They are experts in software development and help design and create custom Software solutions.
Deuglo follows seven steps methods for delivering their services to their customers. They called it the Software development life cycle process (SDLC).
Requirement — Collecting the Requirements is the first Phase in the SSLC process.
Feasibility Study — after completing the requirement process they move to the design phase.
Design — in this phase, they start designing the software.
Coding — when designing is completed, the developers start coding for the software.
Testing — in this phase when the coding of the software is done the testing team will start testing.
Installation — after completion of testing, the application opens to the live server and launches!
Maintenance — after completing the software development, customers start using the software.
UI5con 2024 - Boost Your Development Experience with UI5 Tooling ExtensionsPeter Muessig
The UI5 tooling is the development and build tooling of UI5. It is built in a modular and extensible way so that it can be easily extended by your needs. This session will showcase various tooling extensions which can boost your development experience by far so that you can really work offline, transpile your code in your project to use even newer versions of EcmaScript (than 2022 which is supported right now by the UI5 tooling), consume any npm package of your choice in your project, using different kind of proxies, and even stitching UI5 projects during development together to mimic your target environment.
Software Engineering, Software Consulting, Tech Lead, Spring Boot, Spring Cloud, Spring Core, Spring JDBC, Spring Transaction, Spring MVC, OpenShift Cloud Platform, Kafka, REST, SOAP, LLD & HLD.
Graspan: A Big Data System for Big Code AnalysisAftab Hussain
We built a disk-based parallel graph system, Graspan, that uses a novel edge-pair centric computation model to compute dynamic transitive closures on very large program graphs.
We implement context-sensitive pointer/alias and dataflow analyses on Graspan. An evaluation of these analyses on large codebases such as Linux shows that their Graspan implementations scale to millions of lines of code and are much simpler than their original implementations.
These analyses were used to augment the existing checkers; these augmented checkers found 132 new NULL pointer bugs and 1308 unnecessary NULL tests in Linux 4.4.0-rc5, PostgreSQL 8.3.9, and Apache httpd 2.2.18.
- Accepted in ASPLOS ‘17, Xi’an, China.
- Featured in the tutorial, Systemized Program Analyses: A Big Data Perspective on Static Analysis Scalability, ASPLOS ‘17.
- Invited for presentation at SoCal PLS ‘16.
- Invited for poster presentation at PLDI SRC ‘16.
What is Master Data Management by PiLog Groupaymanquadri279
PiLog Group's Master Data Record Manager (MDRM) is a sophisticated enterprise solution designed to ensure data accuracy, consistency, and governance across various business functions. MDRM integrates advanced data management technologies to cleanse, classify, and standardize master data, thereby enhancing data quality and operational efficiency.
1. SHELLEY LAMBERT
LAN XIA
IBM RUNTIME TECHNOLOGIES
NOV 2019
@SHELLEYMLAMBERT
@BONJOURLAN
DEALING WITH
VERIFICATION DATA OVERLOAD
Dealing with Verification Data Overload
2. • THE SCOPE
• TEST FRAMEWORK (TESTKITGEN)
• JENKINS BUILDS
• TEST RESULT SUMMARY SERVICE (TRSS)
• DATA REFINERY EXPERIMENTS
• PLANS FORWARD
AGENDA
3. THE SCOPE
AdoptOpenJDK
• ENSURING FREE AND VERIFIED JAVA™ FOR THE COMMUNITY
• PROJECTS: ECLIPSE OMR, ECLIPSE OPENJ9, ADOPTOPENJDK
• 6+ JENKINS SERVERS
5. DEGREES OF FREEDOM
• JDK Implementations: OpenJ9, Hotspot, SAP, Corretto, RI
• JDK Versions: 8, 9, 10, 11, 12, 13+ (6 versions)
• Platforms: osx, aix, win, xlinux, plinux, aarch
• Test Categories: openjdk, functional, perf, system, external
• 58 impl_platform combinations, 250,000 tests
• 87,000,000 tests = impl_platform x testLevels x testGroups x versions
• 58 x 2 x 3 x 6 x 10M = 20G+ test output per nightly build
• Plus PR builds, promotion builds and personal builds
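The volume arithmetic on this slide can be sanity-checked in a few lines. Reading the slide's "10M" as roughly 10 MB of output per combination is an assumption made for this sketch:

```python
# Back-of-envelope check of the nightly test-output volume quoted above.
# Assumption: "10M" means roughly 10 MB of output per combination.
impl_platform = 58   # implementation x platform combinations
test_levels = 2
test_groups = 3
versions = 6

combinations = impl_platform * test_levels * test_groups * versions
output_bytes = combinations * 10_000_000  # ~10 MB each

print(combinations)        # 2088 combinations
print(output_bytes / 1e9)  # ~20.9 GB per nightly build, i.e. "20G+"
```

The 2088 combinations from four degrees of freedom alone show why per-night output lands in the tens of gigabytes before PR, promotion and personal builds are counted.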
7. GATHER GREAT TESTS
• functional: testNG, cmdlinetester
• system: STF
• external: junit & others
• perf: assorted benchmarks
• openjdk: Jtreg, testNG
8. ADOPTOPENJDK QUALITY ASSURANCE (AQA)
• “Make quality certain to happen”
• Testing against a wide set of criteria representing actual business requirements, to identify binaries ready for production use
Today:
• Functional correctness
  • OpenJDK regression (open)
  • Oracle JCK (closed)
  • Builder-specific testing (unknown)
Roadmap:
• Security
  • Passes known vulnerability tests
• Functional correctness
  • OpenJDK regression
  • Eclipse functional
  • Application & framework tests
• Performance
  • Published metrics
  • Achieves minimum throughput scores
• Scalability & durability
  • Load & stress testing
9. AQA MANIFESTO
• open & transparent
• diverse & robust set of test suites
• evolution alongside implementations
• continual investment
• process to modify
• codecov & other metrics
• comparative analysis
• portable
• tag & publish
10. INNOVATE AND COLLABORATE
• Reactive systems
• Latitude
• Flexible
• Common
• Standardized
• Simple
11. GRANULARITY
• Specific testcases/groups: functional/system/openjdk/perf/external
• Different levels: sanity/extended/special
• Different versions: 8/11/13/14…
• Different implementations: openj9/hotspot/ibm/corretto/sap
• Different iterations: 1/2/n…
• Different features: AOT/JITAAS
• With/without native test image
• …
12. CONSOLIDATE AND CURATE
TestKitGen (TKG) consolidates all test categories:
• functional: testNG, cmdlinetester
• system: STF
• external: junit & others
• perf: assorted benchmarks
• openjdk: Jtreg, testNG
14. GROUPING & GRANULARITY
• group=openjdk
• levels=sanity.openjdk, extended.openjdk, special.openjdk
• targets=tests in playlist file
  • jdk_awt, jdk_math, jdk_lang, etc.
  • jdk_custom=CUSTOM_TARGET env var
    • set to individual directories or classes
Example, from group down to a single test:
openjdk → sanity.openjdk → jdk_math → java/math/BigDecimal/NegateTests.java
15. ADOPTOPENJDK CI PIPELINE
Build → Deploy → Test
• Test categories run in parallel: openjdk, functional, system, perf, external
• Each category fans out into levels, e.g. system → sanity.system, extended.system, special.system
• Levels contain targets, e.g. lambdaLoadTest, mathLoadTest, daaLoadTest, …
16. JENKINS SCRIPTS FOR TESTING
• Repo: https://github.com/AdoptOpenJDK/openjdk-tests
• One script (JenkinsfileBase) for all test builds:
  • Nightly/release
  • Pull Request
  • Promotion
  • Personal/Grinder
17. SURVEY OF TESTS: CI.ADOPTOPENJDK.NET
Categorize test builds based on JDK version, JDK implementation, test category and platform
18. TAP & JUNIT PLUGIN
Standardize output
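As an illustration of the standardized output, here is a minimal sketch of the TAP (Test Anything Protocol) format that the Jenkins TAP plugin consumes; the test names are made up for the example:

```python
# Emit a minimal TAP report: a plan line ("1..N") followed by one
# "ok"/"not ok" line per test. Test names here are illustrative only.
results = [("jdk_math_0", True), ("jdk_lang_0", False)]

print(f"1..{len(results)}")  # the plan line
for i, (name, passed) in enumerate(results, 1):
    status = "ok" if passed else "not ok"
    print(f"{status} {i} - {name}")
```

This prints `1..2`, then `ok 1 - jdk_math_0` and `not ok 2 - jdk_lang_0`; a plugin that understands TAP can render any framework's results as uniform pass/fail rows.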
19. ARCHIVE DATA
• Archive test data from failed tests onto Jenkins master or Artifactory:
  • Test logs/output files
  • Diagnostic files (core/trace/javacore files)
  • TAP file, JUnit *.jtr, *.xml files
  • Test repo SHA
• Minimize stored artifacts
21. TEST RESULT SUMMARY SERVICE (TRSS)
• Monitors multiple Jenkins servers
• Personalized dashboard
• Provides filtering, sorting, comparing and searching features
• Provides history for triaging and performance trends
• Available at https://trss.adoptopenjdk.net/
• Git repo: https://github.com/AdoptOpenJDK/openjdk-test-tools
32. LET US COUNT THE WAYS
• categorize
• standardize
• minimize
• personalize
• aggregate
• summarize
• filter
• sort
• compare
• search
• diff
• visualize
• model
33. WHAT IS DEEP LEARNING?
Deep learning is a subset of ML algorithms distinguished by:
• Artificial neural networks (ANN), loosely based on the structure and function of the brain
• Multiple layers of processing units (“neurons”), where the output of one layer is the input to another
• Modes of learning: supervised (regression, classification) or unsupervised (pattern analysis)
34. INITIAL DL EXPERIMENTS
• Preprocess testOutput based on its own listed feature keyword index.
Original testOutput:
“Running test TestIBMJlmLocal_0: ERROR code 3, FAILED test 1, running second test, exception ValueError, exit code 1”
Label: “FAILED”
Preprocessed testOutput:
“[4, 7, 10]”
Label: “1”
Padded preprocessed testOutput for the Deep Learning model:
“[4, 7, 10, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…]”
Label: “1”
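The preprocessing step above can be sketched as follows; the keyword-to-index table and the padding length are illustrative assumptions, not the actual vocabulary used in these experiments:

```python
# Sketch of the keyword-index preprocessing shown on this slide.
# KEYWORDS and MAX_LEN are illustrative assumptions.
KEYWORDS = {"error": 4, "failed": 7, "exception": 10}
MAX_LEN = 17

def preprocess(test_output):
    """Map known failure keywords to indices, then zero-pad to MAX_LEN."""
    tokens = test_output.lower().replace(",", " ").split()
    seq = [KEYWORDS[t] for t in tokens if t in KEYWORDS]
    return (seq + [0] * MAX_LEN)[:MAX_LEN]

sample = ("Running test TestIBMJlmLocal_0: ERROR code 3, FAILED test 1, "
          "running second test, exception ValueError, exit code 1")
print(preprocess(sample))  # [4, 7, 10, 0, 0, ...] with label "1" (failed)
```

Each raw log thus becomes a fixed-length integer vector paired with a binary label, which is the shape of input a sequence classifier expects.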
35. INITIAL DL EXPERIMENTS
• Training accuracy and validation accuracy: evaluation result of 98.897% with 7800 training samples (3900 failed and 3900 passed) and 7800 test samples
• Training loss and validation loss
36. MODEL BUILDING
Things we know (input layer):
• JVM version
• JVM implementation
• Variants used
• Platform
• Failure expression
• Failure age
• Machine ‘age’
• PR list
• SHAs
Things we want to know (output layer):
• Defect category
• Bug prediction scores
• Best next action
• Rate value of test
37. Deep Learning for Fuzzing Java Compilers
38. Deep Learning for Fuzzing Java Compilers
• Integrated DeepSmith with TKG and TRSS
• Easily run thousands of DeepSmith tests in Jenkins with different JDK versions / impls / JVM options
• Compare and monitor test outputs using TRSS
39. PLANS FORWARD
• Test smarter (as test volume increases)
• Change-based testing
• Bug prediction service
• Enhance TRSS with analytics services
• Build skills and continue model/deploy, observe & measure
• Innovate/Collaborate
• AI-driven fuzz testing (with Professor Hugh Leather)
• Test generation service (application of CTD)
• Leverage & deploy useful models in open projects
- Test source open, executed in the open, results openly available
- Cover different test categories (functional/regression, security, performance, scalability, load/stress), implementations, versions, platforms
- Evolution:
- refine automation and tools
- remove friction and reduce the process
- review existing/new tests on regular basis
- community awareness of addition/modification/deletion
- codecov, heatmap and bug prediction
- Easily run on local and Jenkins
- Audit trail
We need a reactive system which works well in a changing world:
- latitude to cover tests that may use significantly different test frameworks
- flexibility to specify and test against different requirements
- a common way to easily add, group, exclude and execute tests
- test results with the same look & feel
- simple to integrate with any build system
Slice and dice:
- different levels of granularity
- group, categorize, tag
Leverage 3rd party Jenkins plugins:
- Artifactory (open source project, binary repositories)
TensorFlow
Keras Sequential model
https://keras.io/models/sequential/
https://www.tensorflow.org/tutorials/keras
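The TensorFlow/Keras references above suggest a Sequential binary classifier over the padded keyword sequences. The sketch below is one plausible shape for such a model, not the actual one used in these experiments: the vocabulary size, embedding width and layer sizes are all assumptions. TensorFlow is imported lazily so the sketch can be read without it installed.

```python
# Hedged sketch of a Keras Sequential pass/fail classifier for padded
# keyword sequences. All hyperparameters below are assumptions.
VOCAB_SIZE = 1000  # number of distinct feature keywords
EMBED_DIM = 16     # embedding width
MAX_LEN = 17       # padded sequence length

def build_model():
    from tensorflow import keras  # lazy: only needed to actually train
    return keras.Sequential([
        keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM),
        keras.layers.GlobalAveragePooling1D(),
        keras.layers.Dense(16, activation="relu"),
        keras.layers.Dense(1, activation="sigmoid"),  # P(test failed)
    ])
```

Compiled with binary cross-entropy and fit on labelled pass/fail sequences (3900 of each, per slide 35), a model of this general shape is the kind that could produce the reported accuracy and loss curves.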