Three ideas to regression test smarter, outlined as THREE AIDS to do this.
AID #1: Fault propagation analyser - Figure out what-to-retest by doing a smarter impact analysis, using a scientific approach to understanding fault propagation due to change.
AID #2: Automation analyser - Ensure scenarios are fit-to-automate so that they are easily scriptable and easily maintainable.
AID #3: Yield analyser - Figure out how much not to regress by analysing defect yields over time to understand what parts of the system have been hardened.
Automation is an obvious choice; ensure that the scenarios are “fit enough for automation” so that you don’t end up spending much effort keeping the scripts in sync with every change.
4. Whilst on the path of progression towards revenue enhancement, the challenge is:
5. Whilst on the path of progression towards revenue enhancement, the challenge is: “Did I break stuff that was working well?”
6. Whilst on the path of progression towards revenue enhancement, the challenge is: “Did I break stuff that was working well?” As the product grows, so does regression, increasing costs and threatening to slow down releases.
8. The challenge of CHANGE:
1. What to RE-TEST?
2. How to RE-TEST efficiently?
3. What not-to RE-TEST?
9. Let us put together a simple model of “WHAT-is-TESTING” and use this to TEST/(RE)TEST smartly.
10. Testing is a cartesian product of what-to-test and test-for-what. [Grid: what-to-test as features F1-F4 on one axis; test-for-what as the quality levels on the other: L8 DEPLOYS WELL, L7 ATTRIBUTES MET, L6 WORKS ON ALL ENVIRONMENTS, L5 BUSINESS FLOWS CLEAN, L4 FEATURES CLEAN, L3 INTERNALS CLEAN, L2 UI INTERFACE CLEAN, L1 INPUTS CLEAN.]
11. WHAT IS THE EUT? The what-to-test axis spans entity types: structural COMPONENT, technical FEATURE, business REQUIREMENT, user FLOW. Testing is a cartesian product of what-to-test and test-for-what. [Same grid: features F1-F4 against quality levels L1-L8.]
12. What CLEANLINESS CRITERIA to TEST FOR? Inputs clean? UI interface clean? Internals clean? Features clean? Business flows clean? Works on all environments? Attributes met? Deploys well? Testing is a cartesian product of what-to-test and test-for-what. [Same grid: entity types and features F1-F4 against quality levels L1-L8.]
13. Regression testing is then what-to-(RE)test x (RE)test-for-what. WHAT IS THE EU(RE)T? The same entity types apply: structural COMPONENT, technical FEATURE, business REQUIREMENT, user FLOW. [Same grid: features F1-F4 against quality levels L1-L8.]
14. What CLEANLINESS CRITERIA to (RE)TEST FOR? Inputs clean? UI interface clean? Internals clean? Features clean? Business flows clean? Works on all environments? Attributes met? Deploys well? Regression testing is then what-to-(RE)test x (RE)test-for-what. [Same grid: entity types and features F1-F4 against quality levels L1-L8.]
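To make the model concrete, here is a minimal sketch (not from the deck; the feature names are illustrative) of the test matrix as the cartesian product of what-to-test and test-for-what:

```python
from itertools import product

# test-for-what: the eight quality levels, L1..L8
LEVELS = [
    "L1 INPUTS CLEAN", "L2 UI INTERFACE CLEAN", "L3 INTERNALS CLEAN",
    "L4 FEATURES CLEAN", "L5 BUSINESS FLOWS CLEAN",
    "L6 WORKS ON ALL ENVIRONMENTS", "L7 ATTRIBUTES MET", "L8 DEPLOYS WELL",
]

# what-to-test: entities under test (illustrative feature names)
ENTITIES = ["F1 login", "F2 search", "F3 checkout", "F4 reports"]

# Testing = what-to-test x test-for-what: every (entity, level) pair
# is a distinct test obligation.
test_matrix = list(product(ENTITIES, LEVELS))

for entity, level in test_matrix:
    print(f"test {entity} for {level}")
```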
15. SMART approach to tackling the challenge of CHANGE:
1. What to RE-TEST? What may be the affected entities that need to be (re)tested? What criteria of these entities are to be (re)tested for?
16. [Diagram: quality levels L1-L4 against entities E1-E4, with entity types structural COMPONENT, technical FEATURE, business REQUIREMENT, user FLOW; arrow 1 marks propagation within the modified entity.] Given an entity (say, a component) that has been modified:
1. Analyse if this could affect any of its other criteria, e.g. performance.
2. Next, analyse if this could affect any other similar entity (say, another component) and what criteria of that entity.
3. Finally, analyse which of the larger entities (say, a feature) that use this entity could be affected, and also the potentially affected criteria.
17. [Same diagram and steps; arrow 2 added, marking propagation across similar entities.]
18. [Same diagram and steps; arrow 3 added, marking propagation up to the larger entities that use the modified one.]
20. GIVEN THE FOLLOWING LEVELS OF QUALITY: L8 DEPLOYS WELL, L7 ATTRIBUTES MET, L6 WORKS ON ALL ENVIRONMENTS, L5 BUSINESS FLOWS CLEAN, L4 FEATURES CLEAN, L3 INTERNALS CLEAN, L2 UI INTERFACE CLEAN, L1 INPUTS CLEAN. These enable focused, purposeful tests that validate correctness of inputs and interfaces, going on to features and flows, then correctness on all environments, and finally system attributes and deployment.
AID #1: FAULT PROPAGATION ANALYZER
Who is affected? WHAT-to-RETEST. What is affected? RE-TEST-for-WHAT.
Analyse fault propagation level-wise: within an entity (1) and across entities (2, 3). Given an entity (say, a component) that has been modified:
1. Analyse if this could affect any of its other criteria, e.g. performance.
2. Next, analyse if this could affect any other similar entity (say, another component) and what criteria of that entity.
3. Finally, analyse which of the larger entities (say, a feature) that use this entity could be affected, and also the potentially affected criteria.
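A minimal sketch of such a fault propagation analyser (not from the deck; the dependency map, entity names, and criteria are assumed for illustration), walking a dependency map from a modified entity to derive the (entity, criteria) pairs to retest:

```python
# Hypothetical fault propagation analyser: given a modified entity,
# derive what-to-retest and retest-for-what from a dependency map.

# Assumed dependency map: entity -> entities that depend on it.
DEPENDS_ON_ME = {
    "price_calc": ["cart_component", "checkout_feature"],
    "cart_component": ["checkout_feature"],
}

# Assumed criteria potentially affected by a change in each entity.
AFFECTED_CRITERIA = {
    "price_calc": ["internals clean", "performance"],
    "cart_component": ["features clean"],
    "checkout_feature": ["business flows clean"],
}

def what_to_retest(modified):
    """Return (entity, criteria) pairs to retest for a modified entity."""
    to_visit, seen, plan = [modified], set(), []
    while to_visit:
        entity = to_visit.pop()
        if entity in seen:
            continue
        seen.add(entity)
        # Step 1 (within the entity) and steps 2-3 (similar and larger
        # entities) collapse into one walk over the dependency map.
        plan.append((entity, AFFECTED_CRITERIA.get(entity, [])))
        to_visit.extend(DEPENDS_ON_ME.get(entity, []))
    return plan

print(what_to_retest("price_calc"))
```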
21. SMART approach to tackling the challenge of CHANGE:
1. What to RE-TEST? Fault propagation analysis.
2. How to RE-TEST efficiently? Automating execution is most useful here; the challenge is keeping scripts in sync with scenarios. Ensure that scenarios are segregated by levels so that scripts are shorter and maintainable.
22. Given test scenarios/cases for an entity that does a variety of validations, segregate them into quality levels (“LEVELISE” them) so that they can be automated easily, ensuring that scripts are simple and maintainable. [Levels: L8 DEPLOYS WELL down to L1 INPUTS CLEAN, as before.]
23. GIVEN THE FOLLOWING LEVELS OF QUALITY (L8 DEPLOYS WELL down to L1 INPUTS CLEAN, as above), to enable focused, purposeful tests. (*TC = Test Case)
1. TC well structured into levels => FOCUSED TEST CASES: very FIT for automation.
2. TC at levels L1-L4 mixed up => SYSTEM TEST includes DEVTEST: potentially long scripts, brittle, high maintenance; not FIT for automation.
3. TC at levels L4-L7 mixed up => TC validates FEATURES, FLOWS and ATTRIBUTES: scripts may do too much, fragile, high maintenance; not FIT for automation.
AID #2: AUTOMATION FITNESS ANALYZER
Are your test cases well structured to enable rapid automation/maintenance? Segregate them into quality levels so that they can be automated easily, ensuring that scripts are simple and maintainable.
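A minimal sketch of such a fitness check (not from the deck; the level tags and the one-level span threshold are assumptions): test cases whose steps span many quality levels are flagged as unfit for automation:

```python
# Hypothetical automation-fitness check: a test case tagged with the
# quality levels (1..8) its steps touch is "fit" when focused on one
# level, "unfit" when it mixes many levels (long, brittle scripts).

def automation_fitness(test_cases, max_span=1):
    """test_cases: dict of name -> set of level numbers touched."""
    report = {}
    for name, levels in test_cases.items():
        span = max(levels) - min(levels)
        report[name] = "FIT" if span <= max_span else "NOT FIT (levelise first)"
    return report

suite = {
    "login_input_validation": {1},           # focused: L1 only
    "end_to_end_checkout": {1, 2, 4, 5, 7},  # mixes inputs, UI, flows, attributes
}
print(automation_fitness(suite))
```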
24. SMART approach to tackling the challenge of CHANGE:
1. What to RE-TEST? Fault propagation analysis.
2. How to RE-TEST efficiently? Automation analysis.
3. What not-to RE-TEST? As testing progresses, test cases stop uncovering defects; track the test cases that do not yield defects, so as to not execute them.
25. [Levels L8 DEPLOYS WELL down to L1 INPUTS CLEAN, as before.] Track test cases that do NOT yield defects each cycle. Normalise the defect/TC* ratio for each cycle. Analyse by levels to see yield with respect to time. [Charts: yield (12.5-50) across test cycles C1-C4, one chart each for levels L4, L5, L6.] (*TC = Test Case)
26. GIVEN THE FOLLOWING LEVELS OF QUALITY (L8 DEPLOYS WELL down to L1 INPUTS CLEAN, as above), to enable focused, purposeful tests.
AID #3: YIELD ANALYZER
How is the test yield over time? yield = outcome/effort, i.e. #defects/#TC_executed. Track test cases that do NOT yield defects each cycle; normalise the defect/TC* ratio for each cycle and analyse by levels to see yield with respect to time. [Charts: yield (12.5-50) across test cycles C1-C4 for levels L4, L5, L6.] (*TC = Test Case)
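A minimal sketch of such a yield computation (not from the deck; the cycle data and the hardening threshold are illustrative), normalising defects per executed test case for each cycle and level, so hardened levels show a falling yield:

```python
# Hypothetical yield analyser: yield = #defects / #TC_executed,
# tracked per quality level across test cycles. Data is illustrative.

CYCLES = {  # level -> [(defects, tests_executed) for cycles C1..C4]
    "L4": [(20, 40), (12, 40), (5, 40), (1, 40)],
    "L5": [(15, 30), (10, 30), (6, 30), (2, 30)],
    "L6": [(8, 20), (6, 20), (2, 20), (0, 20)],
}

def yield_trend(cycles):
    """Return per-level yield percentages across cycles."""
    return {
        level: [round(100 * d / tc, 1) for d, tc in data]
        for level, data in cycles.items()
    }

trend = yield_trend(CYCLES)
for level, yields in trend.items():
    hardened = yields[-1] < 5  # assumed threshold for a "hardened" level
    print(level, yields, "-> candidate to drop from regression" if hardened else "")
```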
27. SMART approach to tackling the challenge of CHANGE:
1. What to RE-TEST? Fault propagation analysis.
2. How to RE-TEST efficiently? Automation analysis.
3. What not-to RE-TEST? Yield analysis.
28. AID #1: FAULT PROPAGATION ANALYZER. Who is affected? WHAT-to-RETEST. What is affected? RE-TEST-for-WHAT. Analyse fault propagation level-wise: within an entity (1) and across entities (2, 3).
AID #2: AUTOMATION FITNESS ANALYZER. Are your test cases well structured to enable rapid automation/maintenance? TC well structured into levels => FOCUSED TEST CASES, very FIT for automation. TC at levels L1-L4 mixed up => SYSTEM TEST includes DEVTEST, potentially long, brittle, high-maintenance scripts, not FIT for automation. TC at levels L4-L7 mixed up => TC validates FEATURES, FLOWS and ATTRIBUTES, scripts may do too much, fragile, high maintenance, not FIT for automation.
AID #3: YIELD ANALYZER. How is the test yield over time? yield = outcome/effort, i.e. #defects/#TC_executed. Normalise the defect/TC* ratio for each cycle and analyse by levels to see yield with respect to time. [Charts: yield across test cycles C1-C4 for levels L4, L5, L6.] (*TC = Test Case)