To be most effective, test managers must develop and use metrics to help direct the testing effort and make informed recommendations about the software’s release readiness and associated risks. Because one important testing activity is to “measure” the quality of the software, test managers must measure the results of both the development and testing processes. Collecting, analyzing, and using metrics are complicated because many developers and testers are concerned that the metrics will be used against them. Join Rick Craig as he addresses common metrics—measures of product quality, defect removal efficiency, defect density, defect arrival rate, and testing status. Learn the guidelines for developing a test measurement program, rules of thumb for collecting data, and ways to avoid “metrics dysfunction.” Rick identifies several metrics paradigms—including Goal-Question-Metric—and discusses the pros and cons of each. Delegates are urged to bring their metrics problems and issues for use as discussion points.
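As a concrete illustration of two of the metrics named above, here is a minimal sketch of defect density and defect removal efficiency using their commonly cited definitions; the function names and sample figures are invented for illustration, not taken from the talk.

```python
# Invented sample figures illustrating two metrics from the abstract.

def defect_density(defects_found: int, size_kloc: float) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return defects_found / size_kloc

def defect_removal_efficiency(found_before_release: int,
                              found_after_release: int) -> float:
    """Percentage of all known defects caught before release."""
    total = found_before_release + found_after_release
    return 100.0 * found_before_release / total

print(defect_density(46, 20.0))            # 2.3 defects per KLOC
print(defect_removal_efficiency(92, 8))    # 92.0 percent
```

Tracked release over release, a falling defect removal efficiency is one of the early-warning signals a test manager can report.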
The document summarizes a presentation titled "Measurement and Metrics for Test Managers" given by Rick Craig of Software Quality Engineering. The presentation covered various metrics that can be collected and analyzed by test managers such as defect density, defect arrival rates, and customer satisfaction surveys. It discussed challenges with metrics including obtaining buy-in from teams and potential biases in how metrics are designed and collected.
This document promotes switching from Quality Center to qTest, citing several advantages of qTest for agile software testing. Quality Center is not well-suited for agile workflows, has poor usability and integration, and is very expensive. qTest is designed for agile teams, integrates seamlessly with popular agile tools, and provides better visibility, collaboration, and test case management capabilities. Migrating from Quality Center to qTest is straightforward and qTest users report improved efficiency and a better overall testing experience.
Ho Chi Minh City Software Testing Conference January 2015
Software Testing in the Agile World
Website: www.hcmc-stc.org
Author: Nhat Do, Vu Duong
Context-Driven Testing (CDT) rejects the notion of generalized “best practices” that apply to all projects, and instead accepts that different practices work best under different circumstances. The third principle of the seven defined in CDT states that people are the most important part of any project’s context. Shifting the focus from processes and tools toward people and their collaboration empowers testers with the freedom to make choices about how best to do their job without following a restrictive plan.
By joining the workshop game and the theory shared in the slides, you will gain a better understanding of Context-Driven Testing practices, principles, and benefits, as well as see how well Agile and Context-Driven Testing work together.
This document discusses the rationale for adopting continuous delivery practices in software development. It summarizes several studies that found high rates of project failures and benefits not being realized from traditional development approaches. Continuous delivery is presented as an approach that can help address these issues by focusing on rapid, reliable, and automated software releases. Case studies are provided of organizations like Google, Amazon, and HP that have successfully implemented continuous delivery at large scales. Adopting these practices is associated with benefits like increased throughput, reliability, innovation, and business performance.
Using DevOps' Intelligent Insights to Deliver Greater Business Value (Cognizant)
By applying DevOps - with its real-time analytics - to the software development lifecycle, IT can deliver greater business value in velocity, quality and many other measures.
This document provides an overview and introduction to the Rapid Software Testing course. It acknowledges those who contributed to developing the course material. The document outlines some assumptions about the audience for the course, including that attendees test software and want to improve their testing process. It presents the primary goal of the course as teaching how to test under uncertainty and with scrutiny. Key themes of Rapid Testing are also summarized, including putting the tester's mind at the center and considering cost versus value in testing activities.
Whether you are new to testing or looking for a better way to organize your test practices, understanding risk is essential to successful testing. Dale Perry describes a general risk-based framework—applicable to any development lifecycle model—to help you make critical testing decisions earlier and with more confidence. Learn how to focus your testing effort, what elements to test, and how to organize test designs and documentation. Review the fundamentals of risk identification, analysis, and the role testing plays in risk mitigation. Develop an inventory of test objectives to help prioritize your testing and translate them into a concrete strategy for creating tests. Focus your tests on the areas essential to your stakeholders. Executing tests and assessing the results provides a better understanding of both the effectiveness of your testing and the potential for failure in your software. Take back a proven approach to organize your testing efforts and new ways to add more value to your project and organization.
The document discusses various quality control and problem solving tools and techniques including:
- Approaches to problem solving like defining the problem, diagnosing causes, implementing remedies, and maintaining improvements
- Tools for analyzing problems like cause-effect diagrams, checksheets, control charts, histograms, Pareto charts, and scatter plots
- Guidelines for using these tools effectively like how to structure a team, gather and analyze data, identify root causes, and monitor ongoing performance
The overall aim is to provide an overview of a structured approach and key analytical methods for quality improvement and problem solving.
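As a small illustration of the Pareto-chart idea among the tools listed above, the following sketch tallies defect causes and reports cumulative percentages; the defect categories and counts are invented sample data.

```python
from collections import Counter

# Invented defect log; each entry is the root-cause category of one defect.
defects = ["UI", "logic", "UI", "config", "logic", "UI", "UI",
           "logic", "UI", "config", "UI", "logic", "UI", "UI"]

counts = Counter(defects).most_common()   # categories sorted by frequency
total = sum(n for _, n in counts)

# Walk the sorted categories, printing the cumulative share: the "vital few"
# causes at the top of the list are where improvement effort pays off most.
cumulative = 0
for cause, n in counts:
    cumulative += n
    print(f"{cause:8s} {n:3d}  {100 * cumulative / total:5.1f}% cumulative")
```

In this sample the top category alone accounts for over half the defects, which is exactly the skew a Pareto chart is meant to expose.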
A Rapid Introduction to Rapid Software Testing (TechWell)
You're under tight time pressure and have barely enough information to proceed with testing. How do you test quickly and inexpensively, yet still produce informative, credible, and accountable results? Rapid Software Testing, adopted by context-driven testers worldwide, offers a field-proven answer to this all-too-common dilemma. In this one-day sampler of the approach, Michael Bolton introduces you to the skills and practice of Rapid Software Testing through stories, discussions, and "minds-on" exercises that simulate important aspects of real testing problems. The rapid approach isn't just testing with speed or a sense of urgency; it's mission-focused testing that eliminates unnecessary work, assures that the most important things get done, and constantly asks how testers can help speed up the successful completion of the project. Join Michael to see how rapid testing focuses on both the mind set and skill set of the individual tester who uses tight loops of exploration and critical thinking skills to help continuously re-optimize testing to match clients' needs and expectations.
Agility and planning: tools and processes (Jérôme Kehrli)
The document provides an overview of agile planning tools and processes. It discusses various agile frameworks like Extreme Programming (XP), Scrum, DevOps, Lean Startup, and Kanban. It describes the roles, rituals, and principles used in agile planning, including tools like product backlogs, kanban boards, and story maps. The document emphasizes keeping the story map and product backlog synchronized to provide up-to-date estimations and allow forecasting of delivery dates based on sprint velocity. Regular rituals like sprint planning, daily stand-ups, and retrospectives are also discussed.
You want to integrate skilled testing and development work. But how do you accomplish this without developers accidentally subverting the testing process or testers becoming an obstruction? Efficient, deep testing requires “critical distance” from the development process, commitment and planning to build a testable product, dedication to uncovering the truth, responsiveness among team members, and often a skill set that developers alone—or testers alone—do not ordinarily possess. James Bach presents a model—a redesign of the famous Agile Testing Quadrants that distinguished between business vs. technical facing tests and supporting vs. critiquing―that frames these dynamics and helps teams think through the nature of development and testing roles and how they might blend, conflict, or support each other on an Agile project. James includes a brief discussion of the original Agile Testing Quadrants model, which the presenters believe has created much confusion about the role of testing in Agile.
There's no time to test, can you just automate it? by Anna Heiermann (QA or the Highway)
Anna Heiermann discusses lessons learned from past testing failures and advocates for a risk-based testing approach. A risk-based strategy involves identifying product risks, prioritizing test cases based on risk, and communicating the test plan to stakeholders. This helps ensure the highest priority and most critical areas are tested thoroughly. When risks are missed, it can lead to catastrophic bugs affecting users and lost revenue. With a risk-based approach, testing is targeted efficiently to reduce risks while keeping stakeholders informed.
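The risk-based prioritization described above can be sketched with the common likelihood-times-impact scoring; the test-case names and scores below are hypothetical, not from the talk.

```python
# Hypothetical test inventory; likelihood and impact are scored 1-5.
test_cases = [
    {"name": "checkout_payment", "likelihood": 4, "impact": 5},
    {"name": "profile_avatar",   "likelihood": 2, "impact": 1},
    {"name": "login_session",    "likelihood": 3, "impact": 5},
]

# Score each case, then run the riskiest first so that if time runs out,
# only the lowest-risk areas remain untested.
for tc in test_cases:
    tc["risk"] = tc["likelihood"] * tc["impact"]

ordered = sorted(test_cases, key=lambda tc: tc["risk"], reverse=True)
for tc in ordered:
    print(f'{tc["name"]}: risk {tc["risk"]}')
```

Sharing this ranked list is also a simple way to keep stakeholders informed about what will and will not be covered.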
All knowledge work requires a delicate and continuously shifting balance between delivery – exploiting existing knowledge – and discovery – exploring new knowledge. This need to balance discovery and delivery can be found across the entire innovation cycle: from technology innovation over performance and sustaining innovation to disruptive innovation. It has been a driving concern for specific approaches such as Lean Product and Process Development as well as The Kanban Method, as exemplified in examples such as: developing a new product that requires novel features (discovery) while at the same time managing the overall risk that is involved in developing those features (delivery); improving agility and predictability of an organization that may require substantial change (discovery) while at the same time keeping resistance to change under control (delivery); a startup that requires an initial focus on finding problem/solution fit or product market fit (discovery) but then needs to develop the organization to delivery at scale (delivery); etc.
In each of the examples above, too much emphasis on discovery may result in a disconnection with the past leading to resistance to change, increasing delivery risk, and non-adoption of innovation. Too little emphasis on discovery (and consequently too much emphasis on delivery) may lead to not being prepared for the future resulting in stagnation and the risk of being disrupted. Discovery Kanban systems are Kanban systems that help to balance discovery and delivery while moving from a mindset of episodic (one-off) innovation and change towards a culture of continuous innovation and change. Discovery Kanban systems work across the entire discovery cycle starting from pre-hypothesis moving into hypothesis validation and ending in post-hypothesis. In this presentation, we will discuss the different elements of Discovery Kanban, examples and underlying principles.
A Rapid Introduction to Rapid Software Testing (TechWell)
This document provides a summary of a presentation on Rapid Software Testing. The presentation was given by Michael Bolton of DevelopSense and covered the methodology and mindset of rapid software testing. It emphasizes testing software expertly under uncertainty and time pressure. The presentation defines rapid testing as testing more quickly and less expensively while still achieving excellent results. It compares rapid testing to other approaches like exhaustive, ponderous, and slapdash testing. The presentation also discusses principles of rapid testing, how to recognize problems quickly using heuristics, and testing rapidly to fulfill the mission of testing.
STLDODN - Agile Testing in a Waterfall World (Angela Dugan)
Everybody seems to be talking about agile these days, but most companies are still using a waterfall based methodology. Often, the team delivering the code uses a different process than the team responsible for software quality. In this presentation, Angela will discuss which agile tenets are worth incorporating into your daily testing activities in this situation and the impacts, both positive and negative, that you should expect. You will learn tips and tricks for introducing agile concepts into a waterfall environment slowly and successfully; methods that incorporate not just application lifecycle management tools, but a look at strategies for process improvement and in some cases good, old-fashioned psychology. Join Angela to find that low hanging fruit you can address quickly to become more agile, understand how to recognize and mitigate common pitfalls, and learn tools and techniques for managing an agile-under-waterfall testing effort.
Michael Bolton - Two Futures of Software Testing (TEST Huddle)
EuroSTAR Software Testing Conference 2008 presentation on Two Futures of Software Testing by Michael Bolton. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
S.M.A.R.T & F.O.C.U.S Testing - Increasing the value provided by your testing... (PractiTest)
The document discusses how testing teams can increase the perceived value of their work by better understanding how that work is perceived by others and by communicating the right information to stakeholders. It identifies that issues with the perception of value come from not providing the value stakeholders need and from not effectively communicating the value brought to projects. It recommends that testing teams focus on communicating information to stakeholders through information schedules and alternative reporting methods, ensuring that communication is smart, fast, objective, condensed, user-centered, and serves the customer.
Ho Chi Minh City Software Testing Conference January 2015
Software Testing in the Agile World
Website: www.hcmc-stc.org
Author: Lee Copeland
Over the years writers have defined testing as a process of finding, a process of evaluating, a process of measuring, a process of improving. For a quarter of a century we as testers have been focused on the internal process of testing, while generally disregarding its real purpose. The real purpose of testing is to create information. James Bach nailed it when he wrote, “The ultimate reason testers exist is to provide information that others on the project use to create things of value.” That is why testing exists — to provide information of value. So, when managers complain that testing “costs too much” perhaps they are really trying to say, “I’m not getting enough valuable information to justify the cost of testing.” When testers say “my management doesn’t see the value in our work” perhaps they are really trying to say, “My management doesn’t value the information I’m providing to them.” To prove our worth, to increase the value of testing, we must first focus on testing’s purpose — providing valuable information — not its process. Join Lee as he discusses why quantifying the value of testing is difficult work — perhaps that’s why we concentrate so much on testing process—that’s much easier. But until we do this difficult work, until we prove our worth through quantifying our contribution, we should expect the bombardments to continue.
Greg has over 20 years of expertise in applied data analysis techniques, instructional design, and training and development.
Root Cause and Corrective Action (RCCA) Workshop
DevOps is driven by tooling and automation implemented as continuous delivery, practices and processes seen in lean management principles, and organizational culture. Research shows these factors drive both IT performance through metrics like deployment frequency and mean time to recovery, and organizational performance. High performing teams are more agile with frequent deployments and faster lead times, as well as more reliable with fewer deployment failures and faster mean time to recovery, without tradeoffs between throughput and stability. Culture, job satisfaction, and a climate for learning are also key predictors of performance.
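Two of the metrics mentioned above, deployment frequency and mean time to recovery, can be computed from simple deployment and incident logs; the timestamps below are invented sample data.

```python
from datetime import datetime, timedelta

# Invented sample data: one timestamp per deployment, and
# (detected, restored) windows for production incidents.
deploys = [datetime(2015, 1, d) for d in (2, 5, 9, 12, 16, 19, 23, 26)]
incidents = [
    (datetime(2015, 1, 6, 10, 0), datetime(2015, 1, 6, 10, 45)),
    (datetime(2015, 1, 20, 14, 0), datetime(2015, 1, 20, 15, 30)),
]

# Deployment frequency: deploys per day over the observed window.
days = (max(deploys) - min(deploys)).days or 1
deploy_frequency = len(deploys) / days

# Mean time to recovery: average incident duration.
mttr = sum((end - start for start, end in incidents),
           timedelta()) / len(incidents)

print(f"{deploy_frequency:.2f} deploys/day, MTTR {mttr}")
```

High-performing teams in the research cited here show both numbers improving together: more frequent deploys and shorter recovery times, without trading one for the other.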
Root Cause Analysis is the method of problem solving that identifies the root causes of failures or problems. A root cause is the source of a problem and its resulting symptom, that once removed, corrects or prevents an undesirable outcome from recurring.
Congruent Coaching: An Interactive Exploration (TechWell)
We have opportunities to coach people all the time. Much of what we see as coaching is actually undercover training. Real coaching is richer—offering support while explaining options. In this interactive session, Johanna Rothman invites you to explore how to coach, regardless of your position in the organization. Teaching is just one option for coaching. You have many other options, depending on your coaching stance. You may select a counselor’s stance if you are managing up or a partner’s stance if you are a peer. You might even select a reflective observer’s stance or a technical advisor’s stance, depending on the situation. We will explore what to do when you see opportunities for coaching but you haven’t been asked to coach. Bring your coaching concerns, whether you are coaching onsite, or coaching at a distance, coaching one-on-one, or coaching teams. Let’s learn and build our coaching skills together.
A Guide to Cross-Browser Functional Testing (TechWell)
The term “cross-browser functional testing” usually means some variation of automated or manual testing of a web-based application on different mobile or desktop browsers. The aim of the testing might be to ensure that the application under test behaves or looks the same way on different browsers. Another meaning could be to verify that the application works with two or more browsers simultaneously. Malcolm Isaacs examines these different interpretations of cross-browser functional testing and clarifies what each means in practice. Malcolm explains some of the many challenges of writing and executing portable and maintainable automated test scripts which are at the heart of cross-browser testing. Learn some practical approaches to overcome these challenges, and take back manual and automated testing techniques to validate the consistency and accuracy of your applications—whatever browser they run in.
Using DevOps' Intelligent Insights to Deliver Greater Business ValueCognizant
By applying DevOps - with its real-time analytics - to the software development lifecycle, IT can deliver greater business value in velocity, quality and many other measures.
This document provides an overview and introduction to the Rapid Software Testing course. It acknowledges those who contributed to developing the course material. The document outlines some assumptions about the audience for the course, including that attendees test software and want to improve their testing process. It presents the primary goal of the course as teaching how to test under uncertainty and with scrutiny. Key themes of Rapid Testing are also summarized, including putting the tester's mind at the center and considering cost versus value in testing activities.
Whether you are new to testing or looking for a better way to organize your test practices, understanding risk is essential to successful testing. Dale Perry describes a general risk-based framework—applicable to any development lifecycle model—to help you make critical testing decisions earlier and with more confidence. Learn how to focus your testing effort, what elements to test, and how to organize test designs and documentation. Review the fundamentals of risk identification, analysis, and the role testing plays in risk mitigation. Develop an inventory of test objectives to help prioritize your testing and translate them into a concrete strategy for creating tests. Focus your tests on the areas essential to your stakeholders. Execution and assessing test results provide a better understanding of both the effectiveness of your testing and the potential for failure in your software. Take back a proven approach to organize your testing efforts and new ways to add more value to your project and organization.
The document discusses various quality control and problem solving tools and techniques including:
- Approaches to problem solving like defining the problem, diagnosing causes, implementing remedies, and maintaining improvements
- Tools for analyzing problems like cause-effect diagrams, checksheets, control charts, histograms, Pareto charts, and scatter plots
- Guidelines for using these tools effectively like how to structure a team, gather and analyze data, identify root causes, and monitor ongoing performance
The overall aim is to provide an overview of a structured approach and key analytical methods for quality improvement and problem solving.
A Rapid Introduction to Rapid Software Testing (TechWell)
You're under tight time pressure and have barely enough information to proceed with testing. How do you test quickly and inexpensively, yet still produce informative, credible, and accountable results? Rapid Software Testing, adopted by context-driven testers worldwide, offers a field-proven answer to this all-too-common dilemma. In this one-day sampler of the approach, Michael Bolton introduces you to the skills and practice of Rapid Software Testing through stories, discussions, and "minds-on" exercises that simulate important aspects of real testing problems. The rapid approach isn't just testing with speed or a sense of urgency; it's mission-focused testing that eliminates unnecessary work, assures that the most important things get done, and constantly asks how testers can help speed up the successful completion of the project. Join Michael to see how rapid testing focuses on both the mind set and skill set of the individual tester who uses tight loops of exploration and critical thinking skills to help continuously re-optimize testing to match clients' needs and expectations.
Agility and planning: tools and processes (Jérôme Kehrli)
The document provides an overview of agile planning tools and processes. It discusses various agile frameworks like Extreme Programming (XP), Scrum, DevOps, Lean Startup, and Kanban. It describes the roles, rituals, and principles used in agile planning, including tools like product backlogs, kanban boards, and story maps. The document emphasizes keeping the story map and product backlog synchronized to provide up-to-date estimations and allow forecasting of delivery dates based on sprint velocity. Regular rituals like sprint planning, daily stand-ups, and retrospectives are also discussed.
You want to integrate skilled testing and development work. But how do you accomplish this without developers accidentally subverting the testing process or testers becoming an obstruction? Efficient, deep testing requires “critical distance” from the development process, commitment and planning to build a testable product, dedication to uncovering the truth, responsiveness among team members, and often a skill set that developers alone—or testers alone—do not ordinarily possess. James Bach presents a model—a redesign of the famous Agile Testing Quadrants, which distinguished between business-facing vs. technical-facing tests and supporting vs. critiquing—that frames these dynamics and helps teams think through the nature of development and testing roles and how they might blend, conflict, or support each other on an Agile project. James includes a brief discussion of the original Agile Testing Quadrants model, which the presenters believe has created much confusion about the role of testing in Agile.
There's no time to test, can you just automate it? by Anna Heiermann (QA or the Highway)
Anna Heiermann discusses lessons learned from past testing failures and advocates for a risk-based testing approach. A risk-based strategy involves identifying product risks, prioritizing test cases based on risk, and communicating the test plan to stakeholders. This helps ensure the highest priority and most critical areas are tested thoroughly. When risks are missed, it can lead to catastrophic bugs affecting users and lost revenue. With a risk-based approach, testing is targeted efficiently to reduce risks while keeping stakeholders informed.
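The prioritization step at the heart of this risk-based approach can be sketched as a simple scoring exercise. Here is a minimal Python illustration, assuming a multiplicative likelihood-times-impact model with invented test cases and scales (none of this comes from the talk itself):

```python
# Toy risk-based test prioritization: rank test cases by risk score,
# where risk = likelihood of failure (1-5) x impact on users (1-5).
# The test cases, scales, and scoring model are illustrative assumptions.

test_cases = [
    {"name": "checkout payment flow", "likelihood": 4, "impact": 5},
    {"name": "profile avatar upload", "likelihood": 2, "impact": 1},
    {"name": "login with SSO", "likelihood": 3, "impact": 5},
]

def risk_score(tc):
    """Simple multiplicative risk model."""
    return tc["likelihood"] * tc["impact"]

# Highest-risk cases run first; low-risk cases are the first to be cut
# when time runs short.
ranked = sorted(test_cases, key=risk_score, reverse=True)
for tc in ranked:
    print(f"{risk_score(tc):>2}  {tc['name']}")
```

The ranked list doubles as the communication artifact the abstract mentions: it shows stakeholders exactly what will be tested first and what is at risk of being skipped.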
All knowledge work requires a delicate and continuously shifting balance between delivery – exploiting existing knowledge – and discovery – exploring new knowledge. This need to balance discovery and delivery can be found across the entire innovation cycle: from technology innovation through performance and sustaining innovation to disruptive innovation. It has been a driving concern for specific approaches such as Lean Product and Process Development as well as The Kanban Method, as exemplified in examples such as: developing a new product that requires novel features (discovery) while at the same time managing the overall risk that is involved in developing those features (delivery); improving agility and predictability of an organization that may require substantial change (discovery) while at the same time keeping resistance to change under control (delivery); a startup that requires an initial focus on finding problem/solution fit or product/market fit (discovery) but then needs to develop the organization to deliver at scale (delivery); etc.
In each of the examples above, too much emphasis on discovery may result in a disconnection with the past leading to resistance to change, increasing delivery risk, and non-adoption of innovation. Too little emphasis on discovery (and consequently too much emphasis on delivery) may lead to not being prepared for the future resulting in stagnation and the risk of being disrupted. Discovery Kanban systems are Kanban systems that help to balance discovery and delivery while moving from a mindset of episodic (one-off) innovation and change towards a culture of continuous innovation and change. Discovery Kanban systems work across the entire discovery cycle starting from pre-hypothesis moving into hypothesis validation and ending in post-hypothesis. In this presentation, we will discuss the different elements of Discovery Kanban, examples and underlying principles.
A Rapid Introduction to Rapid Software Testing (TechWell)
This document provides a summary of a presentation on Rapid Software Testing. The presentation was given by Michael Bolton of DevelopSense and covered the methodology and mindset of rapid software testing. It emphasizes testing software expertly under uncertainty and time pressure. The presentation defines rapid testing as testing more quickly and less expensively while still achieving excellent results. It compares rapid testing to other approaches like exhaustive, ponderous, and slapdash testing. The presentation also discusses principles of rapid testing, how to recognize problems quickly using heuristics, and testing rapidly to fulfill the mission of testing.
STLDODN - Agile Testing in a Waterfall World (Angela Dugan)
Everybody seems to be talking about agile these days, but most companies are still using a waterfall based methodology. Often, the team delivering the code uses a different process than the team responsible for software quality. In this presentation, Angela will discuss which agile tenets are worth incorporating into your daily testing activities in this situation and the impacts, both positive and negative, that you should expect. You will learn tips and tricks for introducing agile concepts into a waterfall environment slowly and successfully; methods that incorporate not just application lifecycle management tools, but a look at strategies for process improvement and in some cases good, old-fashioned psychology. Join Angela to find that low hanging fruit you can address quickly to become more agile, understand how to recognize and mitigate common pitfalls, and learn tools and techniques for managing an agile-under-waterfall testing effort.
Michael Bolton - Two Futures of Software Testing (TEST Huddle)
EuroSTAR Software Testing Conference 2008 presentation on Two Futures of Software Testing by Michael Bolton. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
S.M.A.R.T & F.O.C.U.S Testing - Increasing the value provided by your testing... (PractiTest)
1. The document discusses how testing teams can increase the perceived value of their work by better understanding how their work is perceived by others and communicating the right information to stakeholders.
2. It identifies that issues with perception of value come from not providing the value stakeholders need and not effectively communicating the value brought to projects.
3. The document provides recommendations for testing teams to focus on communicating information to stakeholders through things like information schedules and alternative reporting methods, and on ensuring communication is smart, fast, objective, condensed, user-centered, and focused on serving the customer.
Ho Chi Minh City Software Testing Conference January 2015
Software Testing in the Agile World
Website: www.hcmc-stc.org
Author: Lee Copeland
Over the years writers have defined testing as a process of finding, a process of evaluating, a process of measuring, a process of improving. For a quarter of a century we as testers have been focused on the internal process of testing, while generally disregarding its real purpose. The real purpose of testing is to create information. James Bach nailed it when he wrote, “The ultimate reason testers exist is to provide information that others on the project use to create things of value.” That is why testing exists — to provide information of value. So, when managers complain that testing “costs too much” perhaps they are really trying to say, “I’m not getting enough valuable information to justify the cost of testing.” When testers say “my management doesn’t see the value in our work” perhaps they are really trying to say, “My management doesn’t value the information I’m providing to them.” To prove our worth, to increase the value of testing, we must first focus on testing’s purpose — providing valuable information — not its process. Join Lee as he discusses why quantifying the value of testing is difficult work — perhaps that’s why we concentrate so much on testing process—that’s much easier. But until we do this difficult work, until we prove our worth through quantifying our contribution, we should expect the bombardments to continue.
Greg has over 20 years of expertise in applied data analysis techniques, instructional design, and training and development.
Root Cause and Corrective Action (RCCA) Workshop
DevOps is driven by tooling and automation implemented as continuous delivery, practices and processes seen in lean management principles, and organizational culture. Research shows these factors drive both IT performance through metrics like deployment frequency and mean time to recovery, and organizational performance. High performing teams are more agile with frequent deployments and faster lead times, as well as more reliable with fewer deployment failures and faster mean time to recovery, without tradeoffs between throughput and stability. Culture, job satisfaction, and a climate for learning are also key predictors of performance.
Root Cause Analysis is a method of problem solving that identifies the root causes of failures or problems. A root cause is the source of a problem and its resulting symptoms; once the root cause is removed, the undesirable outcome is corrected or prevented from recurring.
Congruent Coaching: An Interactive Exploration (TechWell)
We have opportunities to coach people all the time. Much of what we see as coaching is actually undercover training. Real coaching is richer—offering support while explaining options. In this interactive session, Johanna Rothman invites you to explore how to coach, regardless of your position in the organization. Teaching is just one option for coaching. You have many other options, depending on your coaching stance. You may select a counselor’s stance if you are managing up or a partner’s stance if you are a peer. You might even select a reflective observer’s stance or a technical advisor’s stance, depending on the situation. We will explore what to do when you see opportunities for coaching but you haven’t been asked to coach. Bring your coaching concerns, whether you are coaching onsite or at a distance, one-on-one or with teams. Let’s learn and build our coaching skills together.
A Guide to Cross-Browser Functional Testing (TechWell)
The term “cross-browser functional testing” usually means some variation of automated or manual testing of a web-based application on different mobile or desktop browsers. The aim of the testing might be to ensure that the application under test behaves or looks the same way on different browsers. Another meaning could be to verify that the application works with two or more browsers simultaneously. Malcolm Isaacs examines these different interpretations of cross-browser functional testing and clarifies what each means in practice. Malcolm explains some of the many challenges of writing and executing portable and maintainable automated test scripts which are at the heart of cross-browser testing. Learn some practical approaches to overcome these challenges, and take back manual and automated testing techniques to validate the consistency and accuracy of your applications—whatever browser they run in.
Critical thinking is the kind of thinking that specifically looks for problems and mistakes. Regular people don't do a lot of it. However, if you want to be a great tester, you need to be a great critical thinker. Critically thinking testers save projects from dangerous assumptions and ultimately from disasters. The good news is that critical thinking is not just innate intelligence or a talent—it's a learnable and improvable skill you can master. James Bach shares the specific techniques and heuristics of critical thinking and presents realistic testing puzzles that help you practice and increase your thinking skills. Critical thinking begins with just three questions—Huh? Really? and So?—that kick start your brain to analyze specifications, risks, causes, effects, project plans, and anything else that puzzles you. Join James for this interactive, hands-on session and practice your critical thinking skills. Study and analyze product behaviors and experience new ways to identify, isolate, and characterize bugs.
User Acceptance Testing: Make the User a Part of the Team (TechWell)
Adding user acceptance testing (UAT) to your testing lifecycle can increase the probability of finding defects before software is released. The challenge is to fully engage users and assist them in becoming effective testers. Help achieve this goal by involving users early and setting realistic expectations. Showing how users add value and taking them through the UAT process strengthens their ability and commitment. Conducting user acceptance testing sessions as software functionality becomes available helps to build confidence and capability—and find defects earlier. Susan Bradley shares a five-step process that you can use in your organization to conduct user acceptance testing. Learn to conduct training, set up daily testing expectations, assign test cases to users, create a shared information site for both test case management and feedback documentation, conduct a review of noted issues with all interested parties, and participate in a retrospective regarding the UAT process to improve the process for next time.
It’s one week after your product’s launch, and everyone is happy. After all, for the first time in years, your product development exceeded expectations. Coding was completed on time with very few defects. Suddenly, the report of a major usability and security flaw destroys the euphoria and sends everything into chaos. Unfortunately, this is not uncommon in our industry. So, how can we mitigate such things from happening? As he shares stories about the complex domain of product delivery, Ray Arell introduces a framework with associated emergent practices that enable you to better guide your product to success. He presents an overview of the Cynefin model, a description of complicated and complex systems, and discusses how to use it to establish an effective testing strategy. Ray describes how to identify key patterns of product usage to establish a robust defect-prevention system that reduces product development costs. Lastly, Ray describes how to interview customers to identify key quality expectations, ensuring that your testing focuses on producing the highest value for your customers.
Improving the Mobile Application User Experience (UX) (TechWell)
If users can’t figure out how to use your mobile applications and what’s in it for them, they’re gone. Usability and UX are key factors in keeping users satisfied, so understanding, measuring, testing, and improving these factors is critical to the success of today’s mobile applications. However, sometimes these concepts can be confusing—not only differentiating them but also defining and understanding them. Philip Lew explores the meanings of usability and UX, discusses how they are related, and then examines their importance for today’s mobile applications. After a brief discussion of how the meanings of usability and user experience depend on the context of your product, Phil defines measurements of usability and user experience that you can use right away to quantify these subjective attributes. He crystallizes abstract definitions into concepts that can be measured, offers metrics to evaluate and improve your product, and provides numerous examples that demonstrate how to improve your mobile app.
Test reporting is something few testers take time to practice. Nevertheless, it's a fundamental skill—vital for your professional credibility and your own self-management. Many people think management judges testing by bugs found or test cases executed. Actually, testing is judged by the story it tells. If your story sounds good, you win. A test report is the story of your testing. It begins as the story we tell ourselves, each moment we are testing, about what we are doing and why. We use the test story within our own minds to guide our work. James Bach explores the skill of test reporting and examines some of the many different forms a test report might take. As in other areas of testing, context drives good reporting. Sometimes we make an oral report; occasionally we need to write it down. Join James for an in-depth look at the art of test reporting.
Designing for Testability: Differentiator in a Competitive Market (TechWell)
In today’s cost conscious marketplace, solution providers gain advantage over competitors when they deliver measurable benefits to customers and partners. Systems of even small scope often involve distributed hardware/software elements with varying execution parameters. Testing organizations often deal with a complex set of testing scenarios, increased risk for regression defects, and competing demands on limited system resources for a continuous comprehensive test program. Learn how designing a testable system architecture addresses these challenges. David Campbell offers practical guidance on the process to make testability a key discriminator from the earliest phases of product definition and design. Learn approaches that consistently deliver for high achieving organizations, and how these approaches impact schedule and architecture performance. Gain insight on how to select and customize techniques that are appropriate for your organization’s size, culture, and market.
Randy Rice presented on lessons learned from user acceptance testing (UAT) on four different projects. The first project involved a new laboratory testing system that had severe performance issues and required three redeployments. The second project with the same company was more successful due to improved testing practices. The third project involved designing many tests based on business scenarios before the system's interface was known. The last project involved a complex legal system where system testing found most defects and UAT involved a simplified walkthrough. Key lessons included not relying solely on UAT, implementing incrementally, and adjusting UAT plans as more is learned.
CAN I USE THIS?—A Mnemonic for Usability Testing (TechWell)
Often, usability testing does not receive the attention it deserves. A common argument is that usability issues are merely “training issues” and can be dealt with through the product's training or user manuals. If your product is only for internal staff use, this may be a valid response. However, the market now demands easy-to-use products—whether your users are internal or external. David Greenlees shares a tool he has developed to generate test ideas for usability testing. His mnemonic—CAN I USE THIS?—provides a solid starting point for testing any product. C for Comparable Product, A for Accessibility, N for Navigation … David shares how he has used this mnemonic on past projects while the training argument took place around him, and how they realized product improvements and greater user acceptance. Learn how you can quickly and effectively use this mnemonic on any project so you can give usability testing the attention it deserves.
The key to successful testing is effective and timely planning. Rick Craig introduces proven test planning methods and techniques, including the Master Test Plan and level-specific test plans for acceptance, system, integration, and unit testing. Rick explains how to customize an IEEE-829-style test plan and test summary report to fit your organization’s needs. Learn how to manage test activities, estimate test efforts, and achieve buy-in. Discover a practical risk analysis technique to prioritize your testing and become more effective with limited resources. Rick offers test measurement and reporting recommendations for monitoring the testing process. Discover new methods and develop renewed energy for taking your organization’s test management to the next level.
Testing in the Wild: Practices for Testing Beyond the Lab (TechWell)
The stakes in the mobile app marketplace are very high, with thousands of apps vying for the limited space on users’ mobile devices. Organizations must ensure that their apps work as intended from day one and to do that must implement a successful mobile testing strategy leveraging in-the-wild testing. Matt Johnston describes how to create and implement a tailored in-the-wild testing strategy to boost app success and improve user experience. Matt provides strategies, tips, and real-world examples and advice on topics ranging from fragmentation issues, to the different problems inherent in web and mobile apps, to deciding what devices you must test vs. those you should test. After hearing real-world examples of how testing in the wild affects app quality, leave with an understanding of and actionable information about how to launch apps that perform as intended in the hands of end-users—from day one.
Extreme Automation: Software Quality for the Next Generation Enterprise (TechWell)
Software runs the business. The modern testing organization aspires to be a change agent and an inspiration for quality throughout the entire lifecycle. To be a change agent, the testing organization must have the right people and skill sets, the right processes in place to ensure proper governance, and the right technology to aid in the delivery of software in support of the business line. Traditionally, testing organizations have focused on the people and process aspect of solving quality issues. With the ever-increasing complexity of the software needed to run the enterprise, testing professionals must adopt technology to help solve some of the most challenging quality issues ever. In short, testing organizations must make the move to extreme automation and become proficient with modern tooling and its benefits. Theresa Lanowitz focuses on new and emerging technologies—proven and successful—to add to the workbench of the test professional.
During the past decade, test engineers have become experts in browser compatibility testing. Just when we thought everything was under control, along come native mobile applications that need to run across platforms far more diverse than the desktop browser landscape has ever been. The variety of OSs, screen sizes, and hardware technology combine to create hundreds of configurations that need some testing. Manual testing across so many deployment targets will drive anyone crazy. Stu Stern looks at the biggest challenges in mobile testing: functional, platform, display, and device compatibility testing and explores how you can use MonkeyTalk, a free open source tool to create test suites that can be easily run across today’s menagerie of mobile devices. MonkeyTalk can help you automate functional interactive tests for native, mobile, and hybrid iOS and Android apps—everything from simple "smoke tests" to sophisticated data-driven test suites.
Today’s test organizations often have sizable investments in test automation. Unfortunately, running and maintaining these test suites represents another sizable investment. All too often this hard work is abandoned and teams revert to a more costly, but familiar, manual approach. Jared Richardson says a more practical solution is to integrate test automation suites with continuous integration (CI). A CI system monitors your source code and compiles the system after every change. Once the build is complete, test suites are automatically run. This approach of ongoing test execution provides your developers rapid feedback and keeps your tests in constant use. It also frees up your testers for more involved exploratory testing. Jared shows how to set up an open source continuous integration tool and explains the best way to introduce this technique to your developers and testers. The concepts are simple when presented properly and provide solid benefits to all areas of an organization.
Whether you are new to testing or looking for a better way to organize your test practices and processes, the Systematic Test and Evaluation Process (STEP™) offers a flexible approach to help you and your team succeed. Dale Perry describes this risk-based framework—applicable to any development lifecycle model—to help you make critical testing decisions earlier and with more confidence. The STEP™ approach helps you decide how to focus your testing effort, what elements and areas to test, and how to organize test designs and documentation. Learn the fundamentals of test analysis and how to develop an inventory of test objectives to help prioritize your testing efforts. Discover how to translate these objectives into a concrete strategy for designing and developing tests. With a prioritized inventory and focused test architecture, you will be able to create test cases, execute the resulting tests, and accurately report on the quality of your application and the effectiveness of your testing. Take back a proven approach to organize your testing efforts and new ways to add more value to your project and organization.
Rick Craig, a consultant with over 30 years of experience in testing and test management, presented a training on essential test management and planning. The presentation covered topics such as test levels, test methodologies, test planning, and test documentation like the master test plan. It emphasized treating testing as a lifecycle process integrated throughout development.
Intro to Data Analytics with Oscar's Director of Product (Product School)
The Director of Product at Oscar, Vasudev Vadlamudi, went over key types of quantitative analysis that B2C product managers use on the job, including funnels, cohorts, and A/B testing. For each, he explained when and why it is used and illustrated it with examples.
Software quality metrics provide important insights into software testing efforts and processes. They can help evaluate products and processes against goals, control resources, and predict future attributes. There are three categories of metrics: process, product, and project. Process metrics measure testing efficiency and effectiveness. Product metrics depict product characteristics like size and quality. Project metrics measure schedule, cost, productivity, and code quality. Choosing metrics based on organizational goals and providing feedback are best practices for an effective metrics program.
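Two of the metrics this summary alludes to have simple arithmetic definitions. As a minimal sketch (the function names are mine and every figure is invented for illustration):

```python
# Illustrative definitions of two common software quality metrics.
# All input figures below are made-up example data.

def defect_density(defects_found, ksloc):
    """Defects per thousand lines of code (KSLOC)."""
    return defects_found / ksloc

def defect_removal_efficiency(found_before_release, found_after_release):
    """Percentage of all known defects that were caught before release."""
    total = found_before_release + found_after_release
    return 100.0 * found_before_release / total

print(defect_density(46, 20))            # defects per KSLOC
print(defect_removal_efficiency(92, 8))  # percent caught pre-release
```

Even toy formulas like these only become useful when tied to a goal, as the summary notes: a density figure means little without a baseline or target to compare it against.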
This document contains a summary of a presentation on essential test management and planning. The presentation was given by Rick Craig of Software Quality Engineering and covered topics such as test methodology, test levels, test planning, and test management. The summary consisted of over 20 slides covering these various test management topics in detail.
Tackling software testing challenges in the agile era (QASymphony)
This document provides an overview of testing challenges in the Agile development era and discusses different testing methodologies. It contains introductions to four chapters that will be included in the eBook. The chapters are written by Vu Lam, CEO of QASymphony, and Sellers Smith, Director of Quality Assurance and Agile Evangelist for Silverpop.
The first chapter discusses how testers need to be reimagined for the Agile age. Testers must adopt an Agile mindset and be involved earlier in the development process. They also need tools designed specifically for Agile testing. The second chapter explores different testing methods including automated, exploratory, and user acceptance testing. It advises using
How to get the most from your clinical outcome assessment (COA) measure - Tes... (Keith Meadows)
Establishing the measurement properties (e.g., construct validity) of a clinical outcome assessment (COA) is a major requirement in its development process. QuesTReview™, incorporating our proprietary QuestAnalyzer™ diagnostic test, benchmarks your mobile, tablet, desktop, or paper COA against key parameters of good questionnaire design practice.
Agile Testing: Best Practices and Methodology (Zoe Gilbert)
Agile testing focuses on delivering value to customers through frequent testing and feedback. It differs from the traditional waterfall model which separates development and testing. The document discusses four main agile testing methodologies: behavior driven development, acceptance test driven development, exploratory testing, and session based testing. It also covers the agile testing quadrants framework and how companies can implement best practices for agile testing.
Testing is needed to identify defects, provide confidence, and prevent defects. The objectives of testing include finding defects, providing information, and achieving confidence. Exhaustive testing is impossible, so risk-based testing is used instead of testing all combinations of inputs. Testing activities should start early in the software development life cycle and focus on defined objectives. Defect clusters are used to plan risk-based tests and test cases are regularly revised to overcome the pesticide paradox. The fundamental test process includes test planning, analysis and design, implementation and execution, evaluation and reporting, and closure activities. Independence is important for testing to provide an objective perspective.
Pin the tail on the metric (Steven Martin)
This presentation takes a different approach to metrics. Instead of listing the Top 10 field-tested metrics, we first talk about goals as prerequisites for metrics. Next, we discuss characteristics of good and bad metrics. We end with walking through an activity called “Pin the Tail on the Metric,” a technique to facilitate the critical thinking needed to determine what types of metrics can help your organization discuss trade-offs, options, and ultimately make better forward-looking decisions.
Analyst Keynote: Continuous Delivery: Making DevOps Awesome (CA Technologies)
This document summarizes a keynote presentation about continuous delivery and DevOps. The presentation discusses research showing that continuous delivery is a key driver of IT and organizational performance. It also discusses how lean management practices and organizational culture contribute to performance. The presentation provides examples of how continuous delivery, lean practices, and culture have helped organizations deliver more value. It encourages adopting these approaches to improve outcomes.
Improving software quality for the future of connected vehicles (Devon Bleibtrey)
In the highly regulated environment of automotive, software quality can be difficult but it doesn't need to be. ESG partners with software teams to improve their team's performance through developer operations. From culture to tool integrations, ESG takes a holistic approach to help teams measurably improve their software development lifecycle and the quality of its output.
The productivity of testing in the software development life cycle (Nora Alriyes)
This document discusses software testing in the software development life cycle. It addresses three questions: who tests software, how to test software, and what to test in software. Regarding who tests, it discusses research finding that testers often find less important defects than other roles. How to test is discussed in the context of Google's approach of integrating testing roles into development teams. What to test addresses challenges with the waterfall model and proposes risk-based and iterative testing models to help prioritize testing. The goal is to make testing more productive and address challenges of limited time and resources.
Has your organization ever considered replacing a tester who did not write, for example, 15 test cases per day? Is the testing team blamed if defect leakage into production is greater than 5%? What drives decisions like these? The common thread in these examples is “test metrics.”
Test Metrics... Everyone has an opinion about them. Some believe they are the most valuable way to communicate the results of testing. Some think that they are useless, misleading, and damaging to the communication of test results. Some believe that without measurement you are not managing the effort. And some believe that bad metrics are worse than no metrics at all.
Where does your organization fit in the metrics and measurement debates? Is your team aligned? Do you agree with the team? Do you use a reporting process for test results? Are you forced to report on metrics you don't believe are valuable? Do you periodically report dozens of metrics that no one looks at and that, when someone finally does, leave room for misinterpretation?
In this session, Mike Lyles and Jay Philips will challenge the audience to discuss the topic of metrics and measurement, review multiple viewpoints on the topic, and address many of the questions that organizations have today around metrics and measurement.
Takeaways:
- Top metrics that are misused or misunderstood in almost every organization.
- Metrics you should get rid of ASAP!
- The best and worst metrics, based on the opinions of the speakers and audience.
- Metrics that everyone should use – and how they compare to your organization’s metrics.
- Tools and processes that can help your organization better measure your testing.
** Presentation given at STPCon Spring 2014
This document discusses the importance of test metrics in software testing. It provides examples of key metrics like productivity, defect count, and skills assessment. Productivity metrics like test cases designed/executed per day can demonstrate team capabilities. Defect data around count, age, and severity provides critical project health information. Skills can be measured on an individual, team, and readiness level against required skills to identify training needs. Representing and tracking the right metrics ensures project quality and on-time delivery.
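As a rough illustration of the kinds of metrics this abstract mentions, a team might compute them along these lines. The field names, figures, and helper functions below are hypothetical, chosen only to make the productivity and defect-data ideas concrete:

```python
# Hypothetical examples of the metrics named above: productivity
# (test cases executed per working day) and defect data (count, age, severity).
from datetime import date

# Illustrative defect records; real data would come from a defect tracker.
defects = [
    {"opened": date(2024, 1, 3), "closed": date(2024, 1, 10), "severity": "high"},
    {"opened": date(2024, 1, 5), "closed": date(2024, 1, 6),  "severity": "low"},
    {"opened": date(2024, 1, 8), "closed": date(2024, 1, 20), "severity": "high"},
]

def executed_per_day(total_executed, working_days):
    """Productivity: test cases executed per working day."""
    return total_executed / working_days

def average_defect_age(defects):
    """Average number of days a defect stayed open."""
    ages = [(d["closed"] - d["opened"]).days for d in defects]
    return sum(ages) / len(ages)

def count_by_severity(defects, severity):
    """How many defects were logged at a given severity."""
    return sum(1 for d in defects if d["severity"] == severity)

print(executed_per_day(120, 10))         # 12.0 test cases per day
print(average_defect_age(defects))       # mean open age in days
print(count_by_severity(defects, "high"))
```

Tracking numbers like these over time, rather than as one-off snapshots, is what lets them signal project health.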
Agile and CMMI: Yes, They Can Work Together (TechWell)
There is a common misconception that agile and CMMI cannot work together. CMMI is viewed as a documentation heavy, slow, process-driven model—the polar opposite of agile principles. The cost of documentation for an appraisal is viewed as another drawback. Join Ed Weller to see why a large organization chose to use the practices in the CMMI to complement agile, and a formal appraisal to improve and evaluate their performance. When mixing approaches that seem contradictory, the first step is to understand the benefits, drawbacks, and cost of each approach and then identify complementary additions. This includes myth busting the misperceptions about both agile and CMMI. The second step, using a formal CMMI appraisal to evaluate organizational performance, requires an understanding of the CMMI model that goes beyond a “checklist approach” requiring extensive documentation. Using lean principles, the appraisal team minimized “appraisal documentation” by using the day-to-day team output. Ed shows that agile and CMMI can be complementary due to executive leadership, lean implementation, and organization training, as demonstrated by a formal appraisal and business results.
Effectiveness of software product metrics for mobile applications (Tanveer Ahmad)
This document discusses the effectiveness of software product metrics for mobile applications. It defines effectiveness and explores how metrics can be used to measure the quality, performance, and efficacy of mobile apps. The document reviews literature on software metrics and their importance. It also examines different types of product metrics like size, complexity, and defect metrics. Finally, it proposes using a statistical simulation approach and developing a new measurement tool called the Effectiveness Calculation Model for Mobile Applications to quantify mobile app performance using computational mathematics.
Software Metrics: Taking the Guesswork Out of Software Projects (TechWell)
Why bother with measurement and metrics? If you never use the data you collect, this is a valid question—and the answer is “Don’t bother, it’s a waste of time.” In that case, you’ll manage with opinions, personalities, and guesses—or even worse, misconceptions and misunderstandings. Based on his more than forty years of software and systems development experience, Ed Weller describes reasons for measurement, key measures in both traditional and agile environments, decisions enabled by measurement, and lessons learned from successful—and not so successful—measurement programs. Find out how to develop and maintain consistent data and valid measures so you can estimate reliably, deliver products with known quality, and have happy users and customers—the ultimate trailing indicator. Learn to manage projects dynamically with the support of current metrics and data from past projects to guide your management planning and control. Join Ed to explore how to invest in measurements that provide leading indicators to help you meet your company and customer goals.
Communications of the ACM, July 2012, Vol. 55, No. 7 (practice)
[Illustration by Gary Neill]
Are software metrics helpful tools or a waste of time? For every developer who treasures these mathematical abstractions of software systems there is a developer who thinks software metrics are invented just to keep project managers busy. Software metrics can be very powerful tools that help achieve your goals, but it is important to use them correctly, as they also have the power to demotivate project teams and steer development in the wrong direction.

For the past 11 years, the Software Improvement Group has advised hundreds of organizations concerning software development and risk management on the basis of software metrics. We have used software metrics in more than 200 investigations in which we examined a single snapshot of a system. Additionally, we use software metrics to track the ongoing development effort of more than 400 systems. While executing these projects, we have learned some pitfalls to avoid when using software metrics in a project management setting. This article addresses the four most important of these:

- Metric in a bubble;
- Treating the metric;
- One-track metric; and
- Metrics galore.

Knowing about these pitfalls will help you recognize them and, hopefully, avoid them, which ultimately leads to making your project successful. As a software engineer, your knowledge of these pitfalls helps you understand why project managers want to use software metrics and helps you assist the managers when they are applying metrics in an inefficient manner. As an outside consultant, you need to take the pitfalls into account when presenting advice and proposing actions. Finally, if you are doing research in the area of software metrics, knowing these pitfalls will help place your new metric in the right context when presenting it to practitioners. Before diving into the pitfalls, let's look at why software metrics can be considered a useful tool.

Software Metrics Steer People

“You get what you measure.” This phrase definitely applies to software project teams. No matter what you define as a metric, as soon as it is used to evaluate a team, the value of the metric moves toward the desired value. Thus, to reach a particular goal, you can continuously measure properties of the desired goal and plot these measurements in a place visible to the team. Ideally, the desired goal is plotted alongside the current measurement to indicate the distance to the goal.

Imagine a project in which the runtime performance of a particular use case is of critical importance. In this case it helps to create a test in which the execution time of the use case is measured daily. By plotting this daily data point against the desired value, and making sure the team sees this mea.
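The daily-measurement idea described above can be sketched in a few lines. The metric (use-case execution time in milliseconds), the sample values, and the goal are illustrative assumptions, not data from the article:

```python
# Track a daily measurement against a desired goal, as described above.
# Assumes lower is better, as with execution time; all numbers are invented.

def distance_to_goal(measurements, goal):
    """Return (latest value, remaining gap to the goal) for a daily series."""
    latest = measurements[-1]
    return latest, latest - goal

daily_ms = [410, 395, 372, 350, 331]  # one execution-time sample per day
goal_ms = 300

latest, gap = distance_to_goal(daily_ms, goal_ms)
print(f"latest: {latest} ms, still {gap} ms above the {goal_ms} ms goal")
```

Plotting `daily_ms` alongside a horizontal line at `goal_ms` gives the team the visible distance-to-goal the article recommends.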
Isabel Evans stopped drawing and painting after being told she was not very good at it, which led to a loss of confidence in her creative and professional abilities. However, she realized that attempting creative activities is important for cognitive and emotional development, and that making mistakes and learning from failures allows for growth. By reengaging with failure through art and with support from others, Isabel was able to regain confidence in her abilities and reboot her career. The document discusses different perspectives on failure and the importance of learning from mistakes.
Instill a DevOps Testing Culture in Your Team and Organization (TechWell)
The DevOps movement is here. Companies across many industries are breaking down siloed IT departments and federating them into product development teams. Testing and its practices are at the heart of these changes. Traditionally, IT organizations have been staffed with mostly manual testers and a limited number of automation and performance engineers. To keep pace with development in the new “you build it, you own it” environment, testing teams and individuals must develop new technical skills and even embrace coding to stay relevant and add greater value to the business. DevOps really starts with testing. Join Adam Auerbach as he explains what DevOps is and how it relates to testing. He describes how testing must change from top to bottom and how to access your own environment to identify improvement opportunities. Adam dives into practices like service virtualization, test data management, and continuous testing so you can understand where you are now and identify steps needed to instill a DevOps testing culture in your team and organization.
Test Design for Fully Automated Build Architecture (TechWell)
This document summarizes a half-day tutorial on test design for fully automated build architectures presented by Melissa Benua of mParticle at STAREAST 2018. The tutorial covered guiding principles for test design including prioritizing important and reliable tests, structuring automated pipelines around components, packages, and releases, and monitoring test results through code coverage, flaky test handling, and logging versus counters. It also included exercises mapping test cases to functional boundaries and categories of tests to pipeline stages.
System-Level Test Automation: Ensuring a Good Start (TechWell)
Many organizations invest a lot of effort in test automation at the system level but then have serious problems later on. As a leader, how can you ensure that your new automation efforts will get off to a good start? What can you do to ensure that your automation work provides continuing value? This tutorial covers both “theory” and “practice”. Dot Graham explains the critical issues for getting a good start, and Chris Loder describes his experiences in getting good automation started at a number of companies. The tutorial covers the most important management issues you must address for test automation success, particularly when you are new to automation, and how to choose the best approaches for your organization—no matter which automation tools you use. Focusing on system level testing, Dot and Chris explain how automation affects staffing, who should be responsible for which automation tasks, how managers can best support automation efforts to promote success, what you can realistically expect in benefits and how to report them. They explain—for non-techies—the key technical issues that can make or break your automation effort. Come away with your own clarified automation objectives, and a draft test automation strategy to use to plan your own system-level test automation.
Build Your Mobile App Quality and Test Strategy (TechWell)
Let’s build a mobile app quality and testing strategy together. Whether you have a web, hybrid, or native app, building a quality and testing strategy means (1) knowing what data and tools you have available to make agile decisions, (2) understanding your customers and your competitors, and (3) testing your app under real-world conditions. Jason Arbon guides you through the latest techniques, data, and tools to ensure the awesomeness of your mobile app quality and testing strategy. Leave this interactive session with a strategy for your very own app—or one you pretend to own. The information Jason shares is based on data from Appdiff’s next-gen mobile app testing platform, lessons from Applause/uTest’s crowd, text mining hundreds of millions of app store reviews, and in-depth discussions with top mobile app development teams.
Testing Transformation: The Art and Science for Success (TechWell)
Technologies, testing processes, and the role of the tester have evolved significantly in the past few years with the advent of agile, DevOps, and other new technologies. It is critical that we testing professionals evaluate ourselves and continue to add tangible value to our organizations. In your work, are you focused on the trivial or on real game changers? Jennifer Bonine describes critical elements that help you artfully blend people, process, and technology to create a synergistic relationship that adds value. Jennifer shares ideas on mastering politics, maneuvering core vs. context, and innovating your technology strategies and processes. She explores how new processes can be introduced in an organization, what the role of organizational culture is in determining the success of a project, and how you can know what tools will add value vs. simply adding overhead and complexity. Jennifer reviews critically needed tester skills and discusses a continual learning model to evolve your skills and stay relevant. This discussion can lead you to technologies, processes, and skills you can stake your career on.
We’ve all been there. We work incredibly hard to develop a feature and design tests based on written requirements. We build a detailed test plan that aligns the tests with the software and the documented business needs. And when we put the tests to the software, it all falls apart because the requirements were changed without informing everyone. Mary Thorn says help is at hand. Enter behavior-driven development (BDD), and Cucumber and SpecFlow, tools for running automated acceptance tests and facilitating BDD. Mary explores the nuances of Cucumber and SpecFlow, and shows you how to implement BDD and agile acceptance testing. By fostering collaboration for implementing active requirements via a common language and format, Cucumber and SpecFlow bridge the communication gap between business stakeholders and implementation teams. In this workshop, practice writing feature files with the best practices Mary has discovered over numerous implementations. If you experience developers not coding to requirements, testers not getting requirements updates, or customers who feel out of the loop and don’t get what they ask for, Mary has answers for you.
Develop WebDriver Automated Tests—and Keep Your Sanity (TechWell)
Many teams go crazy because of brittle, high-maintenance automated test suites. Jim Holmes helps you understand how to create a flexible, maintainable, high-value suite of functional tests using Selenium WebDriver. Learn the basics of what to test, what not to test, and how to avoid overlapping with other types of testing. Jim includes both philosophical concepts and hands-on coding. Testers who haven't written code should not be intimidated! We'll pair you up to make sure you're successful. Learn to create practical tests dealing with advanced situations such as input validation, AJAX delays, and working with file downloads. Additionally, discover when you need to work together with developers to create a system that's more easily testable. This tutorial focuses primarily on automating web tests, but many of the same concepts can be applied to other UI environments. Demos and labs will be in C# and Java using WebDriver. Leave this tutorial having learned how to write high-value WebDriver tests—and stay sane while doing so.
Eliminate Cloud Waste with a Holistic DevOps Strategy (TechWell)
Chris Parlette maintains that renting infrastructure on demand is the most disruptive trend in IT in decades. In 2016, enterprises spent $23B on public cloud IaaS services. By 2020, that figure is expected to reach $65B. The public cloud is now used like a utility, and like any utility, there is waste. Who's responsible for optimizing the infrastructure and reducing wasted expenses? It’s DevOps. The excess expense, known as cloud waste, comprises several interrelated problems: services running when they don't need to be, improperly sized infrastructure, orphaned resources, and shadow IT. There are a few core tenets of DevOps—holistic thinking, no silos, rapid useful feedback, and automation—that can be applied to reducing your cloud waste. Join Chris to learn why you should include continuous cost optimization in your DevOps processes. Automate cost control, reduce your cloud expenses, and make your life easier.
Transform Test Organizations for the New World of DevOps (TechWell)
With the recent emergence of DevOps across the industry, testing organizations are being challenged to transform themselves significantly within a short period of time to stay meaningful within their organizations. It’s not easy to plan and approach these changes considering the way testing organizations have remained structured for ages. These challenges start from foundational organizational structures and can cut across leadership influence, competencies, tools strategy, infrastructure, and other dimensions. Sumit Kumar shares his experience assisting various organizations to overcome these challenges using an organized DevOps enablement framework. The framework includes radical restructuring, turning the tools strategy upside down, a multidimensional workforce enablement supported by infrastructure changes, redeveloped collaborations models, and more. From his real world experiences Sumit shares tips for approaching this journey and explains the roadmap for testing organizations to transform themselves to lead the quality in DevOps.
The Fourth Constraint in Project Delivery—Leadership (TechWell)
All too often, the triple constraints—time, cost, and quality—are bandied about as if they are the be-all, end-all. While they are important, leadership—the fourth and larger underpinning constraint—influences the first three. Statistics on project success and failure abound, and these measurements are usually taken against the triple constraints. According to the Project Management Institute, only 53 percent of projects are completed within budget, and only 49 percent are completed on time. If so many projects overrun budget and are late, we can’t really say, “Good, fast, or cheap—pick two.” Rob Burkett talks about leadership at every level of a team. He shares his insights and stories gleaned from his years of IT and project management experience. Rob speaks to some of the glaring difficulties in the workplace in general and some specifically related to IT delivery and project management. Leave with a clearer understanding of how to communicate with teams and team members, and gain a better understanding of how you can be a leader—up and down your organization.
Resolve the Contradiction of Specialists within Agile Teams (TechWell)
As teams grow, organizations often draw a distinction between feature teams, which deliver the visible business value to the user, and component teams, which manage shared work. Steve Berczuk says that this distinction can help organizations be more productive and scale effectively, but he recognizes that not all shared work fits into this model. Some work is best handled by “specialists,” that is people with unique skills. Although teams composed entirely of T-shaped people is ideal, certain skills are hard to come by and are used irregularly across an organization. Since these specialists often need to work closely with teams, rather than working from their own backlog, they don’t fit into the component team model. The use of shared resources presents challenges to the agile planning model. Steve Berczuk shares how teams such as those providing infrastructure services and specialists can fit into a feature+component team model, and how variations such as embedding specialists in a scrum team can both present process challenges and add significant value to both the team and the larger organization.
Pin the Tail on the Metric: A Field-Tested Agile Game (TechWell)
Metrics don’t have to be a necessary evil. If done right, metrics can help guide us to make better forward-looking decisions, rather than being used simply for managing or monitoring. They can help us identify trade-offs between options for what to do next, rather than serving as punitive or, worse, purely managerial measures. Steve Martin won’t be giving the Top Ten List of field-tested metrics you should use. Instead, in this interactive mini-workshop, he leads you through the critical thinking necessary for you to determine what is right for you to measure. First, Steve explores why you want to measure something—whether it’s for a team, a portfolio, or even an agile transformation. Next, he provides multiple real-life metrics examples to help drive home concepts behind characteristics of good and bad metrics. Finally, Steve shows how to run his field-tested agile game—Pin the Tail on the Metric. Take this activity back to help you guide metrics conversations at your organization.
Agile Performance Holarchy (APH)—A Model for Scaling Agile Teams (TechWell)
A hierarchy is an organizational network that has a top and a bottom, and where position is determined by rank, importance, and value. A holarchy is a network that has no top or bottom and where each person’s value derives from his ability, rather than position. As more companies seek the benefits of agile, leaders need to build and sustain delivery capability while scaling agile without introducing unnecessary process and overhead. The Agile Performance Holarchy (APH) is an empirical model for scaling and sustaining agility while continuing to deliver great products. Jeff Dalton designed the APH by drawing from lessons learned observing and assessing hundreds of agile companies and teams. The APH helps implement a holarchy—a system composed of interacting organizational units called holons—centered on a series of performance circles that embody the behaviors of high performing agile organizations. Jeff describes how APH provides guidelines in the areas of leadership, values, teaming, visioning, governing, building, supporting, and engaging within an all-agile organization. Join Jeff to see what the APH is all about and how you can use it in your team and organization.
A Business-First Approach to DevOps Implementation (TechWell)
DevOps is a cultural shift aimed at streamlining intergroup communication and improving operational efficiency for development and operations groups. Over time, inclusion of other IT groups under the DevOps umbrella has become the norm for many organizations. But even broadening the boundaries of DevOps, the conversation has been largely devoid of the business units’ place at the table. A common mistake organizations make while going through the DevOps transformation is drawing a line at the IT boundary. If that occurs, a larger, more inclusive silo within the organization is created, operating in an informational vacuum and causing operational inefficiency and goal misalignment. Sharing his experiences working on both sides of the fence, Leon Fayer describes the importance of including business units in order to align technology decisions with business goals. Leon discusses inclusion of business units in existing agile processes, benefits of cross-departmental monitoring, and a business-first approach to technology decisions.
Databases in a Continuous Integration/Delivery Process (TechWell)
The document summarizes a presentation about including databases in a continuous integration/delivery process. It discusses treating database code like application code by placing it under version control and integrating databases into the DevOps software development pipeline. This allows databases to be built, tested, and released like other software through continuous integration, delivery, and deployment.
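The idea of treating database code like application code can be sketched as a minimal migration runner that CI executes on every build. The migration list, the `schema_version` table, and the use of SQLite are illustrative assumptions, not details from the presentation:

```python
# Minimal sketch of versioned database migrations run from a CI pipeline.
# Migrations live in version control next to the application code.
import sqlite3

MIGRATIONS = [  # (version, DDL) pairs, ordered and append-only
    (1, "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)"),
    (2, "ALTER TABLE users ADD COLUMN email TEXT"),
]

def migrate(conn):
    """Apply any migrations newer than the recorded schema version."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    row = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()
    current = row[0] or 0
    for version, ddl in MIGRATIONS:
        if version > current:
            conn.execute(ddl)
            conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
    conn.commit()

conn = sqlite3.connect(":memory:")
migrate(conn)  # CI can run this against a fresh database on every build
migrate(conn)  # re-running is a no-op, so the step is safely repeatable
```

Because the runner records what it has applied, the same script works for building a fresh test database and for upgrading an existing one, which is what lets databases flow through the same build-test-release pipeline as application code.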
Mobile Testing: What—and What Not—to Automate (TechWell)
Organizations are moving rapidly into mobile technology, which has significantly increased the demand for testing of mobile applications. David Dangs says testers naturally are turning to automation to help ease the workload, increase potential test coverage, and improve testing efficiency. But should you try to automate all things mobile? Unfortunately, the answer is not always clear. Mobile has its own set of complications, compounded by a wide variety of devices and OS platforms. Join David to learn what mobile testing activities are ripe for automation—and those items best left to manual efforts. He describes the various considerations for automating each type of mobile application: mobile web, native app, and hybrid applications. David also covers device-level testing, types of testing, available automation tools, and recommendations for automation effectiveness. Finally, based on his years of mobile testing experience, David provides some tips and tricks to approach mobile automation. Leave with a clear plan for automating your mobile applications.
Cultural Intelligence: A Key Skill for Success (TechWell)
Diversity is becoming the norm in everyday life. However, introducing global delivery models without a proper understanding of intercultural differences can lead to difficulty, frustration, and reduced productivity. Priyanka Sharma and Thena Barry say that in our diverse world, we need teams with people who can cross these boundaries, communicate effectively, and build the diverse networks necessary to avoid problems. We need to learn about cultural intelligence (CI) and cultural quotient (CQ). CI is the ability to relate and work effectively across cultures. CQ is the cognitive, motivational, and behavioral capacity to understand and respond to beliefs, values, attitudes, and behaviors of individuals and groups. Together, CI and CQ can help us build behavioral capacities that aid motivation, behavior, and productivity in teams as well as individuals. Priyanka and Thena show how to build a more culturally intelligent place with tools and techniques from Leading with Cultural Intelligence, as well as content from the Hofstede cultural model. In addition, they illustrate the model with real-life experiences and demonstrate how they adapted in similar circumstances.
Turn the Lights On: A Power Utility Company's Agile Transformation (TechWell)
Why would a century-old utility with no direct competitors take on the challenge of transforming its entire IT application organization to an agile methodology? In an increasingly interconnected world, the expectations of customers continue to evolve. From smart meters to smart phones, IoT is creating a crisis point for industries not accustomed to rapid change. Glen Morris explains that pizzas can be tracked by the minute and packages at every stop, and customers now expect this same customer service model should exist for all industries—including power. Glen examines how to create momentum and transform non-IT-focused industries to an agile model. If you are struggling with gaining traction in your pursuit of agile within your business, Glen gives you concrete, practical experiences to leverage in your pursuit. Finally, he communicates how to gain buy-in from business partners who have no idea or concern about agile or its methodologies. If your business partners look at you with amusement when you mention the need for a dedicated Product Owner, join Glen as he walks you through the approaches to overcoming agile skepticism.
Infrastructure Challenges in Scaling RAG with Custom AI Models (Zilliz)
Building Retrieval-Augmented Generation (RAG) systems with open-source and custom AI models is a complex task. This talk explores the challenges in productionizing RAG systems, including retrieval performance, response synthesis, and evaluation. We’ll discuss how to leverage open-source models like text embeddings, language models, and custom fine-tuned models to enhance RAG performance. Additionally, we’ll cover how BentoML can help orchestrate and scale these AI components efficiently, ensuring seamless deployment and management of RAG systems in the cloud.
Programming Foundation Models with DSPy - Meetup Slides (Zilliz)
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024 (Neo4j)
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications. He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
UiPath Test Automation using UiPath Test Suite series, part 5 - DianaGray10
Welcome to the UiPath Test Automation using UiPath Test Suite series, part 5. In this session, we will cover CI/CD with DevOps.
Topics covered:
CI/CD within UiPath
End-to-end overview of the CI/CD pipeline with Azure DevOps
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
HCL Notes and Domino License Cost Reduction in the World of DLAU - panagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and licensing under the CCB and CCX models have been a hot topic in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new kind of licensing works and what benefits it brings you. Above all, you certainly want to stay within budget and save costs wherever possible. We understand that, and we want to help!
We explain how to resolve common configuration problems that can cause more users to be counted than necessary, and how to identify and remove superfluous or unused accounts to save money. There are also some practices that can lead to unnecessary spending, for example using a person document instead of a mail-in database for shared mailboxes. We show you such cases and their solutions. And of course we explain the new license model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder introduce you to this new world. It gives you the tools and know-how to keep track of everything. You will be able to reduce your costs through an optimized Domino configuration and keep them low in the future.
Topics covered:
- Reducing license costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how best to use it
- Tips for common problem areas, such as team mailboxes and functional/test users
- Real-world examples and best practices you can apply immediately
AI 101: An Introduction to the Basics and Impact of Artificial Intelligence - IndexBug
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
Removing Uninteresting Bytes in Software Fuzzing - Aftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speed up fuzzing campaigns by pinpointing and eliminating uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries: Libxml’s xmllint, a tool for parsing XML documents, and Binutils’ readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns, and DIAR helps you find such seeds.
These are the slides of the talk given at the IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW), 2022.
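The core idea of discarding seed bytes that do not contribute to interesting behavior can be illustrated with a toy, delta-debugging-style sketch. This is not the DIAR implementation; the `coverage` function below is an invented stand-in for the real coverage feedback a fuzzer such as AFL would provide:

```python
def coverage(data: bytes) -> frozenset:
    """Stand-in for real coverage feedback: the set of 'interesting'
    byte values this toy parser reacts to."""
    return frozenset(b for b in data if b in (ord('<'), ord('>'), ord('&')))

def shrink_seed(seed: bytes) -> bytes:
    """Greedily drop each byte whose removal leaves observed coverage unchanged."""
    baseline = coverage(seed)
    out = bytearray(seed)
    i = 0
    while i < len(out):
        candidate = out[:i] + out[i + 1:]
        if coverage(bytes(candidate)) == baseline:
            out = candidate          # byte was uninteresting; drop it
        else:
            i += 1                   # byte matters; keep it and move on
    return bytes(out)

print(shrink_seed(b"aaa<x>bbb&ccc"))  # → b'<>&'
```

Fuzzing the lean seed then spends mutation budget only on bytes that can actually change observed behavior.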
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices want to take full advantage of their features, but many of those features trade security for convenience and capability. This best practices guide outlines steps users can take to better protect personal devices and information.
Communications Mining Series - Zero to Hero - Session 1 - DianaGray10
This session provides an introduction to UiPath Communications Mining, its importance, and an overview of the platform. You will acquire a good understanding of the phases in Communications Mining as we go over the platform with you. Topics covered:
• Communications Mining overview
• Why is it important?
• How it can help today’s business, and the benefits
• Phases in Communications Mining
• Demo of the platform
• Q&A
Essentials of Automations: The Art of Triggers and Actions in FME - Safe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Observability Concepts EVERY Developer Should Know - DeveloperWeek Europe - Paige Cruz
Monitoring and observability aren’t traditionally found in software curriculums, and many of us cobble this knowledge together from whichever vendor or ecosystem we were first introduced to and whatever is part of our current company’s observability stack.
While the dev and ops silo continues to crumble, many organizations still relegate monitoring and observability to ops, infra, and SRE teams. This is a mistake: achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, where I will share foundational concepts to build on.
Measurement and Metrics for Test Managers
1. Half-day Tutorials
5/5/2014 8:30:00 AM
Measurement and Metrics
for Test Managers
Presented by:
Rick Craig
Software Quality Engineering
Brought to you by:
340 Corporate Way, Suite 300, Orange Park, FL 32073
888-268-8770 ∙ 904-278-0524 ∙ sqeinfo@sqe.com ∙ www.sqe.com
2. Rick Craig
Software Quality Engineering
A consultant, lecturer, author, and test manager, Rick Craig has led numerous teams of
testers on both large and small projects. In his twenty-five years of consulting worldwide,
Rick has advised and supported a diverse group of organizations on many testing and test
management issues. From large insurance providers and telecommunications companies to
smaller software services companies, he has mentored senior software managers and
helped test teams improve their effectiveness. Rick is coauthor of Systematic Software
Testing and is a frequent speaker at testing conferences, including every STAR conference
since its inception.
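Two of the metrics this tutorial covers, defect removal efficiency and defect density, have standard textbook definitions that can be sketched in a few lines. This is an illustrative sketch only; Rick's tutorial may define, scope, or weight these measures differently:

```python
def defect_removal_efficiency(found_before_release: int, found_after_release: int) -> float:
    """DRE: percentage of all known defects that were found before release."""
    total = found_before_release + found_after_release
    return 100.0 * found_before_release / total

def defect_density(defects: int, size_kloc: float) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return defects / size_kloc

# Example: 90 defects found in test, 10 escaped to production,
# in a 30 KLOC system with 45 total defects attributed to it.
print(defect_removal_efficiency(90, 10))  # → 90.0
print(defect_density(45, 30.0))           # → 1.5
```

Tracked over successive releases, trends in these numbers (rather than any single value) are what inform release-readiness recommendations.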