This presentation was delivered by Tom Kleingarn at HP Software Universe 2010 in Washington DC. It describes basic statistical tests that can be applied to any performance engineering practice to improve accuracy and confidence in your test results.
This document discusses process capability analysis. It defines specification and tolerance limits as the boundaries that define conformance for manufacturing or service operations. Process capability indices such as Cp, Cpk, CPU, CPL, and Ppk are used to determine whether a process's natural variation can meet specifications. Cp measures a process's potential to meet specifications based on its spread alone. Cpk incorporates both the mean and the standard deviation, penalizing an off-center process. CPU and CPL measure capability against the upper and lower specification limits individually. Ppk indicates actual long-term process performance against specifications. Maintaining capable processes with indices above 1 ensures high quality and uniform output.
Process capability is a measure of a process's ability to meet specifications for a product or service. It is determined by comparing the process variability, as measured by the standard deviation, to the tolerances between the nominal value and the upper and lower specifications. The process capability ratio Cp measures the tolerance width relative to the process variability, while the process capability index Cpk also considers whether the process mean is centered between the specifications. For example, in assessing an intensive care lab's turnaround time process, which has a standard deviation of 1.35 minutes and specifications of 20-30 minutes, the Cp is calculated as 1.23 but the Cpk is 0.94, indicating the process mean of 26.2 minutes is not centered between the specification limits.
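The Cp and Cpk figures in the lab turnaround-time example can be reproduced directly from their standard definitions; a minimal sketch:

```python
# Process capability for the lab turnaround-time example above
# (mean 26.2 min, standard deviation 1.35 min, specs 20-30 min).

def cp(usl, lsl, sigma):
    """Potential capability: tolerance width over the 6-sigma process spread."""
    return (usl - lsl) / (6 * sigma)

def cpk(usl, lsl, mean, sigma):
    """Actual capability: the worse of the two one-sided indices (CPU, CPL)."""
    return min((usl - mean) / (3 * sigma), (mean - lsl) / (3 * sigma))

print(round(cp(30, 20, 1.35), 2))         # 1.23
print(round(cpk(30, 20, 26.2, 1.35), 2))  # 0.94
```

Because the mean (26.2) sits closer to the upper limit, the upper one-sided index dominates and Cpk falls below Cp.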
090528 Miller Process Forensics Talk @ ASQ (rwmill9716)
Talk presented to a local ASQ chapter. It dealt with process improvement: continuous measurement-system validation and the use of capability metrics for process forensics. A program was also introduced that's been used to optimize spare parts inventory based on a resampling approach to historical data.
Optimizing marketing campaigns using experimental designs (Pankaj Sharma)
This document discusses how experimental designs and non-parametric predictive models can be used together to optimize marketing campaigns. It recommends building highly predictive non-parametric models first to rank prospects, then using experimental design methodology to further improve response rates. The document describes how a Plackett-Burman design was used to test 11 variables in a direct mail campaign. Five variables were found to be statistically significant, and combining them yielded an optimal run expected to increase response rates by 31% over the original champion run.
Javier Garcia-Verdugo Sanchez - Six Sigma Training - W4 Autocorrelation and... (J. García-Verdugo)
The document discusses autocorrelation and cross correlation analysis of time series data. It provides an example of measuring daily body weight over 4 weeks and finds autocorrelation at a lag of 1 day. This indicates dependence between successive daily measurements. The document also analyzes viscosity measurements taken hourly and finds autocorrelation up to a lag of 4 hours. An autoregressive model is fitted to account for this autocorrelation. Finally, the document examines cross correlation between methane feed rate and CO2 concentration measurements taken minute-by-minute. The largest correlation is found at a lag of -1 minute, suggesting the CO2 is affected by methane feed rate from the previous minute.
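The lag-k autocorrelation used in the body-weight and viscosity examples is a simple ratio of covariances; a self-contained sketch with an illustrative (made-up) series:

```python
# Sample lag-k autocorrelation r_k = c_k / c_0, as applied to the
# daily body-weight and hourly viscosity series described above.
from statistics import mean

def autocorr(x, k):
    """Lag-k sample autocorrelation of a sequence x."""
    n, m = len(x), mean(x)
    c0 = sum((v - m) ** 2 for v in x) / n
    ck = sum((x[i] - m) * (x[i + k] - m) for i in range(n - k)) / n
    return ck / c0

# hypothetical daily weights over one week (kg)
series = [70.1, 70.3, 70.2, 70.6, 70.5, 70.8, 70.7, 71.0]
print(round(autocorr(series, 1), 2))
```

A value well above zero at lag 1 indicates the dependence between successive daily measurements that the document describes.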
Fern Halper is an analyst who has observed growing interest in predictive analytics from companies seeking competitive advantages and deeper customer insights. While the technology has existed for decades, businesses are now recognizing its value. Vendors are developing easier to use tools in response, hoping both statisticians and regular business users can build basic models. Open source is also becoming more important, with ecosystems of support emerging around languages like R.
What Every Software Engineer Should Know About Machine Learning - Peter Norvig (WithTheBest)
I discuss machine learning's great potential for innovation and how it can be applied to various aspects of technology.
Peter Norvig, Director of Research at Google Inc.
Using the Machine to predict Testability (Miguel Lopez)
This document discusses using machine learning to predict testability based on source code metrics. It begins with an introduction to the presenting organization and definitions of testability and machine learning concepts. It then shows how decision trees and other machine learning approaches could be used to predict testability levels (high, medium, low) based on source code metrics like number of interfaces, abstractness, and coupling. As an example, metrics from 9 Java packages were analyzed to build and test a predictive model in the Weka machine learning software. However, the document notes the initial model is simplistic and could be improved by incorporating more metrics related to factors in the testability fishbone diagram.
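The original talk built its model in Weka; the same decision-tree idea can be sketched in scikit-learn. All metric values and labels below are made up for illustration, using the three metrics named above (interfaces, abstractness, coupling):

```python
# Sketch: predict a testability label (high/medium/low) from source-code
# metrics with a decision tree. Data is hypothetical, not from the talk.
from sklearn.tree import DecisionTreeClassifier

# columns: number of interfaces, abstractness, afferent coupling
X = [[12, 0.8, 3], [2, 0.1, 14], [7, 0.5, 6], [1, 0.0, 20],
     [10, 0.7, 4], [3, 0.2, 12], [8, 0.6, 5], [2, 0.1, 18], [9, 0.6, 7]]
y = ["high", "low", "medium", "low", "high", "low", "medium", "low", "high"]

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(clf.predict([[11, 0.75, 3]]))  # classify a new, unseen package
```

With only nine training rows the model is as simplistic as the document admits; incorporating more metrics from the testability fishbone diagram would be the natural next step.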
Machine Learning in Software Engineering (Alaa Hamouda)
Software is nowadays a critical component of our lives and everyday work activities. However, as the technological infrastructure of the modern world evolves, a great challenge arises for developing high-quality software systems of increasing size and complexity. Software engineers and researchers are striving to meet this challenge by developing and implementing software engineering methodologies able to deliver software products of high quality, within budget and time constraints. The field of machine learning in software engineering has recently emerged to provide means for addressing, studying, analyzing, and understanding critical software development issues, and at the same time to offer mature machine learning techniques such as artificial neural networks, Bayesian networks, decision trees, fuzzy logic, genetic algorithms, and rule induction. Machine learning algorithms have proven to be of great practical value to software engineering. Not surprisingly, the field of software engineering turns out to be a fertile ground where many software development tasks can be formulated as learning problems and approached in terms of learning algorithms. In this paper, we first take a look at the characteristics and applicability of some frequently utilized machine learning algorithms. We then present the application of machine learning in the different phases of software engineering, including project planning, requirements analysis, design, implementation, testing, and maintenance.
Software quality improvement expert Jan Princen and XBOSoft CEO Philip Lew discuss the use of Predictive Analytics to prevent software defects in this XBOSoft webinar on Defect Prevention.
This document discusses how machine learning can be applied to various activities in software testing. It describes how machine learning works using training and test data to make predictions. Supervised and unsupervised learning techniques are discussed. Specific applications mentioned include software defect prediction, test planning, test case management, debugging, and refining blackbox test specifications. Challenges include availability of past data and finding predictable patterns, while potential steps forward include expanding machine learning to more blackbox techniques, identifying the right patterns for different test activities, algorithm analysis, and crowdsourcing.
The document provides an introduction and overview of performance testing. It discusses what performance testing, tuning, and engineering are and why they are important. It outlines the typical performance test cycle and common types of performance tests. Finally, it discusses some myths about performance testing and gives an overview of common performance testing tools and architectures.
Automated testing of software applications using machine learning (Milind Kelkar)
Machine Learning is the next internet. It is the backbone of search engines, driverless cars, paperless banking, and facial recognition in forensics. Running automated software tests with less human intervention, without the risk of schedule delays, is now a reality. This presentation will explore several practical machine learning concepts that are being adopted to test software applications.
Need for Speed: How to Performance Test the right way by Annie Bhaumik (QA or the Highway)
This document discusses the importance of performance testing web applications. It notes that 53% of users will abandon a website if it takes over 3 seconds to load, and 79% of those users will not return. The document outlines different types of performance tests including load testing, endurance testing, spike testing, and stress testing. It emphasizes the need for performance testing to be realistic by simulating real user behavior, network conditions, workloads and data volumes similar to the production environment. The document also discusses analyzing test results and key performance indicators to understand how the system performs under different loads and over time.
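When analyzing the test results mentioned above, the usual key performance indicators are percentile latencies rather than averages, because a handful of very slow requests can hide behind a healthy mean. A minimal sketch with hypothetical response times:

```python
# Nearest-rank percentile of load-test response times (milliseconds).
# Sample latencies are illustrative, not from the presentation.

def percentile(samples, p):
    """Return the nearest-rank p-th percentile of a list of samples."""
    s = sorted(samples)
    idx = max(0, int(round(p / 100 * len(s))) - 1)
    return s[idx]

latencies_ms = [120, 95, 210, 180, 3050, 140, 160, 175, 130, 2890]
print(percentile(latencies_ms, 50))  # median: 160
print(percentile(latencies_ms, 90))  # tail latency: 2890
```

Here the median looks fine, but the 90th percentile exceeds the 3-second abandonment threshold cited above, which is exactly what realistic workload simulation is meant to expose.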
Software Test Metrics and Measurements (Davis Thomas)
Explains in detail, with examples, how to calculate:
1. Percentage of Test Cases Executed [Test Coverage]
2. Percentage of Test Cases Not Executed
3. Percentage of Test Cases Passed
4. Percentage of Test Cases Failed
5. Percentage of Test Cases Blocked/Deferred
6. Defect Density
7. Defect Removal Efficiency (DRE)
8. Defect Leakage
9. Defect Rejection Ratio [Invalid Bug Ratio]
10. Percentage of Critical Defects
11. Percentage of High Defects
12. Percentage of Medium Defects
13. Percentage of Low/Lowest Defects
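Most of the metrics above are simple ratios; a minimal sketch with hypothetical counts from one test cycle:

```python
# Common test metrics as ratios. All counts below are invented for
# illustration; plug in your own cycle's numbers.

def pct(part, whole):
    """Percentage of part in whole, rounded to two decimals."""
    return round(100.0 * part / whole, 2)

executed, total = 180, 200
passed = 150
defects_internal, defects_customer = 45, 5   # found before vs. after release
kloc = 12.5                                  # release size in thousand lines of code

print(pct(executed, total))                  # test coverage: 90.0
print(pct(passed, executed))                 # pass rate: 83.33
print(round((defects_internal + defects_customer) / kloc, 2))  # defect density: 4.0
print(pct(defects_internal, defects_internal + defects_customer))  # DRE: 90.0
```

Defect density here is defects per KLOC, and DRE is the share of all defects caught before release; defect leakage is its complement.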
What are Software Testing Methodologies | Software Testing Techniques | Edureka (Edureka!)
YouTube Link: https://youtu.be/6rNgPXz9A9s
(** Test Automation Masters Program: https://www.edureka.co/masters-program/automation-testing-engineer-training **)
This Edureka PPT on "Software Testing Methodologies and Techniques" will give you in-depth knowledge about different types of software testing models and techniques.
The following are the topics covered in the session:
Importance of Software Testing
Software Testing Methodologies
Software Testing Techniques
Black-Box Techniques
White-Box Techniques
Experience-Based Techniques
Selenium playlist: https://goo.gl/NmuzXE
Selenium Blog playlist: http://bit.ly/2B7C3QR
Software Testing Blog playlist: http://bit.ly/2UXwdJm
Follow us to never miss an update in the future.
YouTube: https://www.youtube.com/user/edurekaIN
Instagram: https://www.instagram.com/edureka_learning/
Facebook: https://www.facebook.com/edurekaIN/
Twitter: https://twitter.com/edurekain
LinkedIn: https://www.linkedin.com/company/edureka
Castbox: https://castbox.fm/networks/505?country=in
The document discusses process capability and defines key terms related to it. It provides the standard formula for process capability using six standard deviations (6σ) and explains how process capability is compared to specification limits. It then discusses different process capability indices, including Cp, Cpk, and Cpm, and explains how these indices measure both potential and actual process capability. The document also discusses limitations of the Cp index and the use of Cpk to address process centering. It describes how to calculate confidence intervals for process capability ratios and discusses some key process performance metrics.
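The confidence interval for Cp mentioned above follows from the chi-square distribution of the sample variance; a sketch using the standard formula, with an illustrative estimate of Cp = 1.33 from n = 50 samples:

```python
# Two-sided confidence interval for the Cp ratio, using the standard
# chi-square formula. cp_hat and n below are illustrative values.
from scipy.stats import chi2

def cp_confidence_interval(cp_hat, n, alpha=0.05):
    """100(1-alpha)% CI for Cp estimated from n samples."""
    lo = cp_hat * (chi2.ppf(alpha / 2, n - 1) / (n - 1)) ** 0.5
    hi = cp_hat * (chi2.ppf(1 - alpha / 2, n - 1) / (n - 1)) ** 0.5
    return lo, hi

lo, hi = cp_confidence_interval(cp_hat=1.33, n=50)
print(round(lo, 2), round(hi, 2))
```

Even with 50 samples the interval is wide, which is why point estimates of capability ratios should be reported with their confidence limits.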
Critical System Validation in Software Engineering SE21koolkampus
The document discusses techniques for validating critical systems, with a focus on validating safety and reliability. Static validation techniques include design reviews and formal proofs, while dynamic techniques involve testing. Reliability validation uses statistical testing against an operational profile to measure reliability. Safety validation aims to prove a system cannot reach unsafe states, using techniques like safety proofs, hazard analysis, and safety cases presenting arguments about risk levels. The document also provides an example safety validation of an insulin pump system.
Measurement risk and the impact on your processes (Transcat)
Howard Zion, Transcat's Director of Service Application Engineering, discusses how measurement error can incorrectly influence the acceptance decision on your products. This webinar will teach you:
What is Measurement Risk?
Where does risk creep into your process?
Where does risk creep into the calibration process?
Calibration Results: Impact on your process
The document summarizes a risk analysis of alternatives for a wastewater treatment plant installation project for a new gold mine. It analyzes three main alternatives - buying a used skid plant, new skid plant, or new fixed plant. It uses stochastic optimization and Monte Carlo simulation to model costs under uncertainty. Sensitivity analysis is performed by varying key parameters like installation time and costs by +/- 30% to evaluate how the optimal decision changes with different risk preferences. The analysis seeks to minimize total costs including investment, operation, and delay costs given the project's risk profile.
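The Monte Carlo cost modeling described above can be sketched in a few lines: draw uncertain inputs from assumed distributions, compute the total cost, and repeat. All distributions and dollar figures below are hypothetical, not from the study:

```python
# Monte Carlo sketch of total cost under uncertainty for a plant
# installation: investment + operating + delay costs. All numbers
# and distributions are invented for illustration.
import random

random.seed(42)

def simulate_total_cost():
    investment = random.triangular(4.0, 7.0, 5.0)   # $M: min, max, mode
    operating = random.triangular(1.0, 2.5, 1.5)    # $M over the modeled horizon
    install_months = random.triangular(6, 14, 9)
    delay_cost = max(0.0, install_months - 9) * 0.3  # $M per month beyond plan
    return investment + operating + delay_cost

costs = sorted(simulate_total_cost() for _ in range(10_000))
print(round(sum(costs) / len(costs), 2))        # expected total cost
print(round(costs[int(0.95 * len(costs))], 2))  # 95th-percentile cost (risk measure)
```

Comparing the expected cost and a tail percentile across the three plant alternatives is what lets the analysis reflect different risk preferences.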
This document outlines statistical quality control techniques for evaluating manufacturing and service processes. It discusses measuring and controlling process variation using variables like mean, standard deviation and control charts. Key aspects covered include process capability analysis using metrics like Cpk, acceptance sampling plans to determine quality levels while balancing producer and consumer risks, and operating characteristic curves.
Quality and capability hand out (jasonhian)
The document outlines key concepts in quality management and Six Sigma methodology. It discusses definitions of quality, total quality management (TQM), and Six Sigma. Six Sigma aims to reduce defects through eliminating variation and achieving near zero defect levels. It uses a Define-Measure-Analyze-Improve-Control (DMAIC) methodology. Statistical process control charts and process capability indices are also introduced to measure quality performance. An example of Mumbai's successful lunch delivery system achieving over 5-sigma quality levels is provided.
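The sigma level cited for the lunch delivery (dabbawala) example can be derived from the defect rate; a sketch using the conventional 1.5-sigma shift, with the often-quoted (illustrative) figure of roughly one error in six million deliveries:

```python
# Sigma level from defect counts, per the usual Six Sigma convention
# (long-term yield converted to a z-score plus a 1.5-sigma shift).
# The delivery counts are illustrative.
from statistics import NormalDist

def sigma_level(defects, units, opportunities_per_unit=1):
    """Short-term sigma level implied by an observed defect rate."""
    dpmo = defects / (units * opportunities_per_unit) * 1_000_000
    return NormalDist().inv_cdf(1 - dpmo / 1_000_000) + 1.5

# roughly one error per six million deliveries, as often quoted
print(round(sigma_level(defects=1, units=6_000_000), 2))
```

A rate this low lands above six sigma under this convention, consistent with the "over 5-sigma" claim in the document.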
The document provides an overview of software testing fundamentals including:
1. It discusses key testing concepts like error, fault, failure and how testing helps build confidence and reduce costs. Testing aims to find faults and prove software meets requirements.
2. Testing challenges are discussed, such as the impossibility of exhaustive testing due to the huge number of input combinations. Prioritization is important given limited time.
3. Principles of testing are covered such as defects clustering, absence of errors fallacy, and how early testing avoids fault multiplication. Testing must be context dependent.
Performance Testing and OBIEE by Quontra Solutions
Quontra Solutions offers OBIEE online training covering both OBIEE instruction and placement support. We help you with resume preparation and conduct mock interviews.
Emphasis is given to important topics that are required and most used in real-time projects. Quontra Solutions is an online training leader when it comes to high-end, effective, and efficient IT training. We have always been, and still are, focused on providing the most effective and competent training to both students and professionals who are eager to enrich their technical skills.
Testing is important to identify errors and improve systems. There are different types of testing like functional, navigational, and user testing. It is important to have a test plan that evaluates all aspects of a solution using normal, erroneous, and boundary test data. The test plan should show what will be tested and expected results. Documenting test results in a table with screenshots provides evidence that a system works as intended.
Six Sigma Confidence Interval Analysis (CIA) Training Module (Frank-G. Adler)
The Six Sigma Confidence Interval Analysis (CIA) Training Module v1.0 includes:
1. MS PowerPoint Presentation including 72 slides covering theory and examples of Confidence Interval Analysis and Hypothesis Testing: CIA for one Mean Value, Comparison of two Mean Values, Comparison of Paired Data Sets, CIA for one Standard Deviation, Comparison of two Standard Deviations, CIA for Capability Indices, CIA for one Defect Rate, Comparison of two Defect Rates, CIA for one Count, and Comparison of two Counts.
2. MS Excel Six Sigma Confidence Interval Analysis Calculator making it really easy to calculate Confidence Intervals (mean value, standard deviation, capability indices, defect rate, count) and perform a Comparison of two Statistics (mean values, standard deviations, defect rates, counts).
The document provides an overview of various quality management concepts and tools including:
- Total Quality Management (TQM) which aims to design high quality products and ensure consistent production.
- Six Sigma which seeks to reduce process variation and eliminate defects through tools like DMAIC (Define, Measure, Analyze, Improve, Control).
- ISO 9000 standards for quality management systems which many companies adopt for global competitiveness.
- Various analytical tools used in quality improvement like control charts, flow diagrams and cause-and-effect diagrams.
The document summarizes a school penetration testing project conducted by UDomain. They identified over 1,700 vulnerabilities across 10 school websites, including 20,000+ records of personal data. Critical vulnerabilities included SQL injection, XSS, and passwords in plaintext. Recommendations included more regular scanning, patching of outdated systems, and reliance on secure vendor solutions. UDomain demonstrated SQL injection techniques and explained their security services and qualifications.
This document summarizes four research projects conducted in collaboration with industry partners on search-based software testing. It discusses projects on testing PID controllers with Delphi, robustness testing a video conferencing system with Cisco, environment-based testing of a seismic acquisition system with WesternGeco, and stress testing safety-critical drivers in the oil and gas industry with Kongsberg. It also outlines lessons learned from the collaborations and discusses effective models of collaborative research and innovation between academia and industry.
An IT Security Speedometer Approach. The Exposure Index is a model to merge threats- and vulnerability-metrics to one consolidated index value for management reporting. These slides show how to categorize metrics, normalize and weight them in the index system. Further discussion for this model is much appreciated.
Similar to Predictive Performance Testing: Integrating Statistical Tests into Agile Development Life-cycles (20)
Skybuffer SAM4U tool for SAP license adoptionTatiana Kojar
Manage and optimize your license adoption and consumption with SAM4U, an SAP free customer software asset management tool.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
Dandelion Hashtable: beyond billion requests per second on a commodity serverAntonios Katsarakis
This slide deck presents DLHT, a concurrent in-memory hashtable. Despite efforts to optimize hashtables, that go as far as sacrificing core functionality, state-of-the-art designs still incur multiple memory accesses per request and block request processing in three cases. First, most hashtables block while waiting for data to be retrieved from memory. Second, open-addressing designs, which represent the current state-of-the-art, either cannot free index slots on deletes or must block all requests to do so. Third, index resizes block every request until all objects are copied to the new index. Defying folklore wisdom, DLHT forgoes open-addressing and adopts a fully-featured and memory-aware closed-addressing design based on bounded cache-line-chaining. This design offers lock-free index operations and deletes that free slots instantly, (2) completes most requests with a single memory access, (3) utilizes software prefetching to hide memory latencies, and (4) employs a novel non-blocking and parallel resizing. In a commodity server and a memory-resident workload, DLHT surpasses 1.6B requests per second and provides 3.5x (12x) the throughput of the state-of-the-art closed-addressing (open-addressing) resizable hashtable on Gets (Deletes).
zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application...Alex Pruden
Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure. In this work we present LatticeFold, the first lattice-based folding protocol based on the Module SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. LatticeFold supports folding low-degree relations, such as R1CS, as well as high-degree relations, such as CCS. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme. The difficulty, is ensuring that extracted witnesses are low norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses are always low norm no matter how many rounds of folding are used. Our evaluation of the final proof system suggests that it is as performant as Hypernova, while providing post-quantum security.
Paper Link: https://eprint.iacr.org/2024/257
Taking AI to the Next Level in Manufacturing.pdfssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
6. Ideas and approaches to help build your organization's AI strategy.
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU und die Lizenzen nach dem CCB- und CCX-Modell sind für viele in der HCL-Community seit letztem Jahr ein heißes Thema. Als Notes- oder Domino-Kunde haben Sie vielleicht mit unerwartet hohen Benutzerzahlen und Lizenzgebühren zu kämpfen. Sie fragen sich vielleicht, wie diese neue Art der Lizenzierung funktioniert und welchen Nutzen sie Ihnen bringt. Vor allem wollen Sie sicherlich Ihr Budget einhalten und Kosten sparen, wo immer möglich. Das verstehen wir und wir möchten Ihnen dabei helfen!
Wir erklären Ihnen, wie Sie häufige Konfigurationsprobleme lösen können, die dazu führen können, dass mehr Benutzer gezählt werden als nötig, und wie Sie überflüssige oder ungenutzte Konten identifizieren und entfernen können, um Geld zu sparen. Es gibt auch einige Ansätze, die zu unnötigen Ausgaben führen können, z. B. wenn ein Personendokument anstelle eines Mail-Ins für geteilte Mailboxen verwendet wird. Wir zeigen Ihnen solche Fälle und deren Lösungen. Und natürlich erklären wir Ihnen das neue Lizenzmodell.
Nehmen Sie an diesem Webinar teil, bei dem HCL-Ambassador Marc Thomas und Gastredner Franz Walder Ihnen diese neue Welt näherbringen. Es vermittelt Ihnen die Tools und das Know-how, um den Überblick zu bewahren. Sie werden in der Lage sein, Ihre Kosten durch eine optimierte Domino-Konfiguration zu reduzieren und auch in Zukunft gering zu halten.
Diese Themen werden behandelt
- Reduzierung der Lizenzkosten durch Auffinden und Beheben von Fehlkonfigurationen und überflüssigen Konten
- Wie funktionieren CCB- und CCX-Lizenzen wirklich?
- Verstehen des DLAU-Tools und wie man es am besten nutzt
- Tipps für häufige Problembereiche, wie z. B. Team-Postfächer, Funktions-/Testbenutzer usw.
- Praxisbeispiele und Best Practices zum sofortigen Umsetzen
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
Freshworks Rethinks NoSQL for Rapid Scaling & Cost-EfficiencyScyllaDB
Freshworks creates AI-boosted business software that helps employees work more efficiently and effectively. Managing data across multiple RDBMS and NoSQL databases was already a challenge at their current scale. To prepare for 10X growth, they knew it was time to rethink their database strategy. Learn how they architected a solution that would simplify scaling while keeping costs under control.
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
3. About Me
> Tom Kleingarn
> Lead, Performance Engineering - Digital River
> 4 years in performance engineering
> Tested over 100 systems/applications
> 100’s of performance tests
> Tools
> LoadRunner
> JMeter
> Webmetrics, Keynote, Gomez
> ‘R’ and Excel
> Quality Center
> QuickTest Professional
4. > Leading provider of global e-commerce solutions
> Builds and manages online businesses for software and game publishers, consumer electronics manufacturers, distributors, online retailers and affiliates.
> Comprehensive platform offers
> Site development and hosting
> Order management
> Fraud management
> Export control
> Tax management
> Physical and digital product fulfillment
> Multi-lingual customer service
> Advanced reporting and strategic marketing
5. Performance Engineering
> The process of experimental design, test execution, and results analysis, utilized to validate system performance as part of the Software Development Lifecycle (SDLC).
> Performance requirements – measurable targets of speed, reliability, and/or capacity used in performance validation.
> Latency < 10ms, measured at the 99th percentile
> 99.95% uptime
> Throughput of 1,000 requests per second
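Requirements like these can be checked directly against raw test output. As a minimal sketch (Python used here for illustration; the sample values are invented, not from the talk), the percentile-latency requirement could be verified with the standard library:

```python
import statistics

# Hypothetical latency samples in ms (invented values, not from the talk)
samples = [4.1, 5.0, 6.2, 3.8, 7.5, 4.9, 5.5, 6.0, 8.9, 5.2] * 50

# statistics.quantiles(n=100) returns the 99 cut points between the
# 1st and 99th percentiles; index 98 is the 99th percentile
p99 = statistics.quantiles(samples, n=100)[98]
meets_requirement = p99 < 10  # requirement: latency < 10 ms at p99
```

The same pattern extends to the other requirements: compute a statistic (uptime ratio, requests per second) and compare it against its target.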
6. Performance Testing Cycle
1. Requirements Analysis
2. Create test plan
3. Create automated scripts
4. Define workload model
5. Execute scenarios
6. Analyze results
> Rinse and repeat if…
> Defects identified
> Change in requirements
> Setup or environment issues
> Performance requirement not met
Digital River Test Automation
7. Agile
> A software development paradigm that emphasizes rapid process cycles, cross-functional teams, frequent examination of progress, and adaptability.
Initial Plan
Scrum
Deploy
8. Agile Performance Engineering
> Clear and constant communication
> Involvement in initial requirements and design phase
> Identify key business processes before they are built
> Coordinate with analysts and development to build key business processes first
> Integrate load generation requirements into project schedule
> Test immediately with v1.0
> Schedule tests to auto-start, run independently
> Identify invalid test results before deep analysis
9. LoadRunner Results
> Measures of central tendency
> Average = ∑(all samples) / (sample size)
> Median = 50th percentile
> Mode – highest frequency, the value that occurred the most
> Measures of variability
> Min, max
> Standard Deviation = √( ∑(xᵢ − x̄)² / (n − 1) )
> 90th percentile
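These summary statistics are what LoadRunner reports; they can also be recomputed from raw response times. A minimal sketch in Python (the sample values are made up for illustration):

```python
import statistics

# Illustrative latency sample in seconds (made-up values)
latency = [3.1, 3.4, 3.2, 3.4, 3.9, 3.4, 2.8, 3.6, 3.3, 3.4]

mean   = statistics.mean(latency)                # sum of samples / sample size
median = statistics.median(latency)              # 50th percentile
mode   = statistics.mode(latency)                # most frequent value
stdev  = statistics.stdev(latency)               # sample standard deviation
p90    = statistics.quantiles(latency, n=10)[8]  # 90th percentile
```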
11. Basic Statistics – Sample vs. Population
> Performance requirement: average latency < 3 seconds
> What if you ran 50 rounds? 100 rounds?
12. Basic Statistics – Sample vs. Population
> Sample – set of values, subset of population
> Population – all potentially observable values
> Measurements
> Statistic – the estimated value from a collection of samples
> Parameter – the “true” value you are attempting to estimate
Not a representative sample!
13. Basic Statistics – Sample vs. Population
> Sampling distribution – the probability distribution of a given statistic based on a random sample of size n
> Dependent on the underlying population
> How do you know the system under test met the performance requirement?
14. Basic Statistics – Normal Distribution
> With larger samples, data tend to cluster around the mean
15. Basic Statistics – Normal Distribution
Sir Francis Galton’s “Bean Machine”
16. Confidence Intervals
> The probability that an interval made up of two endpoints will contain the true mean parameter μ
> 95% confidence interval: x̄ ± 1.96 · s / √n
> … where 1.96 is a score from the normal distribution associated with 95% probability
17. Confidence Intervals
> In repeated rounds of testing, a confidence interval will contain the true mean parameter with a certain probability:
True Average
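The repeated-testing idea can be simulated: draw many samples from a known population, build a 95% interval from each, and count how often the interval covers the true mean. A sketch in Python (the population parameters and seed are invented for the demonstration):

```python
import random
from math import sqrt
from statistics import NormalDist, mean, stdev

random.seed(7)  # fixed seed so the sketch is repeatable
TRUE_MEAN, TRUE_SD, N, TRIALS = 3.0, 0.5, 50, 200
z = NormalDist().inv_cdf(0.975)  # 1.96 for a 95% interval

hits = 0
for _ in range(TRIALS):
    sample = [random.gauss(TRUE_MEAN, TRUE_SD) for _ in range(N)]
    m, s = mean(sample), stdev(sample)
    margin = z * s / sqrt(N)
    if m - margin <= TRUE_MEAN <= m + margin:
        hits += 1

coverage = hits / TRIALS  # lands near 0.95, as the slide describes
```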
18. Confidence Intervals in Excel
Statistic          | Value 95% | Value 99% | Formula
Average            | 3.40      | 3.40      |
Standard Deviation | 1.45      | 1.45      |
Sample size        | 500       | 500       |
Confidence Level   | 0.95      | 0.99      |
Significance Level | 0.05      | 0.01      | =1-(Confidence Level)
Margin of Error    | 0.127     | 0.167     | =CONFIDENCE(Sig. Level, Std Dev, Sample Size)
Lower Bound        | 3.273     | 3.233     | =Average - Margin of Error
Upper Bound        | 3.527     | 3.567     | =Average + Margin of Error
> 95% confidence – true average latency 3.273 to 3.527 seconds
> 99% confidence – true average latency 3.233 to 3.567 seconds
> Our range is wider at 99% compared to 95%: 0.334 sec vs. 0.254 sec
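The same margin-of-error arithmetic as Excel's =CONFIDENCE() can be reproduced outside Excel. A sketch in Python using the slide's sample statistics (mean 3.40, standard deviation 1.45, n = 500):

```python
from math import sqrt
from statistics import NormalDist

avg, sd, n = 3.40, 1.45, 500  # sample statistics from the slide

def margin_of_error(confidence):
    # Same computation as Excel's =CONFIDENCE(1 - confidence, sd, n)
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    return z * sd / sqrt(n)

for level in (0.95, 0.99):
    moe = margin_of_error(level)
    print(f"{level:.0%}: {avg - moe:.3f} to {avg + moe:.3f} (width {2 * moe:.3f})")
# prints 95%: 3.273 to 3.527 (width 0.254)
#        99%: 3.233 to 3.567 (width 0.334)
```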
19. The T-test
> Test that your sample mean is greater than/less than a certain value
> Performance requirement: Mean latency < 3 seconds
> Null hypothesis: Mean latency >= 3 seconds
> Alternative hypothesis: Mean latency < 3 seconds
21. T-test in ‘R’
> ‘R’ for statistical analysis
> http://www.r-project.org/
Load test data from a file:
> datafile <- read.table("C:/Data/test.data", header = FALSE, col.names = c("latency"))
Attach the dataframe:
> attach(datafile)
Create a “vector” from the dataframe:
> latency <- datafile$latency
22. T.Test in ‘R’
> t.test(latency, alternative="less", mu=3)
One Sample t-test
data: latency
t = -2.9968, df = 499, p-value = 0.001432
alternative hypothesis: true mean is less than 3
> The p-value of 0.0014 means that, if the true mean latency were really 3 seconds or more, a sample mean this low would occur only 0.14% of the time
> We therefore reject the null hypothesis and conclude that the true average latency is less than 3 seconds
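For readers without R, the same one-sample test can be sketched from summary statistics. The Python below back-solves a sample mean from the R output above (t = -2.9968 with s ≈ 0.1461, n = 500), so the numbers are illustrative; with 499 degrees of freedom the t distribution is close enough to normal that NormalDist gives a serviceable one-sided p-value:

```python
from math import sqrt
from statistics import NormalDist

# Illustrative summary statistics, back-solved from the R output above
x_bar, s, n, mu = 2.9804, 0.1461, 500, 3.0

t = (x_bar - mu) / (s / sqrt(n))  # t ≈ -3.0
# One-sided p-value; the normal approximation is adequate at df = 499
p_value = NormalDist().cdf(t)     # ≈ 0.0014
reject_null = p_value < 0.05      # conclude mean latency < 3 s
```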
23. T-test – Number of Samples Required
> power.t.test(sd=sd(latency), sig.level=0.05, power=0.90, delta=mean(latency)*0.01, type="one.sample")
One-sample t test power calculation
n = 215.5319
delta = 0.03241267
sd = 0.1461401
sig.level = 0.05
power = 0.9
alternative = two.sided
> We need at least 216 samples
> Our sample size is 500, we have enough samples to proceed
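The power calculation can be approximated without R using normal-distribution scores: n ≈ ((z₁₋α/₂ + z_power) · sd / delta)². A sketch using the values from the output above; the normal approximation lands slightly below power.t.test's t-distribution answer of 215.5:

```python
from math import ceil
from statistics import NormalDist

sd, delta = 0.1461401, 0.03241267  # from the power.t.test output above
alpha, power = 0.05, 0.90

z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ≈ 1.960 (two-sided)
z_power = NormalDist().inv_cdf(power)          # ≈ 1.282

n = ceil(((z_alpha + z_power) * sd / delta) ** 2)  # ≈ 214
```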
24. Test for Normality
> Test that the data is “normal”
> Clustered around a central value, no outliers
> Roughly fits the normal distribution
> shapiro.test(latency)
Shapiro-Wilk normality test
data: latency
p-value = 0.8943
> Our sample distribution is approximately normal
> A p-value < 0.05 would indicate the distribution is not normal
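Python's standard library has no Shapiro-Wilk test (scipy.stats.shapiro provides one), but a rough stand-in is to compare the empirical distribution against a fitted normal, in the style of a Kolmogorov-Smirnov check. A sketch with synthetic data; the 0.06 cutoff is an informal threshold chosen for this example, not a tabulated critical value:

```python
import random
from statistics import NormalDist, mean, stdev

random.seed(3)
latency = [random.gauss(3.0, 0.15) for _ in range(500)]  # synthetic, roughly normal

m, s = mean(latency), stdev(latency)
fitted = NormalDist(m, s)
data = sorted(latency)

# Largest gap between the empirical CDF and the fitted normal CDF
ks = max(abs((i + 1) / len(data) - fitted.cdf(x)) for i, x in enumerate(data))
looks_normal = ks < 0.06  # informal cutoff for n = 500, not a formal test
```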
25. Review
> Sample vs. Population
> Normal distribution
> Confidence intervals
> T-test
> Sample size
> Test for normality
> Practical application
> Performance requirements
> Compare two code builds
> Compare system infrastructure changes
26. Case Study
> Engaged in a new web service project
> Average latency < 25ms
> Applied statistical analysis
> System did not meet requirement
> Identified problem transaction
> Development fix applied
> Additional test, requirement met
> Prevented a failure in production
27. Implementation in Agile Projects
> Involvement in early design stages
> Identify performance requirements
> Build key business processes first
> Calculate required sample size
> Apply statistical analysis
> Run fewer tests with greater confidence in your results
> Prevent performance defects from entering production
> Prevent SLA violations in production
Editor's Notes
The t-statistic was introduced in 1908 by William Sealy Gosset, a chemist working for the Guinness brewery in Dublin, Ireland ("Student" was his pen name).[1][2][3] Gosset had been hired due to Claude Guinness's innovative policy of recruiting the best graduates from Oxford and Cambridge to apply biochemistry and statistics to Guinness' industrial processes.[2] Gosset devised the t-test as a way to cheaply monitor the quality of stout. He published the test in Biometrika in 1908, but was forced to use a pen name by his employer, who regarded the fact that they were using statistics as a trade secret. In fact, Gosset's identity was unknown to fellow statisticians.