Information and Communication Technology (ICT) is not limited to software development, mobile apps and ICT service management but percolates into all kinds of products with the so-called Internet of Things.
ICT depends on software, where defects are common. Developing software is knowledge acquisition, not civil engineering; knowledge may be missing, leading to defects and failures to perform. In turn, operating ICT products involves connecting ICT services with human interaction, and is error-prone as well. There is much value in delivering software without defects, yet up to now there exists no agreed method of measuring defects in ICT. UML sequence diagrams are a software model that describes data movements between actors and objects and allows for automated measurement using ISO/IEC 19761 COSMIC. Can we also use them for defect measurement, allowing standard Six Sigma techniques to be applied to ICT by measuring both functional size and defect density in the same model? This allows sizing of functionality and defects even if no code is available. ISO/IEC 19761 measurements are linear, and thus fit sprints in agile development as well as the statistical tools of Six Sigma. (IT Confidence 2014, Tokyo (Japan))
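The measurement idea above can be sketched in a few lines. This is a hypothetical illustration, not ISO/IEC 19761 tooling: the message names and defect count are invented, and a real COSMIC measurement classifies each data movement (Entry, Exit, Read, Write) against the standard's full rules.

```python
# Hypothetical sketch: sizing a use case from its sequence-diagram
# messages, using the COSMIC idea that 1 CFP = 1 data movement
# (Entry, Exit, Read, Write). All names below are invented.

MOVEMENT_TYPES = {"entry", "exit", "read", "write"}

def cosmic_size(messages):
    """Count the data movements in a list of (name, movement_type) pairs."""
    return sum(1 for _, kind in messages if kind in MOVEMENT_TYPES)

def defect_density(defects, size_cfp):
    """Defects per COSMIC function point. Because the size measure is
    linear, densities can be aggregated across sprints by summing
    defects and sizes before dividing."""
    return defects / size_cfp if size_cfp else 0.0

messages = [
    ("submitOrder", "entry"),
    ("loadCustomer", "read"),
    ("saveOrder", "write"),
    ("confirm", "exit"),
]
size = cosmic_size(messages)
print(size, defect_density(2, size))  # 4 movements, 0.5 defects/CFP
```

Because both numbers come from the same model, the ratio is meaningful even before any code exists.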
Partitioning Composite Code Changes to Facilitate Code Review (MSR 2015), Sung Kim
Yida's presentation at MSR 2015!
Abstract—Developers expend significant effort on reviewing source code changes, hence the comprehensibility of code changes directly affects development productivity. Our prior study has suggested that composite code changes, which mix multiple development issues together, are typically difficult to review. Unfortunately, our manual inspection of 453 open source code changes reveals a non-trivial occurrence (up to 29%) of such composite changes.
In this paper, we propose a heuristic-based approach to automatically partition composite changes, such that each sub-change in the partition is more cohesive and self-contained. Our quantitative and qualitative evaluation results are promising in demonstrating the potential benefits of our approach for facilitating code review of composite code changes.
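A rough sketch of the partitioning idea follows. This is not the paper's algorithm (which relies on def-use and slicing-based relations among changed code); as an invented approximation, it links changed files that modify a common identifier and takes connected components via union-find.

```python
# Illustrative sketch only: approximate partitioning of a composite
# change by grouping changed files that share modified identifiers.
# File names and identifiers are invented.

from collections import defaultdict

def partition_change(changed_files):
    """changed_files: dict mapping file name -> set of identifiers it touches.
    Returns a sorted list of sub-changes (lists of file names)."""
    parent = {f: f for f in changed_files}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Link files that modify a common identifier.
    by_ident = defaultdict(list)
    for f, idents in changed_files.items():
        for ident in idents:
            by_ident[ident].append(f)
    for files in by_ident.values():
        for other in files[1:]:
            union(files[0], other)

    groups = defaultdict(list)
    for f in changed_files:
        groups[find(f)].append(f)
    return sorted(sorted(g) for g in groups.values())

change = {
    "Parser.java": {"parseExpr"},
    "Lexer.java":  {"parseExpr", "nextToken"},
    "README.md":   {"docs"},
}
print(partition_change(change))  # [['Lexer.java', 'Parser.java'], ['README.md']]
```

Each resulting group is a candidate self-contained sub-change that a reviewer could examine in isolation.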
Abstract: Due to the growth of software requirements and software features,
modern software systems continue to increase in size and complexity. Locating
the source code entities required to implement a feature in millions of lines of
code is labor- and cost-intensive for developers. To this end, several studies have
proposed the use of Information Retrieval (IR) to rank source code entities based
on their textual similarity to an issue report. The ranked source code entities can
be at the class or the function granularity level. Source code entities at the class
level are usually large and may contain many functions that are unrelated to the
feature. Hence, we conjecture that a class-level feature location technique
requires more effort than a function-level feature location technique. In this
paper, we investigate the impact of granularity levels on a feature location
technique. We also present a new effort-based evaluation method.
The results indicate that the function-level feature location technique outperforms
the class-level feature location technique. Moreover, the function-level technique
also requires 7 times less effort than the class-level technique
to localize the first relevant source code entity. Therefore, we conclude that
feature location at the function level of program elements is effective
in practice.
Reference:
Chakkrit Tantithamthavorn, Akinori Ihara, Hideaki Hata and Kenichi Matsumoto, "Impact Analysis of Granularity Levels on Feature Location Technique," In Proceedings of the First Asia Pacific Requirements Engineering Symposium (APRES'14), pp. 135-149, Auckland, New Zealand, April 28-29, 2014.
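The IR step the abstract describes can be sketched with a plain TF-IDF cosine ranking over code entities. The entity names and texts below are invented, and real feature location pipelines add preprocessing (identifier splitting, stemming, stop-word removal) omitted here.

```python
# Minimal sketch of IR-based feature location: rank code entities by
# TF-IDF cosine similarity between an issue report and each entity's
# identifier/comment text. Not the paper's exact setup.

import math
from collections import Counter

def vectorize(text, df, n):
    """Smoothed TF-IDF weights for a whitespace-tokenized text."""
    tf = Counter(text.split())
    return {t: c * (math.log((1 + n) / (1 + df[t])) + 1) for t, c in tf.items()}

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_entities(query, entities):
    """entities: dict of entity name -> identifier/comment text.
    Returns entity names ordered by similarity to the issue report."""
    df = Counter(t for text in entities.values() for t in set(text.split()))
    n = len(entities)
    qv = vectorize(query, df, n)
    scored = {name: cosine(qv, vectorize(text, df, n))
              for name, text in entities.items()}
    return sorted(scored, key=scored.get, reverse=True)

# Function-level entities (invented example).
entities = {
    "Cart.add_item":     "cart add item price quantity",
    "Cart.apply_coupon": "coupon discount apply cart total",
    "User.login":        "user login password session",
}
print(rank_entities("apply a discount coupon to the cart", entities))
```

Using function-level entities, as in the sketch, keeps each ranked result small; with class-level entities a developer would still have to scan every function inside the top-ranked class.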
Cross-project Defect Prediction Using a Connectivity-based Unsupervised Classifier, Feng Zhang
Defect prediction on projects with limited historical data has attracted great interest from both researchers and practitioners. Cross-project defect prediction has been the main area of progress by reusing classifiers from other projects. However, existing approaches require some degree of homogeneity (e.g., a similar distribution of metric values) between the training projects and the target project. Satisfying the homogeneity requirement often requires significant effort (currently a very active area of research).
An unsupervised classifier does not require any training data, therefore the heterogeneity challenge is no longer an issue. In this paper, we examine two types of unsupervised classifiers: a) distance-based classifiers (e.g., k-means); and b) connectivity-based classifiers. While distance-based unsupervised classifiers have been previously used in the defect prediction literature with disappointing performance, connectivity-based classifiers have never been explored before in our community.
We compare the performance of unsupervised classifiers versus supervised classifiers using data from 26 projects from three publicly available datasets (i.e., AEEEM, NASA, and PROMISE). In the cross-project setting, our proposed connectivity-based classifier (via spectral clustering) ranks as one of the top classifiers among five widely-used supervised classifiers (i.e., random forest, naive Bayes, logistic regression, decision tree, and logistic model tree) and five unsupervised classifiers (i.e., k-means, partition around medoids, fuzzy C-means, neural-gas, and spectral clustering). In the within-project setting (i.e., models are built and applied on the same project), our spectral classifier ranks in the second tier, while only random forest ranks in the first tier. Hence, connectivity-based unsupervised classifiers offer a viable solution for cross and within project defect predictions.
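A toy version of the connectivity-based idea: two-way spectral partitioning of a similarity matrix, in pure Python via power iteration on the graph Laplacian. A real study would use a library implementation of spectral clustering; the similarity values below are invented.

```python
# Toy sketch of connectivity-based (spectral) clustering for a two-group
# split. Pure Python; no training labels are needed, which is why such a
# classifier sidesteps cross-project heterogeneity.

import math

def spectral_bipartition(weights):
    """weights: symmetric n x n similarity matrix (lists of lists).
    Returns a 0/1 label per row from the sign of the Fiedler vector."""
    n = len(weights)
    degree = [sum(row) for row in weights]
    shift = 2 * max(degree) + 1.0  # makes (shift*I - L) positive definite

    v = [math.sin(i + 1) for i in range(n)]  # arbitrary start vector
    for _ in range(500):
        # w = (shift*I - L) v, where L = D - W is the graph Laplacian
        w = [shift * v[i] - degree[i] * v[i] +
             sum(weights[i][j] * v[j] for j in range(n))
             for i in range(n)]
        mean = sum(w) / n          # project out the constant (trivial)
        w = [x - mean for x in w]  # eigenvector of the Laplacian
        norm = math.sqrt(sum(x * x for x in w)) or 1.0
        v = [x / norm for x in w]
    return [1 if x > 0 else 0 for x in v]

# Two obvious groups: rows 0-1 are similar, rows 2-3 are similar.
sim = [
    [0.0, 0.9, 0.1, 0.0],
    [0.9, 0.0, 0.0, 0.1],
    [0.1, 0.0, 0.0, 0.9],
    [0.0, 0.1, 0.9, 0.0],
]
labels = spectral_bipartition(sim)
print(labels)  # rows 0-1 land in one cluster, rows 2-3 in the other
```

In a defect prediction setting, the rows would be modules described by software metrics, and one of the two resulting clusters would be labeled defect-prone by some heuristic (for example, higher average metric values).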
Who Should Review My Code? A file-location based code-reviewer recommendation approach for modern code review.
This research study is presented at the 22nd IEEE International Conference on Software Analysis, Evolution, and Reengineering (SANER2015)
Find more information and preprint at patanamon.com
Static analysis works for mission-critical systems, why not yours? Rogue Wave Software
Take a deep dive into the world of static code analysis (SCA) by immersing yourself in different analysis techniques and examples of the problems they find, and by learning how SCA fits into various types of environments, from the developer desktop to the QA team. The goal is to provide a solid foundation for making the best decisions on testing technology and process selection, including: the types of defects found by SCA; typical myths and barriers to adoption; and how SCA aligns with different testing maturity levels.
In this presentation, Adrian Hunt, Pre-Sales Consultant at PRQA explains how to achieve ISO 26262 Compliance with our static analysis tools QA·C and QA·C++.
Revisiting Code Ownership and Its Relationship with Software Quality in the S... (The University of Adelaide)
This work was presented at The 38th International Conference on Software Engineering (ICSE2016).
Abstract: Code ownership establishes a chain of responsibility for modules in large software systems. Although prior work uncovers a link between code ownership heuristics and software quality, these heuristics rely solely on the authorship of code changes. In addition to authoring code changes, developers also make important contributions to a module by reviewing code changes. Indeed, recent work shows that reviewers are highly active in modern code review processes, often suggesting alternative solutions or providing updates to the code changes. In this paper, we complement traditional code ownership heuristics using code review activity. Through a case study of six releases of the large Qt and OpenStack systems, we find that: (1) 67%-86% of developers did not author any code changes for a module, but still actively contributed by reviewing 21%-39% of the code changes, (2) code ownership heuristics that are aware of reviewing activity share a relationship with software quality, and (3) the proportion of reviewers without expertise shares a strong, increasing relationship with the likelihood of having post-release defects. Our results suggest that reviewing activity captures an important aspect of code ownership, and should be included in approximations of it in future studies.
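The review-aware ownership idea can be sketched as follows. This is a hedged illustration, not the paper's exact heuristics: developer names are invented, and each developer is simply credited for the changes they authored or reviewed in a module.

```python
# Hedged illustration: extend an authorship-based ownership measure
# with review activity by crediting a developer for each change they
# either wrote or reviewed. All names are invented.

from collections import Counter

def ownership(changes):
    """changes: list of dicts with 'author' and 'reviewers' for one module.
    Returns {developer: (authorship_share, review_aware_share)}."""
    n = len(changes)
    authored = Counter(c["author"] for c in changes)
    touched = Counter()
    for c in changes:
        for dev in {c["author"], *c["reviewers"]}:
            touched[dev] += 1
    devs = set(authored) | set(touched)
    return {d: (authored[d] / n, touched[d] / n) for d in devs}

changes = [
    {"author": "alice", "reviewers": ["bob"]},
    {"author": "alice", "reviewers": ["carol"]},
    {"author": "bob",   "reviewers": ["carol"]},
    {"author": "alice", "reviewers": ["bob", "carol"]},
]
shares = ownership(changes)
print(shares["carol"])  # (0.0, 0.75): authored nothing, reviewed 3 of 4
```

The carol entry mirrors the abstract's first finding: a developer with zero authorship can still be responsible for a large share of a module's changes through review.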
Driving growth in the Indian manufacturing industry, Sumit Roy
Indian manufacturing is well poised to unlock transformational value with technology. While businesses understand that building an organisation that is agile and able to withstand current market and economic volatility requires several considerations before taking a digital leap, more than a strategy for any individual technology trend, or for combining several of them, companies need a systematic approach to adopting technologies in a holistic fashion. Industry trends and challenges primarily drive the selection of appropriate technology solutions, which need to be fine-tuned to a company's needs based on its scale, capabilities and specific issues. This joint CII-PwC report takes a closer look at two industries in particular, manufacturing and infrastructure, and tries to decode the prevalent challenges in these two sectors, the kinds of initiatives being taken to drive growth and development, and the important role IT adoption is playing in overcoming these challenges.
Implementing productivity models helps in understanding Software Development Economics, which up to now is not entirely clear. Most organizations believe that the only way to achieve improvements is lowering software rates. With a background of three years of statistical data from large multinational clients, LEDAmc presented at the UKSMA 2012 Conference a study showing how the relationship between software rates and cost per function point differs from what could be expected, sometimes by far. The experience gained by LEDAmc through the implementation of software productivity models over the last two years brings new and updated insights to this study, which will be presented during the conference. (IT Confidence 2014, Tokyo (Japan))
Deloitte Technology, Media and Telecommunications Predictions 2016, David Graham
Welcome to the 2016 edition of Deloitte's predictions for the Technology, Media, and Telecommunications (TMT) sectors. These predictions reveal the perspectives gained from hundreds of conversations with industry leaders and tens of thousands of consumer interviews across the globe.
In an IT context, companies struggle to increase profits and often view IT as a necessary evil: one that consumes resources rather than contributing to the bottom line. These organizations often see no value in data collection, analysis or benchmarking either. However, IT can be a significant contributor when IT decisions are made after measuring and estimating both cost and return. IT data collection, analysis and benchmarking continue to improve the cost of IT systems and help decide where to spend money to stop the bleeding. As such, repeatable processes for estimating cost, schedule and risk will be addressed, along with the "iron triangle" of software. The iron triangle looks at issues of cost, schedule, scope and quality, and helps determine what must give when a client increases scope, reduces schedule or reduces budget.
Additionally, this presentation will address the risk-adjusted Total Cost of Ownership and the return on an IT investment, along with its consistency with the long-range investment and business strategy of an organization, measured against risk, key technical and performance parameters, and technical debt. Finally, the presentation will address the overriding business concerns: how much value does this software contribute to the business, and is this the best place to spend the money? (IT Confidence 2013, Rio de Janeiro (Brazil))
The True Cost of Open Source Software: Uncovering Hidden Costs and Maximizing... (ActiveState)
If you have researched open source software, even just a little, you’ve likely
encountered two distinct worldviews: believers and skeptics. Believers celebrate open source as free, collaborative code. In this paradigm, open source software isn’t just a free licensing model; it is a movement for building better, more flexible software.
But, that’s just one side of the story. Open source skeptics raise compelling counterarguments for why open source software and the enterprise don’t mix.
So, where does this leave you, especially if you are tasked with deciding whether or
not to implement open source software in your organization? In this paper we’ll delve
deep into both arguments and provide practical tools to help you decide whether or
not open source software will be a good return on your company’s investment. We’ll
also present solutions for bridging the gap between “believers” and “skeptics” in your
organization, and for reducing risks that go hand-in-hand with running open source
software in the enterprise.
Based on the speaker's experience negotiating and managing many outsourcing contracts using Function Points as a Key Performance Indicator, this presentation describes the pitfalls that can be experienced if one takes too simplistic a view of the meaning and use of Function Point data, and suggests ways in which they may be avoided.
Starting with a typical outsourcing scenario, and using ISBSG project data, techniques to improve the effectiveness of a Function Point program are demonstrated.
Particular emphasis is made on the importance of setting baselines appropriate to the environment to be measured and deciding how to determine if agreed performance targets are achieved.
The use of statistical analysis beyond just averages, to enable a more sophisticated and pragmatic interpretation of measurement data is demonstrated. The view that a little statistical analysis can actually uncover “lies and damn lies” is offered.
Finally, a template for design of a successful Function Point Program is presented.(IT Confidence 2014, Tokyo (Japan))
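The point about going beyond averages can be made concrete with Python's statistics module: one outlier project drags the mean rate well above the median, so a quartile band is a fairer basis for performance targets. The rates below are invented.

```python
# Sketch: why "a little statistical analysis" beats a bare average when
# judging productivity targets. Rates in hours per function point
# (invented data, one outlier project).

import statistics

rates = [5.2, 5.8, 6.1, 6.4, 7.0, 7.3, 8.0, 29.5]

mean = statistics.mean(rates)
median = statistics.median(rates)
q1, q2, q3 = statistics.quantiles(rates, n=4)

print(f"mean   = {mean:.2f} h/FP  (dragged up by the outlier)")
print(f"median = {median:.2f} h/FP")
print(f"IQR    = {q1:.2f}..{q3:.2f} h/FP  (a fairer target band)")
```

A supplier judged against the mean here looks far worse than one judged against the median or the interquartile range, which is exactly how averages alone can produce "lies and damn lies".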
The Value of Infrastructure Asset Management, C.S. Davidson
Christopher W. Toms, P.E., Senior Client Representative at C.S. Davidson, Inc., discusses the benefits of transportation related asset management and the value that optimized asset management creates to identify, maintain and improve municipal infrastructure in the most economical manner.
Alyson Murphy is the in-house Senior Data Architect at Moz. She works with stakeholders to build a data solution that helps Moz make data-informed business decisions. Sean Work runs the blog at KISSmetrics.com. He's been with the team since 2010.
We consolidated key data that was routinely used for analysis onto one reporting server. We then funneled key pieces of data into our web analytics solution so that there were fewer places to have to look for data when it was time to do an analysis.
We used to use email to get and prioritize projects. We shifted to Trello which allows us to have templates to ensure request quality and to be transparent about when certain projects will be worked on.
Where to Focus: then we may branch out into orange and other colors. But that's all it is, a series of colors. It's not until much later that we start to see the entire picture. In reality, your ball probably looks like this [slide image].
Goal 1: Build the Minimum Viable Ball
Components of a Data System There are 6 main areas of focus for building a successful and scalable data system.
Data Infrastructure Consolidating your data sources will make analysis easier and quicker which is important when you start adding people to your team.
Data Integrity Data Infrastructure and Data Integrity are perhaps the most important places to start because decisions in these areas waterfall into the other areas of your Data System.
Data Access and Visualization Data Access and Visualization is key as your company starts to grow. The goal is to make the data as easy to access as possible for people who have the skills to fish for their own data.
Infrastructure Change Process When you are in startup mode, everyone might have access to do what they need to do quickly to implement the changes they need to make. In a small organization this works out because everyone knows what everyone else is working on.
Goal 1: Build the Minimum Viable Beach Ball
To optimize the system, you may have to sub-optimize the subsystem.
Over-COMMUNICATE which sections you are working on (helps with buy-in)
Ways to get buy-in
Pre-Research Buying Decisions
Components of a Data System: Data Infrastructure; Data Integrity; Data Access & Visualization; Infrastructure Change Process; People in your Org; Data Utilization Process
In order to re-evaluate the KPI’s we look at, we had a collaborative meeting where each of the groups came up with a dashboard. We then looked for areas where we needed to create alignment. After that, we started building.
Minimum Viable Product (MVP) vs. all-in-one: do you want to ship as little as possible as soon as possible, then learn and add, versus shipping a totally finished product all at once?
Almost every project management book introduces the project management triangle, and almost every certified project manager thinks that he or she correctly understands the relationships between its elements: "the larger the scope, the more cost and time needed". However, especially in the ICT industry, the majority of projects overrun both budget and schedule, and deliver less functionality than expected. In this presentation we take another look at the project management triangle, to learn how to get more outcomes while spending less money and time.
CTO Summit 2016: Navigating Build vs. Buy at CleverTap (CleverTap)
The most important question a CTO must answer is whether to build or buy their analytics solution. Sunil Thomas, CEO of CleverTap, recently addressed these challenges at the CTO Summit 2016. Learn the key foundations of analytics and how to determine which platform solution is best for your engineering team.
This presentation will discuss how sizing can be a normalising factor for estimating, measurement and benchmarking. It will introduce the need to utilise a size measure for both functional and non-functional size, using the IFPUG Function Point Analysis (FPA) method as well as the Software Non-functional Assessment Process (SNAP). The presentation will then move from estimating to measurement for projects, and on to benchmarking for organisations using industry data as the competitive comparison. It will touch on issues with requirements and how to utilise FPA and SNAP to cover them, and on the accuracy levels of size assessment for estimating. A high-level view of data other than size that should be collected is given, but the focus is on sizing as a measure, not on a full measurement program. (IT Confidence 2014, Tokyo (Japan))
Innovation is a necessity for B2B companies seeking growth. Yet, even game-changing innovation requires a careful assessment of how much customer value is created and ultimately captured in price. Otherwise, your company loses precious margin and the means to sustain future innovation.
Do you truly know how much value your innovations are providing to your customers?
LeveragePoint is delighted once more to have noted pricing thought-leader and author, Stephan Liozu share his practical experience and techniques for monetizing the differential value of innovation. He will discuss how industry leading companies embed value management into their new product development process.
Learn how to link innovation, customer value and pricing for your new products in 2013.
Increasing the Business Value of Communications: Innovation, Strategy and Trust, Jeff Zwier
Presentation given at the Melcrum Publishing Strategic Communication Management Summit October 4-6, 2011 in Washington D.C. The presentation describes my evolution of the internal communications approach at DTTL from the remnants of a reactive publishing team in 2009 to a proactive, tightly partnered business line communications team. At its peak, the team had 30 members in four countries. The team was dissolved as part of an organizational restructuring in 2013.
This presentation reports the results of an analysis clarifying the factors that affect the productivity of enterprise software projects, as follows. (1) Productivity is inversely proportional to the fifth root of the test case density and of the fault density, respectively. (2) Projects requiring software with a high security or reliability level have low productivity, while projects where objectives and priorities are very clear, where documentation tools are used, and where sufficient work space is provided have high productivity. (3) The productivity of projects managed by a skillful project manager is low, because he or she tries to detect many faults. (4) If the work conditions of a project requiring high security, reliability, or performance and efficiency are poor, such that work space is cramped or role assignments and each person's responsibilities are not clarified, the project has remarkably low productivity. (IT Confidence 2014, Tokyo (Japan))
Simpda 2014 - A living story: measuring quality of developments in a large in... (SpagoWorld)
The presentation supported the speech by Gabriele Ruffatti (founder of the SpagoWorld initiative) at SIMPDA 2014 (Milan, Italy, November 19-21, 2014). It focuses on the innovative approach named Productivity Intelligence, supported by Spago4Q, the open source analytics module of the SpagoBI suite for quality and performance improvement, which allows companies and organizations to effectively monitor performance, improve quality practices and achieve higher capability levels. www.spagoworld.org
Semi-Automated Security Testing of Web Applications, Ram G Athreya
Market research surveys on Internet attacks report that more than 70% of attacks target the application layer. This is because (1) more valuable information (such as electronic money details) resides at the application level, and (2) relatively more vulnerabilities remain unaddressed there. Given the still-inadequate adoption of secure development practices across the numerous application development communities, security testing of web applications becomes highly critical and rigorous.
In our project we have created a penetration testing (black-box testing) tool that checks a target web application for vulnerabilities in a semi-automated fashion. We tested and demonstrated the functionality and effectiveness of the tool by running it (1) on a deliberately vulnerable web application created by us, and (2) on the live web sites of a customer organization. The results have been revealing and are documented in the following report. We also provide recommendations as corrective actions against the discovered vulnerabilities, and statements of best practices based on ISO 27002 and similar standards as preventive actions to avoid recurrence of such vulnerabilities.
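One of the black-box checks such a tool typically performs can be sketched as follows: flagging a request parameter that is reflected unencoded in the response, a common precursor to reflected cross-site scripting. The payload and response bodies below are invented, and a real tool would issue HTTP requests rather than inspect canned strings.

```python
# Simplified illustration of one penetration-test check: detect whether
# a probe payload comes back unencoded in a page. An HTML-escaped echo
# of the same payload is considered safe.

import html

PAYLOAD = '<script>alert("xss-probe")</script>'

def reflects_payload(response_body: str) -> bool:
    """True if the raw (unescaped) payload appears in the response."""
    return PAYLOAD in response_body

# Invented responses standing in for real HTTP fetches.
vulnerable = "<p>You searched for " + PAYLOAD + "</p>"
safe = "<p>You searched for " + html.escape(PAYLOAD) + "</p>"

print(reflects_payload(vulnerable), reflects_payload(safe))
```

A semi-automated tool would run dozens of such payload/response checks per input field and leave the triage of flagged findings to the tester.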
Data Science at Roche: From Exploration to Productionization - Frank BlockRising Media Ltd.
The excitement about the potential opportunities for leveraging data by means of advanced analytics is huge. But the honeymoon between business and data science is over: stakeholders want to see value generated from data science. At Roche Diagnostics the Data Science Lab was created. Its mission is to explore business opportunities for data science across the company and to deliver productive, algorithm-based systems that create impact. In his keynote, Frank presented examples of data science initiatives ranging from data exploration over predictive modelling to productionization, and addressed some of the challenges encountered as well as the lessons learned.
Learn what formal methods are and how they make developing bug-free, impenetrable source code a possibility in this webinar by TrustInSoft, the leading provider of formal methods-based code analysis tools.
From Sage 500 to 1000 ... Performance Testing myths exposedTrust IV Ltd
The following presentation is an account of a Sage migration we were involved with. Written by our Head of Service Delivery, Richard Bishop, the presentation looks at the performance issues faced during a migration from Sage 500 to Sage 1000. Richard also looks to dispel ‘myths’ that are commonly associated with performance testing.
For more information visit Trust IV online - http://trustiv.co.uk/ or check out our blog - http://blog.trustiv.co.uk/
DevOps Summit 2015 Presentation: Continuous Testing At the Speed of DevOpsSailaja Tennati
Continuous delivery is frightening to enterprise IT managers who see each new private, public or hybrid cloud infrastructure software change potentially causing service outages or security concerns.
This presentation by Marc Hornbeek, first shared at the DevOps Summit 2015 in London, explains Spirent’s comprehensive Clear DevOps Solution to support:
- Rapid paced continuous testing without compromising coverage or service quality
- Orchestration of service deployments over physical and virtual infrastructures
- Best practices for integrating continuous testing into CI infrastructures
- How to use continuous testing analytics for deployment decisions
Many people view an estimate as a quick guess that no one believes anyhow. But producing a viable estimate is core to project success, as well as to ROI determination and other decision making. In decades of studying the art and science of estimating, it has become apparent that most people don’t like to and/or don’t know how to estimate; those who do estimate are often wildly optimistic, full of unintentional bias; and strategic misestimating, when it occurs, produces misleading estimates. However, it is also obvious that viable estimates can make projects successful, make outsourcing more cost effective, and help businesses make the most informed decisions.
That is why metrics and models are essential to organizations: they provide tempering with the outside view of reality recommended by Daniel Kahneman in his Nobel Prize-winning work on estimation bias and strategic misestimation. (IT Confidence 2014, Tokyo (Japan))
Using data from completed software projects in the ISBSG repository, we will look at how people have gone about estimating their software projects and how well they did it. We will look at estimation techniques used, the accuracy of estimates and relationships between the estimates.
We will then offer practical tips and some steps you can take to determine how realistic your own estimates are. (IT Confidence 2013, Rio de Janeiro (Brazil))
As corporate subscribers and partners of the International Software Benchmarking Standards Group (ISBSG), PRICE has access to a wealth of data about software projects. The ISBSG was formed in 1997 with the mission “To improve the management of IT resources by both business and government through the provision and exploitation of public repositories of software engineering knowledge that are standardized, verified, recent and representative of current technologies.” This database contains detailed information on close to 6000 development and enhancement projects and more than 500 maintenance and support projects. To the best of this author’s knowledge, it is the largest, most trusted source of publicly available software data that has been vetted and quality checked.
The data covers many industry sectors and types of businesses, though it is weak on data from the aerospace and defense industries. Nevertheless, there are many things we can learn from analysis of this data. The Development and Enhancement database contains 121 columns of project information for each submitted project, including the type of business and application, the programming language(s) used, the functional size of the project in one of the many functional size measures available in the industry (IFPUG, COSMIC, NESMA, etc.), project effort normalized to the project phases the report covers, Project Delivery Rate (PDR), elapsed project time, etc.
PRICE Systems has recently partnered with ISBSG, with licenses to both data repositories. Although we cannot distribute the data to those without subscriptions, there is no reason we can’t use analysis of this data to provide guidance to users of our software estimation tool, TruePlanning. One effort focused on developing calibrated software estimation templates for True S based on various scenarios across industry sector, application type, development type (new or enhancement) and language type (3GL, 4GL). This exercise combined data mining, statistical analysis, and expert judgment. This paper discusses the methodology used to derive these templates and presents the findings of this research. While the actual analysis is focused on a particular software estimating model, the research, analysis and techniques should inform similar analyses that are tool agnostic. (IT Confidence 2013, Rio de Janeiro (Brazil))
Requirements play a crucial role in defining the boundaries and identity of a project: their traceability, correct development, sharing with the stakeholders, and validation determine project failure or success. Moreover, Quality Assurance (QA) processes facilitate project management activities, and proper measures and indicators are key information for knowing whether a project is on the right track. Engineering Group (www.eng.it) intends to show how its integrated solution, recognized as compliant with CMMI-DEV principles and based on SPAGO4Q (www.spago4q.org) and the QEST nD model (Buglione-Abran), made of a set of open source and low cost tools, allows one to:
• manage the application lifecycle in a complex, flexible and shared environment, enhancing the communication among the project stakeholders;
• manage internal project assessment activities by the QA Department;
• monitor projects and measure performances, allowing the information sharing among the stakeholders.
The use of an integrated, low-cost solution compliant with the CMMI requirements, which can be easily extended and integrated with other corporate tools, has been a key success factor at Engineering Group, fostering the adoption of well-defined ALM processes and effective software lifecycle management. This solution can integrate other applications developed by different divisions of the company, reducing the duplication of information and fostering the sharing of lessons learned. (IT Confidence 2013, Rio de Janeiro (Brazil))
This presentation provides a walkthrough of the application development KPIs that were used to understand the performance of a 6,000 Function Point program. This program, composed of 18 modules/projects, was delivered in 20 months, consuming over 220,000 hours. Several analyses were performed during program execution, but the presentation focuses on the final results and lessons learned. The major metrics areas/KPIs covered are: Sizing, Duration, Effort, Staffing, Change, Productivity, Defect, Use Case. (IT Confidence 2013, Rio de Janeiro (Brazil))
Organizations are constantly pressured to prove their value to their leadership and customers. A relative comparison to “peer groups” is often seen as useful and objective, thus benchmarking becomes an apparent alternative. Unfortunately, organizations new to benchmarking may have limited internal data for making valid comparisons. Feedback and subsequent “action” can quickly lead to the wrong results as organizations focus on improving their comparisons instead of improving their capability and consistency.
Adding to the challenge of improving results, software organizations may rely on more readily available schedule and financial data rather than KPIs for product quality and process consistency. This presentation provides measurement program lessons learned and insights to accelerate benchmark and quantification activities relevant to both new and mature measurement programs. (IT Confidence 2013, Rio de Janeiro (Brazil))
The Total Cost Management (TCM) Framework of the Association for the Advancement of Cost Engineering (AACE International) is an integrated approach to portfolio, program and project management. It provides a structured, annotated process map that explains each practice area of the cost engineering field in the context of its relationship to the other practice areas, including allied professions. In other words, it is a process for applying the skills and knowledge of cost engineering. A key feature of the TCM Framework is that it highlights and differentiates the two main cost management application areas: project control and strategic asset management. In this paper the focus is on project control.
In the TCM Framework, the Basis of Estimate (BOE) is characterised as the one deliverable that defines the scope of the engagement and ultimately becomes the basis for change management. When prepared correctly, any person with (capital) project experience can use the BOE to understand and assess the estimate, independent of any other supporting documentation. A well-written BOE achieves those goals by clearly and concisely stating the purpose of the estimate being prepared (i.e. cost/ effort/duration study, project options, funding, etc.), the project scope, cost basis, allowances, assumptions, exclusions, cost risks and opportunities, contingencies, and any deviations from standard practices.
A BOE document is a required component of a cost estimate. Because of its relevance, a BOE template is included in the set of AACE International recommended practices (RPs). This template provides guidelines for the structure and content of a cost basis of estimate.
Although not everyone is happy with the opinion that the Software Services Industry differs from other industries, analysis of the BOE shows that its structure is applicable but needs to be adapted to match practice in Software Services. In addition, the terminology used does not reflect the activities, components, items, issues, etc. of the Software Services Industry. The tailored version, Basis of Estimate – As Applied for the Software Services Industries, provides guidelines for the structure and content of a cost basis of estimate specific to the software services industries (i.e. software development, maintenance & support, infrastructure, services, research & development, etc.).
With this BOE, a structure is provided for further standardisation of the estimation process, more consistent use of metrics (sizing, effort, schedule, quality), transparent options for control (benchmark, audit, bid validation) and a common approach to assumptions and associated risks. (IT Confidence 2013, Rio de Janeiro (Brazil))
Implementing productivity models helps in understanding Software Development Economics, which up to now is not entirely clear. Most organizations believe that the only way to achieve improvements is lowering software rates. With a background of three years of statistical data from large multinational clients, the presentation shows how the relationship between software rates and cost per function point differs from what might be expected, sometimes by far. Leaning on a statistical demonstration, the conclusion reached is that excessive pressure on software rates destroys the concept of software rates in outsourcing processes. The study results also lead to some considerations regarding how software development activity is understood and managed, both from the client and the software provider perspective. (IT Confidence 2013, Rio de Janeiro (Brazil))
One of the core considerations with data analytics is recognizing “what is your quest”. Many options and approaches are used in data analytics, several of which are of interest to the software sector. The world has changed culturally and technically; the need to be value focused and innovative is more important today than ever before. Steven shares several perspectives and analogies where data with analytics can alter future behaviour, lowering risk and optimizing the solution. Several updates will also be shared from cloud, government and academia regarding activities and how the metrics community can collaborate. (IT Confidence Conference 2013 Keynote, October 3, 2013, Rio de Janeiro (Brazil))
Fehlmann and Kranich - Measuring Tests Using COSMIC
1. Measuring Tests Using COSMIC
2nd International Conference on IT Data Collection, Analysis and Benchmarking
Tokyo (Japan) - October 22, 2014
Thomas M. Fehlmann, Zürich
Eberhard Kranich, Duisburg
Testing ICT Services in the Cloud
2. Measuring Tests Using COSMIC
IT Confidence 2014 – October 22, 2014 http://itconfidence2014.wordpress.com
Goals of the presentation
G1. Understand COSMIC measurements in testing
G2. Free software testing from lines of code (LoC)
G3. Measure and benchmark software testing
3. Dr. Thomas Fehlmann
• 1981: Dr. Math. ETHZ
• 1991: Six Sigma for Software Black Belt
• 1999: Euro Project Office AG, Zürich
• 2001: Akao Prize 2001 for original contributions to QFD
• 2003: SwissICT Expert for Software Metrics, ICTscope.ch
• 2004: Member of the Board, QFD Institute Deutschland – QFD Architect
• 2007: CMMI for Software – Level 4 & 5
• 2011: Net Promoter® Certified Associate
• 2013: Vice-President ISBSG
4. Eberhard Kranich
• Mathematics and Computer Science
• Emphasis on Mathematical Statistics
• Mathematical Optimization
• Theory of Polynomial Complexity of Algorithms
• Working at T-Systems International GmbH in Bonn, Germany
• Six Sigma Black Belt for Software Development
• Software Quality Assurance Manager
5. What is a Defect?
• Defect = behavior impacting expected or required functionality of software
→ How many bugs?
→ Counted by the size of defect repositories?
→ By the number of entries???
6. Software Testing as a Game
• Tester sees selected sequences in the UML sequence diagram
• Tester can “walk” the data movements when planning or executing tests
→ Functionality becomes visible to the agile team
→ Defects impacting functionality become visible to testers
[UML sequence diagram: data movements 8.// to 11.// “Move some data” between Other Application, Some Device and Other Device]
7. Functionality, Defect Size, and Defect Density
• What happens if data movements have defects?
• Testers mark the data movement where a defect has been detected
• Same metric: → ISO/IEC 19761 COSMIC
[UML sequence diagram: the same data movements, with one movement marked as defective]
• Functional Size
→ Number of data movements needed to implement all FUR
• Test Size
→ Number of data movements executed in tests
• Test Story
→ Collection of test cases aiming at certain FURs
• Defect Count
→ Number of data movements affected by some defect detected in a test story
8. Defect Density Prediction?
Now he counts the defects!
And counts and adjusts test size
by ISO/IEC 19761 COSMIC
[UML sequence diagram: data movements 8.// to 11.// “Move some data” between Other Application, Some Device and Other Device]
How does he know that he found all the defects?
9. ISO/IEC Standard 29119 on Software Testing
Published as ISO/IEC 29119 (2013-07)
International Standard
Defines the Test Process
Calls for suitable Test Measures
11. The SW Testing Qualifications Board
• ISTQB
→ 295’000 certificates
→ Iqnite conferences (Sydney 2013)
• Third after ITIL and PMI
• Importance of testing grows
12. But it’s Even Worse…
• What is Defect Density?
→ Defects per KDLOC?
• What is Test Coverage?
→ Code lines executed by some test case?
13. SW Testing and SW Metrics
• Counting practices for defect counting are undocumented
→ “Number of Defects Found” per stage / with tests / etc.
→ How do you count the “Number of Defects”?
• Is it simply the number of entries in a defect repository?
→ How can you avoid double reporting?
→ Or ensure that two defects are reported as two entries and not in a single report?
• A successor to the “Defect Measurement Manual” published by UKSMA in October 2000 is under review: the “Defect Measurement and Analysis Handbook”
→ A European cooperation
→ An important enhancement for ISBSG’s data collection!
15. Goal Profile
A Simple Data Search Example
• Functional User Requirements (FUR) describe a very simple data search
• They meet the customer’s needs
• And have a priority profile
Functional User Requirements and Goal Profile:
1) R001 Search Data: 0.62
2) R002 Answer Questions: 0.69
3) R003 Keep Data Safe: 0.37
[UML sequence diagram between User and Data: 1.// Search Criteria (Trigger), 2.// Write Search, 3.// Get Result, 4.// Show Result, 5.// Nothing Found, 6.// Show Error Message]
1 Entry (E) + 2 eXit (X) + 2 Read (R) + 1 Write (W) = 6 CFP
Customer’s Needs | Topics | Attributes
Y.a Data Access | y1 Access Data | Always Reliable, Frequently
Y.a Data Access | y2 Repeatable Responses | Responses identical, Always
Y.b Data Integrity | y3 Cannot impact data | No Write allowed
Goal Profile derived from the Voice of the Customer
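The 6 CFP tally can be checked with a few lines of code. The slide states only the totals (1 E, 2 X, 2 R, 1 W), so the mapping below of each message to a movement type is a plausible assumption, not taken from the deck:

```python
from collections import Counter

# Hypothetical mapping of the six messages to COSMIC movement types;
# the slide itself gives only the totals (1 E, 2 X, 2 R, 1 W).
messages = {
    "1.// Search Criteria": "E",      # triggering Entry
    "2.// Write Search": "W",
    "3.// Get Result": "R",
    "4.// Show Result": "X",
    "5.// Nothing Found": "R",
    "6.// Show Error Message": "X",
}
counts = Counter(messages.values())
cfp = sum(counts.values())  # each data movement = 1 CFP
```

The tally gives `counts` of 1 E, 2 X, 2 R, 1 W and `cfp == 6`, matching the slide's "1 Entry (E) + 2 eXit (X) + 2 Read (R) + 1 Write (W) = 6 CFP".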
16. The Following SW Tests Look Appropriate:
• Test Stories (scenarios) have
→ many Test Cases
→ each Test Case has
  • Test Data
  • a known Expected Response
• Test Size and Test Profiles can be measured
→ by functionality covered
Test Story CT-A Prepare, with Priority and Test Size per test story:
CT-A.1 Retrieve Previous Responses | 0.43 | 11 CFP
CT-A.2 Detect Missing Data | 0.74 | 18 CFP
CT-A.3 Data Stays Untouched | 0.51 | 12 CFP
Total Test Size: 41 CFP
17. Execute Test Case CT-A.1.1
Entering a valid search string
→ returns the expected response
→ Test Size is 4 CFP
Test Case Measurements for Test Story CT-A.1 (Test Story No. 1):
Test Case | R001: Search Data | R002: Answer Questions | R003: Keep Data Safe | Expected Response | CFP
CT-A.1.1 Enter valid Search String | X001,R001,W001,E001 | X001,E001 | R001,W001 | Return (known) Answer | 8
CT-A.1.2 Enter invalid Search String | E001 | - | R002,W001 | Invalid Search String | 3
Test Story Contribution (CFP) | 5 | 2 | 4 | Test Size | 11
[UML sequence diagram between User and Data: 1.// Search Criteria (Trigger), 2.// Write Search, 3.// Get Result, 4.// Show Result, 5.// Nothing Found, 6.// Show Error Message]
1 Entry (E) + 2 eXit (X) + 2 Read (R) + 1 Write (W) = 6 CFP
18. 18IT Confidence 2014 – October 22, 2014 http://itconfidence2014.wordpress.com
Total Test Size
Total Test Size is 11 + 18 + 12 = 41 CFP
- compares to a Functional Size of 6 CFP
- yields a Test Intensity of 41/6 = 6.8
- on average, < 7 tests per data movement
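The arithmetic on this slide is simple enough to script; a sketch using the slide's figures:

```python
# Test Intensity = total Test Size (CFP executed across all test stories)
# divided by the Functional Size (CFP) of the system under test.
story_sizes = {"CT-A.1": 11, "CT-A.2": 18, "CT-A.3": 12}
functional_size = 6  # CFP of the measured functionality

test_size = sum(story_sizes.values())        # 41 CFP
test_intensity = test_size / functional_size

print(test_size, round(test_intensity, 1))   # 41 6.8
```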
Test Case Measurements for Test Story CT-A.1 (Test Story No. 1)
Functional User Requirements: R001 Search Data, R002 Answer Questions, R003 Keep Data Safe

CT-A.1 Retrieve Previous Responses   | R001: Search Data   | R002: Answer Questions | R003: Keep Data Safe | Expected Response     | CFP
CT-A.1.1 Enter valid Search String   | X001,R001,W001,E001 | X001,E001              | R001,W001            | Return (known) Answer | 8
CT-A.1.2 Enter invalid Search String | E001                |                        | R002,W001            | Invalid Search String | 3
Test Story Contribution (CFP):       | 5                   | 2                      | 4                    | Test Size             | 11

CT-A.2 Detect Missing Data                     | R001: Search Data   | R002: Answer Questions | R003: Keep Data Safe | Expected Response     | CFP
CT-A.2.1 Enter valid Search String for No Data | X002,R002,W001,E001 | X001,R001,W001,E001    | R002,W001            | No Data Available     | 10
CT-A.2.2 Enter invalid Search String           | R001,W001,X002,E001 | X002,E001              | R002,W001            | Invalid Search String | 8
Test Story Contribution (CFP):                 | 8                   | 6                      | 4                    | Test Size             | 18

CT-A.3 Data Stays Untouched          | R001: Search Data | R002: Answer Questions | R003: Keep Data Safe | Expected Response       | CFP
CT-A.3.1 Enter valid Search String   | W001,E001         | X001,R001              | R001,W001            | Return identical Answer | 6
CT-A.3.2 Enter invalid Search String |                   | X002,E001              |                      | Invalid Search String   | 2
CT-A.3.3 Enter Same String Again     |                   | R001,W001,X001,E001    |                      | Return identical Answer | 4
Test Story Contribution (CFP):       | 2                 | 8                      | 2                    | Test Size               | 12

Test Size in CFP: 41
Test Intensity in CFP: 6.8
Test Coverage: 100%
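The per-story test sizes in these tables are just counts of data-movement references. A sketch that tallies them from the table cells (cell contents copied from the slide; the nested-list layout is an illustrative structure):

```python
# Each cell lists the data movements a test case executes against one
# functional user requirement; Test Size is the total movement count.
test_cases = {
    "CT-A.1": [["X001,R001,W001,E001", "X001,E001", "R001,W001"],
               ["E001", "R002,W001"]],
    "CT-A.2": [["X002,R002,W001,E001", "X001,R001,W001,E001", "R002,W001"],
               ["R001,W001,X002,E001", "X002,E001", "R002,W001"]],
    "CT-A.3": [["W001,E001", "X001,R001", "R001,W001"],
               ["X002,E001"],
               ["R001,W001,X001,E001"]],
}

def story_size(cases):
    # count comma-separated movement references in every cell
    return sum(len(cell.split(",")) for case in cases for cell in case)

sizes = {story: story_size(cases) for story, cases in test_cases.items()}
total = sum(sizes.values())
print(sizes, total)  # sizes 11, 18, 12 – total 41 CFP
```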
19.
Recording a Defect

Test Case Measurements for Test Story CT-A.2 (Test Story No. 2)
Functional User Requirements: R001 Search Data, R002 Answer Questions, R003 Keep Data Safe

CT-A.2 Detect Missing Data                     | R001: Search Data   | R002: Answer Questions | R003: Keep Data Safe | Expected Response     | CFP
CT-A.2.1 Enter valid Search String for No Data | X002,R002,W001,E001 | X001,R001,W001,E001    | R002,W001            | No Data Available     | 10
CT-A.2.2 Enter invalid Search String           | R001,W001,X002,E001 | X002,E001              | R002,W001            | Invalid Search String | 8
Test Story Contribution (CFP):                 | 8                   | 6                      | 4                    | Test Size             | 18

- "Bug" in 6.// Show Error Message
  - detected with a database containing no data
  - Test Size is 4 CFP
  - 1 Defect found!
(Sequence diagram as above: 1 Entry + 2 eXit + 2 Read + 1 Write = 6 CFP)
20.
Defect Observed
One Defect Found
- possibly observable in all tests touching the data movement 6.// Show Error Message – named X002
- counted once

Defects Observed                                                                       | Data Movements Affected
Name | Label        | Description                                                      | Name | Label
#001 | Escape Chars | Some characters such as 'ä' are wrongly interpreted as           | R001 | GetResult
     |              | escape characters in strings                                     |      |
Defect Count: 1
(Test Case Measurements for Test Story CT-A.2 repeated from the previous slide)
21.
Test Status Reporting
- Test Status Characterization
  - Test Size is the total number of data movements executed in all Test Cases of all Test Stories
  - one Data Movement can have only one defect identified per Test Story
- However, one misbehavior found might affect more than one data movement and thus count for more than one defect

Test Status Summary
Total CFP: 6
Defects Pending for Removal: 2   | Test Size in CFP: 41
Defects Found in Total: 2        | Test Intensity in CFP: 6.8
Defect Density: 33%              | Test Coverage: 100%
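The summary's defect density relates defects found to the functional size of the system under test; a sketch with the slide's figures:

```python
# Defect Density = defects found per CFP of functional size,
# expressed here as a percentage (2 defects against 6 CFP).
defects_found = 2
functional_size_cfp = 6

defect_density = defects_found / functional_size_cfp
print(f"{defect_density:.0%}")  # 33%
```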
22.
Caveat
- Test Size increases with meaningless test cases added to the Test Stories – little variations of test data with almost identical expected responses might spoil measurements!
- We need a metric indicating that our test strategy is appropriate
23.
What is Six Sigma Testing?
24.
What is Lean Six Sigma Testing?
Replace many Test Stories … by those needed for the Eigensolution
… go for the Eigenvector!
25.
Calculating the Eigenvector with Jacobi Iterative
[Figure: the combined matrix A Aᵀ applied iteratively to a vector x]
27.
Measuring Test Coverage with the Eigensolution
(Deployment Combinator: Functional User Requirements × Test Stories; cells show the number of data movements executed)

Functional User Requirements   | Goal Test Coverage | CT-A.1 Retrieve Previous Responses | CT-A.2 Detect Missing Data | CT-A.3 Data Stays Untouched | Achieved Coverage
R001 Search Data               | 0.62               | 5                                  | 8                          | 2                           | 0.64
R002 Answer Questions          | 0.69               | 2                                  | 6                          | 8                           | 0.66
R003 Keep Data Safe            | 0.37               | 4                                  | 4                          | 2                           | 0.40
Ideal Profile for Test Stories |                    | 0.43                               | 0.74                       | 0.51                        |
Achieved Profile               |                    | 0.42                               | 0.70                       | 0.50                        | Convergence Gap: 0.04

Convergence Range: 0.10
Convergence Limit: 0.20
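The ideal-profile row can be reproduced by power iteration, a Jacobi-style iterative scheme, on the deployment combinator: with A holding the data movements executed (rows: functional user requirements, columns: test stories), the unit-length principal eigenvector of AᵀA gives the profile. A sketch without external libraries; the row/column convention is my reading of the table:

```python
# Power iteration for the principal eigenvector of A^T A, where A holds
# the data movements executed per FUR (rows) and test story (columns).
A = [[5, 8, 2],   # R001 Search Data
     [2, 6, 8],   # R002 Answer Questions
     [4, 4, 2]]   # R003 Keep Data Safe

# A^T A combines the test stories with themselves via the FURs.
AtA = [[sum(A[k][i] * A[k][j] for k in range(3)) for j in range(3)]
       for i in range(3)]

def normalize(v):
    n = sum(x * x for x in v) ** 0.5
    return [x / n for x in v]

x = normalize([1.0, 1.0, 1.0])
for _ in range(20):  # iterate until the eigenvector settles
    x = normalize([sum(m * xi for m, xi in zip(row, x)) for row in AtA])

profile = [round(v, 2) for v in x]
print(profile)  # [0.43, 0.74, 0.51] – matches the Ideal Profile row
```

The convergence gap on the slide is then the distance between this ideal profile and the profile the test stories actually achieve.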
28.
Lean & Six Sigma for Software Testing
- Six Sigma
  - Design of Experiments
  - Multi-linear Regression for Root Cause Analysis and process control
- Lean
  - Detect Waste (muda, 無駄)
  - Test-Driven Development
  - Defect-free delivery
- Lean Six Sigma
  - Predict Waste (muda, 無駄)
  - Eigenvector solutions for Root Cause Analysis and process control
  - Predict Defect Density
  - Q Control Charts for SW Testing
29.
Questions?