Tom Gilb is an independent teacher, consultant, and writer.
He was a keynote speaker at I T.A.K.E. Unconference 2014 (http://2014.itakeunconf.com/)
He talked about:
- architecture that can be made better by developers
- how one can get quality by designing it in, not by debugging it
- that engineering is defined by multidimensional problem solving.
www.mozaicworks.com
The Good, the Bad and the Ugly of Dealing with Smelly Code (ITAKE Unconference) - Radu Marinescu
We all have a burning desire to write clean code. Every morning we wake up, look in the mirror, and promise ourselves that today we will follow the principles and best practices learned from Uncle Bob and his disciples. But we live in a cruel environment, surrounded by millions of smelly lines of code, reflections of a stinky design… and these constantly challenge our pure-hearted desire for writing clean code.
In such an environment, the stubborn daily practice of writing clean code is vital.
But is it enough? Can we avoid getting lost in a sea of smelly code and design?
In this talk I will try to persuade you that, in dealing with large-scale systems, craftsmanship must be supported by proper techniques and tools that can help us to quickly understand, assess and improve the sea of smelly design that surrounds us.
I will present a pragmatic approach on how design anti-patterns (e.g. God Class, Feature Envy, Refused Bequest, Shotgun Surgery) can be automatically detected using a set of metrics-based detection rules, by analyzing the history of the system, and by using intriguing software visualizations.
The presentation will also include a live demo of tools that can automate the entire approach to a high extent. These tools are robust enough to deal with systems of several million lines of code, yet friendly enough to provide customized hints that help you deal with each and every case of “unclean” code.
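To make the idea of metrics-based detection rules concrete, here is a minimal sketch in the spirit of Marinescu-style detection strategies for the God Class smell. The metric names (WMC, ATFD, TCC) are standard in this literature, but the thresholds below are illustrative, not the published values, and the `ClassMetrics` type is invented for the example:

```python
# Illustrative metrics-based detection rule for the God Class anti-pattern.
# Thresholds are example values, not the published calibration.

from dataclasses import dataclass

@dataclass
class ClassMetrics:
    wmc: int    # Weighted Method Count: sum of method complexities
    atfd: int   # Access To Foreign Data: attributes of other classes used directly
    tcc: float  # Tight Class Cohesion: 0.0 (no cohesion) .. 1.0 (fully cohesive)

def is_god_class(m: ClassMetrics) -> bool:
    """Flag a class that is complex, uses lots of foreign data,
    and whose own methods barely cohere around shared state."""
    FEW, VERY_HIGH, ONE_THIRD = 3, 47, 1 / 3
    return m.atfd > FEW and m.wmc >= VERY_HIGH and m.tcc < ONE_THIRD

print(is_god_class(ClassMetrics(wmc=80, atfd=12, tcc=0.1)))  # True
print(is_god_class(ClassMetrics(wmc=10, atfd=1, tcc=0.8)))   # False
```

The point is the shape of the rule: a conjunction of metric comparisons against named thresholds, which a tool can evaluate automatically over every class in a multi-million-line codebase.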
Testing As A Bottleneck - How Testing Slows Down Modern Development Processes... - TEST Huddle
We often claim the purpose of testing is to verify that software meets a desired level of quality. Frequently, the term “testing” is associated with checking for functional correctness. However, in large, complex software systems with an established user base, it is also important to verify system constraints such as backward compatibility, reliability, security, accessibility, and usability. Kim Herzig from Microsoft explores these issues in the latest TEST Huddle webinar.
Basic overview of software test types and methodologies.
Reasons to test, and common pitfalls of various testing methodologies.
Example scenarios for the viewer to think about test strategies.
Tips to avoid having to write tests in the first place.
Content created and presented by Nico Heidtke at the "Die Programmierer" meetup organized by Binary-Gears in Darmstadt, Germany, on 02.07.2019.
Panoramic Quality: The Fellowship of Testing in DevOps - Brendan Connolly
DevOps has expanded the opportunity for testers to become arbiters of quality. I'll share 3 core responsibilities of testers in DevOps: to know, protect, and verify. I'll establish a working definition of Quality Ownership and discuss its relationship to Product Ownership to help testers look beyond deriving quality from executing tests and shift instead towards becoming quality owners. Helping testers to find their path to enabling continuous quality, through pairing and sharing test ownership across the team while instilling value from pull request to production.
Dr. Nicole Forsgren will present the latest research that uncovers what really drives business outcomes of market share, profitability, and productivity, as well as DevOps transformation awesomeness... Hint: these need the right mix of IT, culture, and practice, and include continuous delivery and lean management. This exciting research was done with Jez Humble and Gene Kim, and promises exciting new projects in the space.
If you like the ideas raised in this presentation, don't forget to check out my latest book, Directing the Agile Organisation (http://theagiledirector.com/book).
Why Automated Testing Matters To DevOps - dpaulmerrill
“Automated testing is a pain in my ear! Why can’t QA get it right? Why do the tests keep breaking? And for Pete’s sake, stop blaming the infrastructure!”
…Ok, maybe you chose a different word than “ear”.
How often do you have thoughts like this? Daily?
Let’s talk about these frustrations, why they exist and how we can use them to improve our systems!
In this talk, Paul Merrill, founder and Principal Automation Engineer at Beaufort Fairmont explores why automated testing matters to DevOps. Join us to learn how automated testing can be a useful tool in the creation and release of your systems!
Acceptance Testing for Continuous Delivery by Dave Farley at #AgileIndia2019 - Agile India
Writing and maintaining a suite of acceptance tests that can give you a high level of confidence in the behaviour and configuration of your system is a complex task. In this session, Dave will describe approaches to acceptance testing that allow teams to:
- work quickly and effectively
- build excellent functional coverage for complex enterprise-scale systems
- manage and maintain those tests in the face of change, and of evolution in both the codebase and the understanding of the business problem.
This workshop will answer the following questions, and more:
How do you fail fast?
How do you make your testing scalable?
How do you isolate test cases from one another?
How do you maintain a working body of tests when you radically change the interface to your system?
More details:
https://confengine.com/agile-india-2019/proposal/8539/acceptance-testing-for-continuous-delivery
Conference link: https://2019.agileindia.org
Towards FutureOps: Stable, Repeatable environments from Dev to Prod - Naresh Jain
Modern human history is a story of humans inventing new tools to do more with less. "Doing more" has allowed most of us to no longer worry about producing our own food, collecting water, planning long journeys, etc. Instead, we’re able to specialize, buy what we need for less, and to some extent explore ourselves a lot more.
We're far from done, and of course humanity is far from perfect. In this talk, Mitchell Hashimoto discusses the role that automation and computers play in building a brighter future.
More details: https://confengine.com/agile-india-2017/proposal/3618/towards-futureops-stable-repeatable-environments-from-dev-to-prod
DevOps Summit 2015 Presentation: Continuous Testing At the Speed of DevOps - Sailaja Tennati
Continuous delivery is frightening to enterprise IT managers who see each new private, public or hybrid cloud infrastructure software change potentially causing service outages or security concerns.
This presentation by Marc Hornbeek, first shared at the DevOps Summit 2015 in London, explains Spirent’s comprehensive Clear DevOps Solution to support:
- Rapid-paced continuous testing without compromising coverage or service quality
- Orchestration of service deployments over physical and virtual infrastructures
- Best practices for integrating continuous testing into CI infrastructures
- How to use continuous testing analytics for deployment decisions
Integrating hardware development processes (using the Waterfall method / V-model) and Agile software development. This presentation explains the basics of the V-model and how it has evolved into an iterative model, but also tells you about managing hardware and software lifecycle processes in a single release. Then, a live demonstration shows you how to integrate these lifecycles (xLM) in practice.
Looking to move to Continuous Delivery? Worried about the quality of your code? Helping your developers understand clean-code practices and getting the right testing strategy in place can take a while. What should you do to control the quality of incoming code until then? This talk shares our experience of using PRRiskAdvisor to gradually educate and influence developers to write better code, and also to help code reviewers be more effective at their reviews.
Every time a developer raises a pull request, PRRiskAdvisor analyzes the files that were changed and publishes a report on the pull request itself, showing the overall risk associated with the pull request as well as the risk associated with each file. It also runs static code analysis using SonarQube and publishes the configured violations as comments on the pull request. This way the reviewer just has to look at the pull request to get a decent idea of what reviewing it will involve. If there are too many violations, PRRiskAdvisor can also automatically reject the pull request.
By doing this, we saw our developers start paying more attention to clean-code practices, and hence the overall quality of incoming code improved while we worked on putting the right engineering practices and testing strategy in place.
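PRRiskAdvisor's internals are not described in the abstract; purely as a hedged illustration of the general idea, a per-file risk score might weigh change size, recent churn, and static-analysis findings, with the riskiest file setting the tone for the whole pull request. All names, weights, and caps below are invented:

```python
# Hypothetical sketch of pull-request risk scoring; not PRRiskAdvisor's
# actual algorithm. Weights and caps are invented for illustration.

from dataclasses import dataclass

@dataclass
class ChangedFile:
    path: str
    lines_changed: int
    recent_commits: int   # churn: how often this file changed lately
    violations: int       # e.g. static-analysis findings on the diff

def file_risk(f: ChangedFile) -> float:
    # Each factor is capped at 1.0 so one metric cannot dominate alone.
    return (0.4 * min(f.lines_changed / 200, 1.0)
          + 0.4 * min(f.recent_commits / 10, 1.0)
          + 0.2 * min(f.violations / 5, 1.0))

def pr_risk(files: list[ChangedFile]) -> float:
    # One risky file makes a risky PR, so take the maximum.
    return max(file_risk(f) for f in files)

files = [
    ChangedFile("billing/invoice.py", 180, 9, 4),
    ChangedFile("docs/readme.md", 5, 1, 0),
]
print(f"PR risk: {pr_risk(files):.2f}")  # driven by the billing file
```

A gate like "reject when risk exceeds a threshold" then becomes a one-line policy on top of the score, which is what makes the report actionable on the pull request itself.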
More details: https://confengine.com/last-conference-canberra-2018/proposal/7294/improving-the-quality-of-incoming-code
Conference Link: https://2019.agileindia.org
Professional Software Development, Practices and Ethics - Lemi Orhan Ergin
These are the slides from my talk to undergraduate students at the Marmara University Faculty of Engineering. It is mainly about professionalism in software development, agile, scrum, test-driven development, practices, and ethics.
With the drive for continuous integration and delivery, the implications and approaches for designing more testable software are receiving substantial discussion and debate. What does testability really mean in practice? How do you take the idea of testability—how easy it is to test software—and put it into action through the different dimensions of designing and testing a real-world product? Nir Szilagyi recognizes that the challenges of difficult-to-test software can transform a testing cycle from a small automation and exploratory effort to a long struggle of test preparation, execution, and debugging. He says testability starts with software design, goes through implementation, and encompasses building modular software, abstraction, simplicity, clear data interface, separation of business logic into self-sustained entities, and more. On the technical side of testability, Nir explores ways quality engineers and leaders can influence testability from early development through deployment. From his experiences Nir shares real-life testability examples which touch on the human process of building software including the relationship between testers and developers.
Lessons from DevOps: Taking DevOps practices into your AppSec Li... - Matt Tesauro
Bruce Lee once said “Don’t get set into one form, adapt it and build your own, and let it grow, be like water“.
AppSec needs to look beyond itself for answers to solving problems, since we live in a world of ever-increasing numbers of apps. Technology and apps have invaded our lives, so how do you lead a security counter-insurgency? One way is to look at the key tenets of DevOps and apply those that make sense to your approach to AppSec. Something has to change, as the application landscape is already changing around us.
Fostering Long-Term Test Automation Success - TechWell
In today’s environment of plummeting software delivery cycle times, test automation becomes a more critical and strategic necessity. How can we possibly keep up with software delivery’s explosive pace while retaining satisfactory test coverage, keeping the reins on costs, and reducing risk? Carl Nagle maintains that the long-term solution is a greater level of “sustainable” test automation. The SAFS method separates test design from test execution with a data-driven/action-based approach that encapsulates volatile application-specific data into readily localizable “maps” for simple maintenance. Test designs (scripts) are completely independent of the ready-to-run SAFS engines that will execute them. And since the test design methodology does not change over long periods of time, testers can focus more on getting robust automation in place quickly, with little attention paid to each new technology, testing tool, or test IDE. Join Carl to learn how test automation thrives when testers and tools are not tied up in application-specific silos.
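The separation of test design from test execution that the abstract describes can be sketched as a tiny data-driven runner: the test is plain data (rows of window/component/action records), and an interchangeable engine maps action names to code. The record format, action names, and recording engine below are invented for illustration and are not SAFS's actual format:

```python
# Illustrative data-driven/action-based test runner in the spirit of
# keyword-driven frameworks: the test DESIGN is data, the ENGINE is code.
# Record layout and actions are invented, not the SAFS format.

test_table = [
    # (window, component, action, parameter)
    ("LoginWin", "UserField", "SetText", "alice"),
    ("LoginWin", "PassField", "SetText", "s3cret"),
    ("LoginWin", "OkButton",  "Click",   ""),
]

class RecordingEngine:
    """Stand-in engine: records actions instead of driving a real GUI."""
    def __init__(self):
        self.log = []
    def SetText(self, window, component, text):
        self.log.append(f"{window}.{component} <- '{text}'")
    def Click(self, window, component, _param):
        self.log.append(f"{window}.{component} clicked")

def run(table, engine):
    # The runner never changes when the application or tooling changes;
    # only the engine (or the component "maps") needs updating.
    for window, component, action, param in table:
        getattr(engine, action)(window, component, param)

engine = RecordingEngine()
run(test_table, engine)
print("\n".join(engine.log))
```

Swapping `RecordingEngine` for an engine that drives a real UI leaves every test table untouched, which is exactly the maintenance property the approach is after.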
QA Fest 2017. Ilari Henrik Aegerter. Complexity Thinking, Cynefin & Why Your ... - QAFest
From your own experience it might not come as a surprise that most of today’s testing is unhelpful, filled with unnecessary paperwork and folkloric activities. For some reason, testing work often does not seem to be very helpful in projects. That is definitely a problem. If you are a tester, your manager might ask you for metrics that don’t make sense to you. And since you are a smart person, you have probably once in a while gamed the system. All that is certainly damaging to the industry. What can you do? This session brings you insight into Complexity Thinking with Dave Snowden’s Cynefin model and ties that to your job as a software tester. It offers you a way to look at software testing from a complexity-thinking standpoint and gives you tools to argue your case if you are exposed to dysfunctional project settings. In addition, we will have some fun with idiotic metrics, and to lighten up the serious topic we’ll engage in hilariously entertaining real-life examples of bad metrics. To round it up, we’ll propose more meaningful alternatives.
Continuous Deployment and Testing Workshop from Better Software West - Cory Foy
In this workshop from the 2015 SQE Better Software West conference, Cory Foy details the Continuous Paradigm companies are embracing - including Continuous Integration, Continuous Deployment, and Continuous Testing. This presentation was co-created by Jared Richardson.
(IMPROVED VERSION FROM GEECON)
How can we quickly tell what an application is about? How can we quickly tell what it does? How can we distinguish business concepts from architecture clutter? How can we quickly find the code we want to change? How can we instinctively know where to add code for new features? Purely looking at unit tests is either not possible or too painful. Looking at higher-level tests can take a long time and still not give us the answers we need. For years, we have all struggled to design and structure projects that reflect the business domain.
In this talk Sandro will be sharing how he designed the last application he worked on, twisting a few concepts from Domain-Driven Design, properly applying MVC, borrowing concepts from CQRS, and structuring packages in non-conventional ways. Sandro will also be touching on SOLID principles, Agile incremental design, modularisation, and testing. By iteratively modifying the project structure to better model the application requirements, he has come up with a design style that helps developers create maintainable and domain-oriented software.
Workshop from I T.A.K.E. Unconference 2014 on how to lead technical teams.
Topics covered:
- Five duties of technical leaders
- Change management process
- How to communicate effectively
BDD with Cucumber-JVM as presented at I T.A.K.E. Unconference in Bucharest 2014 - TSundberg
Behaviour Driven Development, BDD, is a way to increase communication between stakeholders in a project and at the same time create executable examples.
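Cucumber-JVM binds Gherkin steps ("Given/When/Then") to Java step definitions; as a language-neutral sketch of that same executable-example idea, here is the structure in Python. The calculator domain, step functions, and `CalculatorWorld` are invented for illustration and are not Cucumber-JVM API:

```python
# Language-neutral sketch of BDD's Given/When/Then structure.
# Cucumber-JVM itself maps Gherkin text to Java step definitions;
# this shows only the underlying idea.

class CalculatorWorld:
    """Shared state (the 'world') carried between steps of one scenario."""
    def __init__(self):
        self.operands = []
        self.result = None

# Step definitions: each maps one line of a scenario to executable code.
def given_a_number(world, n):
    world.operands.append(n)

def when_they_are_added(world):
    world.result = sum(world.operands)

def then_the_result_is(world, expected):
    assert world.result == expected, f"expected {expected}, got {world.result}"

# Scenario: adding two numbers
world = CalculatorWorld()
given_a_number(world, 4)
given_a_number(world, 7)
when_they_are_added(world)
then_the_result_is(world, 11)
print("scenario passed")
```

The stakeholder-facing value is that the scenario reads as a plain-language specification while still executing and failing loudly when the behaviour drifts.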
Tools needed to build a Continuous delivery pipeline. Most tools are generic and can be used regardless of language, some are specific for Java/JVM.
http://2014.itakeunconf.com/
Programmers love science! At least, so they say. Because when it comes to the ‘science’ of developing code, the most used tool is brutal debate. Vim versus emacs, static versus dynamic typing, Java versus C#: this can go on for hours on end. In this session, software engineering professor Felienne Hermans will present the latest research in software engineering that tries to understand and explain which programming methods, languages, and tools are best suited for different types of development.
I T.A.K.E. talk: "When DDD meets FP, good things happen"Cyrille Martraire
Domain-Driven Design (DDD) and Functional Programming (FP) have a lot of good things in common: DDD has borrowed many ideas from the FP community, and both share a common inspiration on established formalisms like maths.
For the software developer, the result is a style of code that mixes the best of DDD, OO and FP. Even in non functional languages like Java or C#, this combined set of practices helps craft simple and powerful code that reads well and that is very easy to test.
In this talk we will have a closer look at some of these ideas, in the context of domain models inspired from real-world projects. From basic FP hygiene like immutability and closure of operations to more mathematical inspirations from abstract algebra like monoids, we will show how all that translates into beautiful code.
WARNING: This may influence your coding style…
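The ideas above can be sketched briefly: an immutable value object whose operation is closed (Money plus Money yields Money) and forms a monoid with an identity element. The `Money` type below is an illustration of the style, not code from the talk, and uses Python rather than the Java/C# the abstract mentions:

```python
# Sketch of DDD-meets-FP style: an immutable value object whose 'add'
# is closed over the type and forms a monoid with a zero element.

from dataclasses import dataclass

@dataclass(frozen=True)          # immutability: instances cannot be mutated
class Money:
    amount: int                  # minor units (cents) to avoid float issues
    currency: str

    def add(self, other: "Money") -> "Money":
        # Closure of operations: Money + Money -> Money, always.
        if self.currency != other.currency:
            raise ValueError("cannot add different currencies")
        return Money(self.amount + other.amount, self.currency)

    @staticmethod
    def zero(currency: str) -> "Money":
        return Money(0, currency)

# Monoid laws in action: associativity and identity.
a, b, c = Money(100, "EUR"), Money(250, "EUR"), Money(50, "EUR")
assert a.add(b).add(c) == a.add(b.add(c))    # associative
assert a.add(Money.zero("EUR")) == a         # identity element
print(a.add(b))
```

Because the monoid laws hold, sums of `Money` can be folded in any grouping, which makes such domain code both easy to test and safe to parallelize.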
This talk was presented on the first day of I T.A.K.E. 2013 at Bucharest http://itakeunconf.com/
Machine learning has become an important tool in the modern software toolbox, and high-performing organizations are increasingly coming to rely on data science and machine learning as a core part of their business. eBay introduced machine learning to its commerce search ranking and drove double-digit increases in revenue. Stitch Fix built a multibillion dollar clothing retail business in the US by combining the best of machines with the best of humans. And WeWork is bringing machine-learned approaches to the physical office environment all around the world. In all cases, algorithmic techniques started simple and slowly became more sophisticated over time. This talk will use these examples to derive an agile approach to machine learning, and will explore that approach across several different dimensions. We will set the stage by outlining the kinds of problems that are most amenable to machine-learned approaches as well as describing some important prerequisites, including investments in data quality, a robust data pipeline, and experimental discipline. Next, we will choose the right (algorithmic) tool for the right job, and suggest how to incrementally evolve the algorithmic approaches we bring to bear. Most fancy cutting-edge recommender systems in the real world, for example, started out with simple rules-based techniques or basic regression. Finally, we will integrate machine learning into the broader product development process, and see how it can help us to accelerate business results
Keynote 2 - The 20% of software engineering practices that contribute to 80% ... (ESEM 2014)
This presentation will challenge the application of metrics and other software engineering practices in commercial companies that do not have to comply with safety/regulatory standards and thus can choose the SDLC approach that they feel is most appropriate and cost-effective for the intended purpose. Which practices are really applied to make things happen under the tight constraints of time to market and profitability? A snapshot of the "hands-on" situation from the perspective of a large consulting company engaged with many customers in various markets and domains.
Bio: Gualtiero Bazzana is Chairman of ITA-STQB, Head of the Marketing WG for ISTQB and Managing Director of Alten Italia. He has been working in the IT domain for 20 years, with long-lasting experience in the areas of testing, process improvement and quality. He has authored 50+ papers at international conferences on these subjects.
For numerous large enterprises, the alignment of hardware and software processes is critical to managing an Agile environment. Agile Hardware implementations can be put in place by using the same framework as our typical Agile Software Development transformations. Start by assessing the organization’s current state; then plan and prepare by putting together a transition backlog; start execution with training and coaching; spread the cultural shift with change management; and maintain and scale the transformation.
Building and Scaling High Performing Technology Organizations by Jez Humble ... (Agile India)
High performing organizations don't trade off quality, throughput, and reliability: they work to improve all of these and use their software delivery capability to drive organizational performance. In this talk, Jez presents the results from DevOps Research and Assessment's five-year research program, including how continuous delivery and good architecture produce higher software delivery performance, and how to measure culture and its impact on IT and organizational performance. He explains the importance of knowing how (and what) to measure so you focus on what’s important and communicate progress to peers, leaders, and stakeholders. Great outcomes don’t realize themselves, after all, and having the right metrics gives us the data we need to keep getting better at building, delivering, and operating software systems.
More details:
https://confengine.com/agile-india-2019/proposal/8524/building-and-scaling-high-performing-technology-organizations
Conference link: https://2019.agileindia.org
How To Avoid Continuously Delivering Faulty Software (Erika Barron)
As organizations continue to compress development and delivery lifecycles, the risk of regressions, integration errors, and other defects rises. But how can development teams integrate defect prevention strategies into their release cycles to ensure that they're not continuously delivering faulty software? In this presentation, learn the key development testing processes to add to your Continuous Delivery system to reduce the risk of automating the release of software defects.
Mistakes we make_and_howto_avoid_them_v0.12 (Trevor Warren)
This presentation was put together for the CMGA (www.cmga.org.au) meetup in Canberra (ACT), Australia. It's an attempt to share some of my experiences building and delivering systems over the last decade and a half.
Chris Munns, DevOps @ Amazon: Microservices, 2 Pizza Teams, & 50 Million Depl... (TriNimbus)
Keynote presentation from Vancouver's 2016 Canadian Executive DevOps & Cloud Summit on Thursday, May 5th.
Speaker: Chris Munns, Business Development Manager, DevOps at Amazon Web Services
Title: DevOps @ Amazon: Microservices, 2 Pizza Teams, & 50 Million Deploys a Year
SecDevOps is a set of business methodologies, operational procedures, & cultural practices proven to increase security, improve software quality, improve release frequency, & provide immediate insight into organizational exposures.
This presentation was accepted to the ASIA 2018 conference, authored by Thomas Cappetta.
Technical Excellence Doesn't Just Happen - AgileIndy 2016 (Allison Pollard)
The ninth principle from the Agile Manifesto states that technical excellence enhances agility, but when the codebase is ugly and the deadlines are tight, most teams don’t choose to refactor mercilessly, adopt TDD, or evaluate automated testing tools—unless they have the proper support. In our experience working with multiple teams in a single codebase, developers can feel victim to a legacy codebase if only a few people are writing clean code or refactoring; guiding them on how to decrease technical debt while delivering their projects helps "unstuck" their other agile practices. We will talk about the challenges we’ve seen with Product Owners, Managers, and Scrum Masters interacting with teams at various stages of agile+technical excellence and how a focus on technical practices sparked a wider interest in craftsmanship. Learn how you can influence the team towards the right practices while fostering their sense of ownership. Getting serious about technical excellence requires support from technical and non-technical roles, and we’ll share how we partnered as coaches to help an organization through a technical turnaround with some tips for others who need to do the same.
Building an Open Source AppSec Pipeline (Matt Tesauro)
Take the concepts of DevOps and apply them to AppSec and you have an AppSec Pipeline. Allow automation, orchestration and some ChatOps to expand the flow of your AppSec team, since it's not likely to get any bigger.
How to Avoid Continuously Delivering Faulty Software (Perforce)
As organizations continue to compress development and delivery lifecycles, the risk of regressions, integration errors, and other defects rises. But how can development teams integrate defect prevention strategies into their release cycles to ensure that they're not continuously delivering faulty software? In this session, learn the key development testing processes to add to your Continuous Delivery system to reduce the risk of automating the release of software defects.
What are the prerequisites of a successful retrospective? What impediments should be analysed during this meeting? Does the goal of the retrospective influence its format? What are some retrospective formats and what kind of activities do they require for each step? These are some of the questions I will answer during this talk.
Expect a lot of examples which will generate insights on how to create custom retrospectives fit for your team.
Create Software Design with unit testing, build user experience with UX testing, check definition of done with functional testing – all these are my day-to-day activities. Indeed, I am a developer who has found the value of testing to deliver quality software.
In this presentation I share with you how I have come to use tests for: understanding the features, choosing the best user experience design, choosing the best technical solution, implementing the features and test them to create a reliable system.
You will see practical examples of how tools like Jasmine, Spock, Geb are used for the above types of tests. You will see a project with test code and we will discuss how testing can effectively enhance your professional performance.
Story mapping: build better products with a happier team (Mozaic Works)
Once you get it, story mapping is a simple yet very powerful technique. Use it to better define your product and find the weak spots where you need more customer info. Use it to plan and prioritize feature development and show progress to your stakeholders.
My presentation will mainly focus on the product owner’s perspective, but you can even use it to plan your vacation.
Adi Bolboacă: Architecture For Disaster Resistant Systems at I T.A.K.E. Unco... (Mozaic Works)
Aviation has learned how to deal with risks, and we in software can learn from that experience. This talk is about how to apply some aviation concepts to software architecture.
Check out the webinar slides to learn more about how XfilesPro transforms Salesforce document management by leveraging its world-class applications. For more details, please connect with sales@xfilespro.com
If you want to watch the on-demand webinar, please click here: https://www.xfilespro.com/webinars/salesforce-document-management-2-0-smarter-faster-better/
How to Position Your Globus Data Portal for Success: Ten Good Practices (Globus)
Science gateways allow science and engineering communities to access shared data, software, computing services, and instruments. Science gateways have gained a lot of traction in the last twenty years, as evidenced by projects such as the Science Gateways Community Institute (SGCI) and the Center of Excellence on Science Gateways (SGX3) in the US, The Australian Research Data Commons (ARDC) and its platforms in Australia, and the projects around Virtual Research Environments in Europe. A few mature frameworks have evolved with their different strengths and foci and have been taken up by a larger community such as the Globus Data Portal, Hubzero, Tapis, and Galaxy. However, even when gateways are built on successful frameworks, they continue to face the challenges of ongoing maintenance costs and how to meet the ever-expanding needs of the community they serve with enhanced features. It is not uncommon that gateways with compelling use cases are nonetheless unable to get past the prototype phase and become a full production service, or if they do, they don't survive more than a couple of years. While there is no guaranteed pathway to success, it seems likely that for any gateway there is a need for a strong community and/or solid funding streams to create and sustain its success. With over twenty years of examples to draw from, this presentation goes into detail for ten factors common to successful and enduring gateways that effectively serve as best practices for any new or developing gateway.
Enterprise Resource Planning Systems include various modules that reduce any business's workload. Additionally, they organize workflows, which enhances productivity. Here is a detailed explanation of the ERP modules. Going through the points will help you understand how the software is changing the work dynamics.
To know more details here: https://blogs.nyggs.com/nyggs/enterprise-resource-planning-erp-system-modules/
Climate Science Flows: Enabling Petabyte-Scale Climate Analysis with the Eart... (Globus)
The Earth System Grid Federation (ESGF) is a global network of data servers that archives and distributes the planet’s largest collection of Earth system model output for thousands of climate and environmental scientists worldwide. Many of these petabyte-scale data archives are located in proximity to large high-performance computing (HPC) or cloud computing resources, but the primary workflow for data users consists of transferring data and applying computations on a different system. As part of the ESGF 2.0 US project (funded by the United States Department of Energy Office of Science), we developed pre-defined data workflows, which can be run on demand, capable of applying many data reduction and data analysis operations to the large ESGF data archives, transferring only the resultant analysis (e.g. visualizations, smaller data files). In this talk, we will showcase a few of these workflows, highlighting how Globus Flows can be used for petabyte-scale climate analysis.
Globus Compute with IRI Workflows - GlobusWorld 2024 (Globus)
As part of the DOE Integrated Research Infrastructure (IRI) program, NERSC at Lawrence Berkeley National Lab and ALCF at Argonne National Lab are working closely with General Atomics on accelerating the computing requirements of the DIII-D experiment. As part of the work the team is investigating ways to speedup the time to solution for many different parts of the DIII-D workflow including how they run jobs on HPC systems. One of these routes is looking at Globus Compute as a way to replace the current method for managing tasks and we describe a brief proof of concept showing how Globus Compute could help to schedule jobs and be a tool to connect compute at different facilities.
In software engineering, the right architecture is essential for robust, scalable platforms. Wix has undergone a pivotal shift from event sourcing to a CRUD-based model for its microservices. This talk will chart the course of this pivotal journey.
Event sourcing, which records state changes as immutable events, provided robust auditing and "time travel" debugging for Wix Stores' microservices. Despite its benefits, the complexity it introduced in state management slowed development. Wix responded by adopting a simpler, unified CRUD model. This talk will explore the challenges of event sourcing and the advantages of Wix's new "CRUD on steroids" approach, which streamlines API integration and domain event management while preserving data integrity and system resilience.
Participants will gain valuable insights into Wix's strategies for ensuring atomicity in database updates and event production, as well as caching, materialization, and performance optimization techniques within a distributed system.
Join us to discover how Wix has mastered the art of balancing simplicity and extensibility, and learn how the re-adoption of the modest CRUD has turbocharged their development velocity, resilience, and scalability in a high-growth environment.
Prosigns: Transforming Business with Tailored Technology Solutions (Prosigns)
Unlocking Business Potential: Tailored Technology Solutions by Prosigns
Discover how Prosigns, a leading technology solutions provider, partners with businesses to drive innovation and success. Our presentation showcases our comprehensive range of services, including custom software development, web and mobile app development, AI & ML solutions, blockchain integration, DevOps services, and Microsoft Dynamics 365 support.
Custom Software Development: Prosigns specializes in creating bespoke software solutions that cater to your unique business needs. Our team of experts works closely with you to understand your requirements and deliver tailor-made software that enhances efficiency and drives growth.
Web and Mobile App Development: From responsive websites to intuitive mobile applications, Prosigns develops cutting-edge solutions that engage users and deliver seamless experiences across devices.
AI & ML Solutions: Harnessing the power of Artificial Intelligence and Machine Learning, Prosigns provides smart solutions that automate processes, provide valuable insights, and drive informed decision-making.
Blockchain Integration: Prosigns offers comprehensive blockchain solutions, including development, integration, and consulting services, enabling businesses to leverage blockchain technology for enhanced security, transparency, and efficiency.
DevOps Services: Prosigns' DevOps services streamline development and operations processes, ensuring faster and more reliable software delivery through automation and continuous integration.
Microsoft Dynamics 365 Support: Prosigns provides comprehensive support and maintenance services for Microsoft Dynamics 365, ensuring your system is always up-to-date, secure, and running smoothly.
Learn how our collaborative approach and dedication to excellence help businesses achieve their goals and stay ahead in today's digital landscape. From concept to deployment, Prosigns is your trusted partner for transforming ideas into reality and unlocking the full potential of your business.
Join us on a journey of innovation and growth. Let's partner for success with Prosigns.
Listen to the keynote address and hear about the latest developments from Rachana Ananthakrishnan and Ian Foster who review the updates to the Globus Platform and Service, and the relevance of Globus to the scientific community as an automation platform to accelerate scientific discovery.
Navigating the Metaverse: A Journey into Virtual Evolution (Donna Lenk)
Join us for an exploration of the Metaverse's evolution, where innovation meets imagination. Discover new dimensions of virtual events, engage with thought-provoking discussions, and witness the transformative power of digital realms.
May Marketo Masterclass, London MUG May 22 2024.pdf (Adele Miller)
Can't make Adobe Summit in Vegas? No sweat because the EMEA Marketo Engage Champions are coming to London to share their Summit sessions, insights and more!
This is a MUG with a twist you don't want to miss.
In the ever-evolving landscape of technology, enterprise software development is undergoing a significant transformation. Traditional coding methods are being challenged by innovative no-code solutions, which promise to streamline and democratize the software development process.
This shift is particularly impactful for enterprises, which require robust, scalable, and efficient software to manage their operations. In this article, we will explore the various facets of enterprise software development with no-code solutions, examining their benefits, challenges, and the future potential they hold.
GraphSummit Paris - The art of the possible with Graph Technology (Neo4j)
Sudhir Hasbe, Chief Product Officer, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Providing Globus Services to Users of JASMIN for Environmental Data Analysis (Globus)
JASMIN is the UK’s high-performance data analysis platform for environmental science, operated by STFC on behalf of the UK Natural Environment Research Council (NERC). In addition to its role in hosting the CEDA Archive (NERC’s long-term repository for climate, atmospheric science & Earth observation data in the UK), JASMIN provides a collaborative platform to a community of around 2,000 scientists in the UK and beyond, providing nearly 400 environmental science projects with working space, compute resources and tools to facilitate their work. High-performance data transfer into and out of JASMIN has always been a key feature, with many scientists bringing model outputs from supercomputers elsewhere in the UK, to analyse against observational or other model data in the CEDA Archive. A growing number of JASMIN users are now realising the benefits of using the Globus service to provide reliable and efficient data movement and other tasks in this and other contexts. Further use cases involve long-distance (intercontinental) transfers to and from JASMIN, and collecting results from a mobile atmospheric radar system, pushing data to JASMIN via a lightweight Globus deployment. We provide details of how Globus fits into our current infrastructure, our experience of the recent migration to GCSv5.4, and of our interest in developing use of the wider ecosystem of Globus services for the benefit of our user community.
Gamify Your Mind; The Secret Sauce to Delivering Success, Continuously Improv... (Shahin Sheidaei)
Games are powerful teaching tools, fostering hands-on engagement and fun. But they require careful consideration to succeed. Join me to explore factors in running and selecting games, ensuring they serve as effective teaching tools. Learn to maintain focus on learning objectives while playing, and how to measure the ROI of gaming in education. Discover strategies for pitching gaming to leadership. This session offers insights, tips, and examples for coaches, team leads, and enterprise leaders seeking to teach from simple to complex concepts.
Utilocate offers a comprehensive solution for locate ticket management by automating and streamlining the entire process. By integrating with Geospatial Information Systems (GIS), it provides accurate mapping and visualization of utility locations, enhancing decision-making and reducing the risk of errors. The system's advanced data analytics tools help identify trends, predict potential issues, and optimize resource allocation, making the locate ticket management process smarter and more efficient. Additionally, automated ticket management ensures consistency and reduces human error, while real-time notifications keep all relevant personnel informed and ready to respond promptly.
The system's ability to streamline workflows and automate ticket routing significantly reduces the time taken to process each ticket, making the process faster and more efficient. Mobile access allows field technicians to update ticket information on the go, ensuring that the latest information is always available and accelerating the locate process. Overall, Utilocate not only enhances the efficiency and accuracy of locate ticket management but also improves safety by minimizing the risk of utility damage through precise and timely locates.
Tom Gilb - Power to the Programmers @ I T.A.K.E. Unconference 2014, Bucharest
1. ‘Architecture’ for Devs
or
Power to the Programmers
I T.A.K.E. Unconference, Bucharest
#itakeunconf
30th May 2014 Keynote
1400 to 1500
@ImTomGilb
Tom at Gilb dot com
Gilb.com
These slides are at
http://www.gilb.com//file14
Additional stuff
http://tinyurl.com/ITAKEGILB
The Leader of the Revolution
Motto “Join or Die”
Or
“Code or Create,
To determine your fate”
2. Danger
Warning
You will be shown slides with lots of
detail !
Too much to read now !
DON’T EVEN THINK ABOUT IT !!
WHY DO I INSIST ON THIS DETAIL?
I have a message
I have real cases, real experience, real facts
with named people and organizations
I am not feeding you generalized bullshit
I know what I am recommending really
works in practice
You would be a fool to believe a conference
speaker, without this fact-based evidence,
and references
I think you are NOT fools (but I am taking no
chances : ) )
and
Old Men Can be forgetful !
21 July, 2014 Copyright Tom@Gilb.com 2014 2
3. PS If you insist on oversimplified slides
and presentations for people less-intelligent than you
see https://www.youtube.com/watch?v=kOfK6rSLVTA
4. Confessions of a Coder
• I was a programmer (1958-1978),
– But I decided I wanted more power and
influence
• on the quality and usefulness of my work
• I did not want to be part of the 50% totally failed
IT projects
• I wanted my projects to ALWAYS succeed
– And I was tired of being told what to do by
managers and users
• Who did not strike me as blindingly savvy
• So I became a real ‘Software Engineer’
– I did not just change my ‘title’
– I really turned to ENGINEERING
5. Basic ideas: of this talk
• IT Architects are incompetent
– Especially to specify architecture for devs
• http://vimeo.com/user22258446/review/79092608/600e7bd650
• IT Architects are theoretically a necessary good idea
– But since they are totally incompetent
– At managing quality and cost
– We need to learn to live without them
• Iterative incremental delivery (‘Agile’) gives us developers a tool to take the power away from the incompetent ‘Architects’, and from the equally incompetent managers
– And does a much better job for our projects and companies
– AND increases the fun, creativity, and PRIDE in work, for devs
• BUT
– The ‘sprint’ must prove and learn numerically, in several value-and-cost dimensions.
– That is called ‘engineering’
– Software Engineering is a higher level calling for mature devs
– Coding is fun, software engineering is more fun
• Because you are not just in control of the stupid computer
• You are in control of the results people value: and people are challenging
– (that is why you prefer talking to machines, you geek)
Alan Turing
6. Tom, telling 300 IT Architects that they are ridiculous, incompetent, immature, embarrassing, and pompous
(diplomatically, of course!)
7. How?
• Make devs responsible for delivery of the ‘quantified’ critical
requirements (Performance, Qualities, cost, deadline)
• Give them the freedom to decide the right designs
– With immediate responsibility to measure that they are delivering
the results
• Get the ‘unprofessional’ users and customers ‘off their backs’
– Avoid receiving features and stories, which are usually amateur design by people who have no overview, responsibility, or design ability (users and customers, and managers)
• Elevate your talent to becoming a real ‘software ENGINEER’
– With expert coding craftsmanship as your base talent
8. Cases: Raytheon and IBM
use ‘Defect prevention Process’
(DPP, CMM Level 5) to
EMPOWER DEVS TO RADICALLY CHANGE THEIR WORK ENVIRONMENT
9. Software Process Improvement at Raytheon
• Source: Raytheon Report 1995
– http://resources.sei.cmu.edu/library/asset-view.cfm?assetid=12403 (this is a header to the download; tested May 2014)
– Search “Dion & Raytheon”
– http://resources.sei.cmu.edu/asset_files/TechnicalReport/1995_005_001_16415.pdf
• An excellent example of process improvement driven by measurement of improvement
• Main Motor:
– “Document Inspection”, Defect Detection
• Main Driver:
– “Defect Prevention Process” (DPP)
10. Cost of Quality over Time: Raytheon 95
[Chart: total cost of quality from end 1988 to end 1994. Cost of rework (non-conformance) fell from 43% of effort at the start to about 5%, against the cost of conformance; one bad process change and the individual learning curve are marked on the curve.]
13. Examples of Process Improvements: Raytheon 95
Process Improvements Made
• Erroneous interfaces during integration and test:
– Increased the detail required for interface design during the requirements analysis phase and preliminary design phase
– Increased thoroughness of inspections of interface specifications
• Lack of regression test repeatability:
– Automated testing
– Standardized the tool set for automated testing
– Increased frequency of regression testing
• Inconsistent inspection process:
– Established control limits that are monitored by project teams
– Trained project teams in the use of statistical process control
– Continually analyze the inspection data for trends at the organisation level
• Late requirements updates:
– Improved the tool set for maintaining requirements traceability
– Confirm the requirements mapping at each process phase
• Unplanned growth of functionality during Requirements Analysis:
– Improved the monitoring of the evolving specifications against the customer baseline
– Continually map the requirements to the functional proposal baseline to identify changes, in addition to the passive monitoring of code growth
– Improved requirements, design, cost, and schedule tradeoffs to reduce impacts
14. Overall Product Quality: Raytheon 95
Defect Density Versus Time
15. Return On Investment
• $7.70 per $1 invested at Raytheon
• Sell your improvement program to top
management on this basis
• Set a concrete target for it
– PLAN [Our Division, 2 years hence] 8 to 1
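The slide's "sell it on this basis, set a concrete target" advice is simple ratio arithmetic; a minimal sketch using Raytheon's reported $7.70-per-$1 figure (the helper name and the comparison against the 8:1 PLAN target are illustrative, not from the talk):

```python
def roi(benefit: float, cost: float) -> float:
    """Return-on-investment ratio: value returned per unit invested."""
    if cost <= 0:
        raise ValueError("cost must be positive")
    return benefit / cost

# Raytheon's reported result: $7.70 returned per $1 invested
raytheon = roi(7.70, 1.00)

# A concrete PLAN-style target, as the slide suggests: 8 to 1, two years hence
target = 8.0
print(f"current ROI {raytheon}:1, target {target}:1, met: {raytheon >= target}")
```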
17. What’s Going on Here?
• 1,000 programmers
– Later joined by 1,000 merged new programmers
– Are
• Analyzing their own bugs and spec defects
• Suggesting their own work environment changes
• And reducing their 43% rework by 10 X
• Power has been delegated to the
programmers
18. Background 1970-1980
MANAGERS FAIL
• Michael Fagan and Ron Radice co-invent
‘Software Inspection’
– The intent was to collect data on bugs and defects
– Use it to find frequent common causes
– To improve development processes
– The attitude was explicitly
• ‘managers should manage’ (MEF to TsG)
– THEY FAILED TO GET REAL PROCESS
IMPROVEMENT
19. 1980
The ‘Troops’ succeed, where the Generals Failed
• Robert Mays and Carol L. Jones, at IBM Research
Triangle Park, NC
• Invent ‘Defect Prevention Process’
• Major idea:
– Delegate power to devs to
• Analyze their OWN defects
• And fix their OWN process
• THAT WORKED
20. Improving the Reliability Attribute
Primark, London (Gilb Client)
see case study: Dick Holland, “Agent of Change”, from Gilb.com
Using Inspections, Defect Prevention, and Planguage for Management Objectives
21. Positive Motivation: Personal Improvement
[Chart: inspections of “Gary’s” designs at McDonnell-Douglas, defects/page per inspection from February to April: 80 majors found at first (an estimated ~160-240 exist), then 40, 23, and finally 8.]
“We find an hour of doing Inspection is worth ten hours of company classroom training.” (a McDonnell-Douglas line manager)
“Even if Inspection did not have all the other measurable quality and cost benefits which we are finding, then it would still pay off for the training value alone.” (a McDonnell-Douglas Director)
22. Half-day Inspection Economics (Gilb@acm.org)
Prevention + pre-test detection is the most effective and efficient
• Prevention data based on state-of-the-art prevention experiences (IBM RTP); others (Space Shuttle, IBM Systems Journal 1-95): 95%+ (99.99% in fixes)
• Cumulative Inspection detection data based on state-of-the-art Inspection, in an environment where prevention is also being used (IBM MN, Sema UK, IBM UK)
[Chart: Mays & Jones report 50% of defects prevented (IBM, 1990); Mays 1993 reports 70% prevented. Inspection detects 70% of the remainder per pass, up to a 95% cumulative detection limit (state of the art); the rest is “detected cheaply” in test or reaches use.]
23. Half-day Inspection Economics (Gilb@acm.org)
IBM MN & NC DP Experience
• 2,162 DPP actions implemented between Dec. 1991 and May 1993 (30 months) <- Kan
• RTP: about 182 per year for 200 people <- Mays 1995
– 1,822 suggested over ten years (85-94)
– 175 test related
• RTP, a 227-person org <- Mays slides
– 130 actions @ 0.5 work-years
– 34 causal analysis meetings @ 0.2 work-years
– 19 action team meetings @ 0.1 work-years
– Kickoff meeting @ 0.1 work-years
– TOTAL costs: 1% of org. resources
• ROI: DPP 10:1 to 13:1, internal 2:1 to 3:1
• Defect rates at all stages 50% lower with DPP
24. Summary DPP
Managers: 0, Devs: 1
• Devs are better at managing their own work environment than their managers are
• ‘Directors’ should NOT design the work environment
• Devs should ‘evolve the environment’
– through practical deep personal insights,
– and take responsibility for their own work situation
29. Real Example of 1 of the 25 Quality Requirements
Usability.Productivity (taken from Confirmit 8.5 development)
Scale for quantification: Time in minutes to set up a typical specified Market Research report.
Past Level [Release 8.0]: 65 mins.
Tolerable Limit [Release 8.5]: 35 mins.
Goal [Release 8.5]: 25 mins.
Note: end result was actually 20 minutes.
Meter [Weekly Step]: Candidates with Reportal experience, and with knowledge of MR-specific reporting features, performed a set of predefined steps to produce a standard MR Report.
Trond Johansen
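A quantified requirement in this style maps naturally onto a data structure that a weekly delivery cycle can measure against; a minimal sketch (the class and method names are illustrative, not part of Gilb's Planguage tooling):

```python
from dataclasses import dataclass

@dataclass
class QualityRequirement:
    """A Planguage-style quantified quality requirement (lower is better)."""
    name: str
    scale: str        # defined unit of measure
    past: float       # baseline level from the previous release
    tolerable: float  # worst acceptable level for this release
    goal: float       # target level for this release

    def status(self, measured: float) -> str:
        """Classify one Meter measurement against the defined levels."""
        if measured <= self.goal:
            return "goal met"
        if measured <= self.tolerable:
            return "tolerable"
        return "unacceptable"

usability = QualityRequirement(
    name="Usability.Productivity",
    scale="minutes to set up a typical specified Market Research report",
    past=65, tolerable=35, goal=25,
)

# The actual end result was 20 minutes:
print(usability.status(20))  # prints "goal met"
```

Each weekly Evo step can then be judged numerically against the Goal and Tolerable levels, which is exactly the kind of multidimensional feedback the talk calls "engineering".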
30. Shift: from Function to Quality
• Our new focus is on the day-to-day operations of our Market Research users,
– not a list of features that they might or might not like (50% are never used!)
– We KNOW that increased efficiency, which leads to more profit, will please them.
– The 45 minutes actually saved, times thousands of customer reports,
• = big $$$ saved
• After one week we had defined more or less all the requirements for the next version (8.5) of Confirmit.
33. EVO Plan: Confirmit 8.5 in Evo Step Impact Measurement
4 product areas were attacked in all: 25 qualities concurrently, over one quarter of a year. Total development staff = 13.
[Chart: number of Evo steps per product area: 9, 8, 3, 3]
34. Confirmit Evo Weekly Value Delivery Cycle
35. Evo's Impact on Confirmit Product Qualities, 1st Quarter
• Only 5 highlights of the 25 impacts are listed here (Release 8.5)
36. Initial Experiences and Conclusions
• EVO has resulted in
– increased motivation and enthusiasm amongst developers;
– it opens up for empowered creativity
• Developers
– embraced the method and saw the value of using it,
– even though they found parts of Evo difficult to understand and execute
Trond Johansen
38. Initial Perceived Value of the New Release (Base: 73 people)
39. Evo's Impact on Confirmit 9.0 Product Qualities
Results from the second quarter of using Evo (1/2)

Product quality | Customer value | Description
Productivity | Time reduced by 38% | Time in minutes for a defined advanced user, with full knowledge of 9.0 functionality, to set up a defined advanced survey correctly.
Intuitiveness | Probability increased by 175% | Probability that an inexperienced user can intuitively figure out how to set up a defined Simple Survey correctly.
Productivity | Time reduced by 83%, and error tracking increased by 25% | Time (in minutes) to test a defined survey and identify 4 inserted script errors, starting from when the questionnaire is finished to the time testing is complete and ready for production. (Defined Survey: complex survey, 60 questions, comprehensive JScripting.)
40. Evo's Impact on Confirmit 9.0 Product Qualities
Results from the second quarter of using Evo (2/2)

Product quality | Customer value | Description
Performance | Number of responses increased by 1400% | Number of responses a database can contain if the generation of a defined table should run in 5 seconds.
Scalability | Number of panelists increased by 700% | Ability to accomplish a bulk update of X panelists within a timeframe of Z seconds.
Performance | Number of panelists increased by 1500% | Max number of panelists that the system can support without exceeding a defined time for the defined task, with all components of the panel system performing acceptably.
48. Code Quality – "Green" Week
Empowered Creativity: for Maintainability
• Instead of refactoring 1 day a week (which failed),
• let the dev teams engineer, using 'agile' (Evo): design dev quality into their own process,
• to meet their own internal stakeholder quality objectives,
• 1 week a month.
Internal quality objectives included:
– Speed
– Maintainability
– NUnit Tests
– Peer Tests
– TestDirector Tests
– Robustness.Correctness
– Robustness.BoundaryConditions
– ResourceUsage.CPU
– Maintainability.DocCode
– SynchronizationStatus
49. Same Process as for their External (User, Customer) Stakeholders
• 1. Define a better-quality dev and testing environment QUANTITATIVELY
– scale of measure and Goal level
• 2. Figure out, brainstorm ANY systems engineering design or architecture to get to their self-determined improvement goals
– not just code refactoring, but any tools, processes, motivations, hardware etc. that WORK
• 3. Implement, measure
– keep the stuff that works
– dump the stuff that does not MEASURABLY work
• 4. Keep on truckin' (monthly, forever, or …)
– DONE is when devs have no further improvement needs
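The four steps above amount to a measurable keep-or-dump loop. A minimal sketch follows; all design names, impact estimates, and measurements are hypothetical, invented for illustration:

```python
def improvement_cycle(baseline, candidate_designs, measure):
    """One improvement cycle, as in steps 1-4 above: implement the design
    that promises the most, measure it, and keep it only if it MEASURABLY
    beats the baseline on the agreed scale (lower = better here)."""
    best = max(candidate_designs, key=lambda d: d["estimated_impact"])
    measured = measure(best)
    if measured < baseline:            # it measurably works: keep it
        return best["name"], measured
    return None, baseline              # dump it; the baseline stands

# Hypothetical candidate designs for cutting report-setup time (minutes).
designs = [
    {"name": "wizard-based setup", "estimated_impact": 30},
    {"name": "template library",   "estimated_impact": 15},
]

# Pretend measurement: the chosen design brings setup time from 65 to 40 min.
kept, level = improvement_cycle(65, designs, measure=lambda d: 40)
print(kept, level)  # wizard-based setup 40
```

The essential design choice, matching step 3, is that the decision gate is the measurement, not the estimate: an unmeasured improvement is treated as no improvement.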
50. The Monthly 'Green Week'
Weeks 1-3 (for the User) and Week 4 (for the Developer) follow the same cycle:
• Select a Goal
• Brainstorm designs
• Estimate design impact/cost
• Pick the best design
• Implement the design
• Test the design
• Update progress toward the Goal
51. Conclusion: Technical Debt
• Devs, acting like real software engineers, can engineer technical debt reduction.
• It is NOT about refactoring and patterns,
– though if they work measurably best, we can use them.
– But did you ever see a measurement, or are they just belief systems?
• It is about mature teams, with common goals and practical experience, taking charge of their own fate.
• If management resists, I suggest going on strike!
– Why should we suffer agonizing technical debt, wasting 50% or more of our work hours?
– Surely we have better things to do!
55. Quinnan: IBM FSD Cleanroom
Dynamic Design to Cost
Quinnan describes the process control loop used by IBM FSD to ensure that cost targets are met.
'Cost management. . . yields valid cost plans linked to technical performance. Our practice carries cost
management farther by introducing design-to-cost guidance. Design, development, and managerial practices
are applied in an integrated way to ensure that software technical management is consistent with cost
management. The method [illustrated in this book by Figure 7.10] consists of developing a design, estimating
its cost, and ensuring that the design is cost-effective.' (p. 473)
He goes on to describe a design iteration process trying to meet cost targets by either redesign or by
sacrificing 'planned capability.' When a satisfactory design at cost target is achieved for a single increment,
the 'development of each increment can proceed concurrently with the program design of the others.'
'Design is an iterative process in which each design level is a refinement of the previous level.' (p. 474)
It is clear from this that they avoid the big bang cost estimation approach. Not only do they iterate in
seeking the appropriate balance between cost and design for a single increment, but they iterate through a
series of increments, thus reducing the complexity of the task, and increasing the probability of learning from
experience, won as each increment develops, and as the true cost of the increment becomes a fact.
'When the development and test of an increment are complete, an estimate to complete the remaining
increments is computed.' (p. 474)
Source: Robert E. Quinnan, 'Software Engineering Management Practices', IBM Systems Journal, Vol. 19, No. 4, 1980, pp. 466-477
This text is cut from Gilb: The Principles of Software Engineering Management, 1988
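Quinnan's control loop (estimate the increment's cost, then either redesign or sacrifice planned capability until it fits the cost target) can be sketched as follows. Every function and field name here is illustrative, not taken from the paper:

```python
def design_to_cost(capabilities, cost_target, estimate_cost, redesign):
    """Sketch of Quinnan's loop: while the increment's estimated cost
    exceeds the target, first try a cheaper redesign; failing that,
    sacrifice the lowest-value planned capability."""
    design = list(capabilities)
    while design and estimate_cost(design) > cost_target:
        cheaper = redesign(design)
        if estimate_cost(cheaper) < estimate_cost(design):
            design = cheaper
        else:
            # sacrifice the planned capability with the least value
            design.remove(min(design, key=lambda c: c["value"]))
    return design

# Hypothetical planned capabilities with made-up cost/value figures.
caps = [
    {"name": "A", "cost": 6, "value": 9},
    {"name": "B", "cost": 5, "value": 7},
    {"name": "C", "cost": 4, "value": 2},
]

fitted = design_to_cost(
    caps, cost_target=11,
    estimate_cost=lambda d: sum(c["cost"] for c in d),
    redesign=lambda d: d,   # assume no cheaper redesign is found
)
print([c["name"] for c in fitted])  # ['A', 'B']
```

The loop terminates with a design whose estimated cost meets the target, echoing Quinnan's point that cost is controlled per increment rather than by one big-bang estimate.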
56. Quinnan: IBM FSD Cleanroom Dynamic Design to Cost
(Same text as slide 55, highlighting:) 'of developing a design, estimating its cost, and ensuring that the design is cost-effective'
57. Quinnan: IBM FSD Cleanroom Dynamic Design to Cost
(Same text as slide 55, highlighting:) 'iteration process trying to meet cost targets by either redesign or by sacrificing "planned capability"'
58. Quinnan: IBM FSD Cleanroom Dynamic Design to Cost
(Same text as slide 55, highlighting:) 'Design is an iterative process'
59. Quinnan: IBM FSD Cleanroom Dynamic Design to Cost
(Same text as slide 55, highlighting:) 'but they iterate through a series of increments, thus reducing the complexity of the task, and increasing the probability of learning from experience'
60. Quinnan: IBM FSD Cleanroom Dynamic Design to Cost
(Same text as slide 55, highlighting:) 'an estimate to complete the remaining increments is computed'
68. The Revolution is Here
• Programmers of the world, unite!
69. For a free underground revolutionary handbook for the Coder -> Software Engineer revolution
Email Tom @ Gilb . Com with subject "Revo"