This is my story of lessons learned on why our improvement efforts fail... I had a great team. We were disciplined about best practices and spent tons of time on improvements. Then I watched my team slam into a brick wall. We brought down a fully-ramped semiconductor factory three times in a row, then couldn't ship again for a year.
Despite our best efforts with CI, unit testing, design reviews, and code reviews, we lost our ability to understand the system. I discovered our mistakes weren't caused by technical debt. Most of the problems were caused by human factors. We failed to improve because we didn't solve the right problems.
To learn, we need a feedback loop. To improve, we need a feedback loop with a goal.
There are five different ways our project feedback loop can break:
* **Broken Target** - Our definition of "better" is broken.
* **Broken Visibility** - We don't see the pain, so we take no action.
* **Broken Clarity** - We don't understand what's causing the pain.
* **Broken Awareness** - We don't know how to avoid the pain.
* **Broken Focus** - We see the pain, but our attention is focused on something else.
Find out how to repair the broken feedback loops on your software project.
Stop Getting Crushed By Business Pressure (Arty Starr)
The document discusses strategies for measuring and reducing "pain" or friction in software development projects. It describes tracking sources of unexpected behavior and long troubleshooting times to identify the biggest problems causing pain. Common causes of pain include human errors and factors that make code harder to understand over time. The document advocates measuring and categorizing specific pain points, identifying the largest problems, becoming a risk translator to communicate issues to managers, and refactoring organization structures to improve feedback when problems arise.
Since the dawn of software development, we've struggled with a huge disconnect between the management world and the engineering world. We try to explain our problems in terms of "technical debt", but somehow the message seems to get lost in translation, and we drive our projects into the ground, over and over again.
What if we could detect the earliest indicators of a project going off the rails, and had data to convince management to take action? What if we could bridge this communication gap once and for all?
In this session, we'll focus on a key paradigm shift for how we can measure the human factors in software development, and translate the "friction" we experience into explicit risk models for project decision-making.
The document discusses measuring developer experience (DX) when working with Gradle. It proposes measuring "pain" caused by friction during the idea flow process of writing and troubleshooting code. Examples are given of mapping idea flow to identify sources of pain like learning unfamiliar code, assumptions causing rework, and troubleshooting unexpected behavior. Reducing friction in the software supply chain is important as most software issues occur in dependencies outside a developer's control. Pilot projects aim to collect standardized pain data across the community to help partners reduce friction and improve overall DX.
What if we could measure the indirect costs of pain building up on a software project? What if we could measure the loss of productivity, the escalating costs and risks, and could steer our projects with a data-driven feedback loop?
By measuring the friction in “Idea Flow”, the flow of ideas between the developer and the software, we can create a data-driven feedback loop for learning what works. Rather than making decisions based on anecdote and gut feel, we can start driving our improvement decisions with real data.
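To make this concrete, here is a minimal sketch of what aggregating friction data might look like. The event categories and field names are illustrative assumptions, not the actual Idea Flow schema:

```python
from collections import defaultdict
from dataclasses import dataclass

# One friction event: a span of disrupted progress, tagged with a
# suspected cause. Field names here are hypothetical.
@dataclass
class FrictionEvent:
    task: str
    cause: str       # e.g. "unfamiliar code", "flaky test", "bad assumption"
    minutes: float   # time spent confused or troubleshooting

def biggest_pain_sources(events: list[FrictionEvent]) -> list[tuple[str, float]]:
    """Total friction time per cause, largest first."""
    totals: dict[str, float] = defaultdict(float)
    for e in events:
        totals[e.cause] += e.minutes
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

events = [
    FrictionEvent("PAY-101", "unfamiliar code", 95),
    FrictionEvent("PAY-102", "flaky test", 40),
    FrictionEvent("PAY-103", "unfamiliar code", 120),
]
for cause, minutes in biggest_pain_sources(events):
    print(f"{cause}: {minutes:.0f} min")
```

Ranking causes by total friction time is what lets the data, rather than gut feel, nominate the next improvement.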
Data-Driven Software Mastery is about learning and improving faster than ever.
Find out how you can:
• Identify the biggest causes of productivity loss on your software project
• Avoid spending tons of time solving the wrong problems
• Collaborate with other industry professionals in the art of data-driven software mastery
Idea Flow gives us a universal language for describing our experience, so we can share the patterns and principles of what works. With a feedback loop, we can run real experiments!
Idea Flow turns the development community into a scientific community.
There’s a huge disconnect between the business world and the engineering world that drives our software projects into the ground. We rewrite our software over and over again, not because we lack the engineering skills to build great software, but because we fail to communicate, make decisions in ignorance, and don’t adapt when our current strategy is obviously failing.
What if we could measure the indirect costs of pain building up on a software project? What if we could measure the loss of productivity, the escalating costs and risks, and could steer our projects with a data-driven feedback loop?
Visibility changes everything. With visibility, we can bridge the gap between the business world and the engineering world, and get everyone pulling in the same direction.
Find out how you can:
1. Identify the biggest causes of productivity loss on your software project
2. Translate the world of developer pain into explicit costs and risks
3. Collaborate with other industry professionals in the art of data-driven software mastery
Let's break down the challenges and learn our way to success, one small victory at a time.
Speaker: Janelle Klein
Janelle is an NFJS Tour Speaker and author of Idea Flow: How to Measure the PAIN in Software Development, a modern strategy for systematically optimizing software productivity with a data-driven feedback loop.
Once we make our pain visible with Idea Flow Mapping, we've got a data-driven feedback loop to learn what works. Objective data enables us to do something we've never been able to do before in our industry: science. This talk is about how to do science in software development.
The Lean Startup community has pioneered the art of everyday science to reduce the risk of building the wrong product by running customer experiments to learn what works. By mapping these same basic scientific principles to technical risk management, we can run experiments to learn our way to AWESOME!
In this talk we'll cover:
How science is used in the Lean Startup world to run business model experiments
How science is used in the Lean Manufacturing world to support process control & supply chain optimization
How we can apply science in software development to systematically learn what works (a minimal sketch of one such experiment follows this list).
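As a rough illustration of that last point, here is a minimal sketch of one such experiment: comparing troubleshooting time per task before and after an improvement, with a simple permutation test. The numbers are invented and the statistical recipe is one reasonable choice, not a method prescribed by the talk:

```python
import random

# Hypothetical data: troubleshooting minutes per task, sampled before
# and after an improvement (say, adding contract tests).
before = [95, 40, 120, 60, 30, 180, 75]
after = [35, 50, 20, 45, 60, 25, 40]

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(before) - mean(after)

# Permutation test: how often does randomly relabeling the samples
# produce a difference at least this large by chance alone?
pooled = before + after
hits, trials = 0, 10_000
for _ in range(trials):
    random.shuffle(pooled)
    if mean(pooled[:len(before)]) - mean(pooled[len(before):]) >= observed:
        hits += 1
print(f"observed improvement: {observed:.1f} min, p ~= {hits / trials:.3f}")
```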
If you want to start learning and improving faster than ever before, you won't want to miss this talk.
What makes software development complex isn't the code, it's the humans. The most effective way to improve our capabilities in software development is to better understand ourselves.
In this talk, I'll introduce a conceptual model for human interaction, identity, culture, communication, relationships, and learning based on the foundational model of Idea Flow. If you were to write a simulator to describe the interaction of humans, this talk would describe the architecture.
Learn how to understand the humans on your team and fix the bugs in communication, by thinking about your teammates like code!
I'm not a scientist or a psychologist. These ideas are based on a combination of personal experience, reading lots of cognitive science books, and a couple years of running experiments on developers. As I struggled through the challenges of getting a software concept from my head to another developer's head (interpersonal Idea Flow), I learned a whole lot about human interaction.
As software developers, we have to work together, think together, and solve problems together to do our jobs. Code? We get it. Humans? WTF?!
Fortunately, humans are predictably irrational, predictably emotional, and predictably judgmental creatures. Of course those pesky humans will always do a few unexpected things, but once we know the algorithm for peace and harmony among humans, we can start debugging the communication problems on our team.
Identify Development Pains and Resolve Them with Idea Flow (TechWell)
With the explosion of new frameworks, a mountain of automation, and our applications distributed across hundreds of services in the cloud, the level of complexity in software development is growing at an insane pace. With increased complexity comes increased costs and risks. When diagnosing unexpected behavior can take days, weeks, or sometimes months, all while our release is on the line, our projects plunge into chaos. In the invisible world of software development, how do we identify what's causing our pain? How do we escape the chaos? Janelle Klein presents a novel approach to measuring the chaos, identifying the causes, and systematically driving improvement with a data-driven feedback loop. Rather than measuring the problems in the code, Janelle suggests measuring the "friction in Idea Flow", the time it takes a developer to diagnose and resolve unexpected confusion, which disrupts the flow of progress during development. With visibility of the symptoms, we can identify the cause—whether it's bad architecture, collaboration problems, or technical debt. Janelle discusses how to measure Idea Flow, why it matters, and the implications for our teams, our organizations, and our industry.
What if we could measure the indirect costs of pain building up on a software project? What if we could measure the effects of learning curves, collaboration pain, and problems building up in the code?
We could:
• Identify the highest leverage opportunities for improvement
• Make the case to management that budget should be allocated for a solution
• Lead the organization in making better decisions, with a data-driven feedback loop to guide the way
Several years ago, I stumbled into a solution for measuring the growing “friction” in developer experience. Visibility turned my world upside-down.
We've been trying to explain the pain of Technical Debt for generations, but we've never been able to measure it. Visibility introduces a whole new world of possibilities.
In this talk, I'll show you what I'm measuring and how exactly I'm measuring it; then we'll talk through the implications for our teams, our organizations, and our industry.
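As a rough sketch of the mechanics (not the exact instrumentation from the talk), friction can be derived from a timestamped event log by summing the spans between noticing unexpected behavior and resolving it. The event names are hypothetical:

```python
from datetime import datetime, timedelta

# Hypothetical event log: (timestamp, event). Friction is the time
# between "unexpected_behavior" (confusion starts) and "resolved".
log = [
    ("2016-03-01 09:00", "start_task"),
    ("2016-03-01 09:40", "unexpected_behavior"),
    ("2016-03-01 11:10", "resolved"),
    ("2016-03-01 13:05", "unexpected_behavior"),
    ("2016-03-01 13:20", "resolved"),
]

def friction_minutes(log) -> float:
    total, start = timedelta(), None
    for ts, event in log:
        t = datetime.strptime(ts, "%Y-%m-%d %H:%M")
        if event == "unexpected_behavior":
            start = t
        elif event == "resolved" and start is not None:
            total += t - start
            start = None
    return total.total_seconds() / 60

print(f"friction: {friction_minutes(log):.0f} min")  # 90 + 15 = 105 min
```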
We can identify the highest leverage improvement opportunities and steer our projects with a data-driven feedback loop.
We can break down the "wall of ignorance" between developers and management by defining an explicit language for managing technical risk.
We can teach the art of software development with a data-driven feedback loop and codify our knowledge into sharable decision principles.
We can revolutionize our business accounting methods to take the pain of software development into account, so the costs and risks are visible at the highest levels of the organization.
We can conquer the challenges across the software industry by working together, learning together, and sharing our knowledge with the world.
With visibility, we can start a revolution in data-driven learning.
Bringing Science to Software Development (Arty Starr)
Twenty-five years ago, Peter Senge wrote “The Fifth Discipline”, considered the seminal text on how to build a learning organization. With the benefits obvious and the recipe for success laid out, why, twenty-five years later, don't we see more learning organizations?
As Ash Maurya pointed out in his new book, Scaling Lean, “The goal isn't learning, the goal is traction.” Without a process that helps us turn learning into momentum, a culture of learning gets us nowhere. Without a strategy to overcome the challenges of distributed decision-making, we still make most decisions in ignorance.
Let's dust off these old ideas in light of all the discoveries we've made over the last decade in Lean Startup, Agile, and Continuous Delivery.
What are the critical elements that are missing in our organizations that prevent us from building a learning organization? What are the key obstacles to success?
In this talk, we'll break down the concept of a learning organization into discrete system components and analyze the requirements like engineers. Then we'll discuss a strategy for overcoming the challenges and iteratively transforming our organizations into learning organizations. From the building blocks of culture to the design of organizational architecture, we'll build a roadmap for learning how to learn together.
Want to learn your way to being an AWESOME company? Learn how to become a learning organization.
Deck for the Global Scrum Gathering in Austin, TX on May 22, 2019.
Summary:
Sometimes organizations going through an agile transformation complain that they aren't getting the benefits they expected, especially when it comes to quality and sustaining their pace of delivery. One possible reason is that insufficient attention has been given to the technical practices that support the agile values and principles. One of the big problems I have seen is development teams not doing the engineering practices, and managers de-emphasizing or "not allowing" developers to do them. We need to renew the emphasis on agile engineering practices and embrace the ideas of software craftsmanship; without this, agility will suffer. Join the session as we talk about the relationship between Agile development and code quality, and how lack of technical excellence impacts maintainability and time to market. Then we'll review some agile engineering practices and recommendations on how to get started.
Learning Objectives:
• What clean code is
• A description of technical practices
• Why lack of technical excellence can negatively impact the team's ability to sustain their delivery pace
The Rationale for Continuous Delivery by Dave Farley (Bosnia Agile)
The production of software is a complex, collaborative process that stretches our ability as human beings to cope with its demands.
Many people working in software development spend their careers without seeing what good really looks like.
Our history is littered with inefficient processes creating poor quality output, too late to capitalise on the expected business value. How have we got into this state? How do we get past it? What does good really look like?
Continuous Delivery changes the economics of software development for some of the biggest companies in the world, whatever the nature of their software development. Find out how and why.
What We Learned from Three Years of Sciencing the Crap Out of DevOps (SeniorStoryteller)
This document summarizes research from three years of studying DevOps practices. Some key findings include:
- Continuous delivery practices like reducing lead time and increasing release frequency are correlated with higher IT performance. However, the choice of tools, such as configuration management tools, is not.
- Ineffective testing practices include developers not creating the tests and test environments being difficult to reproduce. Having QA primarily create tests, however, did not show up as ineffective.
- While managing work-in-progress is thought to be important, the correlation between WIP and IT performance is actually negligible.
- DevOps culture and practices around information sharing and collaboration are valid constructs that are predictive of both IT and organizational performance. But data testing is needed to validate assumptions.
Testing for Cognitive Bias in AI Systems (Peter Varhol)
The document discusses how machine learning systems can produce biased results based on issues with the training data used, and provides examples of how biases have emerged in commercial AI systems. It then outlines approaches for testing machine learning systems to identify potential biases, including understanding the training data, defining objective success criteria, and testing with diverse edge cases. The challenges of addressing biases that emerge from limitations in the data or human decisions are also examined.
How To (Not) Open Source - Javazone, Oslo 2014 (gdusbabek)
Releasing an open source project while maintaining a shipping product is hard! Different behaviors, attitudes, and actions can help or hinder your cause, and they are not always obvious.
The Blueflood distributed metrics engine was released as open source software by Rackspace in August 2012. In the succeeding months the team had to strike a manageable balance between the challenges of growing a community, being good open source stewards, and maintaining a shipping product for Rackspace. Find out what worked, what did not work, and the lessons that can be applied as you endeavor to take your project out into the open.
In this presentation you will learn about strategies for releasing open source products, pitfalls to avoid, and the potential benefits of moving more of your development out in the open.
We have also made a few realizations about the community growing up around metrics. It is still young, and there are problems that come with that youth. I'll talk about some things we can do to make a better software ecosystem.
This document discusses the rationale for adopting continuous delivery practices in software development. It summarizes several studies that found high rates of project failures and benefits not being realized from traditional development approaches. Continuous delivery is presented as an approach that can help address these issues by focusing on rapid, reliable, and automated software releases. Case studies are provided of organizations like Google, Amazon, and HP that have successfully implemented continuous delivery at large scales. Adopting these practices is associated with benefits like increased throughput, reliability, innovation, and business performance.
RecSysOps: Best Practices for Operating a Large-Scale Recommender System (Ehsan38)
Ensuring the health of a modern large-scale recommendation system is a very challenging problem. To address this, we need to put proper logging and sophisticated exploration policies in place, develop ML-interpretability tools, or even train new ML models to predict and detect issues in the main production model. In this talk, we shine a light on this less-discussed but important area and share some of the best practices, called RecSysOps, that we've learned while operating our increasingly complex recommender systems at Netflix. RecSysOps is a set of best practices for identifying issues and gaps, as well as diagnosing and resolving them, in a large-scale machine-learned recommender system. RecSysOps helped us to 1) reduce production issues, 2) increase recommendation quality by identifying areas for improvement, and 3) bring new innovations to our members faster, by enabling us to spend more of our time on innovation and less on debugging and firefighting.
https://dl.acm.org/doi/10.1145/3460231.3474620
Dealing with Estimation, Uncertainty, Risk, and Commitment (TechWell)
Here are three key uncertainties that are often important for software projects:
1. Requirements uncertainty - Unclear or changing requirements can introduce significant risk. Getting requirements right up front reduces later changes.
2. Technical uncertainty - The complexity of the technical solution, unproven technologies, and integration risks can all increase uncertainty. Spikes or prototypes help reduce technical risk.
3. Resource uncertainty - Not knowing if the necessary skills and staff will be available when needed can jeopardize a project. Ensuring resources are committed reduces this risk.
Focusing on these top uncertainties early helps establish a realistic plan and reduces the risk of cost and schedule overruns. Other risks, like market changes or third-party risks, are also important to evaluate based on the specifics of each project.
Trouble-Free Troubleshooting, SPS JHB 2019 (Ian Campbell)
The document provides instructions for attendees of an event hosted by SPS Events in Johannesburg, South Africa. It notes that session schedules may not be printed for all attendees and can be found by session room doors or online. Attendees are asked to provide feedback on sessions and to stay for the prize giving at the end of the day. They are also encouraged to interact with sponsors and speakers, take selfies with speakers to enter a photo competition, and share their learnings on social media using the #SPSJHB hashtag.
The document discusses agile testing and bug prevention. It advocates for embedding testers within development teams to focus on prevention rather than detection of bugs. The ideal approach involves continuous testing parallel to development with the entire team involved in testing.
This document appears to be a slide presentation on DevOps practices and culture. Some key points discussed include:
- High-performing IT organizations are twice as likely to exceed goals in areas like profitability and customer satisfaction.
- DevOps focuses on continuous delivery, quality, lean processes, effective collaboration, and a culture of learning from failures.
- Culture can be measured and influenced by providing employees the tools and training to do their jobs successfully.
- Adopting DevOps practices may lead to improved lead times, release frequency, change fail rates, and service restoration times.
Good project from scratch - from developer's point of view (Paweł Lewtak)
Slides for my talk at PHPExperience 2018 in São Paulo.
It's about 10 things I believe are important in order to have a successful long-term IT project.
SecureWorld: Security is Dead, Rugged DevOps 1f (Gene Kim)
This document provides an introduction to a presentation by Joshua Corman and Gene Kim on Rugged DevOps. It includes brief biographies of the presenters and outlines some of the key topics to be covered, including how security is evolving from a separate function to an integrated part of rapid software development. The presentation will explore how organizations can adopt practices like DevOps to help break the chronic conflict between rapid innovation and stable operations.
A Rapid Introduction to Rapid Software Testing (TechWell)
This document provides a summary of a presentation on Rapid Software Testing. The presentation was given by Michael Bolton of DevelopSense and covered the methodology and mindset of rapid software testing. It emphasizes testing software expertly under uncertainty and time pressure. The presentation defines rapid testing as testing more quickly and less expensively while still achieving excellent results. It compares rapid testing to other approaches like exhaustive, ponderous, and slapdash testing. The presentation also discusses principles of rapid testing, how to recognize problems quickly using heuristics, and testing rapidly to fulfill the mission of testing.
Video and slides synchronized, mp3 and slide download available at URL http://bit.ly/2HkIr87.
Justin Becker focuses on the jerk part of “brilliant jerk”. He talks about emotional intelligence and why it matters in developing and operating software systems effectively. He provides opinions and perspective from his experience as an engineer, and later a manager, at Netflix, and answers the questions: what is a brilliant jerk, why can't we afford to have one, and am I a brilliant jerk? Filmed at qconsf.com.
Justin Becker is an engineering manager for the Playback API team at Netflix. He has worked at Netflix for seven years, the first five as an engineer. His focus is building scalable, high-availability services running in a cloud environment.
This document discusses human error in systems operation and provides examples of common slips and lapses that can occur. It outlines three approaches to modeling human error - THERP, GEMS, and CREAM. It also discusses designing systems to minimize errors through mechanisms like forcing functions, narrowing the gulf of execution and evaluation, and considering human and organizational factors rather than just technical approaches. Key points are that human error is often implicated in accidents but may not be the sole cause, and that it can be difficult to definitively classify actions as errors.
Poka-yoke is a Japanese term that means "mistake-proofing" and refers to mechanisms in manufacturing processes that help operators avoid errors. Its purpose is to eliminate defects by preventing, correcting, or drawing attention to human mistakes. Poka-yoke was developed by Shigeo Shingo at Toyota to achieve "zero defects" through fail-safe mechanisms. Examples include lifts that prevent overloading and include alarms if weight limits are exceeded. Implementing poka-yoke helps improve quality, productivity, and efficiency by reducing errors, rework, and waste in manufacturing processes.
This document provides an overview of 5S, Kaizen, and Poka-Yoke concepts. 5S is a workplace organization methodology built on five Japanese words, translated as Sorting, Straightening, Shining, Standardizing, and Sustaining. Kaizen refers to continuous improvement and focuses on simplifying processes. Poka-Yoke aims to eliminate defects by preventing human errors through mistake-proofing mechanisms. Examples given include diskettes that only insert the correct way and sensors that turn off the water in sinks.
POKA-YOKE - A Lean Strategy to Mistake Proofing (Timothy Wooi)
A Lean strategy for human error prevention aims to detect and correct possible errors immediately, eliminating defects at the source.
Poka-Yoke overcomes the inefficiencies of inspection through the use of automatic devices that seek:
1. Not to accept a defect into the process
2. Not to create a defect
3. Not to allow a defect to be passed to the next process
Its purpose is to eliminate product defects by preventing, correcting, or drawing attention to human errors as they occur.
The concept was formalized, and the term adopted, by Shigeo Shingo as part of the Toyota Production System.
It was originally described as baka-yoke, meaning "fool-proofing" or "idiot-proofing", but the name was later changed to the milder poka-yoke.
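The same principle translates to software. Here is a small illustrative sketch of a poka-yoke style API in Python: the defect (an overweight load) can be neither accepted nor passed along, because an invalid value cannot even be constructed. The domain and names are invented for illustration:

```python
from dataclasses import dataclass

MAX_KG = 1000  # illustrative weight limit

@dataclass(frozen=True)
class LiftLoad:
    kilograms: int

    def __post_init__(self):
        # "Do not accept a defect": an overweight load cannot even be
        # constructed, so no later step needs to inspect for it.
        if not 0 < self.kilograms <= MAX_KG:
            raise ValueError(f"load must be 1..{MAX_KG} kg, got {self.kilograms}")

def dispatch(load: LiftLoad) -> None:
    # "Do not pass a defect along": this step only ever sees valid loads.
    print(f"dispatching {load.kilograms} kg")

dispatch(LiftLoad(800))  # fine
try:
    LiftLoad(1500)       # rejected at the source, like the overload alarm
except ValueError as err:
    print(f"rejected: {err}")
```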
The document discusses poka-yoke or mistake-proofing techniques. It defines poka-yoke as methods used to prevent human and machine errors from occurring in processes. Poka-yoke techniques make it impossible to make mistakes by providing instant feedback and eliminating opportunities for errors. Examples of different poka-yoke methods include contact methods using guides to ensure proper part placement, counting methods to verify the correct number of parts or steps, and motion sequence methods using sensors to check processes are performed in the right order. Poka-yoke is an important quality improvement technique that can be applied to any process to drive defects out and improve reliability.
Poka-yoke, also known as mistake-proofing, aims to eliminate defects by preventing or correcting mistakes as early as possible. It was developed by Dr. Shigeo Shingo as part of the Toyota Production System. The presentation defines poka-yoke, discusses common errors and their impacts, and provides examples of mistake-proofing strategies and devices that make errors harder or impossible, like limit switches, guide pins, and checklists. The goal of poka-yoke is to achieve zero defects through early detection and prevention of mistakes in the production process.
This document discusses Poka-Yoke, a technique used to eliminate errors in manufacturing processes. Poka-Yoke aims to make mistakes impossible by implementing fail-safe methods that detect or prevent errors. Examples provided include part locators that ensure correct placement, and sensors that check proper assembly sequences. The document outlines common sources of defects, importance of preventing errors, appropriate uses of Poka-Yoke, and methods like contact and counting techniques. Real-world examples demonstrate Poka-Yoke in everyday products and complex systems like submarines.
What does "better" really mean? If we eliminate duplication, is the code better? If we decide to skip the unit tests, are we doing worse? How do we decide if one design is better than another design?
About 8 years ago, my project failed, despite "doing all the right things", and shattered my faith in best practices. Since then, I've learned to measure developer experience, use *data* to learn what works, and I've been codifying "better" into patterns and decision principles for years. In this talk, I'll show you the paradigm shift that led to all my discoveries, and hopefully change your perspective on "better".
"Idea Flow" is an alternative to the Technical Debt metaphor that focuses on problems in human interaction rather than problems inside the code. By measuring the "friction" that occurs when developers interact with the code, we can identify the biggest causes of friction and systematically optimize developer experience.
Why go to all this trouble? From my experience, the biggest causes of pain are seldom what we think. When we try to make things "better", we can easily miss our biggest problems, or inadvertently make things worse. Visibility turned my beliefs about "better" upside-down.
First, I'll walk you through the conceptual metaphor of "Idea Flow" and how to recognize friction in developer experience.
Next, we'll write a little code and record the experience using the open source "Idea Flow Mapping" software.
Finally, we'll discuss a handful of "decision principles" for optimizing developer experience and analyze our coding experience as a group.
Empirical Methods in Software Engineering - an Overviewalessio_ferrari
A first introductory lecture on empirical methods in software engineering. It includes:
1) Motivation for empirical software engineering studies
2) How to define research questions
3) Measures and data collection methods
4) Formulating theories in software engineering
5) Software engineering research strategies
Find the videos at: https://www.youtube.com/playlist?list=PLSKM4VZcJjV-P3fFJYMu2OhlTjEr9Bjl0
Showing How Security Has (And Hasn't) Improved, After Ten Years Of TryingDan Kaminsky
The document discusses the results of fuzz testing software from 2000-2010 to analyze how software security has improved over the last decade. The testing involved fuzzing four file formats (Office, PDF, etc.) across 18 programs from different years. This resulted in over 175,000 crashes. Analysis found over 900 unique bugs. Later versions had fewer exploitable bugs, indicating improving code quality. The results provide a potential "fuzzmark" metric for software security improvements, though comparisons across formats require more controls. The testing process and challenges ensuring data integrity are also outlined.
The Portal Builder Story: From Hell to Lean, from Zero to Cloud - part 2SOFTENG
Christian Rodriguez gave a presentation on avoiding pitfalls when using Scrum. He discussed how Scrum initially helped his team with steady development and working software, but they later struggled with internal quality issues causing many bugs. He emphasized the importance of internal quality and technical practices to support Scrum. The presentation also covered detecting impediments, improving estimation practices, and adapting to finding more valuable work during a sprint.
- Dr. Andy Zaidman is an associate professor in software engineering at Delft University of Technology who studies software analytics.
- His research has found that developers often overestimate the amount of time spent on testing activities like writing test code compared to actual measurements, with testing found to account for 28-72% of time rather than the commonly assumed 50%.
- Metrics and analytics can help developers better understand their development behaviors, but intuition also provides value by detecting code smells that may be more urgent to address or easier to understand problems with than metric-based detections alone.
GDG Cloud Southlake #24: Arty Starr: Enabling Powerful Software Insights by V...James Anderson
Enabling Powerful Software Insights by Visualizing Friction and Flow
In an Agile software development process, a software team will typically meet on a regular basis in a “retrospective meeting” to reflect on the challenges faced by the team and opportunities for improvement. On the surface, this challenge might seem straight-forward, but modern software projects are complex endeavors, and developers are human – identifying what’s most important in a complex sociotechnical system is a task humans struggle to do well. What if developers had tools that recorded and helped them explore their historical experiences with the code, and they could identify hotspots of team friction, worthy of discussion, based on empirical data? This talk will explore the possibility and impact of such tools through a design fiction and working prototype of an Augmented Reality (AR) Code Planetarium powered by FlowInsight developer tools.
Arty Starr, PhD student, University of Victoria & Founder, FlowInsight
Arty is a recognized Flow Experience expert, researcher, speaker and thought leader, and the author of Idea Flow. This expertise, along with her experience as a former CTO and software engineer inspired Arty’s mission to improve the efficiency and morale of engineering teams, culminating in her founding FlowInsight.
Arty teaches system models for better understanding the Flow Experience of software development, and the practice of using Flow Metrics to systematically optimize programming flow. “Flow as a practice” is the art of getting in and staying in flow state to optimize productivity.
The company she founded, FlowInsight, is on a mission to bring back joy to our everyday work.
Data Scientists Are Analysts Are Also Software EngineersDomino Data Lab
by William Whipple Neely
Director of Data Science at Electronic Arts
Data scientists and analysts write code, sometimes a lot of code, so we are also software developers as much as model builders and algorithm creators. This talk is about the challenges a team of data scientists and analysts face when trying to scale their work, to make their work repeatable and testable. I’ll talk about how our data science team is leveling-up their skills as software developers, the challenges we’ve faced and the strategies that are helping.
Testing is necessary for software because:
1) Humans make mistakes and defects can be introduced during development that can later cause failures, from minor issues to potentially serious consequences like environmental damage or injury.
2) Defects are more expensive to fix the later they are found, so testing aims to find defects early.
3) Not all defects will necessarily lead to failures, but failures can be caused by defects from development or the environment, as well as human errors, so testing is needed to improve quality and reduce risks.
1. Testing is necessary because software defects originating from human errors and mistakes can cause failures in software systems. These failures can harm people, the environment, or a company through financial losses or other damages.
2. Testing aims to find defects before software is used, to promote quality and reduce risks of failures. The need for testing depends on the context, such as safety-critical software requiring more rigorous testing than an e-commerce site.
3. Defects arise from mistakes made during software development and design. Not all defects result in failures, but when a defect is triggered during use, a failure can occur.
Fact or Fiction? What Software Analytics Can Do For UsAndy Zaidman
This document summarizes findings from software analytics research on developer testing practices. It finds that developers overestimate the amount of time spent on testing: actual shares range from 25-75% of their time against a commonly estimated 50%. Tests are rarely executed in IDEs, and the failure rates observed differ sharply between settings (roughly 20% versus 60% of runs failing). Most projects have test code but over half of developers did not interact with tests. Testing is crucial for continuous integration, with 98% of projects failing builds when tests fail. The research helps developers understand their own behaviors and identifies challenges for improving tools and education.
Foundations of software testing - ISTQB Certification.pdfSaraj Hameed Sidiqi
1. Testing is necessary because humans inevitably make mistakes when developing software, which can introduce defects. These defects may cause failures when the software is used.
2. Defects arise from errors made during software design and development. When defects are encountered during use, it can lead to failures in the software's functionality. Not all defects will necessarily cause failures.
3. The risks from software failures depend on the context and system. Failures in safety-critical systems pose more risk than in everyday programs. Minor defects may be tolerable for some systems but not for others, like those affecting health, safety, or major business functions. Testing aims to find defects that could lead to failures with high impact.
This document discusses various techniques for rapid application testing (RAT) such as unit testing, integration testing, smoke testing, system testing, regression testing, performance testing, and test-driven development. It emphasizes automating test plans and test execution to allow tests to be run multiple times for little additional cost. The goal of testing is to balance cost and risk by reusing automated tests that are fast and good predictors of issues while throwing more tests at critical areas.
The document discusses strategies for software product development that balance speed and quality, including:
1) Focusing on getting a minimum viable product to market quickly through short iterative development cycles rather than extensive planning.
2) Establishing processes like continuous integration, source control, and automated testing to catch defects early and allow fast iteration.
3) Hiring selectively and spreading ownership of the product across a small team to allow flexibility over bureaucracy.
Metric Abuse: Frequently Misused Metrics in OracleSteve Karam
This is a presentation I created for RMOUG 2014 which I was sadly unable to attend. However, I wanted to share it with the Oracle community so that you can learn a bit about metrics that are frequently cited, frequently demonized, and frequently misused. In this deck we will go through the steps to diagnose issues and what NOT to blame as you go through the process.
The topics and concepts discussed here were originally formed in a blog post on the OracleAlchemist.com site: http://www.oraclealchemist.com/news/these-arent-the-metrics-youre-looking-for/
Selective 97 things every programmer should knowMuhammad Ahsan
This document contains 97 things that every programmer should know. It is a list of principles, best practices, and pieces of advice for programmers. The document is attributed to Kevlin Henney and many other programmers.
The document contains a collection of short passages on various topics for programmers. A few key passages are summarized below:
1. "Do Lots of Deliberate Practice" emphasizes the importance of repetition and practicing to improve skills through deliberate practice, not just completing tasks.
2. "Learn to Estimate" distinguishes estimates, which are approximate, from targets and commitments, which are more precise expectations of delivery.
3. "Know Your Next Commit" highlights the importance of having a clear understanding of what code will be committed and when, rather than just focusing on task details.
Continuous Deployment involves shipping code as frequently as possible, even multiple times per day. It allows for smaller changes with less risk, faster feedback, and a competitive advantage. To achieve this, companies optimize their deployment process, automate testing and deployments, and measure everything to learn and improve continuously. This approach is enabled by technologies like cloud computing and embraced by companies like Google, Amazon, and Facebook.
These are the slides used in my #devone (www.devone.at) keynote presentation:
DevOps is one of the most abused and overrated marketing terms in recent years! That’s not an alternative fact! It’s just Andi’s opinion! Yet - it is a very real thing that allowed many software companies to transform the way they think about software engineering. DevOps can mean something totally different though, depending on who you are and what type of business your company is doing. To clarify things, Andi gives us insights on how he explains the benefits to “DevOps Newbies” and how software companies around the world implement it in their own ways. Andi will answer: What does it really mean for developers, testers and operators? What will change? How does Facebook deploy twice a day without big issues? How does DevOps work in financial, government or healthcare where you have tight regulations? Does it mean Devs are responsible for Ops? Does it only work in the cloud? Or can we apply it to “old fashioned” on premise software as well? Learn for yourself and make up your own mind on whether DevOps is just a marketing term or something that can benefit you!
The document provides an overview of working with legacy code. It defines legacy code as code that is in production, functional, provides business value but is outdated, expensive to change, and lacks tests. It discusses challenges like complexity, high entropy, and roots of evil like dumb engineering decisions. It recommends examining the system, identifying pain points, creating a master plan with goals, and building a toolbox with techniques like automation and documentation. It provides dos and don'ts like reducing complexity, writing clear commits, and focusing on preventing issues rather than fixing them.
The document summarizes Gayle Laakmann's advice for cracking the technical interview. It discusses the typical interview process, what companies look for in candidates, how to prepare for different types of technical questions, and tips for soft skills. The key points covered are researching the company, preparing projects and common data structures/algorithms, using strategies like pattern matching to solve problems, and demonstrating passion through good communication skills.
2. Who Am I?
Janelle Klein, Developer, Consultant, CTO @
Specialized in Statistical Process Control (SPC)
and Supply Chain Optimization from Lean Manufacturing (data geek)
Continuous Delivery infrastructure, automation strategy & technical mentorship
How to Measure the PAIN
in Software Development
Janelle Klein
Author of “Idea Flow”
leanpub.com/ideaflow
Founder of
newiron.com
11. The Situation
10 year old software project, 1.5M LOC, 24/7 uptime,
programmable statistical processing engine
Brought down production three releases in a row
12 developers on the team,
disciplined with best practices,
constantly working on improvements
12. The Retrospective
“What are we going to do?!”
Our biggest problem
“Well, we know we’ve got
a quality problem right?”
13. The Retrospective
“What are we going to do?!”
Our biggest problem
“The problem is we don’t have
enough test automation!”
14. So the Test Automation Began…
Our regression testing
took 3-4 weeks…
Let’s automate the tests!
17. 80%
It was still really PAINFUL…
“Well, at least our regression
cycle is faster, right?”
Our regression cycle still took 3-4 weeks!
Percent Automated:
So the Test Automation Began…
20. The First Mistake
Our biggest problem
“Well, we know we’ve got
a quality problem right?”
“The problem is we don’t have
enough test automation!”
What’s the mistake we made?
36. The amount of PAIN was caused by…
[Chart: Likelihood of Unexpected Behavior vs. Cost to Troubleshoot and Repair, with regions labeled High Frequency / Low Impact, Low Frequency / Low Impact, Low Frequency / High Impact, and PAIN]
37. What causes PAIN?
What Causes Unexpected Behavior (likelihood)?
Semantic Mistakes
Stale Memory Mistakes
Association Mistakes
Bad Input Assumption
Tedious Change Mistakes
Copy-Edit Mistakes
Transposition Mistakes
Failed Refactor Mistakes
False Alarm
What Makes Troubleshooting Time-Consuming (impact)?
Non-Deterministic Behavior
Ambiguous Clues
Lots of Code Changes
Noisy Output
Cryptic Output
Long Execution Time
Environment Cleanup
Test Data Creation
Using Debugger
Most of the pain was caused by human factors.
38.-40. What causes PAIN?
[These build-up slides repeat the lists above]
PAIN is a consequence of how we interact with the code.
41. PAIN occurs during the process of
understanding and extending the software
[Diagram: Complex Software → PAIN]
Not the Code.
Optimize “Idea Flow”
48. “Green Washing” — Kerry Kimbrough
If we don’t understand the system,
we fix the tests incorrectly.
49. The “Waxy Coating” Principle
Tests are like a waxy coating poured over the code.
[Diagram: Software (before) vs. Software (after)]
Optimize the signal to noise ratio.
50. Cost & Risk are a Function of Increased Difficulty
[Chart: Cost & Risk rising with Difficulty of Work; the difficulty of doing our jobs runs up against Human Limitations]
54. Our biggest problem
“Let’s brainstorm a list of all
the problems!”
The Third Mistake
What’s the mistake we made?
55. “What’s the best opportunity for improvement?”
“The awful email
template engine code!”
Our biggest problem
The Retrospective
56. “Fill in missing
unit tests!”
Our biggest problem
The Retrospective
“What’s the best opportunity for improvement?”
57. “We should clean up
the database code!”
Our biggest problem
The Retrospective
“What’s the best opportunity for improvement?”
58. “Let’s improve maintainability
of our test framework!”
Our biggest problem
The Retrospective
“What’s the best opportunity for improvement?”
59. Just because a problem comes to mind,
doesn’t mean it’s an important problem to solve.
Our biggest problem
The Retrospective
“What’s the best opportunity for improvement?”
60. Our biggest problem
What do I feel the
most intensely about?
Daniel Kahneman
Thinking, Fast and Slow
The Retrospective
“What’s the best opportunity for improvement?”
61. “The awful email
template engine code!”
Recency Bias
Our biggest problem
The Retrospective
“What’s the best opportunity for improvement?”
62. Guilt Bias
“Fill in missing
unit tests!”
Our biggest problem
The Retrospective
“What’s the best opportunity for improvement?”
63. Known Solution Bias
Our biggest problem
The Retrospective
“What’s the best opportunity for improvement?”
“We should clean up
the database code!”
64. Sunk Cost Bias
“Let’s improve maintainability
of our test framework!”
Our biggest problem
The Retrospective
“What’s the best opportunity for improvement?”
65. Mistake #3:
Assuming the biggest problems will come to mind.
How do I know the right problems are on the list?
What do I feel the
most intensely about?
Daniel Kahneman
Thinking, Fast and Slow
67. The Retrospective
“What are we supposed to do?!”
Our biggest problem
“We should stop rushing
before the deadline.”
“Yes. Fixed deadline.
Variable scope.”
68. Urgency Leads to High-Risk Decisions
[Timeline: Iterative Validation with Unit Tests finishes at 7:01; Skipping Tests and Validating at the End finishes at 14:23]
We gamble to save time:
If I make no mistakes I save ~2 hours.
If I make several mistakes I lose ~8 hours.
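Using the rough numbers on the slide, the gamble can be written as an expected-value calculation. The 2-hour and 8-hour figures come from the slide; the mistake probabilities are free variables, so this is a sketch, not a measurement.

```python
SAVED_HOURS = 2.0   # saved when no mistakes slip through
LOST_HOURS = 8.0    # lost when mistakes do slip through

def expected_hours_saved(p_mistake):
    """Expected payoff of skipping iterative validation."""
    return (1 - p_mistake) * SAVED_HOURS - p_mistake * LOST_HOURS

for p in (0.1, 0.2, 0.5):
    print(f"P(mistake) = {p:.0%}: expect {expected_hours_saved(p):+.1f} hours")
# Break-even at 2*(1-p) = 8*p, i.e. p = 0.2: above a 20% mistake rate,
# the "time-saving" gamble loses on average.
```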
88. We built TONS of automation.
The only real difference:
We didn’t design a single solution
without a specific problem in mind.
89. Our biggest problem
“We should stop rushing
before the deadline.”
“Yes. Fixed deadline.
Variable scope.”
The Fourth Mistake
What’s the mistake we made?
90. Mistake #4:
Over-simplification and hand-waving.
How are you adapting your decision habits?
[Causal-loop diagram with labels: Time Pressure; Compromise Safety for Speed; Increase Number & Severity of Hazards; More Pain and Higher Task Effort; Constant Urgency; Fewer Problems to Fix; Stop and Think; Mitigate the Risk; Increased Productivity and Innovation; Safety]
91. Then I got into consulting…
The Software Rewrite Cycle
[Cycle diagram: Unmaintainable Software → Start Over]
92. We Start with the Best of Intentions
High Quality Code
Low Technical Debt
Easy to Maintain
Good Code Coverage
94. PAIN
The Classic Story of Project Failure
Problems get deferred
Builds start breaking
Releases get chaotic
Productivity slows to a crawl
Developers begging for time
It’s never enough
Project Meltdown
96. What if we could measure our PAIN?
1. Test Data Generation
2. Merging Problems
3. Repairing Tests
1000 hours/month
The Biggest Problem:
~700 hours/month generating test data
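A minimal sketch of how such numbers could be produced: sum the tagged pain durations per month (the Editor's Notes later describe hashtags on Idea Flow Maps with durations added up per tag). The tags and hours here are invented, loosely echoing the slide's figures.

```python
from collections import Counter

# Hypothetical month of tagged pain entries: (hashtag, duration in hours)
tagged_pain = [
    ("#testdata", 120.5), ("#merge", 30.0), ("#testdata", 200.0),
    ("#brokentests", 45.5), ("#testdata", 380.0), ("#merge", 55.0),
]

monthly = Counter()
for tag, hours in tagged_pain:
    monthly[tag] += hours

total = sum(monthly.values())
for tag, hours in monthly.most_common():
    print(f"{tag:>14}: {hours:6.1f} h/month ({hours / total:.0%})")
# The biggest problem is whatever tops this list -- here, test data generation.
```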
97. What if we could get managers and developers
all pulling in the same direction?
[Diagram: Managers and Developers each pulling toward their own “Better”]
98. The Biggest Cause of FAILURE in our Industry:
Next Talk: Stop Getting Crushed By Business Pressure
101. The First Mistake
Our biggest problem
“Well, we know we’ve got
a quality problem right?”
“The problem is we don’t have
enough test automation!”
What’s the mistake we made?
102. “What problem am I trying to solve?”
Mistake #1:
Starting with the Solution
103. The Second Mistake
Our biggest problem
“The problem is all the
technical debt that’s causing
us to make mistakes.”
What’s the mistake we made?
105. Our biggest problem
“Let’s brainstorm a list of all
the problems!”
The Third Mistake
What’s the mistake we made?
106. Mistake #3:
Assuming the biggest problems will come to mind.
How do I know the right problems are on the list?
What do I feel the
most intensely about?
Daniel Kahneman
Thinking, Fast and Slow
107. Our biggest problem
“We should stop rushing
before the deadline.”
“Yes. Fixed deadline.
Variable scope.”
The Fourth Mistake
What’s the mistake we made?
108. Mistake #4:
Over-simplification and hand-waving.
How are you adapting your decision habits?
[Same causal-loop diagram as slide 90]
109. The Fifth Mistake
“We should just
quit our jobs.” “Yeah, it’s hopeless.”
What are we supposed to do?
112. Data-Driven Software Mastery
[Feedback-loop diagram. Input: Decision Constraints; Target: Optimize the Rate of Idea Flow; Output: “Friction” in Idea Flow; Focus! with 1. Visibility, 2. Clarity, 3. Awareness; connected by a short-term loop and a long-term loop]
Improve Quality of Decisions
113. Target - The direction of “better”: Optimize the Rate of Idea Flow
114. Input - The constraints that limit our short-term choices…
115. Output - The pain signal we’re trying to improve
116. Focus on the biggest pain…
117. 1. Visibility - Identify the specific patterns
118. 2. Clarity - Understand cause and effect
119. 3. Awareness - Stop and think to adjust habits
120. 4. Run Experiments to Learn What Works
[Each of these slides repeats the Data-Driven Software Mastery diagram above, highlighting one element]
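Pulling slides 112-120 together, here is a runnable skeleton of that loop. Every step is an illustrative stub under invented names, not the real Idea Flow tooling.

```python
def focus(pain):                      # Focus! Pick the biggest pain signal.
    return max(pain, key=pain.get)

def visibility(tag):                  # 1. Visibility: identify specific patterns.
    return f"patterns behind {tag}"

def clarity(patterns):                # 2. Clarity: understand cause and effect.
    return f"causes of {patterns}"

def awareness(causes):                # 3. Awareness: stop and think, adjust habits.
    return f"habit change targeting {causes}"

def run_experiment(change, pain):     # 4. Run experiments to learn what works.
    print(f"experiment: {change}")
    return {tag: hours * 0.9 for tag, hours in pain.items()}  # pretend it helps

pain = {"#testdata": 700.0, "#merge": 85.0, "#brokentests": 45.0}
for _ in range(2):  # the short-term loop; the long-term loop revisits the target
    pain = run_experiment(awareness(clarity(visibility(focus(pain)))), pain)
```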
122. My team spent tons of time working on
improvements that didn’t make much difference.
We had tons of automation, but the
automation didn’t catch our bugs.
123. My team spent tons of time working on
improvements that didn’t make much difference.
We had well-modularized code,
but it was still extremely time-consuming to troubleshoot defects.
124. The hard part isn’t solving the problems,
it’s identifying the right problems to solve.
“What are the specific problems
that are causing the team’s pain?”
125. Retrospective: “Mastery Circle”
[Diagram: Circle Leader and Circle Members exchanging Observation Questions]
Make a F.O.C.O.L. Point!
Focus: What’s the problem to solve?
Observe: Ask questions about the facts
Conclude: Break down causes into patterns
Optimize: Discuss strategies for improvement
Learn: Run experiments to learn what works
129. Talk #3:
Let’s Make the PAIN Visible!
#OpenDX - Developer Experience
An Open Standard for Measuring PAIN
(Specification for Data Collection)
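The specification itself isn't reproduced on the slide, so purely as an illustration of what a standardized pain record might collect, here is a sketch with hypothetical field names (the real #OpenDX schema may differ).

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class PainEvent:
    # All field names are hypothetical, not taken from the #OpenDX spec.
    developer_id: str   # anonymized
    task_id: str
    category: str       # e.g. "#testdata", "#brokentests"
    started_at: str     # ISO-8601 timestamps
    ended_at: str
    notes: str = ""

event = PainEvent("dev-42", "TASK-1318", "#testdata",
                  "2016-05-03T09:15:00Z", "2016-05-03T14:33:00Z",
                  "rebuilding fixtures after a schema change")
print(json.dumps(asdict(event), indent=2))
```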
130. Talk #4:
Learn Your Way to AWESOME.
[The Data-Driven Software Mastery feedback-loop diagram again, as on slide 112]
131. LEARN YOUR WAY TO AWESOME.
Free to Join Industry Peer Mentorship Network
openmastery.org
132. Industry Peer Mentorship Network
[Diagram: Companies and Community Groups, HQ in Austin]
Open Mastery Austin
meetup.com/Open-Mastery-Austin
133. How to Measure the PAIN
in Software Development
Janelle Klein
openmastery.org @janellekz
Read my Book. Think About It.
Buy It, or get it FREE with Membership.
Check out openmastery.org for details.
Editor's Notes
The really awesome thing about doing technical assessment and mentorship for a living is that, for the last 5 years, I’ve been working on a research project codifying the art of software development into a teachable craft.
[Read]
We were trying to do all the right things. We had CI, unit testing, design reviews, code reviews. All that stuff you’re supposed to do.
We were building this factory automation system that was responsible for detecting manufacturing problems then shutting down the tool responsible.
I’d been on the project about 6 months, we were working through the final testing of a major release, tied a bow on it, shipped to production.
Later that night we were on this conference call with IT. And I hear this guy just screaming in the background. Apparently, we had shut down every tool in the factory.
So we rolled back the release and tried to figure out what happened. There was a configuration change that didn’t quite make it to production.
We all felt terrible, but there wasn’t much we could do at this point. So we fixed the problem, and shipped to production... again.
Back on the conference call with IT. And guess what... the same thing happened. What were we supposed to say… oops?
So once again, we rolled back the release.
We couldn’t reproduce the problem. We spent months trying to figure it out, and we were just completely stumped. We tried everything we could think of.
Meanwhile, our development team was pretty much idle so management just told them to go ahead with the next release.
Everyone was just working like nothing was wrong, but we couldn’t ship anything. We had a whole other release in the queue before we finally figured it out. Guess what was wrong?
We were scared to death to try again, but we didn’t really have a choice. So we cross our fingers, and shipped to production again.
Back on the conference call with IT. We were all watching these real time activity charts and holding our breath. Finally everything seemed to be ok.
I was so relieved that things would finally be back to normal again. And then about 3am, my phone rang.
It was my team lead calling... he asked me about some code that I’d written, and I knew exactly what happened.
I’d made some improvements to the code that caused a memory leak. And my changes ground the system to a screeching halt. This time, it was my fault.
So once again, we rolled back the release. This time the rollback failed.
Fully ramped semiconductor factory. 50k wafer starts a day. Completely offline. My fault.
I felt so horrible. I was in my boss’s office, just sobbing.
I thought the main obstacle was all the technical debt building up in the code base that was causing us to make mistakes.
and if we made changes in the code that had more technical debt, we’d be more likely to make mistakes.
So I got this idea to build a tool that could detect high-risk changes, and tell us where we needed to do more testing -- but what I found wasn’t what I expected at all.
Our bugs were mostly in the code written by the senior engineers on the team where the design actually got the most scrutiny. It’s not like we didn’t have any awful crufty code -- but that’s not where the bugs were.
The correlation I did find in the data was this...
[read]
And while that made some sense, I couldn’t help but think, there had to be more to the story...
When I had to work with complex code, it was really painful.
[read]
So I started keeping track of all my painful interaction with the code and visualizing it on a timeline like this.
The pain started [] when I ran into some unexpected behavior and ended [] when I had the problem resolved.
So that was 5 hours and 18 minutes of troubleshooting, I think everyone would agree that’s pretty painful.
The amount of pain was driven by two factors...
So I started breaking down the problems into categories. And when I did this, I realized that most of the pain was actually caused by human factors.
This is when I have an idea in my head about how the code is supposed to work, but it doesn’t work that way anymore.
This is when your running an experiment, and there’s multiple possibilities for how a behavior can occur, and you make a bad assumption, and down the rabbit hole you go.
These aren’t really problems with the code itself, [read]
These aren’t really problems with the code itself… [read]
The pain isn’t something inside the code, pain occurs during the process of interacting with the code. So I started optimizing for… and I did that, with the help of a data-driven feedback loop.
On our project, we ended up [read]
For almost a year! [read]
[read]
Then we started asking []
[read]
That’s when everything changed []
We were finally able to turn the project around. And I learned one of the most valuable lessons in my career. [read]
The pain was caused by problems building up in the code.
A typical improvement effort usually starts with brainstorming a list
[slow] We think about the things that bugged us recently, how we’re not following best practices, or the code that just makes us feel ashamed.
[] -- Then all that goes into our technical debt backlog, and we chip away at improvements for months.
But just because a problem comes to mind, doesn’t mean it’s an important problem to solve
When we’re brainstorming, [] we can easily miss our biggest problems then [our improvements don’t make...].
[] Don’t do this.
From the outside it looks like we’re trying to drive a car without a steering wheel.
We line up the car’s trajectory based on our ideals, then close our eyes and floor the gas pedal.
The other thing I think we really need to question is [read]
All of these things are really a means to an end. If you’ve ever spent hours and hours troubleshooting a bug in that one line of magic code, it really makes you wonder [read]. [read]
What are we aiming for as an industry?
So let me summarize what Idea Flow brings to the table.
Since best practices are solution-focused, we always start with the hammer and go looking for the nail.
Test automation is our favorite hammer.
Instead we need to be characterizing all the different types of nails.
Really wanted to help.
Find out what the substitution thing is called.
We optimize for execution time, even when the time spent on human cycles can completely dwarf the execution time. Why do you think that is?
When it comes to solving these really complex problems, our intuition is just wrong. It leads us astray.
The pain isn’t something inside the code, pain occurs during the process of interacting with the code. The problems I focused on fundamentally changed.
Avoiding Pain
Rework Risk is driven by the likelihood...
Things like... bad assumptions about the architecture or design or bad assumptions about customer requirements.
The longer we delay before making corrections, the greater the rework.
This is from a project about 10 months old where we actively focused on reducing troubleshooting time.
With our everyday problem-solving effort, we still spent about 10-20% of our time on friction.
So in this first case study, there was a huge mess inherited by a really great team. It was a 12 year old project where all the original developers had left. This is what it looks like when 90% of your time goes to figuring out what to do, and 10% of your time to actually doing stuff.
The lack of familiarity has an enormous impact on how much friction we experience.
So there were tons of problems, and the team wasn’t sure what to focus on, so they set a goal to raise unit test coverage by 5%.
If you start adding up all the problems across the team [], these guys were spending about 700 hours per month generating test data to support whatever task they were working on. But oddly, in all the retrospective meetings, this problem didn’t even come up. It was just part of the work.
This second case study was a massive rewrite effort. They had this big monolith application that they rewrote completely from scratch, with microservices, a continuous delivery pipeline, the whole nine yards.
And what really surprised me about this project, is that after only 18 months, they were already spending 40-60% of their development capacity troubleshooting problems.
They had this design for the architecture, that looked good on paper, but then once they distributed the design across teams, and discovered the architecture had some flaws, they were stuck. The good ol’ Conway’s law effect, and they couldn’t seem to adapt.
So I got involved with the team, just as they were getting into the thrashing stage, and starting to lose control.
You could see this pattern of pain building up over time, that we always talk about, but have never been able to measure.
So I don’t have quite enough data to make a chart like this, but these are some of the patterns you could see.
First, learning is front-loaded while the team figures out what to do.
Then there’s this rush before the deadline where validation ends up deferred.
Then the pain builds, and you see the baseline friction level rising over time.
Then finally chaos reigns, and the unpredictable work stops fitting in the timebox. So I’m measuring capacity hours over time, so even though all these releases are the same size, you can see how the team had to work twice as many hours to get the release out the door.
Then management got resentful because nothing was actually getting better.
Now I want to point out something. For all three projects, these tasks all took one to three days.
Generally speaking, as the problems build, we can still break down the work into bite-sized chunks.
but what we work on during that time dramatically changes.
[read] even when the problems are severe.
So if you thought about how much time you spend doing troubleshooting, learning and rework.
What percentage of time do you think it would be? Which do you think is the biggest?
What do you think the biggest causes are of troubleshooting time?
So If I wanted to know what was causing the pain I needed to understand the things that caused these 2 factors.
A lot of the problems had more to do with human factors than anything going on with the code.
Stale Memory mistakes, Ambiguous Clues.
But once I understood what was causing the pain, [read -- most of the problems were easy to avoid]
For example...
Really wanted to help.
We’d always get a different set of bugs. What would you do?
Troubleshooting Risk we’ve already talked about, it’s driven by the likelihood...
For the problem categories --
I use hashtags in the Idea Flow Maps, then add up the durations for each hashtag.
[read]
And while that made some sense, I couldn’t help but think, there had to be more to the story...
Really wanted to help.
So we’re all familiar with the haystack principle…
Then I realized George was missing a key conceptual model.
Used thinking checklists to codify a decision-making process… let me show you what I mean.
Focus on one decision principle until you have it down.
It’s not that best practices are bad, or wrong, they’re just backwards.
Really wanted to help.
We start off with the best of intentions…
We're going to write high quality code that’s low in technical debt, easy to maintain and of course, has good code coverage.
So I started working 60-70 hour weeks for about 6 months straight. And my team started working 60 hour weeks for 6 months straight. Then the releases started falling apart. Things were just constantly going wrong.
Crazy deadlines, and I tried to explain to management that we needed to go slower, but they threatened to outsource the project if we didn’t get it done. This project was my baby.
When we fall into urgency mode, we start compromising safety for speed.
We make decisions that don’t seem like a big deal at the time, but they create a hazardous work environment.
Instead of taking a little more time to put our toys away, we end up falling down the stairs and in the hospital.
First, we ignore the risks, basically ignoring tango -- a lot of times because the risk isn’t obvious.
Next thing you know we’re working late nights and weekends, choking down red bull to stay awake...
hacking out last minute fixes and hoping that nothing else breaks.
Who’s done this before?
We make jokes about programmers running on caffeine and pizza... but this problem is really serious.
When our project is on the line, we give up a lot -- we’re skipping our kids’ recitals, missing our anniversary dinners, we get sick, we gain weight
stress deteriorates our health and can tear apart our relationships. Just because we’re not bleeding doesn’t mean we don’t get hurt by all this.
We can’t run a sustainable business by compromising the safety of the people doing the work.
I watched my project get crushed.
[read]
What do you guys think? What are the biggest obstacles?
If our feedback loop is broken, we don’t respond.
Troubleshooting Risk we’ve already talked about, it’s driven by the likelihood...
Learning Risk is driven by the likelihood...
Things like... lots of 3rd party libraries, complex frameworks, a really large code base, or high turn-over rate --
all these things can cause extra learning work.
Rework Risk is driven by the likelihood...
Things like... bad assumptions about the architecture or design or bad assumptions about customer requirements.
The longer we delay before making corrections, the greater the rework.
We make decisions that save a few hours that lead to side effects that cost several hours. When we try to go faster, we do things that increase the likelihood of mistakes and the cost to recover when things go wrong. We’ve been in this pattern for the last 2 years, and now we’re here.
We’ve been collecting lots of data and have identified our biggest problems.
I think we can dramatically reduce risk with some focused effort.
I'd like to propose a 3-month trial [] with one person working full-time on these problems. The team will make decisions on the improvement work, and I’ll share our progress and lessons learned with you each month.
I know we can do this, but I need your help. Will you help me make this happen?
Really wanted to help.
Iteratively clarify “better”, then implement.
If you want to join me, then read the book, and think about the ideas, see if this is something you want to be a part of.
You can either buy the book, or if you start a reading group for Idea Flow, I’ll provide free e-books for all the attendees. Check out openmastery.org for details.
And if you don’t want to take my word for it, you should read Idea Flow because Rene and Matt said so.