What if we could measure the indirect costs of pain building up on a software project? What if we could measure the loss of productivity, the escalating costs and risks, and could steer our projects with a data-driven feedback loop?
By measuring the friction in “Idea Flow”, the flow of ideas between the developer and the software, we can create a data-driven feedback loop for learning what works. Rather than making decisions based on anecdote and gut feel, we can start driving our improvement decisions with real data.
Data-Driven Software Mastery is about learning and improving faster than ever.
Find out how you can:
• Identify the biggest causes of productivity loss on your software project
• Avoid spending tons of time solving the wrong problems
• Collaborate with other industry professionals in the art of data-driven software mastery
Idea Flow gives us a universal language for describing our experience, so we can share the patterns and principles of what works. With a feedback loop, we can run real experiments!
Idea Flow turns the development community into a scientific community.
Top 5 Reasons Why Improvement Efforts Fail
Speaker: Arty Starr
This is my story of lessons learned on why our improvement efforts fail... I had a great team. We were disciplined about best practices and spent tons of time on improvements. Then I watched my team slam into a brick wall. We brought down a fully-ramped semiconductor factory three times in a row, then couldn't ship again for a year.
Despite our best efforts with CI, unit testing, design reviews, and code reviews, we lost our ability to understand the system. I discovered our mistakes weren't caused by technical debt. Most of the problems were caused by human factors. We failed to improve because we didn't solve the right problems.
To learn, we need a feedback loop. To improve, we need a feedback loop with a goal.
There are five different ways our project feedback loop can break:
* **Broken Target** - Our definition of "better" is broken.
* **Broken Visibility** - We don't see the pain, so we take no action.
* **Broken Clarity** - We don't understand what's causing the pain.
* **Broken Awareness** - We don't know how to avoid the pain.
* **Broken Focus** - We see the pain, but our attention is focused on something else.
Find out how to repair the broken feedback loops on your software project.
Stop Getting Crushed By Business Pressure
Speaker: Arty Starr
This is my story of lessons learned on how to stop the crushing effects of business pressure... I was team lead with full control of our green-field project. After a year, we had continuous delivery, a beautiful clean code base, and worked directly with our customers to design the features. Then our company split in two, we were moved under different management, and I watched my project get crushed.
As a consultant, I saw the same pattern of relentless business pressure everywhere, driving one project after another into the ground. I made it my mission to help the development teams solve this problem. This is my story of lessons learned on how to transform an organization from the bottom up. I'll show you how to lead the way.
**Warning:** This strategy won't work in all organizations. In some cases, management doesn't want to know the truth. However, in most organizations I've worked with, management wants to improve, but doesn't know how to fix the system.
The crushing business pressure is caused by a broken feedback loop that's baked into the organization's design. In this presentation, I'll show you how to fix the broken feedback loop. Learn how to:
* Gather evidence of developer productivity loss
* Identify the key organizational changes required for success
* Make the case to management for improvement
* Partner with your manager for long-term success
If the system is broken, we need to fix the system. You can *change* the system by making the decision to lead.
**Note:** *This talk does not strictly depend on attending "Top 5 Reasons Why Improvement Efforts Fail", but you'll get much more out of the session if you attend both.*
Since the dawn of software development, we've struggled with a huge disconnect between the management world and the engineering world. We try to explain our problems in terms of “technical debt”, but somehow the message seems to get lost in translation, and we drive our projects into the ground, over and over again.
What if we could detect the earliest indicators of a project going off the rails, and had data to convince management to take action? What if we could bridge this communication gap once and for all?
In this session, we'll focus on a key paradigm shift for how we can measure the human factors in software development, and translate the “friction” we experience into explicit risk models for project decision-making.
There’s a huge disconnect between the business world and the engineering world that drives our software projects into the ground. We rewrite our software over and over again, not because we lack the engineering skills to build great software, but because we fail to communicate, make decisions in ignorance, and don’t adapt when our current strategy is obviously failing.
What if we could measure the indirect costs of pain building up on a software project? What if we could measure the loss of productivity, the escalating costs and risks, and could steer our projects with a data-driven feedback loop?
Visibility changes everything. With visibility, we can bridge the gap between the business world and the engineering world, and get everyone pulling in the same direction.
Find out how you can:
1. Identify the biggest causes of productivity loss on your software project
2. Translate the world of developer pain into explicit costs and risks
3. Collaborate with other industry professionals in the art of data-driven software mastery
Let's break down the challenges and learn our way to success, one small victory at a time.
Speaker: Janelle Klein
Janelle is an NFJS Tour Speaker and author of the book Idea Flow: How to Measure the PAIN in Software Development, a modern strategy for systematically optimizing software productivity with a data-driven feedback loop.
Once we make our pain visible with Idea Flow Mapping, we've got a data-driven feedback loop to learn what works. Objective data enables us to do something we've never been able to do before in our industry: science. This talk is about how to do science in software development.
The Lean Startup community has pioneered the art of everyday science to reduce the risk of building the wrong product by running customer experiments to learn what works. By mapping these same basic scientific principles to technical risk management, we can run experiments to learn our way to AWESOME!
In this talk we'll cover:
* How science is used in the Lean Startup world to run business model experiments
* How science is used in the Lean Manufacturing world to support process control & supply chain optimization
* How we can apply science in software development to systematically learn what works
If you want to start learning and improving faster than ever before, you won't want to miss this talk.
What makes software development complex isn't the code, it's the humans. The most effective way to improve our capabilities in software development is to better understand ourselves.
In this talk, I'll introduce a conceptual model for human interaction, identity, culture, communication, relationships, and learning based on the foundational model of Idea Flow. If you were to write a simulator to describe the interaction of humans, this talk would describe the architecture.
Learn how to understand the humans on your team and fix the bugs in communication, by thinking about your teammates like code!
I'm not a scientist or a psychologist. These ideas are based on a combination of personal experience, reading lots of cognitive science books, and a couple years of running experiments on developers. As I struggled through the challenges of getting a software concept from my head to another developer's head (interpersonal Idea Flow), I learned a whole lot about human interaction.
As software developers, we have to work together, think together, and solve problems together to do our jobs. Code? We get it. Humans? WTF?!
Fortunately, humans are predictably irrational, predictably emotional, and predictably judgmental creatures. Of course those pesky humans will always do a few unexpected things, but once we know the algorithm for peace and harmony among humans, we can start debugging the communication problems on our team.
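The talk's framing invites a thought experiment: if teammates were code, miscommunication would be a reproducible bug. As a toy illustration (entirely hypothetical: the names, the `Teammate` class, and the "prerequisite context" rule are invented for this sketch, not taken from the talk), here an idea only "compiles" in a listener's head if they already hold its prerequisite context:

```python
from dataclasses import dataclass, field

@dataclass
class Teammate:
    name: str
    context: set = field(default_factory=set)  # ideas already in their head

def share_idea(speaker, listener, idea, prerequisites):
    """An idea only 'compiles' in the listener's head if they already
    hold its prerequisite context; otherwise we've found a comms bug."""
    missing = set(prerequisites) - listener.context
    if missing:
        return f"miscommunication: {listener.name} is missing {sorted(missing)}"
    listener.context.add(idea)
    return f"{idea} transferred from {speaker.name} to {listener.name}"

arty = Teammate("Arty", {"event sourcing", "our billing domain"})
sam = Teammate("Sam", {"our billing domain"})

# The "bug" isn't that Sam is irrational; it's a missing prerequisite.
print(share_idea(arty, sam, "replay-based billing fix",
                 prerequisites={"event sourcing", "our billing domain"}))
```

Debugging the conversation, in this toy model, means finding and filling the missing context rather than repeating the idea louder.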
Bringing Science to Software Development
Speaker: Arty Starr
Twenty-five years ago, Peter Senge wrote “The Fifth Discipline”, considered the seminal text on how to build a learning organization. With the benefits so obvious, and the recipe for success laid out, why don't we see more learning organizations? That was twenty-five years ago!
As Ash Maurya pointed out in his new book, Scaling Lean, “The goal isn't learning, the goal is traction.” Without a process that helps us turn learning into momentum, a culture of learning gets us nowhere. Without a strategy to overcome the challenges of distributed decision-making, we still make most decisions in ignorance.
Let's dust off these old ideas in light of all the discoveries we've made over the last decade in Lean Startup, Agile, and Continuous Delivery.
What are the critical elements that are missing in our organizations that prevent us from building a learning organization? What are the key obstacles to success?
In this talk, we'll break down the concept of a learning organization into discrete system components and analyze the requirements like engineers. Then we'll discuss a strategy for overcoming the challenges and iteratively transforming our organizations into learning organizations. From the building blocks of culture to the design of organizational architecture, we'll build a roadmap for learning how to learn together.
Want to learn your way to being an AWESOME company? Learn how to become a learning organization.
We have a lot to do on the cybersecurity side, and we are almost always lacking people, or budget, or both. Can we take lessons and approaches from entrepreneurship to apply to our cybersecurity programs? Can we do more with what we have, or for each addition can we make sure it has a large impact?
We’ll explore some entrepreneurship principles and then dive into some ways to improve security without large increases in headcount or budget.
What if we could measure the indirect costs of pain building up on a software project? What if we could measure the effects of learning curves, collaboration pain, and problems building up in the code?
We could:
* Identify the highest leverage opportunities for improvement
* Make the case to management that budget should be allocated for a solution
* Lead the organization in making better decisions with a data-driven feedback loop to guide the way
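To make "identify the highest leverage opportunities" concrete, here's a minimal sketch: total the pain per cause and rank the causes by hours lost. The log format, the causes, and the hours are all invented for the example; they are not Idea Flow's actual data model.

```python
from collections import defaultdict

# Hypothetical friction log collected over a sprint: (cause, hours lost)
friction_log = [
    ("billing-module learning curve", 6.0),
    ("flaky integration tests", 9.5),
    ("billing-module learning curve", 4.0),
    ("code-review wait time", 3.0),
    ("flaky integration tests", 7.5),
]

def rank_opportunities(log):
    """Total the pain per cause, then rank causes by hours lost."""
    totals = defaultdict(float)
    for cause, hours in log:
        totals[cause] += hours
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for cause, hours in rank_opportunities(friction_log):
    print(f"{hours:5.1f}h  {cause}")
```

Even a crude tally like this turns "everything hurts" into a ranked list, which is exactly the kind of evidence a budget conversation needs.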
Several years ago, I stumbled into a solution for measuring the growing “friction” in developer experience. Visibility turned my world upside-down.
We've been trying to explain the pain of Technical Debt for generations, but we've never been able to measure it. Visibility introduces a whole new world of possibilities.
In this talk, I'll show you what I'm measuring and exactly how I'm measuring it, and then we'll talk through the implications for our teams, our organizations, and our industry.
We can identify the highest leverage improvement opportunities and steer our projects with a data-driven feedback loop.
We can break down the "wall of ignorance" between developers and management by defining an explicit language for managing technical risk.
We can teach the art of software development with a data-driven feedback loop and codify our knowledge into sharable decision principles.
We can revolutionize our business accounting methods to take the pain of software development into account, so the costs and risks are visible at the highest levels of the organization.
We can conquer the challenges across the software industry by working together, learning together, and sharing our knowledge with the world.
With visibility, we can start a revolution in data-driven learning.
Four years and over 20,000 respondents later, we have learned a lot about what makes IT and organizational performance awesome. This year we include insights into security, containers, trunk-based development, and lean product management. Tune in for practical takeaways to make your teams' technology transformations even better.
This is a summary of the blog posts by Eric Ries on the Five Whys at http://startuplessonslearned.blogspot.com/2008/11/five-whys.html. It was used for an internal presentation at Cogent Consulting. If Eric or anyone else thinks this should not be public I will take it down, but I hope I'll drive (a little) more traffic to his blog :-)
Operational Insight: Concepts and Examples (w/o Presenter Notes)
Speaker: royrapoport
The 2015-06-15 Operational Insight presentation, without presenter notes (because the way Keynote handles presenter notes makes them dominate the presentation)
A/B Testing and the Infinite Monkey Theory
Speaker: UseItBetter
Surveys show that on average only 1 out of 7 A/B tests run by e-commerce companies ends up being successful. Lukasz Twardowski, CEO of UseItBetter, explains how some of the most successful online businesses master this process, turning it into an iterative, evidence-led programme of experimentation at scale.
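For a flavor of the statistics behind deciding whether an A/B test "ended up successful", here is a standard two-proportion z-test in plain Python. This is generic textbook statistics, not anything specific to UseItBetter's tooling, and the traffic numbers are invented:

```python
import math

def ab_test_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided, two-proportion z-test for an A/B experiment.

    Returns (z, p_value): a small p_value suggests the difference in
    conversion rates is unlikely to be random noise.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return z, p_value

# Variant A: 200 conversions in 1000 visits; Variant B: 150 in 1000
z, p = ab_test_p_value(200, 1000, 150, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With p below the conventional 0.05 threshold, this hypothetical test would count as one of the "1 out of 7"; most real tests land in the noise.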
The Goal Discussion Guide - Participants Guide
Speaker: Craig Paxson
Early in 2015, I volunteered to lead a reading discussion group at work. The book I chose to read was The Goal by Eliyahu Goldratt. I scoured the Internet for a reading and discussion guide appropriate for a weekly group session and could not discover any. I found plenty of synopses and some college syllabi, but not any discussion guides. So I decided to create one. This book is the discussion guide I created.
Because The Goal uses the Socratic Method (“ask – tell – ask”), I decided to create the readings using that same method. Each week’s reading begins with Alex asking a question of Jonah, then Jonah giving a response, Alex learning from that answer, and then we move on to the next question posed by Alex.
The discussion guide is broken into 7 weeks of reading. Each week’s reading includes questions to be answered by the participants. Some weeks include exercises (for instance, the dice game played on the hike) that are designed to further illustrate the concepts discussed in the book. It will be helpful if the leader can customize the discussion questions and exercises for their particular organization.
To purchase the Leaders Guide, visit www.craigpaxson.com/book/the-goal-discussion-guide/
How To Run a 5 Whys (With Humans, Not Robots)
Speaker: Dan Milstein
Slides from a talk at the Lean Startup conference (video link below).
Update: I've interleaved slides covering what I actually talked about onstage.
Update Update: video is up at http://www.ustream.tv/recorded/27482093/highlight/310486
What does "better" really mean? If we eliminate duplication, is the code better? If we decide to skip the unit tests, are we doing worse? How do we decide if one design is better than another design?
About 8 years ago, my project failed despite "doing all the right things", and the experience shattered my faith in best practices. Since then, I've learned to measure developer experience, use *data* to learn what works, and I've been codifying "better" into patterns and decision principles for years. In this talk, I'll show you the paradigm shift that led to all my discoveries, and hopefully change your perspective on "better".
"Idea Flow" is an alternative to the Technical Debt metaphor that focuses on problems in human interaction rather than problems inside the code. By measuring the "friction" that occurs when developers interact with the code, we can identify the biggest causes of friction and systematically optimize developer experience.
Why go to all this trouble? From my experience, the biggest causes of pain are seldom what we think. When we try to make things "better", we can easily miss our biggest problems, or inadvertently make things worse. Visibility turned my beliefs about "better" upside-down.
First, I'll walk you through the conceptual metaphor of "Idea Flow" and how to recognize friction in developer experience.
Next, we'll write a little code and record the experience using the open source "Idea Flow Mapping" software.
Finally, we'll discuss a handful of "decision principles" for optimizing developer experience and analyze our coding experience as a group.
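To make "recording the experience" concrete, here is a minimal sketch of one possible friction metric: the fraction of task time spent troubleshooting rather than making progress. This is a simplified, hypothetical model for illustration, not the actual data format of the open source Idea Flow Mapping software:

```python
from dataclasses import dataclass

@dataclass
class Band:
    kind: str      # "progress" or "troubleshooting"
    minutes: float

def friction_ratio(bands):
    """Fraction of total task time spent troubleshooting ('friction')."""
    total = sum(b.minutes for b in bands)
    pain = sum(b.minutes for b in bands if b.kind == "troubleshooting")
    return pain / total if total else 0.0

# One developer's afternoon on a single task:
session = [
    Band("progress", 25),          # writing the feature
    Band("troubleshooting", 40),   # chasing a confusing test failure
    Band("progress", 15),
    Band("troubleshooting", 20),   # untangling an unfamiliar module
]
print(f"friction: {friction_ratio(session):.0%}")  # → friction: 60%
```

The point of a metric like this is the trend across many sessions: where the troubleshooting fraction stays high, the code (or the team's shared understanding of it) is fighting back.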
How to manage web projects without setting your hair on fire
Speaker: Kathy Gill
It seems like everyone in the organization believes they know what makes a website "work" despite having no design training. Managers insist that "their" pages look or act in ways directly contrary to the rest of the website. Or the web.
What are the unique characteristics of the web that make managing design a challenge? How can we empower stakeholders while also creating a seamless user experience? And how would an iterative, collaborative design process facilitate a responsive web, one where sites work well on phones, tablets and desktops?
Bringing Science to Software DevelopmentArty Starr
Twenty-five years ago, Peter Senge wrote “The Fifth Discipline”, considered the seminal text for how to build a learning organization. With obvious benefits, and the recipe needed for success, why don't we see more learning organizations? That was twenty-five years ago!
As Ash Maurya pointed out in his new book, Scaling Lean, “The goal isn't learning, the goal is traction.” Without a process that helps us turn learning into momentum, a culture of learning gets us nowhere. Without a strategy to overcome the challenges of distributed decision-making, we still make most decisions in ignorance.
Let's dust off these old ideas in light of all the discoveries we've made over the last decade in Lean Startup, Agile, and Continuous Delivery.
What are the critical elements that are missing in our organizations that prevent us from building a learning organization? What are the key obstacles to success?
In this talk, we'll breakdown the concept of a learning organization into discrete system components and analyze the requirements like engineers. Then we'll discuss a strategy for overcoming the challenges and iteratively transforming our organizations into learning organizations. From the building blocks of culture, to the design of organizational architecture, we'll build a roadmap for learning how to learn together.
Want to learn your way to being an AWESOME company? Learn how to become a learning organization.
We have a lot to do on the cybersecurity side, and we are almost always lacking people, or budget, or both. Can we take lessons and approaches from entrepreneurship to apply to our cybersecurity programs? Can we do more with what we have, or for each addition can we make sure it has a large impact?
We’ll explore some entrepreneurship principles and then dive into some ways to improve security without large increases in headcount or budget.
What if we could measure the indirect costs of pain building up on a software project? What if we could measure the effects of learning curves, collaboration pain, and problems building up in the code?
We could:
Identify the highest leverage opportunities for improvement
Make the case to management that budget should be allocated for a solution
Lead the organization in making better decisions with a data-driven feedback loop to guide the way
Several years ago, I stumbled into a solution for measuring the growing “friction” in developer experience. Visibility turned my world upside-down.
We've been trying to explain the pain of Technical Debt for generations, but we've never been able to measure it. Visibility introduces a whole new world of possibilities.
In this talk, I'll show you what I'm measuring, how exactly I'm measuring it, then we'll talk through the implications for our teams, our organizations, and our industry.
We can identify the highest leverage improvement opportunities and steer our projects with a data-driven feedback loop.
We can breakdown the "wall of ignorance" between developers and management by defining an explicit language for managing technical risk.
We can teach the art of software development with a data-driven feedback loop and codify our knowledge into sharable decision principles.
We can revolutionize our business accounting methods to take the pain of software development into account, so the costs and risks are visible at the highest levels of the organization.
We can conquer the challenges across the software industry by working together, learning together, and sharing our knowledge with the world.
With visibility, we can start a revolution in data-driven learning.
Four years and over 20,000 respondents later, and we have learned a lot about what makes IT and organizational performance awesome. This year we include insights into security, containers, trunk-based development, and lean product management. Tune in for practical take-aways to make your teams' technology transformations even better.
This is a summary of the blogs by Eric Ries on the Five Whys at http://startuplessonslearned.blogspot.com/2008/11/five-whys.html. It was used for an internal presentation at Cogent Consulting. If Eric or anyone else thinks this should not be public I will take it down, but I hope I'll drive (a little) more traffic to his blog :-)
Operational Insight: Concepts and Examples (w/o Presenter Notes)royrapoport
The 2015-06-15 Operational Insight presentation, without presenter notes (because the way Keynote handles presenter notes makes them dominate the presentation)
A/B Testing and the Infinite Monkey TheoryUseItBetter
Surveys show that on average only 1 out of 7 A/B tests run by e-commerces end up to be successful. Lukasz Twardowski, the CEO of UseItBetter, tries to explain how some of the most successful online businesses master this process turning it into iterative, evidence-led experimentation at scale programme.
The Goal Discussion Guide - Participants GuideCraig Paxson
Early in 2015, I volunteered to lead a reading discussion group at work. The book I chose to read was The Goal by Eliyahu Goldratt. I scoured the Internet for a reading and discussion guide appropriate for a weekly group session and could not discover any. I found plenty of synopses and some college syllabi, but not any discussion guides. So I decided to create one. This book is the discussion guide I created.
Because The Goal uses the Socratic Method: “ask tell – ask,” I decided to create the readings using that same method. Each week’s reading begins with Alex asking a question of Jonah, then Jonah giving a response, Alex learning from that answer, and then we move on to the next question posed by Alex.
The discussion guide is broken into 7 weeks of reading. Each week’s reading includes questions to be answered by the participants. Some weeks include exercises (for instance, the dice game played on the hike) that are designed to further illustrate the concepts discussed in the book. It will be helpful if the leader can customize the discussion questions and exercises for their particular organization.
To purchase the Leaders Guide, visit www.craigpaxson.com/book/the-goal-discussion-guide/
How To Run a 5 Whys (With Humans, Not Robots) - Dan Milstein
Slides from a talk at the Lean Startup conference (video link below).
Update: I've interleaved slides covering what I actually talked about onstage.
Update Update: video is up at http://www.ustream.tv/recorded/27482093/highlight/310486
What does "better" really mean? If we eliminate duplication, is the code better? If we decide to skip the unit tests, are we doing worse? How do we decide if one design is better than another design?
About 8 years ago, my project failed, despite "doing all the right things", and shattered my faith in best practices. Since then, I've learned to measure developer experience, use *data* to learn what works, and I've been codifying "better" into patterns and decision principles for years. In this talk, I'll show you the paradigm shift that led to all my discoveries, and hopefully change your perspective on "better".
"Idea Flow" is an alternative to the Technical Debt metaphor that focuses on problems in human interaction rather than problems inside the code. By measuring the "friction" that occurs when developers interact with the code, we can identify the biggest causes of friction and systematically optimize developer experience.
Why go to all this trouble? From my experience, the biggest causes of pain are seldom what we think. When we try to make things "better", we can easily miss our biggest problems, or inadvertently make things worse. Visibility turned my beliefs about "better" upside-down.
First, I'll walk you through the conceptual metaphor of "Idea Flow" and how to recognize friction in developer experience.
Next, we'll write a little code and record the experience using the open source "Idea Flow Mapping" software.
Finally, we'll discuss a handful of "decision principles" for optimizing developer experience and analyze our coding experience as a group.
How to manage web projects without setting your hair on fire - Kathy Gill
It seems like everyone in the organization believes they know what makes a website "work" despite having no design training. Managers insist that "their" pages look or act in ways directly contrary to the rest of the website. Or the web.
What are the unique characteristics of the web that make managing design a challenge? How can we empower stakeholders while also creating a seamless user experience? And how would an iterative, collaborative design process facilitate a responsive web, one where sites work well on phones, tablets and desktops?
BizON had the honour of sponsoring the Business Transition Forum! We would like to share some valuable information with our audience from the forum in case you did not have the opportunity to attend!
Have you ever wondered whether your retrospective format was actually effective at fueling learning and improvement? Are you ready to try something different?
"FOCOL Point" is Idea Flow Learning Framework's 5-step learning and improvement protocol. It works great for software improvement, but it also works for team reflection, personal reflection, or mentorship. Rather than searching for answers, a FOCOL Point is all about finding the right questions.
Once I've walked through the protocol, we'll make a FOCOL Point together as a group!
First, we'll identify the biggest software problems faced by the audience using the "flashstorming" technique. Then we'll focus on the top problems of the group and start digging into the details by walking through a group-adapted version of the stop and think protocol:
1. **Focus**: What's the journey we're trying to understand?
2. **Observe**: What patterns do we see? (for all journey pattern types)
3. **Conclude**: What obstacles seem to be causing the pain?
4. **Optimize**: How could we have avoided the obstacles?
5. **Learn**: What questions should we ask ourselves in the future?
Amplify your learning by reflecting more productively on your own or with your team! You can immediately apply this technique on your own projects.
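The five FOCOL steps could be captured as a simple record for note-taking during a reflection session. This is a minimal sketch; the `FocolPoint` name and its fields are illustrative assumptions, not an official framework API.

```python
from dataclasses import dataclass, field

# Illustrative sketch: this record mirrors the 5 FOCOL steps, but the
# class and field names are invented, not an official framework API.
@dataclass
class FocolPoint:
    focus: str                                             # the journey to understand
    observations: list[str] = field(default_factory=list)  # patterns we see
    conclusion: str = ""                                   # obstacles causing the pain
    optimization: str = ""                                 # how we could avoid them
    learnings: list[str] = field(default_factory=list)     # questions for the future

point = FocolPoint(focus="Friday's failed deployment")
point.observations.append("Config drift between staging and production")
point.conclusion = "Environments diverged silently"
point.optimization = "Diff environment configs before each release"
point.learnings.append("What changed in the environment since the last release?")
print(point.focus)
```

Each completed record then doubles as a reusable artifact: the `learnings` list is the set of questions the team carries into its next reflection.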
Identify Development Pains and Resolve Them with Idea Flow - TechWell
With the explosion of new frameworks, a mountain of automation, and our applications distributed across hundreds of services in the cloud, the level of complexity in software development is growing at an insane pace. With increased complexity comes increased costs and risks. When diagnosing unexpected behavior can take days, weeks, or sometimes months, all while our release is on the line, our projects plunge into chaos. In the invisible world of software development, how do we identify what's causing our pain? How do we escape the chaos? Janelle Klein presents a novel approach to measuring the chaos, identifying the causes, and systematically driving improvement with a data-driven feedback loop. Rather than measuring the problems in the code, Janelle suggests measuring the "friction in Idea Flow", the time it takes a developer to diagnose and resolve unexpected confusion, which disrupts the flow of progress during development. With visibility of the symptoms, we can identify the cause—whether it's bad architecture, collaboration problems, or technical debt. Janelle discusses how to measure Idea Flow, why it matters, and the implications for our teams, our organizations, and our industry.
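The "friction in Idea Flow" measurement described above (the time a developer spends diagnosing and resolving unexpected confusion) can be sketched in a few lines. This is an illustrative model only; the `FrictionEvent` and `friction_ratio` names are invented for this sketch, not part of any published Idea Flow tooling.

```python
from dataclasses import dataclass

# Illustrative sketch only: FrictionEvent and friction_ratio are invented
# names, not part of any published Idea Flow tooling.
@dataclass
class FrictionEvent:
    """One stretch of unexpected confusion during development."""
    started_at: int   # minutes into the session when confusion began
    resolved_at: int  # minutes into the session when flow resumed

    @property
    def duration(self) -> int:
        """Minutes lost to this episode of confusion."""
        return self.resolved_at - self.started_at

def friction_ratio(events: list[FrictionEvent], session_minutes: int) -> float:
    """Fraction of the session lost to diagnosing unexpected confusion."""
    lost = sum(e.duration for e in events)
    return lost / session_minutes

# An 8-hour (480-minute) day with two confusion episodes totalling 150 minutes:
day = [FrictionEvent(60, 180), FrictionEvent(300, 330)]
print(friction_ratio(day, 480))  # 0.3125
```

Tracking this ratio over time is what turns the invisible symptom (chaos) into a number a team can watch and steer by.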
Every year, software companies spend a huge amount of time and effort estimating large projects, and still end up regularly missing the mark - often by huge amounts. What the heck is going on? With all of the planning poker, and PI planning, and #noestimates, why isn't this figured out yet?
In this talk, we'll dive into probability theory and psychology to discover some of the common underlying causes for a lack of predictability. Once we understand why the world is so uncertain, we'll talk about how we can live with our estimation failures, while still thrilling our customers and maintaining enough predictability to succeed as an organization.
Monitoring Complex Systems - Chicago Erlang, 2014 - Brian Troutwine
Imagine being responsible for monitoring 100 servers. Now imagine 1000. Each server has 100 different things to keep track of. What do you pay attention to and what do you ignore? What is important? In this talk Brian will show how Erlang can be used to capture more information without compromising clarity, i.e., to keep track of the forest without losing sight of the trees!
Being Right Starts By Knowing You're Wrong - Data Con LA
Data Con LA 2020
Description
The recent proliferation of predictive analytics within companies is of limited benefit unless these companies learn to measure, understand, and embrace a critical concept: error. There is no such thing as a perfect predictive model, and all tools built on predictive models will have error. Despite being relatively easy to implement and understand, consistent error measurement continues to be underutilized or even completely avoided. In this session we will discuss:
*Why embracing error is so valuable to companies.
*Basic ways to measure error in commonly used models and in data source systems such as CRMs and ERPs.
*Most importantly, ways to approach company leadership with the concept of error.
Speaker
Ryan Johnson, GoGuardian, Director of Science and Analytics
The Pragmatic Agilist: estimating, improving quality, and communication with... - Thiago Colares
Money doesn’t grow on trees: developer teams are expensive and always need to deliver value. I’ll describe in a pragmatic way how we have adopted agile practices to deliver more value with the same team and to solve 3 pains:
- estimation and deadlines
- bug fixes and quality assurance
- inefficient communication
And all without working overtime (well, almost never).
Open Mastery: Let's Conquer the Challenges of the Industry! - Arty Starr
What if you could get upper management to care about your technical developer problems? Would you be willing to measure and prioritize the problems?
What if **WE** could stop the relentless business pressure that drives our software projects into the ground *across the industry*? I know this probably sounds impossible, but before you dismiss the idea entirely, let me show you that it *is* possible.
We can start a cascade of changes across the industry with only a handful of people that are willing to work together to make it happen.
Open Mastery is a peer learning network focused on codifying open decision models and standards to solve industry-wide problems. This presentation is about the obstacles, the strategy, and the business model.
Lastly, I want your help in looking for gaps in my ideas. Let's identify where the strategy might break, and figure out how to make it work. I'm launching Open Mastery in early 2016. Let's make this dream a reality.
Empirical Methods in Software Engineering - an Overview - alessio_ferrari
A first introductory lecture on empirical methods in software engineering. It includes:
1) Motivation for empirical software engineering studies
2) How to define research questions
3) Measures and data collection methods
4) Formulating theories in software engineering
5) Software engineering research strategies
Find the videos at: https://www.youtube.com/playlist?list=PLSKM4VZcJjV-P3fFJYMu2OhlTjEr9Bjl0
Self-Service Operations: Because Ops Still Happens - Rundeck
Keynote Presentation by Damon Edwards, co-founder of Rundeck, at DevOps Days Austin , May 4, 2017.
Deployment is a solved problem. Sure there is still work to be done, but the DevOps community has successfully proven that anyone can both scale deployment automation and distribute the capability to execute deployments. Now, we have to turn our attention to the next critical constraint: What happens after deployment?
We all know that failure is inevitable and is coming our way at any moment. How do we respond quickly and effectively to those failures? What works when there is just a small set of teams or an isolated system to manage will quickly break down when the organization grows in size and complexity. But on the other hand, what has been commonly practiced in large-scale enterprises is proving to be too cumbersome, too silo dependent, and simply too slow for today's business needs.
How do we rapidly respond to incidents and recover complex interdependent systems while working within an equally complex and interdependent organization? How do Ops teams embrace the DevOps and Agile inspired demand for speed while maintaining quality and control?
This talk examines the trial-and-error lessons learned by some forward-thinking enterprises who are currently streamlining how they:
-Resolve incidents
-Reduce friction between teams
-Divide up operational responsibilities
-Improve the quality of their ongoing operations (and organizational learning)
Similar to Data-Driven Software Mastery @Open Mastery Austin (20)
Custom Healthcare Software for Managing Chronic Conditions and Remote Patient... - Mind IT Systems
Healthcare providers often struggle with the complexities of chronic conditions and remote patient monitoring, as each patient requires personalized care and ongoing monitoring. Off-the-shelf solutions may not meet these diverse needs, leading to inefficiencies and gaps in care. It’s here, custom healthcare software offers a tailored solution, ensuring improved care and effectiveness.
Check out the webinar slides to learn more about how XfilesPro transforms Salesforce document management by leveraging its world-class applications. For more details, please connect with sales@xfilespro.com
If you want to watch the on-demand webinar, please click here: https://www.xfilespro.com/webinars/salesforce-document-management-2-0-smarter-faster-better/
Gamify Your Mind; The Secret Sauce to Delivering Success, Continuously Improv... - Shahin Sheidaei
Games are powerful teaching tools, fostering hands-on engagement and fun. But they require careful consideration to succeed. Join me to explore factors in running and selecting games, ensuring they serve as effective teaching tools. Learn to maintain focus on learning objectives while playing, and how to measure the ROI of gaming in education. Discover strategies for pitching gaming to leadership. This session offers insights, tips, and examples for coaches, team leads, and enterprise leaders seeking to teach from simple to complex concepts.
May Marketo Masterclass, London MUG May 22 2024.pdf - Adele Miller
Can't make Adobe Summit in Vegas? No sweat because the EMEA Marketo Engage Champions are coming to London to share their Summit sessions, insights and more!
This is a MUG with a twist you don't want to miss.
Quarkus Hidden and Forbidden Extensions - Max Andersen
Quarkus has a vast extension ecosystem and is known for its subsonic and subatomic feature set. Some of these features are not as well known, and some extensions are less talked about, but that does not make them less interesting - quite the opposite.
Come join this talk to see some tips and tricks for using Quarkus and some of the lesser known features, extensions and development techniques.
Code reviews are vital for ensuring good code quality. They serve as one of our last lines of defense against bugs and subpar code reaching production.
Yet, they often turn into annoying tasks riddled with frustration, hostility, unclear feedback and lack of standards. How can we improve this crucial process?
In this session we will cover:
- The Art of Effective Code Reviews
- Streamlining the Review Process
- Elevating Reviews with Automated Tools
By the end of this presentation, you'll know how to organize and improve your code review process.
GraphSummit Paris - The art of the possible with Graph Technology - Neo4j
Sudhir Hasbe, Chief Product Officer, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Globus Compute with IRI Workflows - GlobusWorld 2024 - Globus
As part of the DOE Integrated Research Infrastructure (IRI) program, NERSC at Lawrence Berkeley National Lab and ALCF at Argonne National Lab are working closely with General Atomics on accelerating the computing requirements of the DIII-D experiment. As part of the work the team is investigating ways to speedup the time to solution for many different parts of the DIII-D workflow including how they run jobs on HPC systems. One of these routes is looking at Globus Compute as a way to replace the current method for managing tasks and we describe a brief proof of concept showing how Globus Compute could help to schedule jobs and be a tool to connect compute at different facilities.
Utilocate offers a comprehensive solution for locate ticket management by automating and streamlining the entire process. By integrating with Geospatial Information Systems (GIS), it provides accurate mapping and visualization of utility locations, enhancing decision-making and reducing the risk of errors. The system's advanced data analytics tools help identify trends, predict potential issues, and optimize resource allocation, making the locate ticket management process smarter and more efficient. Additionally, automated ticket management ensures consistency and reduces human error, while real-time notifications keep all relevant personnel informed and ready to respond promptly.
The system's ability to streamline workflows and automate ticket routing significantly reduces the time taken to process each ticket, making the process faster and more efficient. Mobile access allows field technicians to update ticket information on the go, ensuring that the latest information is always available and accelerating the locate process. Overall, Utilocate not only enhances the efficiency and accuracy of locate ticket management but also improves safety by minimizing the risk of utility damage through precise and timely locates.
Listen to the keynote address and hear about the latest developments from Rachana Ananthakrishnan and Ian Foster who review the updates to the Globus Platform and Service, and the relevance of Globus to the scientific community as an automation platform to accelerate scientific discovery.
First Steps with Globus Compute Multi-User Endpoints - Globus
In this presentation we will share our experiences around getting started with the Globus Compute multi-user endpoint. Working with the Pharmacology group at the University of Auckland, we have previously written an application using Globus Compute that can offload computationally expensive steps in the researcher's workflows, which they wish to manage from their familiar Windows environments, onto the NeSI (New Zealand eScience Infrastructure) cluster. Some of the challenges we have encountered were that each researcher had to set up and manage their own single-user globus compute endpoint and that the workloads had varying resource requirements (CPUs, memory and wall time) between different runs. We hope that the multi-user endpoint will help to address these challenges and share an update on our progress here.
Prosigns: Transforming Business with Tailored Technology Solutions - Prosigns
Unlocking Business Potential: Tailored Technology Solutions by Prosigns
Discover how Prosigns, a leading technology solutions provider, partners with businesses to drive innovation and success. Our presentation showcases our comprehensive range of services, including custom software development, web and mobile app development, AI & ML solutions, blockchain integration, DevOps services, and Microsoft Dynamics 365 support.
Custom Software Development: Prosigns specializes in creating bespoke software solutions that cater to your unique business needs. Our team of experts works closely with you to understand your requirements and deliver tailor-made software that enhances efficiency and drives growth.
Web and Mobile App Development: From responsive websites to intuitive mobile applications, Prosigns develops cutting-edge solutions that engage users and deliver seamless experiences across devices.
AI & ML Solutions: Harnessing the power of Artificial Intelligence and Machine Learning, Prosigns provides smart solutions that automate processes, provide valuable insights, and drive informed decision-making.
Blockchain Integration: Prosigns offers comprehensive blockchain solutions, including development, integration, and consulting services, enabling businesses to leverage blockchain technology for enhanced security, transparency, and efficiency.
DevOps Services: Prosigns' DevOps services streamline development and operations processes, ensuring faster and more reliable software delivery through automation and continuous integration.
Microsoft Dynamics 365 Support: Prosigns provides comprehensive support and maintenance services for Microsoft Dynamics 365, ensuring your system is always up-to-date, secure, and running smoothly.
Learn how our collaborative approach and dedication to excellence help businesses achieve their goals and stay ahead in today's digital landscape. From concept to deployment, Prosigns is your trusted partner for transforming ideas into reality and unlocking the full potential of your business.
Join us on a journey of innovation and growth. Let's partner for success with Prosigns.
We describe the deployment and use of Globus Compute for remote computation. This content is aimed at researchers who wish to compute on remote resources using a unified programming interface, as well as system administrators who will deploy and operate Globus Compute services on their research computing infrastructure.
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc and I ... - Juraj Vysvader
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc. I didn't get rich from it, but my extensions reached 63K downloads (powering possibly tens of thousands of websites).
Exploring Innovations in Data Repository Solutions - Insights from the U.S. G... - Globus
The U.S. Geological Survey (USGS) has made substantial investments in meeting evolving scientific, technical, and policy driven demands on storing, managing, and delivering data. As these demands continue to grow in complexity and scale, the USGS must continue to explore innovative solutions to improve its management, curation, sharing, delivering, and preservation approaches for large-scale research data. Supporting these needs, the USGS has partnered with the University of Chicago-Globus to research and develop advanced repository components and workflows leveraging its current investment in Globus. The primary outcome of this partnership includes the development of a prototype enterprise repository, driven by USGS Data Release requirements, through exploration and implementation of the entire suite of the Globus platform offerings, including Globus Flow, Globus Auth, Globus Transfer, and Globus Search. This presentation will provide insights into this research partnership, introduce the unique requirements and challenges being addressed and provide relevant project progress.
Developing Distributed High-performance Computing Capabilities of an Open Sci... - Globus
COVID-19 had an unprecedented impact on scientific collaboration. The pandemic and its broad response from the scientific community has forged new relationships among public health practitioners, mathematical modelers, and scientific computing specialists, while revealing critical gaps in exploiting advanced computing systems to support urgent decision making. Informed by our team’s work in applying high-performance computing in support of public health decision makers during the COVID-19 pandemic, we present how Globus technologies are enabling the development of an open science platform for robust epidemic analysis, with the goal of collaborative, secure, distributed, on-demand, and fast time-to-solution analyses to support public health.
Globus Connect Server Deep Dive - GlobusWorld 2024 - Globus
We explore the Globus Connect Server (GCS) architecture and experiment with advanced configuration options and use cases. This content is targeted at system administrators who are familiar with GCS and currently operate—or are planning to operate—broader deployments at their institution.
2. Who Am I?
Janelle Klein: Developer, Consultant, CTO @New Iron
Specialized in Statistical Process Control (SPC) and Supply Chain Optimization from Lean Manufacturing (I <3 cool data)
Continuous Delivery infrastructure, automation strategy & technical mentorship
I’m also a hobbyist Cognitive Scientist
How to Measure the PAIN in Software Development
Janelle Klein, Author of “Idea Flow”: leanpub.com/ideaflow
3. LEARN YOUR WAY TO AWESOME.
Community of Practice for
Data-Driven Software Mastery
Why Are We Here?
4. Why Should You Care?
Problem #1: Organizational Dysfunction
5. Why Should You Care?
Problem #2: Solving the Wrong Problem
6. Why Should You Care?
This talk is about a
REALISTIC STRATEGY
to solve these problems.
7. I’m going to show you:
POINT A
(I’ve been working on this for 5 years)
8. I Need Your Help to Get to:
POINT AWESOME
Point A Point AWESOME?
Iterate!
9. LEARN YOUR WAY TO AWESOME.
Why Are We Here?
Learning Takes Work.
12. Five Years Ago I Read This Book:
How to Build a Learning Organization
13. Five Disciplines of a Learning Organization
Personal Mastery
Mental Models
Shared Vision
Team Learning
Systems Thinking
These disciplines were emergent practice on our team.
14. Five Disciplines of a Learning Organization
Personal Mastery
Mental Models
Shared Vision
Team Learning
Systems Thinking
These disciplines are emergent practice in
software development
20. Five Disciplines of a Learning Organization
Personal Mastery
Mental Models
Team Learning
Systems Thinking
This book is about the art of group problem-solving.
Shared Learning
Shared Better
21. Five Years to Put “Better” into Words
How to Measure the PAIN
in Software Development
Janelle Klein
This is my story.
22. About 8 Years Ago…
We were trying to do all the “right” things.
34. So the Test Automation Began…
Percent Automated: 80%
“Well, at least our regression cycle is faster, right?”
Our regression cycle still took 3-4 weeks!
It was still really PAINFUL…
44. The amount of PAIN was caused by…
[Chart: Likeliness of Unexpected Behavior vs. Cost to Troubleshoot and Repair, with regions labeled High Frequency/Low Impact, Low Frequency/Low Impact, Low Frequency/High Impact, and PAIN]
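The chart's idea (pain as likelihood of unexpected behavior multiplied by the cost to troubleshoot and repair) can be made concrete with a toy calculation. The quadrant numbers below are invented for illustration, not taken from the talk.

```python
# Toy numbers illustrating the slide's chart: expected pain as
# (likeliness of unexpected behavior) x (cost to troubleshoot and repair).
def expected_pain(incidents_per_month: float, hours_per_incident: float) -> float:
    """Expected troubleshooting hours per month for one class of problem."""
    return incidents_per_month * hours_per_incident

# Invented example values for each region of the chart:
quadrants = {
    "High Frequency / Low Impact": expected_pain(20, 0.5),  # 10 hours/month
    "Low Frequency / Low Impact": expected_pain(2, 0.5),    # 1 hour/month
    "Low Frequency / High Impact": expected_pain(2, 40),    # 80 hours/month
}
for region, hours in sorted(quadrants.items(), key=lambda kv: -kv[1]):
    print(f"{region}: {hours:g} hours/month")
```

With numbers like these, the rare-but-expensive failures dominate the total, which is the chart's point: frequency alone is a poor guide to where the pain is.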
45. What causes PAIN?
What Causes Unexpected Behavior (likeliness)?
Semantic Mistakes
Stale Memory Mistakes
Association Mistakes
Bad Input Assumption
Tedious Change Mistakes
Copy-Edit Mistakes
Transposition Mistakes
Failed Refactor Mistakes
False Alarm
What Makes Troubleshooting Time-Consuming (impact)?
Non-Deterministic Behavior
Ambiguous Clues
Lots of Code Changes
Noisy Output
Cryptic Output
Long Execution Time
Environment Cleanup
Test Data Creation
Using Debugger
Most of the pain was caused by human factors.
48. What causes PAIN?
PAIN is a consequence of how we interact with the code.
49. PAIN occurs during the process of understanding and extending the software. Not the Code.
Optimize “Idea Flow”
[Diagram: PAIN sits in the interaction between the developer and the Complex Software]
50. Analyzing the Types of Mistakes
#1 Cause of Mistakes:
Misunderstandings of how the system worked
54. The Waxy Coating Principle
Tests are like a waxy coating poured over the code.
Optimize the signal to noise ratio.
[Diagram: Software before vs. after the coating of tests]
58. My team spent tons of time working on
improvements that didn’t make much difference.
We had tons of automation, but the
automation didn’t catch our bugs.
59. My team spent tons of time working on
improvements that didn’t make much difference.
We had well-modularized code,
but it was still extremely time-consuming to troubleshoot defects.
60. The hard part isn’t solving the problems;
it’s identifying the right problems to solve.
“What are the specific problems
that are causing the team’s pain?”
67. FRICTION is a Function of Increased Difficulty
The Difficulty of Doing Our Jobs
[Chart: Cost & Risk rising with the Difficulty of Work, up against Human Limitations]
68. The Rhythm of “Idea Flow”
Write a little code.
Work out the kinks.
Write a little code.
Work out the kinks.
Write a little code.
Work out the kinks.
78. Reading Visual Indicators in Idea Flow Maps
Similar to how an EKG helps doctors diagnose heart problems...
[Diagram: EKG trace with labels Left Atrium, Left Ventricle, Right Ventricle, Right Atrium: “What’s causing this pattern?”]
79. ...Idea Flow Maps help developers diagnose software problems.
Reading Visual Indicators in Idea Flow Maps
[Diagram: the developer as a Problem-Solving Machine]
95. Case Study: Huge Mess with Great Team
The Team’s Improvement Focus: Increasing unit test coverage by 5%
1000 hours/month:
1. Test Data Generation
2. Merging Problems
3. Repairing Tests
The Biggest Problem: ~700 hours/month generating test data
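A tally like the case study's (finding that roughly 700 of 1000 friction hours per month came from one category) amounts to summing logged hours by category and taking the largest. The log entries below are invented to echo the slide's rough proportions.

```python
from collections import Counter

# Invented friction log echoing the case study's rough proportions;
# each entry is (category, hours recorded in one sampling period).
friction_log = [
    ("test data generation", 350), ("test data generation", 350),
    ("merging problems", 120), ("merging problems", 60),
    ("repairing tests", 70), ("repairing tests", 50),
]

totals = Counter()
for category, hours in friction_log:
    totals[category] += hours  # sum hours per category

biggest, biggest_hours = totals.most_common(1)[0]
print(f"Biggest problem: {biggest} (~{biggest_hours} hours/month)")
# Biggest problem: test data generation (~700 hours/month)
```

The point of the exercise is the contrast with the team's stated focus: a simple category tally can show the chosen improvement (test coverage) barely touching the dominant cost.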
97. “What’s the biggest opportunity for improvement?”
“The awful email
template engine code!”
Our biggest problem
The Retrospective
98. “What’s the biggest opportunity for improvement?”
“Fill in missing
unit tests!”
Our biggest problem
The Retrospective
99. “What’s the biggest opportunity for improvement?”
“I know how to improve
database performance!”
Our biggest problem
The Retrospective
100. “What’s the biggest opportunity for improvement?”
“Let’s improve maintainability
of our test framework!”
Our biggest problem
The Retrospective
101. “What’s the biggest opportunity for improvement?”
Just because a problem comes to mind,
doesn’t mean it’s an important problem to solve.
Our biggest problem
The Retrospective
102. “What’s the biggest opportunity for improvement?”
Our biggest problem
What do I feel the
most intensely about?
Daniel Kahneman
Thinking Fast and Slow
The Retrospective
103. “What’s the biggest opportunity for improvement?”
“The awful email
template engine code!”
Recency Bias
Our biggest problem
The Retrospective
104. “What’s the biggest opportunity for improvement?”
Guilt Bias
“Fill in missing
unit tests!”
Our biggest problem
The Retrospective
105. “What’s the biggest opportunity for improvement?”
“I know how to improve
database performance!”
Known Solution Bias
Our biggest problem
The Retrospective
106. “What’s the biggest opportunity for improvement?”
Sunk Cost Bias
“Let’s improve maintainability
of our test framework!”
Our biggest problem
The Retrospective
108. Case Study: From Monolith to Microservices
18 months after a Micro-Services/Continuous Delivery rewrite.
[Idea Flow Maps showing Troubleshooting, Progress, and Learning bands over timelines of 28:15 and 12:23]
40-60% of dev capacity on “friction”
111. The Classic Story of Project Failure
Problems get deferred
Builds start breaking
Releases get chaotic
Productivity slows to a crawl
Developers begging for time
It’s never enough
Project Meltdown
PAIN
112. The Cost of Escalating Risk
0%
100%
Release 1 Release 2 Release 3
Troubleshooting
Progress
Learning
Percentage Capacity spent on Troubleshooting (red) and Learning (blue)
(extrapolated from samples)
113. The Cost of Escalating Risk: learning is front-loaded at the start of each release, while the team figures out what to do.
114. The Cost of Escalating Risk: in the rush before the deadline, validation is deferred.
115. The Cost of Escalating Risk: pain builds, and baseline friction keeps rising.
116. The Cost of Escalating Risk: chaos reigns, and unpredictable work stops fitting in the timebox.
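The capacity charts above are described as “extrapolated from samples.” As a minimal sketch of how such an extrapolation might work (the function and category names are illustrative assumptions, not the talk’s actual tooling):

```python
from collections import defaultdict

def capacity_split(samples):
    """samples: (category, minutes) pairs from a sampled work timeline.
    Returns the percentage of total capacity spent in each category."""
    totals = defaultdict(float)
    for category, minutes in samples:
        totals[category] += minutes
    grand_total = sum(totals.values())
    return {cat: round(100 * mins / grand_total, 1)
            for cat, mins in totals.items()}

# One sampled iteration: half the capacity goes to troubleshooting.
samples = [("troubleshooting", 150), ("learning", 90), ("progress", 60)]
print(capacity_split(samples))  # {'troubleshooting': 50.0, 'learning': 30.0, 'progress': 20.0}
```

Sampling a few representative days per release and averaging the splits would yield the per-release percentages plotted in the chart.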
117. This is the biggest cause of FAILURE in our industry. Have you struggled with these problems in your organization?
120. How can we reduce the impact of bad architecture assumptions (Assumption Risk)? The cost of bad architecture decisions in the microservices world is EXTREMELY HIGH.
121. If you’ve got beautiful code because you pushed all the PAIN to your clients… your code SUCKS. How do we know what the client’s experience will be like?
141. Team Visibility Platform: IFM data collection feeds raw data through a timeline abstraction and a project abstraction, tagged with a #Private taxonomy.
1. Record an Idea Flow Map
2. Review/Annotate an Idea Flow Map
3. Identify the Biggest Pains
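The three steps above can be sketched as a tiny data model. This is a hypothetical illustration assuming a simple event-timeline representation; the class and field names are my own, not the platform’s API:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Band:
    """One timeline segment: progress, troubleshooting, or learning."""
    kind: str          # "progress" | "troubleshooting" | "learning"
    minutes: int
    note: str = ""     # annotation added during review (step 2)

@dataclass
class IdeaFlowMap:
    task: str
    bands: List[Band] = field(default_factory=list)   # step 1: recorded timeline
    tags: List[str] = field(default_factory=list)     # #Private taxonomy tags

    def biggest_pain(self) -> Optional[Band]:
        """Step 3: the longest friction (non-progress) band."""
        friction = [b for b in self.bands if b.kind != "progress"]
        return max(friction, key=lambda b: b.minutes, default=None)

# Record, annotate, then identify the biggest pain.
ifm = IdeaFlowMap(task="fix-email-template")
ifm.bands += [Band("learning", 20), Band("progress", 45),
              Band("troubleshooting", 90, note="flaky integration test")]
ifm.tags.append("#flaky-test")
print(ifm.biggest_pain().note)  # flaky integration test
```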
142. Community Analytics Platform: each team’s visibility data, tagged with its #Private taxonomy, passes through an anonymizer before flowing into Community Analytics under a #Shared taxonomy.
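As a minimal sketch of what the anonymizer step might do (the tag names, mapping, and function are illustrative assumptions, not the platform’s actual implementation): strip team identifiers and translate #Private taxonomy tags into the community’s #Shared taxonomy.

```python
import hashlib

# Hypothetical mapping from one team's #Private tags to #Shared tags.
SHARED_TAXONOMY = {
    "#our-flaky-selenium-suite": "#flaky-test",
    "#billing-module-spaghetti": "#high-coupling",
}

def anonymize(record, team_salt):
    """Replace the team identifier with a salted hash; translate private tags."""
    token = hashlib.sha256((team_salt + record["team"]).encode()).hexdigest()[:12]
    return {
        "team": token,  # not traceable back without the team's salt
        "pain_minutes": record["pain_minutes"],
        "tags": [SHARED_TAXONOMY.get(t, "#other") for t in record["tags"]],
    }

private_record = {"team": "Acme Payments", "pain_minutes": 90,
                  "tags": ["#our-flaky-selenium-suite"]}
print(anonymize(private_record, team_salt="s3cret"))
```

The key design property is one-way flow: community analytics can aggregate pain patterns across teams without any record being attributable to a specific team.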
143. Community Analytics Platform: a shared taxonomy (slide examples: “Tiger,” “Bear”) gives the community a common focus. Requirement: analyze patterns and codify lessons learned.
145. Organizational Mastery Platform: team servers feed an “Idea Flow Factory” (supply chain cost model, supply chain optimization) and a “Customer Factory” revenue model (Ash Maurya), combined via throughput accounting (effort vs. revenue).
146. Community Problem #3: solve the pain of a lack of skills. “We need a Sr Developer!” — and the company says “Nope” to candidate after candidate before finally reaching a “Yay!”
147. Open Certification Platform (Crowd-Sourced Peer Mentorship): decision assessment, archetype examples and decision-making tests, mentorship training, and crowd-sourced peer video assessment, all built on the #Shared taxonomy.
151. Get together over lunch and share lessons learned: a Leaders Circle and a Developers Circle.
152. Open Mastery Circle Meetings: a Circle Leader guides Circle Members through observation and questions.
* Focus: What’s the problem to solve?
* What: Ask questions about the facts
* Why: Break down the causes
* How: Strategies to reduce the pain
* Codify: Lessons learned into #Patterns
Together, these steps form the Explicit Mastery Loop.
153. The Explicit Mastery Loop. Target: optimize the rate of Idea Flow. Input: constraints. Output: pain signal. A short-term loop (Focus, What, Why, How) nests inside a long-term loop that improves the quality of our decisions.
154. Target - the direction of “better”: optimize the rate of Idea Flow.
155. Input - the constraints that limit our short-term choices…
156. Output - the pain signal we’re trying to improve.
157. Step 1. Focus - focus on the biggest pain.
158. Step 2. What - identify the specific symptoms.
159. Step 3. Why - break down cause and effect.
160. Step 4. How - run experiments to learn what works.
161. Step 5. Codify - codify lessons learned and modify behavior.
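As one way to make the loop concrete, the five steps can be sketched as a simple process record. The step names come from the slides; everything else here (function name, session fields, example inputs) is an illustrative assumption:

```python
# Step names taken from the Explicit Mastery Loop slides.
LOOP_STEPS = [
    ("Focus", "Focus on the biggest pain"),
    ("What", "Identify the specific symptoms"),
    ("Why", "Break down cause and effect"),
    ("How", "Run experiments to learn what works"),
    ("Codify", "Codify lessons learned and modify behavior"),
]

def run_loop(pain_signal, constraints):
    """One short-term pass of the loop against the loudest pain signal."""
    session = {
        "target": "Optimize the rate of Idea Flow",
        "input": constraints,       # what limits our short-term choices
        "output": pain_signal,      # the signal we're trying to improve
        "log": [],
    }
    for name, description in LOOP_STEPS:
        session["log"].append(f"{name}: {description}")
    return session

session = run_loop(pain_signal="40% capacity lost to troubleshooting",
                   constraints=["release deadline", "two developers"])
for line in session["log"]:
    print(line)
```

Repeating the pass release after release, and feeding the codified lessons back into the next Focus step, is what turns the short-term loop into the long-term one.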