The document discusses strategies for measuring and reducing the "pain", or friction, in software development projects. It describes tracking sources of unexpected behavior and long troubleshooting times in order to identify the problems causing the most pain. Common causes include human error and factors that make code harder to understand over time. The document advocates measuring and categorizing specific pain points, identifying the largest problems, acting as a "risk translator" to communicate issues to managers, and restructuring the organization to improve feedback when problems arise.
Top 5 Reasons Why Improvement Efforts Fail (Arty Starr)
This is my story of lessons learned on why our improvement efforts fail... I had a great team. We were disciplined about best practices and spent tons of time on improvements. Then I watched my team slam into a brick wall. We brought down a fully-ramped semiconductor factory three times in a row, then couldn't ship again for a year.
Despite our best efforts with CI, unit testing, design reviews, and code reviews, we lost our ability to understand the system. I discovered our mistakes weren't caused by technical debt. Most of the problems were caused by human factors. We failed to improve because we didn't solve the right problems.
To learn, we need a feedback loop. To improve, we need a feedback loop with a goal.
There are five ways our project feedback loop can break:
* **Broken Target** - Our definition of "better" is broken.
* **Broken Visibility** - We don't see the pain, so we take no action.
* **Broken Clarity** - We don't understand what's causing the pain.
* **Broken Awareness** - We don't know how to avoid the pain.
* **Broken Focus** - We see the pain, but our attention is focused on something else.
Find out how to repair the broken feedback loops on your software project.
Since the dawn of software development, we've struggled with a huge disconnect between the management world and the engineering world. We try to explain our problems in terms of "technical debt", but somehow the message seems to get lost in translation, and we drive our projects into the ground, over and over again.
What if we could detect the earliest indicators of a project going off the rails, and had data to convince management to take action? What if we could bridge this communication gap once and for all?
In this session, we'll focus on a key paradigm shift for how we can measure the human factors in software development, and translate the "friction" we experience into explicit risk models for project decision-making.
What if we could measure the indirect costs of pain building up on a software project? What if we could measure the loss of productivity, the escalating costs and risks, and could steer our projects with a data-driven feedback loop?
By measuring the friction in “Idea Flow”, the flow of ideas between the developer and the software, we can create a data-driven feedback loop for learning what works. Rather than making decisions based on anecdote and gut feel, we can start driving our improvement decisions with real data.
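As a rough illustration of the measurement idea, friction in Idea Flow can be pictured as a log of confusion episodes and their durations. This is a hedged sketch, not the actual Idea Flow tooling; the names `FrictionEvent` and `IdeaFlowLog` are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class FrictionEvent:
    """One stretch of unexpected confusion during development (hypothetical model)."""
    task: str
    cause: str       # e.g. "flaky test", "unfamiliar module"
    minutes: float   # time spent diagnosing and resolving the confusion

@dataclass
class IdeaFlowLog:
    """A running record of friction events for one developer or team."""
    events: list = field(default_factory=list)

    def record(self, task: str, cause: str, minutes: float) -> None:
        self.events.append(FrictionEvent(task, cause, minutes))

    def total_friction(self) -> float:
        """Total time lost to confusion, the raw signal behind the feedback loop."""
        return sum(e.minutes for e in self.events)
```

With data in this shape, "anecdote and gut feel" can be replaced by simple aggregates: total friction per week, per task, or per cause.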
Data-Driven Software Mastery is about learning and improving faster than ever.
Find out how you can:
• Identify the biggest causes of productivity loss on your software project.
• Avoid spending tons of time solving the wrong problems
• Collaborate with other industry professionals in the art of data-driven software mastery
Idea Flow gives us a universal language for describing our experience, so we can share the patterns and principles of what works. With a feedback loop, we can run real experiments!
Idea Flow turns the development community into a scientific community.
There’s a huge disconnect between the business world and the engineering world that drives our software projects into the ground. We rewrite our software over and over again, not because we lack the engineering skills to build great software, but because we fail to communicate, make decisions in ignorance, and don’t adapt when our current strategy is obviously failing.
What if we could measure the indirect costs of pain building up on a software project? What if we could measure the loss of productivity, the escalating costs and risks, and could steer our projects with a data-driven feedback loop?
Visibility changes everything. With visibility, we can bridge the gap between the business world and the engineering world, and get everyone pulling the same direction.
Find out how you can:
1. Identify the biggest causes of productivity loss on your software project
2. Translate the world of developer pain into explicit costs and risks
3. Collaborate with other industry professionals in the art of data-driven software mastery
Let's break down the challenges and learn our way to success, one small victory at a time.
Speaker: Janelle Klein
Janelle is an NFJS Tour Speaker and author of the book Idea Flow: How to Measure the PAIN in Software Development, a modern strategy for systematically optimizing software productivity with a data-driven feedback loop.
Once we make our pain visible with Idea Flow Mapping, we've got a data-driven feedback loop to learn what works. Objective data enables us to do something we've never been able to do before in our industry: science. This talk is about how to do science in software development.
The Lean Startup community has pioneered the art of everyday science to reduce the risk of building the wrong product by running customer experiments to learn what works. By mapping these same basic scientific principles to technical risk management, we can run experiments to learn our way to AWESOME!
In this talk we'll cover:
• How science is used in the Lean Startup world to run business model experiments
• How science is used in the Lean Manufacturing world to support process control & supply chain optimization
• How we can apply science in software development to systematically learn what works
If you want to start learning and improving faster than ever before, you won't want to miss this talk.
What makes software development complex isn't the code, it's the humans. The most effective way to improve our capabilities in software development is to better understand ourselves.
In this talk, I'll introduce a conceptual model for human interaction, identity, culture, communication, relationships, and learning based on the foundational model of Idea Flow. If you were to write a simulator to describe the interaction of humans, this talk would describe the architecture.
Learn how to understand the humans on your team and fix the bugs in communication, by thinking about your teammates like code!
I'm not a scientist or a psychologist. These ideas are based on a combination of personal experience, reading lots of cognitive science books, and a couple years of running experiments on developers. As I struggled through the challenges of getting a software concept from my head to another developer's head (interpersonal Idea Flow), I learned a whole lot about human interaction.
As software developers, we have to work together, think together, and solve problems together to do our jobs. Code? We get it. Humans? WTF?!
Fortunately, humans are predictably irrational, predictably emotional, and predictably judgmental creatures. Of course those pesky humans will always do a few unexpected things, but once we know the algorithm for peace and harmony among humans, we can start debugging the communication problems on our team.
Since the dawn of software development, we've struggled with a huge disconnect between the management world and the engineering world. We try to explain our problems in terms of “technical debt”, but somehow the message seems to get lost in translation, and we drive our projects into the ground, over and over again.
What if we could detect the earliest indicators of a project going off the rails, and had data to convince management to take action? What if we could bridge this communication gap once and for all?
In this session, we'll focus on a key paradigm shift for how we can measure the human factors in software development, and translate the “friction” we experience into explicit risk models for project decision-making.
What if we could measure the indirect costs of pain building up on a software project? What if we could measure the effects of learning curves, collaboration pain, and problems building up in the code?
We could:
• Identify the highest-leverage opportunities for improvement
• Make the case to management that budget should be allocated for a solution
• Lead the organization in making better decisions, with a data-driven feedback loop to guide the way
Several years ago, I stumbled into a solution for measuring the growing “friction” in developer experience. Visibility turned my world upside-down.
We've been trying to explain the pain of Technical Debt for generations, but we've never been able to measure it. Visibility introduces a whole new world of possibilities.
In this talk, I'll show you what I'm measuring, how exactly I'm measuring it, then we'll talk through the implications for our teams, our organizations, and our industry.
We can identify the highest leverage improvement opportunities and steer our projects with a data-driven feedback loop.
We can break down the "wall of ignorance" between developers and management by defining an explicit language for managing technical risk.
We can teach the art of software development with a data-driven feedback loop and codify our knowledge into sharable decision principles.
We can revolutionize our business accounting methods to take the pain of software development into account, so the costs and risks are visible at the highest levels of the organization.
We can conquer the challenges across the software industry by working together, learning together, and sharing our knowledge with the world.
With visibility, we can start a revolution in data-driven learning.
Identify Development Pains and Resolve Them with Idea Flow (TechWell)
With the explosion of new frameworks, a mountain of automation, and our applications distributed across hundreds of services in the cloud, the level of complexity in software development is growing at an insane pace. With increased complexity comes increased costs and risks. When diagnosing unexpected behavior can take days, weeks, or sometimes months, all while our release is on the line, our projects plunge into chaos. In the invisible world of software development, how do we identify what's causing our pain? How do we escape the chaos?
Janelle Klein presents a novel approach to measuring the chaos, identifying the causes, and systematically driving improvement with a data-driven feedback loop. Rather than measuring the problems in the code, Janelle suggests measuring the "friction in Idea Flow", the time it takes a developer to diagnose and resolve unexpected confusion, which disrupts the flow of progress during development. With visibility of the symptoms, we can identify the cause, whether it's bad architecture, collaboration problems, or technical debt. Janelle discusses how to measure Idea Flow, why it matters, and the implications for our teams, our organizations, and our industry.
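Once friction episodes are recorded with their diagnosis-and-resolution times, finding the biggest cause of pain reduces to a ranking over the recorded causes. Here is a minimal sketch of that step; the function name and the sample events are invented for illustration, not taken from the talk.

```python
from collections import Counter

def biggest_pain(events):
    """Rank causes of friction by total diagnosis-and-resolution time.

    `events` is an iterable of (cause, minutes) pairs; returns the causes
    sorted from most to least total friction.
    """
    totals = Counter()
    for cause, minutes in events:
        totals[cause] += minutes
    return totals.most_common()

# A hypothetical week of friction events:
events = [
    ("flaky integration tests", 90),
    ("unclear legacy module", 240),
    ("flaky integration tests", 45),
    ("environment drift", 30),
]
# biggest_pain(events)[0] -> ("unclear legacy module", 240)
```

The point of the ranking is the paradigm shift described above: improvement effort goes where the measured pain is largest, not where intuition says it should.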
Bringing Science to Software Development (Arty Starr)
Twenty-five years ago, Peter Senge wrote "The Fifth Discipline", considered the seminal text on how to build a learning organization. With such obvious benefits, and the recipe for success in hand, why don't we see more learning organizations?
As Ash Maurya pointed out in his new book, Scaling Lean, “The goal isn't learning, the goal is traction.” Without a process that helps us turn learning into momentum, a culture of learning gets us nowhere. Without a strategy to overcome the challenges of distributed decision-making, we still make most decisions in ignorance.
Let's dust off these old ideas in light of all the discoveries we've made over the last decade in Lean Startup, Agile, and Continuous Delivery.
What are the critical elements that are missing in our organizations that prevent us from building a learning organization? What are the key obstacles to success?
In this talk, we'll break down the concept of a learning organization into discrete system components and analyze the requirements like engineers. Then we'll discuss a strategy for overcoming the challenges and iteratively transforming our organizations into learning organizations. From the building blocks of culture to the design of organizational architecture, we'll build a roadmap for learning how to learn together.
Want to learn your way to being an AWESOME company? Learn how to become a learning organization.
The Rationale for Continuous Delivery by Dave Farley (Bosnia Agile)
The production of software is a complex, collaborative process that stretches our ability as human beings to cope with its demands.
Many people working in software development spend their careers without seeing what good really looks like.
Our history is littered with inefficient processes creating poor quality output, too late to capitalise on the expected business value. How have we got into this state? How do we get past it? What does good really look like?
Continuous Delivery has changed the economics of software development for some of the biggest companies in the world, whatever the nature of their software. Find out how and why.
What We Learned from Three Years of Sciencing the Crap Out of DevOps (SeniorStoryteller)
This document summarizes research from three years of studying DevOps practices. Some key findings include:
- Continuous delivery practices like reducing lead time and increasing release frequency are correlated with higher IT performance. However, tools like configuration management tools are not correlated.
- Ineffective testing practices include developers not creating and maintaining their own tests, and test environments that are difficult to reproduce. Tests created primarily by QA did not show the same benefit as developer-created tests.
- While managing work-in-progress is thought to be important, the correlation between WIP and IT performance is actually negligible.
- DevOps culture and practices around information sharing and collaboration are valid constructs that predict both IT and organizational performance, but assumptions still need to be validated against data.
Testing for Cognitive Bias in AI Systems (Peter Varhol)
The document discusses how machine learning systems can produce biased results based on issues with the training data used, and provides examples of how biases have emerged in commercial AI systems. It then outlines approaches for testing machine learning systems to identify potential biases, including understanding the training data, defining objective success criteria, and testing with diverse edge cases. The challenges of addressing biases that emerge from limitations in the data or human decisions are also examined.
Dealing with Estimation, Uncertainty, Risk, and Commitment (TechWell)
Here are three key uncertainties that are often important for software projects:
1. Requirements uncertainty - Unclear or changing requirements can introduce significant risk. Getting requirements right up front reduces later changes.
2. Technical uncertainty - The complexity of the technical solution, unproven technologies, and integration risks can all increase uncertainty. Spikes or prototypes help reduce technical risk.
3. Resource uncertainty - Not knowing if the necessary skills and staff will be available when needed can jeopardize a project. Ensuring resources are committed reduces this risk.
Focusing on these top uncertainties early helps establish a realistic plan and reduces the risk of cost and schedule overruns. Other risks, such as market changes or third-party risks, are also important to evaluate.
How To (Not) Open Source - JavaZone, Oslo 2014 (gdusbabek)
Releasing an open source project while maintaining a shipping product is hard! Different behaviors, attitudes and actions can help or hinder your cause; and they are not always obvious.
The Blueflood distributed metrics engine was released as open source software by Rackspace in August 2012. In the succeeding months the team had to strike a manageable balance between the challenges of growing a community, being good open source stewards, and maintaining a shipping product for Rackspace. Find out what worked, what did not work, and the lessons that can be applied as you endeavor to take your project out into the open.
In this presentation you will learn about strategies for releasing open source products, pitfalls to avoid, and the potential benefits of moving more of your development out in the open.
We have also made a few realizations about the community growing up around metrics. It is still young, and there are problems that come with that youth. I'll talk about some things we can do to make a better software ecosystem.
This document discusses the rationale for adopting continuous delivery practices in software development. It summarizes several studies that found high rates of project failures and benefits not being realized from traditional development approaches. Continuous delivery is presented as an approach that can help address these issues by focusing on rapid, reliable, and automated software releases. Case studies are provided of organizations like Google, Amazon, and HP that have successfully implemented continuous delivery at large scales. Adopting these practices is associated with benefits like increased throughput, reliability, innovation, and business performance.
RecSysOps: Best Practices for Operating a Large-Scale Recommender System (Ehsan38)
Ensuring the health of a modern large-scale recommendation system is a very challenging problem. To address this, we need to put in place proper logging, sophisticated exploration policies, develop ML-interpretability tools, or even train new ML models to predict and detect issues in the main production model. In this talk, we shine a light on this less-discussed but important area and share some of the best practices, called RecSysOps, that we've learned while operating our increasingly complex recommender systems at Netflix. RecSysOps is a set of best practices for identifying issues and gaps, as well as diagnosing and resolving them, in a large-scale machine-learned recommender system. RecSysOps helped us to 1) reduce production issues, 2) increase recommendation quality by identifying areas of improvement, and 3) bring new innovations to our members faster, by letting us spend more of our time on innovation and less on debugging and firefighting.
https://dl.acm.org/doi/10.1145/3460231.3474620
This document outlines 101 weird ideas that companies can implement to make their workplace more fun, engaging, and creative for employees. Some of the ideas include having an "open meeting policy" where employees can optionally observe meetings, organizing a "family day" for employees to bring their families to work, creating a "wall of fame" to recognize employees, allowing flexible work schedules like "firefighters hours" of working 10 hours a day 4 days a week, and holding "customer clinics" where customers are invited to be trained. The overall message is that embracing weird, creative ideas can help companies attract and retain top talent in today's competitive environment.
Troublefree Troubleshooting - SPS JHB 2019 (Ian Campbell)
The document provides instructions for attendees of an event hosted by SPS Events in Johannesburg, South Africa. It notes that session schedules may not be printed for all attendees and can be found by session room doors or online. Attendees are asked to provide feedback on sessions and to stay for the prize giving at the end of the day. They are also encouraged to interact with sponsors and speakers, take selfies with speakers to enter a photo competition, and share their learnings on social media using the #SPSJHB hashtag.
A Rapid Introduction to Rapid Software Testing (TechWell)
This document provides a summary of a presentation on Rapid Software Testing. The presentation was given by Michael Bolton of DevelopSense and covered the methodology and mindset of rapid software testing. It emphasizes testing software expertly under uncertainty and time pressure. The presentation defines rapid testing as testing more quickly and less expensively while still achieving excellent results. It compares rapid testing to other approaches like exhaustive, ponderous, and slapdash testing. The presentation also discusses principles of rapid testing, how to recognize problems quickly using heuristics, and testing rapidly to fulfill the mission of testing.
Good project from scratch - from developer's point of view (Paweł Lewtak)
Slides for my talk at PHPExperience 2018 in São Paulo.
It's about 10 things I believe are important in order to have a successful long-term IT project.
This document appears to be a slide presentation on DevOps practices and culture. Some key points discussed include:
- High-performing IT organizations are twice as likely to exceed goals in areas like profitability and customer satisfaction.
- DevOps focuses on continuous delivery, quality, lean processes, effective collaboration, and a culture of learning from failures.
- Culture can be measured and influenced by providing employees the tools and training to do their jobs successfully.
- Adopting DevOps practices may lead to improved lead times, release frequency, change fail rates, and service restoration times.
The document describes an audit of a company's DevOps practices. It initially presents a negative scenario where developers deploy code without approval. However, it then shifts to describe positive controls the company has implemented, such as automated code testing and peer reviews. The document discusses how to engage audit, security and compliance functions in a collaborative manner from the beginning of a project rather than as obstructors at the end. It emphasizes the importance of integrating non-functional requirements like security through automation.
The document discusses root cause analysis (RCA) and various tools used to perform RCA, including the 5 Whys technique and fishbone (or cause-and-effect) diagrams. It provides examples of how to apply these tools to identify underlying causal factors. The 5 Whys involves repeatedly asking "why" to trace effects back to their root causes, while fishbone diagrams graphically display possible causes arranged by category to help reveal the path to the root problem. Performing thorough RCA focuses on systems and processes rather than individuals, digs deeper through repeated questioning, and aims to identify causes that can create sustainable solutions.
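The 5 Whys chain described above can be sketched as a tiny traversal: start from the symptom and keep asking "why?" until no deeper cause is known or five levels are reached. This is an illustrative sketch only; the function and the sample causal chain are invented, not taken from the source deck.

```python
def five_whys(symptom, answer_why):
    """Walk a 5 Whys chain from a symptom toward a root cause.

    `answer_why` maps an effect to its known deeper cause, or returns
    None when no deeper cause is known. Stops after five levels.
    """
    chain = [symptom]
    for _ in range(5):
        deeper = answer_why(chain[-1])
        if deeper is None:
            break
        chain.append(deeper)
    return chain

# Hypothetical causal knowledge for illustration:
causes = {
    "release failed": "migration script crashed",
    "migration script crashed": "schema drifted from source control",
    "schema drifted from source control": "hotfixes applied directly in production",
}
# five_whys("release failed", causes.get) ends at the process-level cause,
# not at an individual to blame — the point of root cause analysis.
```

Note how the chain bottoms out at a process problem ("hotfixes applied directly in production") rather than a person, matching the RCA emphasis on systems over individuals.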
Video and slides synchronized; mp3 and slide download available at http://bit.ly/2HkIr87.
Justin Becker focuses on the jerk part of “brilliant jerk”. He talks about the Emotional Intelligence and why it matters in developing and operating software systems effectively. He provides opinions and perspective from his experience as an engineer and then manager at Netflix and answers the questions: “what is and why we can’t afford to have a brilliant jerk” and “Am I a brilliant jerk?”. Filmed at qconsf.com.
Justin Becker is an engineering manager for the Playback API team at Netflix. He has worked at Netflix for seven years, the first five years as an engineer. His focus is building scalable, high-availability services running in a cloud environment.
Mythbusting Software Estimation - By Todd LittleSynerzip
In this webinar, some of the myths that will be explored include:
Historical Estimation Accuracy
Relative Estimation
The Cone of Uncertainty
Velocity
Scope Creep
Wisdom of Crowds
Read more at https://www.synerzip.com/webinar/mythbusting-software-estimation-webinar-october-2014/
The document discusses various types of anti-patterns that can occur in software development, including methodological, coding, object-oriented design, software design, project management, user interface, and organizational anti-patterns. It provides examples of specific anti-patterns like copy-paste programming, magic numbers, big ball of mud architecture, death march projects, and click-here links. The goals are to help recognize these ineffective patterns, understand their root causes like haste and ignorance, and implement better solutions.
This document discusses NES Global Talent's expertise in providing research and consultancy services to clients. It offers industry research, management of legislation and contractual risk, and recruitment process consultancy. Services include producing bespoke reports on staffing trends, salaries, talent availability and projects. Consultants have expertise placing over 5,000 staff annually globally. The company helps clients understand legislation, ensure compliance, and manage risk. It also advises on streamlining recruitment and onboarding processes. Case studies provide examples of tailored solutions and positive outcomes for clients.
Identify Development Pains and Resolve Them with Idea FlowTechWell
With the explosion of new frameworks, a mountain of automation, and our applications distributed across hundreds of services in the cloud, the level of complexity in software development is growing at an insane pace. With increased complexity comes increased costs and risks. When diagnosing unexpected behavior can take days, weeks, or sometimes months, all while our release is on the line, our projects plunge into chaos. In the invisible world of software development, how do we identify what's causing our pain? How do we escape the chaos? Janelle Klein presents a novel approach to measuring the chaos, identifying the causes, and systematically driving improvement with a data-driven feedback loop. Rather than measuring the problems in the code, Janelle suggests measuring the "friction in Idea Flow", the time it takes a developer to diagnose and resolve unexpected confusion, which disrupts the flow of progress during development. With visibility of the symptoms, we can identify the cause—whether it's bad architecture, collaboration problems, or technical debt. Janelle discusses how to measure Idea Flow, why it matters, and the implications for our teams, our organizations, and our industry.
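Janelle's "friction in Idea Flow" measure, the time lost to diagnosing and resolving unexpected confusion, could be aggregated roughly as follows. The event categories and durations are assumptions for illustration, not her actual tooling:

```python
from collections import defaultdict

# Each friction event: (cause category, minutes lost to confusion or
# troubleshooting). The categories and numbers are invented.
events = [
    ("unfamiliar architecture", 90),
    ("flaky test environment", 45),
    ("unfamiliar architecture", 120),
    ("misleading variable names", 30),
]

def biggest_pain(friction_events):
    """Total friction minutes per cause, largest first."""
    totals = defaultdict(int)
    for cause, minutes in friction_events:
        totals[cause] += minutes
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

ranking = biggest_pain(events)
# ranking[0] -> ("unfamiliar architecture", 210): the top improvement target
```

The point of ranking by measured time, rather than by gut feel, is exactly the talk's thesis: the biggest cause of pain (here, architecture, not technical debt) may not be what the team assumed.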
Bringing Science to Software DevelopmentArty Starr
Twenty-five years ago, Peter Senge wrote “The Fifth Discipline”, considered the seminal text for how to build a learning organization. With obvious benefits, and the recipe needed for success, why don't we see more learning organizations? That was twenty-five years ago!
As Ash Maurya pointed out in his new book, Scaling Lean, “The goal isn't learning, the goal is traction.” Without a process that helps us turn learning into momentum, a culture of learning gets us nowhere. Without a strategy to overcome the challenges of distributed decision-making, we still make most decisions in ignorance.
Let's dust off these old ideas in light of all the discoveries we've made over the last decade in Lean Startup, Agile, and Continuous Delivery.
What are the critical elements that are missing in our organizations that prevent us from building a learning organization? What are the key obstacles to success?
In this talk, we'll break down the concept of a learning organization into discrete system components and analyze the requirements like engineers. Then we'll discuss a strategy for overcoming the challenges and iteratively transforming our organizations into learning organizations. From the building blocks of culture to the design of organizational architecture, we'll build a roadmap for learning how to learn together.
Want to learn your way to being an AWESOME company? Learn how to become a learning organization.
The Rationale for Continuous Delivery by Dave FarleyBosnia Agile
The production of software is a complex, collaborative process that stretches our ability as human beings to cope with its demands.
Many people working in software development spend their careers without seeing what good really looks like.
Our history is littered with inefficient processes creating poor quality output, too late to capitalise on the expected business value. How have we got into this state? How do we get past it? What does good really look like?
Continuous Delivery changes the economics of software development for some of the biggest companies in the world. Whatever the nature of your software development, find out how and why.
What We Learned from Three Years of Sciencing the Crap Out of DevOpsSeniorStoryteller
This document summarizes research from three years of studying DevOps practices. Some key findings include:
- Continuous delivery practices like reducing lead time and increasing release frequency are correlated with higher IT performance. However, tools like configuration management tools are not correlated.
- Ineffective testing practices include developers not creating tests or environments being difficult to reproduce. But having QA primarily create tests is not ineffective.
- While managing work-in-progress is thought to be important, the correlation between WIP and IT performance is actually negligible.
- DevOps culture and practices around information sharing and collaboration are valid constructs that are predictive of both IT and organizational performance. But data testing is needed to validate assumptions.
Testing for cognitive bias in AI systemsPeter Varhol
The document discusses how machine learning systems can produce biased results based on issues with the training data used, and provides examples of how biases have emerged in commercial AI systems. It then outlines approaches for testing machine learning systems to identify potential biases, including understanding the training data, defining objective success criteria, and testing with diverse edge cases. The challenges of addressing biases that emerge from limitations in the data or human decisions are also examined.
Dealing with Estimation, Uncertainty, Risk, and CommitmentTechWell
Here are three key uncertainties that are often important for software projects:
1. Requirements uncertainty - Unclear or changing requirements can introduce significant risk. Getting requirements right up front reduces later changes.
2. Technical uncertainty - The complexity of the technical solution, unproven technologies, and integration risks can all increase uncertainty. Spikes or prototypes help reduce technical risk.
3. Resource uncertainty - Not knowing if the necessary skills and staff will be available when needed can jeopardize a project. Ensuring resources are committed reduces this risk.
Focusing on these top uncertainties early helps establish a realistic plan and reduces the risk of cost and schedule overruns. Other risks, like market changes or third-party risks, are also important to evaluate based on the specific project.
How To (Not) Open Source - Javazone, Oslo 2014gdusbabek
Releasing an open source project while maintaining a shipping product is hard! Different behaviors, attitudes and actions can help or hinder your cause; and they are not always obvious.
The Blueflood distributed metrics engine was released as open source software by Rackspace in August 2012. In the succeeding months the team had to strike a manageable balance between the challenges of growing a community, being good open source stewards, and maintaining a shipping product for Rackspace. Find out what worked, what did not work, and the lessons that can be applied as you endeavor to take your project out into the open.
In this presentation you will learn about strategies for releasing open source products, pitfalls to avoid, and the potential benefits of moving more of your development out in the open.
We have also made a few realizations about the community growing up around metrics. It is still young, and there are problems that come with that youth. I'll talk about some things we can do to make a better software ecosystem.
This document discusses the rationale for adopting continuous delivery practices in software development. It summarizes several studies that found high rates of project failures and benefits not being realized from traditional development approaches. Continuous delivery is presented as an approach that can help address these issues by focusing on rapid, reliable, and automated software releases. Case studies are provided of organizations like Google, Amazon, and HP that have successfully implemented continuous delivery at large scales. Adopting these practices is associated with benefits like increased throughput, reliability, innovation, and business performance.
RecSysOps: Best Practices for Operating a Large-Scale Recommender SystemEhsan38
Ensuring the health of a modern large-scale recommendation system is a very challenging problem. To address this, we need to put in place proper logging, sophisticated exploration policies, develop ML-interpretability tools or even train new ML models to predict/detect issues of the main production model. In this talk, we shine a light on this less-discussed but important area and share some of the best practices, called RecSysOps, that we’ve learned while operating our increasingly complex recommender systems at Netflix. RecSysOps is a set of best practices for identifying issues and gaps as well as diagnosing and resolving them in a large-scale machine-learned recommender system. RecSysOps helped us to 1) reduce production issues, 2) increase recommendation quality by identifying areas of improvement, and 3) bring new innovations faster to our members by enabling us to spend more of our time on new innovations and less on debugging and firefighting issues.
https://dl.acm.org/doi/10.1145/3460231.3474620
This document outlines 101 weird ideas that companies can implement to make their workplace more fun, engaging, and creative for employees. Some of the ideas include having an "open meeting policy" where employees can optionally observe meetings, organizing a "family day" for employees to bring their families to work, creating a "wall of fame" to recognize employees, allowing flexible work schedules like "firefighters hours" of working 10 hours a day 4 days a week, and holding "customer clinics" where customers are invited to be trained. The overall message is that embracing weird, creative ideas can help companies attract and retain top talent in today's competitive environment.
Trouble-Free Troubleshooting - SPS JHB 2019Ian Campbell
The document provides instructions for attendees of an event hosted by SPS Events in Johannesburg, South Africa. It notes that session schedules may not be printed for all attendees and can be found by session room doors or online. Attendees are asked to provide feedback on sessions and to stay for the prize giving at the end of the day. They are also encouraged to interact with sponsors and speakers, take selfies with speakers to enter a photo competition, and share their learnings on social media using the #SPSJHB hashtag.
A Rapid Introduction to Rapid Software TestingTechWell
This document provides a summary of a presentation on Rapid Software Testing. The presentation was given by Michael Bolton of DevelopSense and covered the methodology and mindset of rapid software testing. It emphasizes testing software expertly under uncertainty and time pressure. The presentation defines rapid testing as testing more quickly and less expensively while still achieving excellent results. It compares rapid testing to other approaches like exhaustive, ponderous, and slapdash testing. The presentation also discusses principles of rapid testing, how to recognize problems quickly using heuristics, and testing rapidly to fulfill the mission of testing.
The document discusses measuring developer experience (DX) when working with Gradle. It proposes measuring "pain" caused by friction during the idea flow process of writing and troubleshooting code. Examples are given of mapping idea flow to identify sources of pain like learning unfamiliar code, assumptions causing rework, and troubleshooting unexpected behavior. Reducing friction in the software supply chain is important as most software issues occur in dependencies outside a developer's control. Pilot projects aim to collect standardized pain data across the community to help partners reduce friction and improve overall DX.
Open Mastery: Let's Conquer the Challenges of the Industry!Arty Starr
What if you could get upper management to care about your technical developer problems? Would you be willing to measure and prioritize the problems?
What if **WE** could stop the relentless business pressure that drives our software projects into the ground *across the industry*? I know this probably sounds impossible, but before you dismiss the idea entirely, let me show you that it *is* possible.
We can start a cascade of changes across the industry with only a handful of people that are willing to work together to make it happen.
Open Mastery is a peer learning network focused on codifying open decision models and standards to solve industry-wide problems. This presentation is about the obstacles, the strategy, and the business model.
Lastly, I want your help in looking for gaps in my ideas. Let's identify where the strategy might break, and figure out how to make it work. I'm launching Open Mastery in early 2016. Let's make this dream a reality.
Every year, software companies spend a huge amount of time and effort estimating large projects, and still end up regularly missing the mark - often by huge amounts. What the heck is going on? With all of the planning poker, and PI planning, and #noestimates, why isn't this figured out yet?
In this talk, we'll dive into probability theory and psychology to discover some of the common underlying causes for a lack of predictability. Once we understand why the world is so uncertain, we'll talk about how we can live with our estimation failures, while still thrilling our customers and maintaining enough predictability to succeed as an organization.
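The probability-theory angle this talk mentions can be illustrated with a small simulation: if task durations are right-skewed (modeled here as lognormal, with invented numbers), the project total tends to overrun the sum of the per-task estimates even when each estimate is an honest median:

```python
import math
import random

random.seed(42)

# Each task's "estimate" is its median duration; actual durations are
# right-skewed (lognormal), so overruns are larger than underruns.
# All numbers here are invented for illustration.
medians = [5, 8, 3, 13, 5]   # estimated days per task
sigma = 0.6                  # assumed spread of the lognormal

def simulate_project(medians, sigma):
    # random.lognormvariate(mu, sigma) has median e**mu, so mu = ln(median).
    return sum(random.lognormvariate(math.log(m), sigma) for m in medians)

trials = [simulate_project(medians, sigma) for _ in range(10_000)]
naive_estimate = sum(medians)    # 34 days
mean_actual = sum(trials) / len(trials)
overrun_rate = sum(t > naive_estimate for t in trials) / len(trials)
# The mean outcome exceeds the naive estimate, and well over half of the
# simulated projects overrun the naive plan.
```

This is one mechanism behind "regularly missing the mark": adding up medians of skewed distributions systematically understates the typical total.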
Ensuring Project Success Through Automated Risk ManagementMitchell College
The document discusses an automated project governance solution called Insight that provides visibility, control, and early detection of risks to ensure project success. It analyzes problems companies face with traditional risk management practices. Insight addresses these by collecting regular risk assessments and feedback to provide comprehensive visibility into project risks. This helps limit issues like burnout and turnover while improving information flow and risk management practices.
The document outlines the agenda and logistics for a Coach Retreat event in Montreal 2013. The retreat will use various coaching techniques applied to hypothetical coaching situations, including free style, yes and, appreciative inquiry, solution focused, crucial conversations, and real options. Sessions will be 60 minutes, repeating the same coaching problem. Coaches, seekers, and observers will participate. There will be introductions, situation selection, coaching dojo sessions, retrospectives, breaks for networking and discussion. The goal is for participants to gain experience and wisdom applying different coaching approaches.
General introduction to agile practices like Scrum and Kanban. Also covers what situations Agile is best at, what situations Agile doesn't help with, and what an Agile team should look like. This deck is a general intro to Agile for OpenSource Connections clients.
Applying Systems Thinking to Solve Wicked Problems in Software EngineeringMajed Ayyad
Software systems are essentially socio-technical systems, and they are not isolated from other systems engineering processes. Unconsciously or by intention, we implement systems thinking in multi-agent systems, microservices, DevOps, distributed systems, API-led integrations, and lean-based software development life cycles. However, the concrete relationship between systems thinking and software engineering is still largely unexplored and rarely highlighted as a common practice among software engineers. In this presentation, we will elaborate on how systems thinking helps us to understand the socio-technical aspects of software engineering. We will discuss why systems thinking is important in the field of software engineering, provide examples where it is currently used, and show the general areas where systems thinking applies to tackling complex software problems.
Presentation to Lonetree PMI Roundtable on August 27, 2008.
Abstract:
According to the Wall Street Journal agile development has "crossed the chasm." Why then are there still strong pockets of intense resistance to agile? This presentation takes a look at some of the most common misconceptions about agile development. It exposes the truth behind the myths and backs up many of the points with actual industry data. In the process, a basic business case for agility is created. The goal of this session is for all participants to leave with the knowledge necessary to answer the question "Why Agile?" In addition, participants will gain a deeper understanding of the realities of agile development and how it can help organizations.
10+ Testing Pitfalls and How to Avoid them PractiTest
Join Joel Montvelisky, PractiTest's chief solution architect in this webinar as he takes you through the common pitfalls of testing you need to be aware of and how to avoid them.
Can we write successful enterprise software without challenging assumptions? Agile doesn't happen in a vacuum. Here's what I discovered using EventStorming as a blade to cut through business, software and organisation dysfunctions. From XP2017 Cologne.
It's Okay to be Wrong (Accelerator Academy Oct '17)Matt Mower
Building a software company is hard, and it's usually not about the technology but about the problems of stress, communication, assumptions, and strategy, exacerbated by the complexity of creating software that meets customer needs.
Technology is enabling greater product offerings in financial services but also bringing challenges of managing increasingly complex systems over time. Legacy systems can be difficult to migrate and modernize due to their age and number of products supported. Business knowledge is lost when outsourcing increases. Systems are becoming old yet critical, and supporting outdated closed products is costly. Compliance requirements also continuously add management challenges. When redesigning such systems, it is important to focus on information rather than individual applications or technologies, separate stable and changing elements, and design for continuous change, monitoring and knowledge distribution across teams.
Technology is enabling greater product offerings in financial services but also bringing challenges of managing aging complex systems over long periods. To manage constant change, a good architecture separates elements that change frequently from those that don't. Microservices principles allow isolation while avoiding silos. Focusing on information modeling rather than representations reduces issues when characteristics change. Monitoring and auditing must be built-in due to regulatory requirements. Drawing from open source practices helps manage ongoing versions and releases in a complex environment. Ultimately new systems may be needed to fully take advantage of new technologies and avoid accumulating further technical debt from old systems.
DevOps at scale is a hard problem: challenges, insights and lessons learnedkjalleda
Kishore Jalleda discusses several DevOps initiatives and lessons learned from implementing them at scale at Yahoo. The initiatives include: 1) Directed alerting to route alerts directly to development teams; 2) Continuous delivery to enable automated deployments to production; 3) Building an automation culture to reduce manual toil; and 4) Adopting AWS for certain use cases. Key lessons include facing challenges from multiple teams, using successes to gain buy-in, empowering teams to say no, embracing failures, and incentivizing the right behaviors. The talk argues for development teams taking true ownership over their services.
A presentation of thoughts for modern agile testing, different ways to adopt testing process to your working environment and how your work as a QA person can be recognized by the whole company.
John Pourdanis will share his experience, obstacles and successes by changing from the world of development to the world of testing.
What does "better" really mean? If we eliminate duplication, is the code better? If we decide to skip the unit tests, are we doing worse? How do we decide if one design is better than another design?
About 8 years ago, my project failed, despite "doing all the right things", and shattered my faith in best practices. Since then, I've learned to measure developer experience, use *data* to learn what works, and I've been codifying "better" into patterns and decision principles for years. In this talk, I'll show you the paradigm shift that led to all my discoveries, and hopefully change your perspective on "better".
"Idea Flow" is an alternative to the Technical Debt metaphor that focuses on problems in human interaction rather than problems inside the code. By measuring the "friction" that occurs when developers interact with the code, we can identify the biggest causes of friction and systematically optimize developer experience.
Why go to all this trouble? From my experience, the biggest causes of pain are seldom what we think. When we try to make things "better", we can easily miss our biggest problems, or inadvertently make things worse. Visibility turned my beliefs about "better" upside-down.
First, I'll walk you through the conceptual metaphor of "Idea Flow" and how to recognize friction in developer experience.
Next, we'll write a little code and record the experience using the open source "Idea Flow Mapping" software.
Finally, we'll discuss a handful of "decision principles" for optimizing developer experience and analyze our coding experience as a group.
2. Who Am I?
Janelle Klein
Developer, Consultant, CTO @ New Iron
Specialized in Statistical Process Control (SPC)
and Supply Chain Optimization from Lean Manufacturing (data geek)
Continuous Delivery infrastructure, automation strategy & technical mentorship
How to Measure the PAIN
in Software Development
Janelle Klein
Author of “Idea Flow”
leanpub.com/ideaflow
Founder of
newiron.com
3. This is a HARD Problem.
What is this talk about?
4. “Better”
What if we could get managers and developers
all pulling in the same direction?
(diagram: Managers and Developers, each with their own notion of “Better”)
13. What causes PAIN?
What Causes Unexpected Behavior (likelihood)?
Semantic Mistakes
Stale Memory Mistakes
Association Mistakes
Bad Input Assumption
Tedious Change Mistakes
Copy-Edit Mistakes
Transposition Mistakes
Failed Refactor Mistakes
False Alarm
What Makes Troubleshooting Time-Consuming (impact)?
Non-Deterministic Behavior
Ambiguous Clues
Lots of Code Changes
Noisy Output
Cryptic Output
Long Execution Time
Environment Cleanup
Test Data Creation
Using Debugger
Most of the pain was caused by human factors.
16. What causes PAIN?
PAIN is a consequence of how we interact with the code.
(same lists of mistake causes and troubleshooting costs as above)
17. PAIN occurs during the process of
understanding and extending the software.
Not the Code.
(diagram: Complex Software → PAIN)
Optimize “Idea Flow”
18. My team spent tons of time working on
improvements that didn’t make much difference.
We had tons of automation, but the
automation didn’t catch our bugs.
19. My team spent tons of time working on
improvements that didn’t make much difference.
We had well-modularized code,
but it was still extremely time-consuming to troubleshoot defects.
20. The hard part isn’t solving the problems;
it’s identifying the right problems to solve.
“What are the specific problems
that are causing the team’s pain?”
21. Then I got into consulting…
The Software Rewrite Cycle
(cycle diagram: Unmaintainable Software → Start Over, and around again)
22. We Start with the Best of Intentions
High Quality Code
Low Technical Debt
Easy to Maintain
Good Code Coverage
30. RESET
“A description of the goal is not a strategy.”
-- Richard P. Rumelt
What’s wrong with our current strategy?
31. Our “Strategy” for Success
High Quality Code
Low Technical Debt
Easy to Maintain
Good Code Coverage
32. RESET
“A good strategy is a specific and coherent response to—
and approach for overcoming—the obstacles to progress.”
-- Richard P. Rumelt
The problem is we don’t have a strategy...
33. What are the obstacles?
Obstacle 1:
Management doesn’t care about interest payments.
Obstacle 2:
Management would rather you shut up and do your job.
Obstacle 3:
The Problem is outside anyone’s control.
34. What are the obstacles?
Obstacle 1:
Your manager doesn’t care about interest payments.
Obstacle 2:
Management would rather you shut up and do your job.
Obstacle 3:
The Problem is outside anyone’s control.
35. “Let’s rewrite the software!”
My new project:
“Awesome in Disguise”
I had full control.
48. “The project is already behind schedule!!”
Manager said:
“How can you possibly justify working
on anything other than the deliverables?!”
So we did what we were told.
51. Business Coaching
Explained the problem of Technical Debt
The Response: “That doesn’t sound so bad.”
WHAT?!
52. Loans are a Predictable Financial Tool
Investment Strategy: Revenue - Cost = Profit + 10%
Increase Price? Increase Sales? Reduce Cost?
What makes investment decisions harder isn’t higher costs,
it’s lower predictability.
53. Obstacle 1:
Your manager doesn’t care about interest payments.
But… Managers care A LOT about RISK.
The gradual loss of predictability
is much scarier than the gradual increase in cost.
54. What are the obstacles?
Obstacle 1:
Your manager doesn’t care about interest payments.
Obstacle 2:
Management would rather you shut up and do your job.
Obstacle 3:
The Problem is outside anyone’s control.
55. What are the obstacles?
Obstacle 1:
Your Manager doesn’t care about interest payments.
Obstacle 2:
Your manager would rather you shut up and do your job.
Obstacle 3:
The system is set up to fail.
57. “Don’t ask for permission, ask for forgiveness.”
Another new project…
58. Then We Got New Management!
I put together “a plan”…
59. “What is Janelle trying to pull?!
Who does she think she is?!”
Management said (behind my back):
Get Back Inside Your Box! (or else)
Severe Violation of
SOCIAL PROTOCOL
60. SOCIAL PROTOCOL
Never talk to your manager’s boss about a problem.
Never suggest or imply your manager
can’t do their job effectively by
trying to get others to override their decisions.
Decision-making responsibilities are
assigned by management and not to be questioned.
61. Engineers: “We’re going to CRASH!”
Manager: “What do I do? We can’t miss these deadlines.”
Then I got into Consulting…
63. The Job of a Consultant
Why do they need my help?!
Keynote
64. RESET
Consultants Bridge the Divide
Message comes through a “certified authority.”
Message comes in management-speak.
65. Obstacle 2:
Your manager would rather you shut up and do your job.
Follow
SOCIAL PROTOCOL
Stay (Mostly) Inside the Developer Box
+ Communicate in Manager-Speak
66. What are the obstacles?
Obstacle 2:
Your manager would rather you follow social protocol.
Obstacle 3:
The system is set up to fail.
Obstacle 1:
Your manager doesn’t care about interest payments.
67. What are the obstacles?
Obstacle 1:
Your manager doesn’t care about interest payments.
Obstacle 2:
Your manager would rather you follow social protocol.
Obstacle 3:
The system is set up to fail.
78. What are the obstacles?
Obstacle 1:
Management doesn’t care about interest payments.
Obstacle 2:
Management would rather you follow social protocol.
Obstacle 3:
The system is set up to fail.
79. RESET
“A good strategy is a specific and coherent response to—
and approach for overcoming—the obstacles to progress.”
-- Richard P. Rumelt
80. What’s the Strategy?
1. Make the Decision to Lead
2. Measure the Pain
3. Identify the Biggest Problems
4. Become a Risk Translator
5. Refactor the Organization
84. Your manager doesn’t care that your job “feels difficult”
“In God we trust, all others bring data.”
—W. Edwards Deming
85. PAIN occurs during the process of
understanding and extending the software.
Not the Code.
(diagram: Complex Software → PAIN)
Optimize “Idea Flow”
86. Idea Flow Mapping Tools
(Open Source, Supported GA ~June 2016)
github.com/ideaflow/tools
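The mapping tools above record friction automatically, but the core idea can be sketched by hand. Below is a minimal, hypothetical logger (not the actual Idea Flow API): timestamp the moment unexpected behavior shows up, the moment understanding is recovered, and tag the interval with a pain category so the hours can be added up later.

```python
import time

# Hypothetical sketch of friction logging -- NOT the Idea Flow API,
# just the core idea behind "measure the pain".

class FrictionLog:
    def __init__(self):
        self.entries = []        # list of (category, seconds) pairs
        self._started = None
        self._category = None

    def start(self, category):
        # The moment the code surprises you and troubleshooting begins.
        self._started = time.monotonic()
        self._category = category

    def stop(self):
        # The moment you understand the problem again.
        elapsed = time.monotonic() - self._started
        self.entries.append((self._category, elapsed))
        self._started = None
        return elapsed

log = FrictionLog()
log.start("Test Data Creation")
# ... troubleshoot here ...
log.stop()
print(len(log.entries), "friction event(s) recorded")
```

The category names would come from a shared taxonomy like the one on slide 13, so the whole team's entries can be aggregated.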
92. Case Study: Huge Mess with Great Team
The Team’s Improvement Focus:
Increasing unit test coverage by 5%
1. Test Data Generation
2. Merging Problems
3. Repairing Tests
1000 hours/month
The Biggest Problem:
~700 hours/month generating test data
93. Case Study: From Monolith to Microservices
18 months after a Micro-Services/Continuous Delivery rewrite:
40-60% of dev capacity on “friction”
(timeline charts: Troubleshooting / Progress / Learning, spanning 0:00 to 28:15 and 0:00 to 12:23)
94. The Architecture Looked Good on Paper
(diagram: Team A, Team B, Team C; “Complexity Moved Here”; WTF?!)
96. The Cost of Escalating Risk
(chart: percentage of capacity spent on Troubleshooting (red), Progress, and Learning (blue)
across Release 1, Release 2, and Release 3, scaled 0% to 100%, extrapolated from samples)
97. The Cost of Escalating Risk
Figure out what to do: learning is front-loaded.
(chart repeats: Troubleshooting / Progress / Learning across Releases 1-3)
98. The Cost of Escalating Risk
Rush before the deadline: validation is deferred.
(chart repeats: Troubleshooting / Progress / Learning across Releases 1-3)
99. The Cost of Escalating Risk
Pain builds: baseline friction keeps rising.
(chart repeats: Troubleshooting / Progress / Learning across Releases 1-3)
100. The Cost of Escalating Risk
Chaos reigns: unpredictable work stops fitting in the timebox.
(chart repeats: Troubleshooting / Progress / Learning across Releases 1-3)
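The escalating-risk pattern in those charts can be sketched numerically. The figures below are purely illustrative (not the talk's real data): they just show how a rising troubleshooting share squeezes out progress release after release.

```python
# Illustrative capacity splits per release (made-up numbers, not the
# talk's real data) showing the escalating-risk pattern from the charts.
releases = {
    "Release 1": {"learning": 30, "progress": 60, "troubleshooting": 10},
    "Release 2": {"learning": 20, "progress": 55, "troubleshooting": 25},
    "Release 3": {"learning": 10, "progress": 45, "troubleshooting": 45},
}

for name in sorted(releases):
    split = releases[name]
    assert sum(split.values()) == 100  # percentages of total capacity
    print(f"{name}: troubleshooting {split['troubleshooting']}% of capacity")

# Baseline friction keeps rising while productive capacity shrinks:
trend = [releases[r]["troubleshooting"] for r in sorted(releases)]
assert trend == sorted(trend)  # monotonically increasing
```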
102. Would you be willing to collect data if
you knew your management would give you
dedicated time to work on the biggest problems?
103. 1. Don’t ask for Permission
2. Make the Goal Clear to Your Team
"I want to make the business case to management for fixing things around
here. No more chaos and working on weekends, this needs to stop. But I
need data to make the case so I need everyone's help."
3. State the Plan
"Here's what I'm thinking. I want to run an experiment to record data for one
month on all the time we spend troubleshooting. We can look at the data
together and identify our biggest problems, then I’ll write it up and present
the case to management to get things fixed.”
4. Enlist the Team
“Will you guys help me make this happen?”
Here’s What You Do:
107. What’s the Strategy?
1. Make the Decision to Lead
2. Measure the Pain
3. Identify the Biggest Problems
4. Become a Risk Translator
5. Refactor the Organization
108. What’s the biggest problem to solve?
Add up the Pain by Category (1000 hours/month):
1. Test Data Generation
2. Merging Problems
3. Repairing False Alarms
109. Friction as a % of total capacity
What’s the biggest problem to solve?
110. Friction % versus Upcoming Demand
What’s the biggest problem to solve?
111. Friction % Grouped by Familiar vs Unfamiliar
What’s the biggest problem to solve?
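The views on slides 108-111 all start from the same aggregation step. A minimal sketch, using the category names from the case study but made-up hours and a hypothetical capacity figure:

```python
# Illustrative aggregation of logged friction by category.
# Category names come from the case study above; the hours and the
# team capacity figure are made up for the example.
friction_hours = {
    "Test Data Generation": 700,
    "Merging Problems": 180,
    "Repairing False Alarms": 120,
}
team_capacity_hours = 2000  # hypothetical total dev-hours per month

total_pain = sum(friction_hours.values())
friction_pct = 100 * total_pain / team_capacity_hours
ranked = sorted(friction_hours.items(), key=lambda kv: kv[1], reverse=True)

print(f"Total pain: {total_pain} hours/month ({friction_pct:.0f}% of capacity)")
for category, hours in ranked:
    print(f"  {category}: {hours} h")
```

The same entries can then be re-grouped against upcoming demand, or by familiar versus unfamiliar code, as the following slides suggest.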
112. What’s the Strategy?
1. Make the Decision to Lead
2. Measure the Pain
3. Identify the Biggest Problems
4. Become a Risk Translator
5. Refactor the Organization
120. Explain Problems in Terms of Risk (Gambling)
Decisions that save a few hours have side-effects that cost several hours:
Save 40 hours in direct costs (leave the toy on the stairs)
vs. increase the chances of losing 1000 hours by 20% (tripping and falling)
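The toy-on-the-stairs gamble translates into manager-speak as one line of expected-value arithmetic, using the numbers on the slide:

```python
# Expected-value view of the shortcut on the slide above.
direct_savings = 40        # hours saved by leaving the toy on the stairs
incident_cost = 1000       # hours lost if someone trips and falls
added_probability = 0.20   # extra chance of the incident (20%)

expected_extra_loss = added_probability * incident_cost
net_expected_outcome = direct_savings - expected_extra_loss

print(f"Expected extra loss: {expected_extra_loss:.0f} hours")
print(f"Net expected outcome: {net_expected_outcome:.0f} hours")
```

On average the "savings" lose 160 hours, which is the kind of framing a manager who cares about risk can act on.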
122. Send “Project Visibility Updates”
Subject: Project Visibility Update
Hi Larry,
I know it’s really hard to stay in the loop on all the different
project risks, so I wanted to send you a summarized update of
some of our recent findings.
We started collecting data during development to track where
all of our time was going, and made some pretty frightening
discoveries.
See attached. Let me know if you’d like to talk.
124. What’s the Strategy?
1. Make the Decision to Lead
2. Measure the Pain
3. Identify the Biggest Problems
4. Become a Risk Translator
5. Refactor the Organization
128. Make the Case for Partnership
Option 1: Stay the Course
or
Option 2: Change
This is Safer (less risky)
Key to Success:
Focus on the Risks (don’t negotiate schedule)
129. 1. Explain Why You Decided to Collect Data
“Saw this talk / read this book about…”
How to Measure the PAIN in Software Development, by Janelle Klein
Consultant +1 Effect (blame me)
131. “As the problems build, they introduce Quality Risk…”
(chart: Likelihood of Unexpected Behavior vs. Cost to Troubleshoot and Repair,
with regions for High Frequency / Low Impact, Low Frequency / Low Impact,
and Low Frequency / High Impact; PAIN grows with Likelihood of Mistakes × Cost to Recover)
Quality Risk:
Our application is more likely to be in a BROKEN state.
133. 2. Here’s What We Found…
Pick your WORST offending examples.
Use lots of RED.
134. “This is a timeline that shows all the time we spend troubleshooting…”
Save time by skipping diagnostic tools (~80 hours)
vs. side-effects of troubleshooting time (~700 hours/month)
(timeline: Creating a New Customer Report, 0:00 to 36h 25m,
including 11 hours and 15 minutes of troubleshooting)
135. “When the problems build up, they have a really big impact…”
Save time by constantly rushing (~20 hours/month)
vs. side-effects of 25 developers down for 2 days (~1000 hours/month)
136. “When the application is broken,
these are the biggest problems in our way.”
Top Three Problems (1000 hours/month):
1. Test Data Generation
2. Merging Problems
3. Repairing False Alarms
137. “The deadline is coming either way…”
100% of features 80% done? Or 80% of features 100% done?
138. “Here’s what we were thinking…”
3-Month Improvement Trial
Dedicated resources (1 or 2 developers)
Dev team identifies highest-leverage improvement
opportunities and prioritizes with management
Continue to share Project Visibility Updates each month
“Will you help us turn this project around?”
142. Two Options:
Option 1: Stay the Course
or
Option 2: Take Responsibility (“Let’s Do This”)
Here’s the Catch:
In order for us to change the status quo, we have to start
working together as a community.
144. LEARN YOUR WAY TO AWESOME.
Free to Join Industry Peer Mentorship Network
openmastery.org
145. If you’d like to see our industry start
collaborating on solving these problems…
and you’re willing to Measure Your PAIN…
Let’s Make the PAIN Visible!
Next Talk:
146. #OpenDX (Developer Experience)
An Open Standard for Measuring PAIN
(Specification for Data Collection)
147. Community Analytics Platform
(diagram: team members Joe, Sally, Mark, and Eric use Idea Flow Mapping Tools and
Team Mastery Tools on Project Tiger and Project Bear; anonymized data flows via REST
to Community Analytics, which holds a shared taxonomy of patterns & principles
with example data)
148. This isn’t about me.
Janelle Klein
openmastery.org @janellekz
This is about ALL OF US.
149. This is about Ending this BULLSHIT:
153. Discussion:
What do you see as the
biggest obstacle to success?
Editor's Notes
Hi everyone, I’m Janelle Klein from New Iron. I’ve been a No Fluff attendee for the last 7 years; this is my first time as a speaker.
Despite our best efforts with CI, unit testing, and automation galore, every few years we end up kicking off a rewrite… and there are two major reasons for this. Invisibility: our problems are invisible, hard to measure, and hard to explain, so we rely primarily on gut feel to make decisions. And relentless business pressure that doesn’t let up. This talk is about a strategy to solve both of those problems.
I learned this strategy through failure.
So rather than tell you about my greatest successes, I’m going to tell you about my greatest failures and the lessons that I learned along the way.
My goal isn’t to convince you that solving this problem will be easy. My goal is to convince you that even though this is a hard problem, it’s a solvable problem. What I’m going to share with you today, isn’t an easy answer, but it’s the only answer I’ve found to actually work.
First of all, What’s the problem we’re trying to solve? Why should you care about anything I have to say?
Across the industry, you see this pattern. Every few years, we end up rewriting our software, after driving one project after another into the ground.
So why is this happening?
We always start off with the best of intentions…
We're going to write high quality code that’s low in technical debt, easy to maintain and of course, has good code coverage.
Then this happens.
We’ve got this business pressure and constant urgency to deliver features, and things just start to unravel.
And we tell the product owner about our pain, but they don’t understand the benefits []
So what do you think they pick? The benefits that make sense. []
If we don’t make time to deal with emerging risks, we will never get out of this cycle.
I watched my project get crushed.
So I was wondering, [what’s wrong with our current strategy?]
I mean, we’re constantly talking about the importance of maintainable code.
Then I was reading this book, Good Strategy/Bad Strategy and by the first chapter, I already had my answer.
We don’t talk about why we keep failing, despite our best efforts.
Now we talk a lot about what it means to have maintainable code and why it’s important.
But what we don’t talk about
is how the hell we’re supposed to pull it off in the context of a business system.
The problem is we don’t *have* a strategy for solving this problem and we really need one.
So Rumelt says, “”
So I started thinking… why can’t we break the rewrite cycle? What are the biggest obstacles preventing us from breaking the cycle?
And I thought about all the consulting projects I’d worked on… where companies were stuck in this pattern.
From the outside it looks like we’re driving a car without a steering wheel.
What’s fascinating though.
But we were solving really cool problems.
We set up a continuous delivery pipeline from day 1. I got to hire the team myself. I worked directly with the customers on figuring out requirements. I designed the architecture. I designed the process.
For a year, it was my dream customer.
Later that night we were on this conference call with IT. And I hear this guy just screaming in the background. Apparently, we had shut down every tool in the factory.
So we rolled back the release and tried to figure out what happened. There was a configuration change that didn’t quite make it to production.
We all felt terrible, but there wasn’t much we could do at this point. So we fixed the problem, and shipped to production... again.
I watched my project get crushed.
The deadlines were crazy, and I tried to explain to management that we needed to go slower, but they threatened to outsource the project if we didn’t get it done. This project was my baby.
So I started working 60-70 hour weeks for about 6 months straight. And my team started working 60 hour weeks for 6 months straight. Then the releases started falling apart. Things were just constantly going wrong.
When we fall into urgency mode, we start compromising safety for speed.
We make decisions that don’t seem like a big deal at the time, but they create a hazardous work environment.
Instead of taking a little more time to put our toys away, we end up falling down the stairs and in the hospital.
First, we ignore the risks, basically ignoring tango -- a lot of times because the risk isn’t obvious.
If we don’t make time to deal with emerging risks, we will never get out of this cycle.
Has anyone ever tried to go to management and explain all the problems with technical debt, but their manager didn’t seem to care?
Next thing you know we’re working late nights and weekends, choking down red bull to stay awake...
hacking out last minute fixes and hoping that nothing else breaks.
Who’s done this before?
We make jokes about programmers running on caffeine and pizza... but this problem is really serious.
When our project is on the line, we give up a lot -- we skip our kids’ recitals, miss our anniversary dinners, get sick, gain weight.
Stress deteriorates our health and can tear apart our relationships. Just because we’re not bleeding doesn’t mean we don’t get hurt by all this.
We can’t run a sustainable business by compromising the safety of the people doing the work.
From an investment standpoint, loans are a predictable financial tool.
Another new project.
But we were solving really cool problems.
The engineers are like [read] and the managers ask me [read]
It’s not like people are unaware of the problems. So what are the obstacles that keep us stuck?
In the consulting box.
[read]
What do you guys think? What are the biggest obstacles?
If you watch what happens right before a project crashes into the wall, everyone usually knows there’s a problem.
[engineers are like],
Even when the pain is really obvious, we still get stuck in this pattern of organizational deadlock, where nobody can change direction.
And so we crash the car, and have to build a whole new car, just so we can continue driving.
The challenge with trying to steer an organization, is that decision-making, in an organizational context, is a distributed responsibility.
So when I think about distributed decision-making, I imagine [read]
We’ve got the dev team controlling the hand component, and management controlling the arm component. If we run a little experiment and light a fire under the pain sensor, nothing happens.
And if we turn up the fire, so we’ve got 10x the pain… we burn.
This is what happens when there’s a broken feedback loop baked into the design of the system.
[read]
If you think about the human system design like a software design, you can see the broken feedback loop is baked into the organization’s role design.
Whenever visibility and decision-making are decoupled into different roles, communication failure will lead to this crashing pattern. When visibility and decision-making are part of the same role, we can steer in response to emerging risks.
For example the manager role is responsible for allocating money and managing risk.
All the knowledge about the technical risks on the project is generally in the developers’ heads. So communication breakdown at this level leads to a whole lot of really bad management decisions.
Another place you see this pattern is with the product owner role. The product owner is in charge of making trade-off decisions about technical risks that they don’t understand.
So communication breakdown at this level leads to the classic case of technical debt building up on a project because the problems are constantly being deferred.
So we can try and solve this problem by finding a way to communicate better, but we’ve got tons of evidence that this strategy doesn’t work very well. There’s a reason the story I told about project failure is an archetype across our industry. This problem is baked into the blueprints that we’re using to design our organizations.
So I think it’s about time we tried something new.
Risk management in software development is an extremely complex problem. Just like it takes time to figure out the right product to build, it takes time to figure out the right improvements. We have to gather requirements.
The problem is we don’t *have* a strategy for solving this problem and we really need one.
So Rumelt says, “”
A typical improvement effort usually starts with brainstorming a list
[slow] We think about the things that bugged us recently, how we’re not following best practices, or the code that just makes us feel ashamed.
[] -- Then all that goes into our technical debt backlog, and we chip away at improvements for months.
But just because a problem comes to mind, doesn’t mean it’s an important problem to solve
When we’re brainstorming, [] we can easily miss our biggest problems, and then [our improvements don’t make...].
[] Don’t do this.
This experience completely shattered my faith in best practices. Which turned out to be the best thing that ever happened to me… because…
I thought the main obstacle was all the technical debt building up in the code base that was causing us to make mistakes.
and if we made changes in the code that had more technical debt, we’d be more likely to make mistakes.
So I got this idea to build a tool that could detect high-risk changes, and tell us where we needed to do more testing -- but what I found wasn’t what I expected at all.
Our bugs were mostly in the code written by the senior engineers on the team where the design actually got the most scrutiny. It’s not like we didn’t have any awful crufty code -- but that’s not where the bugs were.
The correlation I did find in the data was this...
[read]
And while that made some sense, I couldn’t help but think, there had to be more to the story...
So I started keeping track of all my painful interaction with the code and visualizing it on a timeline like this.
The pain started [] when I ran into some unexpected behavior and ended [] when I had the problem resolved.
So that was 5 hours and 18 minutes of troubleshooting, I think everyone would agree that’s pretty painful.
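The timeline idea above can be sketched as a tiny data model: a pain event starts at the unexpected behavior and ends at resolution, and its duration is what we measure. This is a minimal illustration, not the actual Idea Flow tooling; all names and timestamps here are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class PainEvent:
    """One painful interaction with the code: starts at unexpected
    behavior, ends when the problem is resolved."""
    started: datetime    # when the unexpected behavior appeared
    resolved: datetime   # when the problem was resolved
    tags: list           # problem categories, e.g. ["#ambiguousClues"]

    @property
    def duration(self) -> timedelta:
        return self.resolved - self.started

# The 5 hour 18 minute troubleshooting episode from the slide
# (dates are made up for illustration):
event = PainEvent(
    started=datetime(2016, 3, 1, 9, 0),
    resolved=datetime(2016, 3, 1, 14, 18),
    tags=["#ambiguousClues"],
)
print(event.duration)  # 5:18:00
```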
So I started breaking down the problems into categories. And when I did this, I realized that most of the pain was actually caused by human factors.
This is when I have an idea in my head about how the code is supposed to work, but it doesn’t work that way anymore.
This is when you’re running an experiment, and there are multiple possibilities for how a behavior can occur, and you make a bad assumption, and down the rabbit hole you go.
These aren’t really problems with the code itself, [read]
These aren’t really problems with the code itself… [read]
The pain isn’t something inside the code; pain occurs during the process of interacting with the code. So I started optimizing for… and I did that, with the help of a data-driven feedback loop.
On our project, we ended up [read]
For almost a year! [read]
[read]
Then we started asking []
[read]
That’s when everything changed []
We were finally able to turn the project around. And I learned one of the most valuable lessons in my career. [read]
[read]
Which at the project level, I’m translating to [] [] []. [pause]
So let me show you what Idea Flow looks like in a couple case studies.
For the problem categories --
I use hashtags in the Idea Flow Maps, then add up the durations for each hashtag.
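That tally -- add up the troubleshooting durations per hashtag, then sort to find the biggest source of pain -- can be sketched in a few lines. The hashtag names and durations below are hypothetical example data, not real project numbers.

```python
from collections import defaultdict
from datetime import timedelta

# (hashtag, troubleshooting duration) pairs pulled from a pain log
pain_log = [
    ("#staleMemory", timedelta(hours=2, minutes=30)),
    ("#ambiguousClues", timedelta(hours=5, minutes=18)),
    ("#staleMemory", timedelta(minutes=45)),
]

# Sum the durations for each hashtag
totals = defaultdict(timedelta)
for tag, duration in pain_log:
    totals[tag] += duration

# Biggest source of pain first
for tag, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(tag, total)
```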
So if I wanted to know what was causing the pain, I needed to understand the things that caused these 2 factors.
A lot of the problems had more to do with human factors than anything going on with the code.
Stale Memory mistakes, Ambiguous Clues.
But once I understood what was causing the pain, [read -- most of the problems were easy to avoid]
For example...
This is from a project about 10 months old where we actively focused on reducing troubleshooting time.
With our everyday problem-solving effort, we still spent about 10-20% of our time on friction.
So in this first case study, there was a huge mess inherited by a really great team. It was a 12-year-old project where all the original developers had left. This is what it looks like when you spend 90% of your time figuring out what to do, and 10% of your time actually doing stuff.
The lack of familiarity has an enormous impact on how much friction we experience.
So there were tons of problems, and the team wasn’t sure what to focus on, so they set a goal to raise unit test coverage by 5%.
If you start adding up all the problems across the team [], these guys were spending about 700 hours per month generating test data to support whatever task they were working on. But oddly, in all the retrospective meetings, this problem didn’t even come up. It was just part of the work.
This second case study was with a massive rewrite effort. They had this big monolith application that they rewrote completely from scratch, with microservices, a continuous delivery pipeline, the whole nine yards.
And what really surprised me about this project, is that after only 18 months, they were already spending 40-60% of their development capacity troubleshooting problems.
They had this design for the architecture, that looked good on paper, but then once they distributed the design across teams, and discovered the architecture had some flaws, they were stuck. The good ol’ Conway’s law effect, and they couldn’t seem to adapt.
So I got involved with the team, just as they were getting into the thrashing stage, and starting to lose control.
You could see this pattern of pain building up over time, that we always talk about, but have never been able to measure.
So I don’t have quite enough data to make a chart like this, but these are some of the patterns you could see.
First, learning is front-loaded while the team figures out what to do.
Then there’s this rush before the deadline where validation ends up deferred.
Then the pain builds, and you see the baseline friction level rising over time.
Then finally chaos reigns, and the unpredictable work stops fitting in the timebox. I’m measuring capacity in hours over time, so even though all these releases are the same size, you can see how the team had to work twice as many hours to get the release out the door.
First, make the decision to lead.
Step 1. [read] Leadership is not a title bestowed upon you, it’s a choice to take responsibility. Nike’s got some good advice -- Just do it.
[read]
[read]
[read]
Troubleshooting Risk we’ve already talked about, it’s driven by the likelihood...
Learning Risk is driven by the likelihood...
Things like... lots of 3rd party libraries, complex frameworks, a really large code base, or a high turnover rate --
all these things can cause extra learning work.
Rework Risk is driven by the likelihood...
Things like... bad assumptions about the architecture or design or bad assumptions about customer requirements.
The longer we delay before making corrections, the greater the rework.
Gradual loss of predictability.
Next, you’ll need to make the case to management that change [read]
The key to success is focusing on the risks not estimating how much longer things will take. If it’s just more work, it sounds like we can throw more money at it, but working harder won’t solve the problem -- we have to work smarter.
Basically, when we make decisions that increase the likelihood of mistakes, or the cost to recover when things go wrong, our application is more likely to be in a broken state. It’s not about how long it takes, it’s about being broken.
Share your Idea Flow Maps. Nothing gets managers moving like PowerPoint slides with lots of red on them.
To do the work, we had to set up data in the database, set up a new reporting template, then run the entire system at once to test the reports. When there’s a bug, it’s really hard to tell where the problem is, and it takes countless hours to track down the bugs.
This is a graph showing how often our development environment has been broken over the last month. Red dots are the times the environment was completely down; blue dots are times some features weren’t working.
Whenever the environment is broken, it doesn’t just impact one person. It usually impacts the entire team.
In the one I circled. []
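Turning that graph into a number managers care about is just arithmetic: a broken environment blocks everyone who depends on it, so the cost is outage hours times people affected. The outage data below is a hypothetical sketch, not figures from the talk.

```python
# Each entry: (hours the environment was unusable, severity, people blocked)
# "down" = completely down (red dots); "partial" = some features broken (blue dots)
outages = [
    (4.0, "down", 6),     # full outage blocks the whole team
    (1.5, "partial", 2),  # partial outage blocks whoever needs that feature
    (2.0, "down", 6),
]

# Lost team capacity = sum of (hours unusable * people blocked)
lost_hours = sum(hours * people for hours, severity, people in outages)
print(f"Team capacity lost this month: {lost_hours} hours")
```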
80% of
I know we have a big deadline coming up, and we've been hurrying to get everything done, but in trying to go faster, we've dramatically increased risk.
Now, it's so expensive when things go wrong that trying to go faster is actually slowing us down.
If we rush to get the features completed, we're likely to arrive at the finish line with a lot of things broken.
On the other hand, if we focus on reducing the risk, we’ll end up in much better shape.
I wrote this book, so I could share what I’ve learned with you.
We’re breaking the implementation plan down into an iterative roadmap for organizational transformation. You can read about each transformation effort and decide whether you want to participate -- it will be an opt-in project that sits on top of everything else.
Iteratively clarify, then implement “better”
If you want to join me, then read the book, and think about the ideas, see if this is something you want to be a part of.
You can either buy the book, or if you start a reading group for Idea Flow, I’ll provide free e-books for all the attendees. Check out openmastery.org for details.
And if you don’t want to take my word for it, you should read Idea Flow because Rene and Matt said so.