The document outlines the scientific method as applied to troubleshooting. It covers collecting observations, forming hypotheses, designing and performing experiments to test those hypotheses, and evaluating the results. The goal is to iteratively falsify hypotheses through experimentation, building understanding of the system until a solution is reached. Applying this structured process can help solve problems rigorously.
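The iterate-and-falsify loop described above can be sketched in a few lines. This is a minimal illustration, not the presentation's own code; the hypotheses and the slow-endpoint scenario are hypothetical examples.

```python
# Sketch of iterative hypothesis falsification: run an experiment for each
# candidate hypothesis and keep only those whose predictions hold.

def debug_by_science(observations, hypotheses, run_experiment):
    """Return the hypotheses that survive an attempt to falsify them."""
    surviving = []
    for hypothesis in hypotheses:
        prediction = hypothesis["predicts"]
        if run_experiment(prediction, observations):
            surviving.append(hypothesis)  # prediction held: survives, for now
        # otherwise the hypothesis is falsified and discarded
    return surviving

# Toy example: which hypothesis explains a slow endpoint?
observations = {"latency_ms": 900, "cache_hit_rate": 0.05}
hypotheses = [
    {"name": "cold cache", "predicts": lambda obs: obs["cache_hit_rate"] < 0.5},
    {"name": "slow disk", "predicts": lambda obs: obs["latency_ms"] < 100},
]
run = lambda predict, obs: predict(obs)
print([h["name"] for h in debug_by_science(observations, hypotheses, run)])
```

Each pass through the loop either discards a hypothesis or increases confidence in it; in practice the surviving hypotheses would seed the next round of experiments.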
Cognitive-Perceptual-Motor GOMS Model of Human-Computer Interaction, by Shruti Nimbkar
This document discusses different GOMS models used to analyze user behavior. It specifically explains the CPM-GOMS model, which was developed in 1988 by Bonnie John and is based on the Card, Moran, and Newell model. The CPM-GOMS model assumes perceptual, cognitive, and motor operations can occur in parallel and models multitasking behavior that experienced users can exhibit.
Sdec 13 winnipeg - want to empower your people - just begin! old-pp_version, by Shawn Button
This document outlines an approach called BEGIN for enabling self-organization in teams. It discusses:
- What self-organization is and examples of self-organizing systems.
- Potential problems with trying to implement self-organization.
- The BEGIN model for setting up initial conditions for self-organization, which stands for Boundaries, Empowerment, Goals, Ingredients, and Nurture.
- Examples of how to apply each element of BEGIN, such as explicitly listing team boundaries and the authority granted to the team.
- Interactive exercises where attendees drew pictures and applied BEGIN to hypothetical scenarios to experience self-organization.
Presentation about how you can effect change in your organization.
Presented at Agile Tour Toronto, Agile Ottawa and PMI-SOC Professional Development Day.
The document provides guidance for how to be an effective change agent within an organization. It discusses tools for understanding company culture and resistance to change, mapping political landscapes, building trust with others, and working on personal effectiveness. The key recommendations are to model desired behaviors, create a positive culture bubble, use early adopters to spread ideas, listen to understand other perspectives, celebrate small wins, and reflect regularly on progress.
Artificial intelligence (AI) is everywhere, promising self-driving cars, medical breakthroughs, and new ways of working. But how do you separate hype from reality? How can your company apply AI to solve real business problems?
Here are the AI lessons your business should keep in mind for 2017.
Study: The Future of VR, AR and Self-Driving Cars, by LinkedIn
We asked LinkedIn members worldwide about their levels of interest in the latest wave of technology: whether they’re using wearables, and whether they intend to buy self-driving cars and VR headsets as they become available. We also asked them about their attitudes to technology and to the growing role of Artificial Intelligence (AI) in the devices they use. The answers were fascinating – and in many cases, surprising.
This SlideShare explores the full results of this study, including detailed market-by-market breakdowns of intention levels for each technology – and how attitudes change with age, location and seniority level. If you’re marketing a tech brand – or planning to use VR and wearables to reach a professional audience – then these are insights you won’t want to miss.
Molded together from two powerpoints on the internet:
www.biologyjunction.com/Scientific%20Method.ppt
and
newton.uor.edu/facultyfolder/tyler_nordgren/.../FYS_SciMethod.ppt
This document provides an overview of the scientific method and experimental process. It discusses key concepts like hypotheses, problem statements, experiments, data collection and analysis, and conclusions. It also covers developing hypotheses using an "If...then...because" structure. The document guides the reader through an example experiment involving raisins and bubbles to demonstrate these scientific process steps.
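The "If...then...because" hypothesis structure mentioned above can be shown with a small helper. This is an illustrative sketch using the document's raisins-and-bubbles example; the exact wording and the `format_hypothesis` function are assumptions, not taken from the slides.

```python
# Sketch of the "If...then...because" structure for writing testable hypotheses.

def format_hypothesis(change, outcome, reason):
    """Build a hypothesis statement: the change tested, the predicted
    outcome, and the proposed mechanism behind it."""
    return f"If {change}, then {outcome}, because {reason}."

statement = format_hypothesis(
    change="raisins are dropped into carbonated water",
    outcome="they will rise and sink repeatedly",
    reason="bubbles attach to the raisins and change their buoyancy",
)
print(statement)
```

The value of the template is that each clause is checkable: the "then" clause gives a measurable prediction, and the "because" clause commits the experimenter to a mechanism that further experiments can probe.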
Debugging (Docker) containers in production, by bcantrill
The document discusses debugging containers in production environments. It begins by providing background on containers at Joyent and how Docker has increased adoption of containers. It then discusses challenges of debugging containers in production when failures occur, noting that containers must be debugged as distributed systems. The document advocates applying the scientific method to debugging by making observations, forming hypotheses and predictions, and testing those predictions through experiments or further observation. Debugging is framed as a process of iterative refinement of understanding rather than simply making problems go away.
IMVU is referred to as the first "Lean Startup", and was the proving ground for applying Build-Measure-Learn (the Scientific Method) to rapidly iterate and continuously improve its product development process, resulting in a valuable strategic advantage and highly productive, happy, and motivated teams.
This presentation focuses on IMVU's continuous improvement to product development process through experimentation. Success factors are reviewed, including a culture that values and supports experimentation and learning, Agile and XP engineering practices (which support iterative development), and strong product vision and user experience design. Recent product development process experiments will be shared.
What makes software development complex isn't the code, it's the humans. The most effective way to improve our capabilities in software development is to better understand ourselves.
In this talk, I'll introduce a conceptual model for human interaction, identity, culture, communication, relationships, and learning based on the foundational model of Idea Flow. If you were to write a simulator to describe the interaction of humans, this talk would describe the architecture.
Learn how to understand the humans on your team and fix the bugs in communication, by thinking about your teammates like code!
I'm not a scientist or a psychologist. These ideas are based on a combination of personal experience, reading lots of cognitive science books, and a couple years of running experiments on developers. As I struggled through the challenges of getting a software concept from my head to another developer's head (interpersonal Idea Flow), I learned a whole lot about human interaction.
As software developers, we have to work together, think together, and solve problems together to do our jobs. Code? We get it. Humans? WTF?!
Fortunately, humans are predictably irrational, predictably emotional, and predictably judgmental creatures. Of course those pesky humans will always do a few unexpected things, but once we know the algorithm for peace and harmony among humans, we can start debugging the communication problems on our team.
Digging one level deeper into the details of Idea Flow Learning Framework, we'll do this second session as a group troubleshooting game! After we play the game, we'll do an "Experience Review" and analyze the causes of diagnostic difficulty, the nature of decision-making, and discuss strategies for making better decisions.
**The Troubleshooting Game:** I'll split the audience into two teams. Team 1 will stealthily hide a bug in the code. Team 2 will have to track down the bug in as little time as possible. Then Team 2 will have their chance to stump Team 1. The team that troubleshoots the bug the fastest will walk away with an exciting prize!
After we play the troubleshooting game, we'll do an "Experience Review" with each team's coding experience. Rather than optimizing the code, we'll focus on optimizing the problem-solving process. We will:
1. Visualize and discuss the differences between code sandwiches
2. Identify the major factors that caused diagnostic difficulty
3. Discuss the troubleshooting strategies used by each team and what made them more or less effective.
While it's certainly challenging to understand how we think and make decisions, it's an incredible opportunity to learn. By recognizing the inputs to our decisions, and how we evaluate trade-offs, we can compare our internal decision-making logic with our peers. With objective feedback on the consequences of our decisions, we can systematically optimize developer experience.
Learn how to master the art of software development with Idea Flow Learning Framework!
The document outlines the key steps of the scientific method including observation, hypothesis, experiment, and conclusion. It emphasizes the importance of recording specific procedures so experiments can be performed by others. The conclusion should state whether the data supports or contradicts the original hypothesis. Errors should be noted to improve future experiments, and plans for further investigation should be proposed.
The Scientific Method 2011 acloutier copyright 2011, by Annie C. Cloutier
The document outlines the scientific method, which is a process used by scientists to investigate questions and phenomena in a systematic way. It discusses that while there are varying versions, the core steps generally include formulating a question, developing a hypothesis, conducting an experiment, analyzing data, drawing conclusions, and communicating results. The document also notes that while scientists use this method in their work, not all steps are always needed, and that businesses have also adapted aspects of the scientific method to help solve problems.
Troublefree troubleshooting ian campbell sps jhb 2019, by Ian Campbell
The document provides instructions for attendees of an event hosted by SPS Events in Johannesburg, South Africa. It notes that session schedules may not be printed for all attendees and can be found by session room doors or online. Attendees are asked to provide feedback on sessions and to stay for the prize giving at the end of the day. They are also encouraged to interact with sponsors and speakers, take selfies with speakers to enter a photo competition, and share their learnings on social media using the #SPSJHB hashtag.
The document outlines the key steps of the scientific method: observation, hypothesis, experiment, and conclusion. It discusses performing experiments, including developing procedures, collecting data, and analyzing results to determine if the hypothesis is supported or needs revision. Potential sources of bias are addressed, as well as the importance of collaboration and replication in scientific research. Safety protocols for the laboratory are also covered.
Troubleshooting is a logical, systematic process for identifying problems and their causes in order to solve issues and restore functionality. It involves identifying the problem, structuring the problem by gathering details, looking at possible solutions, making a decision, implementing the solution, verifying that it worked, and preventing future recurrence. Effective troubleshooting requires skills like analytical thinking, patience, adaptability, persistence, and a desire to learn.
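The sequence of steps described above can be expressed as an ordered pipeline with a verification loop. This is a hedged sketch, not the document's own material: the step names follow the text, while the `troubleshoot` function and its toy fixtures are hypothetical.

```python
# Sketch of the troubleshooting sequence: the ordered steps from the text,
# plus a loop that retries fixes until verification passes.

TROUBLESHOOTING_STEPS = [
    "identify the problem",
    "structure the problem by gathering details",
    "look at possible solutions",
    "make a decision",
    "implement the solution",
    "verify that it worked",
    "prevent future recurrence",
]

def troubleshoot(apply_fix, is_fixed, max_attempts=3):
    """Apply candidate fixes until verification passes or attempts run out.
    Returns the attempt number that succeeded, or None if unresolved."""
    for attempt in range(1, max_attempts + 1):
        apply_fix()
        if is_fixed():        # the "verify" step: never skip it
            return attempt
    return None               # unresolved: revisit the earlier steps

# Toy usage: the second fix attempt succeeds.
state = {"tries": 0}
def apply_fix(): state["tries"] += 1
def is_fixed(): return state["tries"] >= 2
attempts = troubleshoot(apply_fix, is_fixed)
print(attempts)
```

The explicit verify step is the point: a fix that is never verified is just a hypothesis, which ties this process back to the scientific-method framing used throughout these summaries.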
The document provides instructions for a school assignment on the human body systems. Students are asked to name and describe four body systems, explain how they interact and how each is affected by different activities like eating, exercise and sleep. They are then asked to create a poster or presentation to demonstrate their understanding of how the body systems work together and what happens if one system is not cared for.
SF Lean Startup Circle: The New Science of Product Development, by James Birchler
IMVU is referred to as the first "Lean Startup", and was the proving ground for applying Build-Measure-Learn (the Scientific Method) to rapidly iterate and continuously improve its product development process, resulting in a valuable strategic advantage and highly productive, happy, and motivated teams.
This lecture focuses on IMVU's continuous improvement to product development process through experimentation. Success factors are reviewed, including a culture that values and supports experimentation and learning, Agile and XP engineering practices (which support iterative development), and strong product vision and user experience design. Recent product development process experiments will be shared.
Takeaway: Attendees will learn how IMVU continuously improves product development using the Scientific Method, Agile, XP, and Scrum supported by strong user experience design. Strategies to support experimental learning (retrospectives, postmortems, and 5 Whys) will be reviewed along with practical examples.
The scientific method involves identifying a problem, researching the topic, developing a testable hypothesis, conducting controlled experiments to collect data, analyzing the results, and drawing a conclusion. The steps include:
1) Identifying a problem and research question.
2) Researching previous work on the topic from various sources.
3) Developing an educated hypothesis with an expected measurable outcome.
4) Conducting multiple controlled experiments to test the hypothesis and record observations and data.
5) Analyzing the results.
6) Drawing a conclusion about whether the data supports the hypothesis.
Have you ever wondered whether your retrospective format was actually effective at fueling learning and improvement? Are you ready to try something different?
"FOCOL Point" is Idea Flow Learning Framework's 5-step learning and improvement protocol. It works great for software improvement, but it also works for team reflection, personal reflection, or mentorship. Rather than searching for answers, a FOCOL Point is all about finding the right questions.
Once I've walked through the protocol, we'll make a FOCOL Point together as a group!
First, we'll identify the biggest software problems faced by the audience using the "flashstorming" technique. Then we'll focus on the top problems of the group and start digging into the details by walking through a group-adapted version of the stop and think protocol:
1. **Focus**: What's the journey we're trying to understand?
2. **Observe**: What patterns do we see? (for all journey pattern types)
3. **Conclude**: What obstacles seem to be causing the pain?
4. **Optimize**: How could we have avoided the obstacles?
5. **Learn**: What questions should we ask ourselves in the future?
Amplify your learning by reflecting more productively on your own or with your team! You can immediately apply this technique on your own projects.
The document discusses the scientific method, which is a process used to investigate questions about the world through making observations and conducting organized experiments. There are several versions of the scientific method, but they generally involve identifying a problem, developing a hypothesis to predict the answer, designing an experiment to test the hypothesis, performing the experiment and analyzing the collected data to evaluate if it supports the hypothesis. Key parts of the scientific method include forming a testable hypothesis, gathering objective data through experimentation, and drawing conclusions based on the analysis of the results.
The document provides guidance on writing science lab reports for both workplace and educational audiences. While workplace audiences are more interested in results, educators want more detailed explanations to ensure students understand experiments. Both value objectivity, precision, accuracy and carefully drawn conclusions based on sufficient data presented clearly using visuals like tables and graphs. Lab reports should answer what the purpose was, what materials and procedures were used, what the results were, and what conclusions were drawn.
What does "better" really mean? If we eliminate duplication, is the code better? If we decide to skip the unit tests, are we doing worse? How do we decide if one design is better than another design?
About 8 years ago, my project failed, despite "doing all the right things", and shattered my faith in best practices. Since then, I've learned to measure developer experience, use *data* to learn what works, and I've been codifying "better" into patterns and decision principles for years. In this talk, I'll show you the paradigm shift that led to all my discoveries, and hopefully change your perspective on "better".
"Idea Flow" is an alternative to the Technical Debt metaphor that focuses on problems in human interaction rather than problems inside the code. By measuring the "friction" that occurs when developers interact with the code, we can identify the biggest causes of friction and systematically optimize developer experience.
Why go to all this trouble? From my experience, the biggest causes of pain are seldom what we think. When we try to make things "better", we can easily miss our biggest problems, or inadvertently make things worse. Visibility turned my beliefs about "better" upside-down.
First, I'll walk you through the conceptual metaphor of "Idea Flow" and how to recognize friction in developer experience.
Next, we'll write a little code and record the experience using the open source "Idea Flow Mapping" software.
Finally, we'll discuss a handful of "decision principles" for optimizing developer experience and analyze our coding experience as a group.
Empirical Methods in Software Engineering - an Overview, by alessio_ferrari
A first introductory lecture on empirical methods in software engineering. It includes:
1) Motivation for empirical software engineering studies
2) How to define research questions
3) Measures and data collection methods
4) Formulating theories in software engineering
5) Software engineering research strategies
Find the videos at: https://www.youtube.com/playlist?list=PLSKM4VZcJjV-P3fFJYMu2OhlTjEr9Bjl0
This document provides instructions and information about the scientific method. It outlines the basic steps of the scientific method which include: purpose/problem, research, hypothesis, experiment, analyze data, and conclusion. It defines key terms like independent and dependent variables, constants, and controls. Examples are provided to identify the different variables, constants, and controls. Detailed guidance is given for how to design a controlled experiment, collect and analyze data, and write a conclusion that discusses whether the hypothesis was supported and how the experiment could be improved.
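The distinction above between independent variables, dependent variables, constants, and controls can be made concrete with a labeled experiment record. The plant-growth example and the `is_controlled` check are illustrative assumptions, not content from the slides.

```python
# Sketch of labeling the parts of a controlled experiment.

experiment = {
    "independent_variable": "amount of fertilizer",  # what we deliberately change
    "dependent_variable": "plant height",            # what we measure
    "constants": ["light", "water", "soil type"],    # held the same for all groups
    "control_group": "no fertilizer",                # baseline for comparison
}

def is_controlled(exp):
    """A controlled experiment holds everything else constant and keeps a
    baseline group against which the change is compared."""
    return bool(exp.get("control_group")) and bool(exp.get("constants"))

print(is_controlled(experiment))
```

Writing the design down this way makes it easy to spot a confound: any factor that varies between groups but is not the independent variable belongs in the constants list.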
Artificial Intelligence and XPath Extension Functions, by Octavian Nadolu
The purpose of this presentation is to provide an overview of how you can use AI from XSLT, XQuery, Schematron, or XML Refactoring operations, the potential benefits of using AI, and some of the challenges we face.
"Idea Flow" is an alternative to the Technical Debt metaphor that focuses on problems in human interaction rather than problems inside the code. By measuring the "friction" that occurs when developers interact with the code, we can identify the biggest causes of friction and systematically optimize developer experience.
Why go to all this trouble? From my experience, the biggest causes of pain are seldom what we think. When we try to make things "better", we can easily miss our biggest problems, or inadvertently make things worse. Visibility turned my beliefs about "better" upside-down.
First, I'll walk you through the conceptual metaphor of "Idea Flow" and how to recognize friction in developer experience.
Next, we'll write a little code and record the experience using the open source "Idea Flow Mapping" software.
Finally, we'll discuss a handful of "decision principles" for optimizing developer experience and analyze our coding experience as a group.
Empirical Methods in Software Engineering - an Overviewalessio_ferrari
A first introductory lecture on empirical methods in software engineering. It includes:
1) Motivation for empirical software engineering studies
2) How to define research questions
3) Measures and data collection methods
4) Formulating theories in software engineering
5) Software engineering research strategies
Find the videos at: https://www.youtube.com/playlist?list=PLSKM4VZcJjV-P3fFJYMu2OhlTjEr9Bjl0
This document provides instructions and information about the scientific method. It outlines the basic steps of the scientific method which include: purpose/problem, research, hypothesis, experiment, analyze data, and conclusion. It defines key terms like independent and dependent variables, constants, and controls. Examples are provided to identify the different variables, constants, and controls. Detailed guidance is given for how to design a controlled experiment, collect and analyze data, and write a conclusion that discusses whether the hypothesis was supported and how the experiment could be improved.
Artificia Intellicence and XPath Extension FunctionsOctavian Nadolu
The purpose of this presentation is to provide an overview of how you can use AI from XSLT, XQuery, Schematron, or XML Refactoring operations, the potential benefits of using AI, and some of the challenges we face.
E-commerce Application Development Company.pdfHornet Dynamics
Your business can reach new heights with our assistance as we design solutions that are specifically appropriate for your goals and vision. Our eCommerce application solutions can digitally coordinate all retail operations processes to meet the demands of the marketplace while maintaining business continuity.
Flutter is a popular open source, cross-platform framework developed by Google. In this webinar we'll explore Flutter and its architecture, delve into the Flutter Embedder and Flutter’s Dart language, discover how to leverage Flutter for embedded device development, learn about Automotive Grade Linux (AGL) and its consortium and understand the rationale behind AGL's choice of Flutter for next-gen IVI systems. Don’t miss this opportunity to discover whether Flutter is right for your project.
Takashi Kobayashi and Hironori Washizaki, "SWEBOK Guide and Future of SE Education," First International Symposium on the Future of Software Engineering (FUSE), June 3-6, 2024, Okinawa, Japan
UI5con 2024 - Boost Your Development Experience with UI5 Tooling ExtensionsPeter Muessig
The UI5 tooling is the development and build tooling of UI5. It is built in a modular and extensible way so that it can be easily extended by your needs. This session will showcase various tooling extensions which can boost your development experience by far so that you can really work offline, transpile your code in your project to use even newer versions of EcmaScript (than 2022 which is supported right now by the UI5 tooling), consume any npm package of your choice in your project, using different kind of proxies, and even stitching UI5 projects during development together to mimic your target environment.
A Study of Variable-Role-based Feature Enrichment in Neural Models of CodeAftab Hussain
Understanding variable roles in code has been found to be helpful by students
in learning programming -- could variable roles help deep neural models in
performing coding tasks? We do an exploratory study.
- These are slides of the talk given at InteNSE'23: The 1st International Workshop on Interpretability and Robustness in Neural Software Engineering, co-located with the 45th International Conference on Software Engineering, ICSE 2023, Melbourne Australia
Need for Speed: Removing speed bumps from your Symfony projects ⚡️Łukasz Chruściel
No one wants their application to drag like a car stuck in the slow lane! Yet it’s all too common to encounter bumpy, pothole-filled solutions that slow the speed of any application. Symfony apps are not an exception.
In this talk, I will take you for a spin around the performance racetrack. We’ll explore common pitfalls - those hidden potholes on your application that can cause unexpected slowdowns. Learn how to spot these performance bumps early, and more importantly, how to navigate around them to keep your application running at top speed.
We will focus in particular on tuning your engine at the application level, making the right adjustments to ensure that your system responds like a well-oiled, high-performance race car.
Hand Rolled Applicative User ValidationCode KataPhilip Schwarz
Could you use a simple piece of Scala validation code (granted, a very simplistic one too!) that you can rewrite, now and again, to refresh your basic understanding of Applicative operators <*>, <*, *>?
The goal is not to write perfect code showcasing validation, but rather, to provide a small, rough-and ready exercise to reinforce your muscle-memory.
Despite its grandiose-sounding title, this deck consists of just three slides showing the Scala 3 code to be rewritten whenever the details of the operators begin to fade away.
The code is my rough and ready translation of a Haskell user-validation program found in a book called Finding Success (and Failure) in Haskell - Fall in love with applicative functors.
E-commerce Development Services- Hornet DynamicsHornet Dynamics
For any business hoping to succeed in the digital age, having a strong online presence is crucial. We offer Ecommerce Development Services that are customized according to your business requirements and client preferences, enabling you to create a dynamic, safe, and user-friendly online store.
OpenMetadata Community Meeting - 5th June 2024OpenMetadata
The OpenMetadata Community Meeting was held on June 5th, 2024. In this meeting, we discussed about the data quality capabilities that are integrated with the Incident Manager, providing a complete solution to handle your data observability needs. Watch the end-to-end demo of the data quality features.
* How to run your own data quality framework
* What is the performance impact of running data quality frameworks
* How to run the test cases in your own ETL pipelines
* How the Incident Manager is integrated
* Get notified with alerts when test cases fail
Watch the meeting recording here - https://www.youtube.com/watch?v=UbNOje0kf6E
SOCRadar's Aviation Industry Q1 Incident Report is out now!
The aviation industry has always been a prime target for cybercriminals due to its critical infrastructure and high stakes. In the first quarter of 2024, the sector faced an alarming surge in cybersecurity threats, revealing its vulnerabilities and the relentless sophistication of cyber attackers.
SOCRadar’s Aviation Industry, Quarterly Incident Report, provides an in-depth analysis of these threats, detected and examined through our extensive monitoring of hacker forums, Telegram channels, and dark web platforms.
Enterprise Resource Planning System includes various modules that reduce any business's workload. Additionally, it organizes the workflows, which drives towards enhancing productivity. Here are a detailed explanation of the ERP modules. Going through the points will help you understand how the software is changing the work dynamics.
To know more details here: https://blogs.nyggs.com/nyggs/enterprise-resource-planning-erp-system-modules/
3. To-Do | DOING (1) | DONE
• Story of a Team
• Intro
• Types of Reasoning
• Back to the Team
• Scientific Method
• Deliberate Process
• These Make It Easier
• Watch Out For These
• Other Uses For Experiments
• Teaching This
• Bonus Material!
• Closing and References
5. CUSTOMARY “WHO AM I?” SLIDE
Shawn Button
• Developer
• Agile Coach
• Lever 21
Pokémon Go
Trainer
13. DEDUCTIVE REASONING
Starts with the assertion of a general rule and proceeds from there to a guaranteed specific conclusion.
e.g. Math: If x = 4 and y = 1, then 2x + y = 9.
e.g. All birds have feathers and swans are birds, so swans have feathers.
14. INDUCTIVE REASONING
Begins with observations and proceeds to a generalized conclusion that is likely, but not certain, in light of accumulated evidence.
e.g. All swans that I have seen are white, therefore all swans are white.
15. ABDUCTIVE REASONING
The generation of new ideas (hypotheses) to explain observations. Begins with an incomplete set of observations and proceeds to the likeliest possible explanation for the set.
16. ABDUCTIVE REASONING
Examples:
Medical diagnosis: given this set of symptoms, what diagnosis would best explain most of them?
Criminal trial: given the evidence, is the suspect more likely innocent or guilty?
Swans get wet when they swim. That swan is wet, therefore it was swimming, and therefore there must be water near here.
17. ABDUCTIVE REASONING
Abductive reasoning is concerned with imaginative reasoning, a process where new ideas or hypotheses come into existence through observation. We use abductive reasoning all of the time in order to make sense of the world. We build up a mental model of reality that is constructed from hypotheses, which are based on observations.
18. TYPES OF LOGICAL REASONING
Deductive Reasoning: General Rule → Specific Conclusion (true if the rule is true)
Inductive Reasoning: Specific Observations → General Conclusion (may be true)
Abductive Reasoning: Incomplete Observations → Best Prediction (may be true)
31. THE SCIENTIFIC METHOD OF TROUBLESHOOTING
Observe Problem → Collect Observations → Create Hypotheses → Design Experiment → Perform Experiment → Evaluate Results
If the hypothesis is falsified, try another hypothesis. If it is supported, refine your hypotheses.
Every pass through the loop adds to our understanding of the system, until: Solution!
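The loop above can also be sketched as code. This is purely illustrative: the class and function names are invented for this sketch, not part of the talk or of any library.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Hypothesis:
    """One possible explanation for the observed problem."""
    explanation: str
    prediction: str                   # what we expect to see if it is true
    supported: Optional[bool] = None  # None until its experiment has run

def troubleshoot(observations: List[str],
                 hypotheses: List[Hypothesis],
                 run_experiment: Callable[[Hypothesis], bool]) -> Optional[Hypothesis]:
    """Drive the loop: test hypotheses one at a time, recording what we learn."""
    understanding = [("observation", o) for o in observations]
    for hypothesis in hypotheses:
        hypothesis.supported = run_experiment(hypothesis)
        # Falsified or supported, the experiment taught us something: write it down.
        understanding.append((hypothesis.explanation, hypothesis.supported))
        if hypothesis.supported:
            return hypothesis         # supported: refine from here toward a solution
    return None                       # all falsified: go collect more observations
```

In practice `run_experiment` is you changing config, reading logs, or stepping through a debugger; the point of the sketch is only that every result, pass or fail, lands in the written-down `understanding`.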
33. OBSERVE PROBLEM
• A Problem is behavior that you didn’t expect.
• The first thing to do is figure out how it should behave.
• Focus on external behavior. What are all of the things we should expect to see if it works?
• Ask: What input does it take? What output should it give? Etc.
• Write down the problem statement! If you can’t create a concise description, you probably don’t understand it well enough.
• Can we recreate the problem? If not, we need to be able to in order to proceed.
35. COLLECT OBSERVATIONS
• Now collect some evidence. Write down any observations about what’s happening in the system.
• What actually happens when you recreate the problem? What inputs did you use? What are the outputs? Are there log files? Can we see other impacts, for example in a database or service?
• Spend some time digging here: the more time you spend, the better the quality of the hypotheses you will be able to create in the next step.
37. CREATE HYPOTHESES
• Think of as many possible causes for the observed behaviour as you can.
• These are your hypotheses. A hypothesis is an attempt to understand and explain what is happening.
• Hypotheses can be wrong, and many will be.
• Write everything down!
• If you can’t form clear hypotheses, you might just not know enough, and need to gather some more data.
• There’s nothing wrong with using Stack Overflow / Google to help form hypotheses!
39. DESIGN AND CARRY OUT EXPERIMENTS
(The troubleshooting loop again: Observe Problem → Collect Observations → Create Hypotheses → Design Experiment → Perform Experiment → Evaluate Results. Falsified: try another hypothesis. Supported: refine your hypotheses. Each pass builds our understanding of the system, until: Solution!)
40. DESIGN AND CARRY OUT EXPERIMENTS
• Before you start your experiment, write down what you are doing, what you expect to see, and what it means if the experiment fails.
• It could be a description of the metrics you’re going to look at, the code or config you’re going to change, or the query you’re going to run.
• An experiment could also just be gathering data or measurements, for example from log files, metrics, or other visibility tools you have. For example: “if we look in the production logs we should see this log statement before the error.”
• One way of testing a hypothesis is by looking at the code, or stepping through a debugger.
• Again, write it down!
• E.g. “If I change this line in the config file, then I expect that this error should no longer appear in the log.”
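“Write it down before you run it” can even be a tiny bit of code. This is a hypothetical sketch, with invented names and an invented config setting, just to show the shape of a written-down experiment:

```python
import datetime

def record_experiment(log, change, expect, if_fails):
    """Append a written-down experiment to a shared log *before* running it."""
    entry = {
        "when": datetime.datetime.now().isoformat(timespec="seconds"),
        "change": change,      # what we are doing
        "expect": expect,      # what we expect to see
        "if_fails": if_fails,  # what it means if the experiment fails
        "result": None,        # filled in only after the experiment runs
    }
    log.append(entry)
    return entry

log = []
entry = record_experiment(
    log,
    change="Set db.pool_size = 1 in app.conf",
    expect="'too many connections' no longer appears in the log",
    if_fails="Pool size is not the cause; revert the change",
)
```

A notebook or whiteboard does the same job; the value is that the prediction exists before the result does.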
42. EVALUATE RESULTS
• What happened when you performed the experiment?
• For complicated debugging it helps to keep a record of the results, for example save screenshots or copies of log files.
• Did the results match your prediction? If not, your hypothesis must be false (unless you made some other mistake).
• Do the results suggest a new hypothesis, or a refinement to existing hypotheses?
• If you changed something (e.g. code or config) and the results aren’t what you expected, you should undo the change.
44. EVERY EXPERIMENT BUILDS UNDERSTANDING
• If you are able to solve a problem without learning anything, the problem is just going to recur later.
• A validated hypothesis doesn’t necessarily mean you’ve fixed the problem. It might just be that you’ve learned something about the system. You might need to go through many experiment loops to learn enough to find the problem.
• The scientific method is actually a structured knowledge-acquisition process. Solving the problem is a happy fringe benefit!
47. EXERCISE!
As a table, you are going to try out the scientific method. Pick a scenario from the back of the handout, or better, have someone volunteer a real problem they have or have had.
48. EXERCISE!
On the big sheet on the table, make sections for: Problem Statement, Observations, Hypotheses.
49. EXERCISE!
Together, come up with each of these. For each hypothesis, come up with the experiment you will run, and how you will know if the hypothesis is validated.
56. WHY SO DELIBERATE WITH THE PROCESS?
“First I had to become conscious while programming. I had been programming for years… and I was astonished to discover that, even though programming decisions came smoothly and quickly to me, I couldn’t explain why…. The first step… was slowing down long enough to become aware of what I was thinking, to stop pretending that I coded by instinct.”
57. WHY SO DELIBERATE WITH THE PROCESS?
“I’m not a great programmer. I’m just a good programmer with great habits.”
61. WRITE EVERYTHING DOWN!
• Your memory isn’t as good as you think it is!
• Take notes in a notebook, whiteboard, stickies, or a chat tool.
• Provides clarity on the problem, hypotheses, experiments and learning.
• Avoids repeating experiments, or missing connections.
62. EXPERIMENT FASTER
The speed of running experiments is key. The faster you go through the experiment loop, the faster you learn.
63. EXPERIMENT FASTER
• Isolate the problem in a smaller app that you can run in isolation (unit test tools like xUnit are fantastic for this).
• Focus on faster experiments, for example ones that only use logs from previous runs.
• Tools help here! Become an expert in your debugger, or, in some languages, interactive consoles.
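For example, a suspect function can be pulled out of the app and exercised in tests that run in milliseconds, instead of restarting the whole application for every experiment. The function, the config format, and the test names below are all invented for illustration:

```python
def parse_port(line: str) -> int:
    """Suspect code extracted from the app: read 'port=NNNN' from a config line."""
    _key, _sep, value = line.partition("=")
    return int(value.strip())

# Each test is one fast experiment: state the prediction, then check it.
# (Written pytest-style; any xUnit-family tool works the same way.)

def test_plain_line():
    # Prediction: the happy path works, so the bug is elsewhere.
    assert parse_port("port=8080") == 8080

def test_line_with_spaces():
    # Prediction: the whitespace variant seen in the emailed config still parses.
    assert parse_port("port = 8080") == 8080
```

Running these with a test runner takes well under a second, so the experiment loop spins as fast as you can think of hypotheses.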
65. SIMPLIFY
Often the problem is mired in a lot of complexity. Can you find a way to recreate the problem in a less complicated fashion?
• Extract the offending code into its own method/class.
• Refactor to clean it up.
• Comment out extraneous lines.
• Once you run your experiment you likely want to revert these changes.
66. DESIGN FOR DEBUGGING
Empathize with the future person who has to debug the problem (who might be you).
• Design in the ability to run experiments.
• Good, modular design lets you run things in isolation. Single Responsibility Principle!
• Have unit tests. Use Test-Driven Development.
• Use informative logging statements.
• Keep your logs clean.
• Good error messages beat documentation.
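“Good error messages beat documentation” might look like this in practice. The function, the config file name, and the setting are hypothetical; the point is the shape of the message:

```python
def require_timeout(path, settings):
    """Fetch a required setting, failing with a message that aids debugging."""
    if "timeout" not in settings:
        # A good error says what was expected, what was found, and what to do next.
        raise KeyError(
            f"missing required setting 'timeout' in {path}; "
            f"found settings {sorted(settings)}; add e.g. 'timeout = 30' (seconds)"
        )
    return settings["timeout"]
```

A future troubleshooter reading that message already has a problem statement, an observation, and a first hypothesis handed to them.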
67. RECAP: THINGS THAT MAKE THIS EASIER
• Write Everything Down!
• Run Experiments Faster
• Simplify The System
• Create Systems That Support Troubleshooting
• Be Deliberate About The Process and Practice
71. PRESSURE AND RUSHING SLOW YOU DOWN
• If you skip steps or try to hurry this, you will immediately slow down.
• If you are a manager, protect your developers from pressure (and blame) when they are debugging a critical problem.
• Have someone communicate status to anxious management types and customers, to allow the devs time to be systematic.
• Take a break if you get stuck.
73. ONLY CHANGE ONE VARIABLE AT A TIME
If you change more than one thing at a time, it is very hard to evaluate the results of your experiment. Take the smallest steps you can in order to learn about the system.
74. BIAS, ASSUMPTIONS, BLAME
You will unconsciously want the problem to have a certain cause, and so pick or exclude valid hypotheses:
• You have “gut feelings” or assumptions about what is wrong.
• There is a part of the system you distrust, or you often see problems in a certain area.
• We have a tendency to think a problem is someone else’s fault.
75. BLAME
Before you blame someone else, run an experiment that conclusively proves it is someone else’s problem. Then provide them with the results of that experiment to help them debug the problem.
76. COMPLEX FAILURES
• Sometimes a problem can have more than one root cause.
• Sometimes failures can interact with each other (or even cancel!).
• Sometimes a supported hypothesis can lead you down the wrong path.
77. TRIAL AND ERROR
• Don’t just randomly run experiments!
• Hypotheses are created based on evidence.
• New hypotheses should be guided and refined by your increasing knowledge of the system you are investigating.
78. WATCH OUT FOR
• Pressure and rushing slows you down
• Changing multiple variables confounds you
• Bias, assumptions, blame
• Complex failures
• Trial and error
81. KNOWLEDGE ACQUISITION
If you want to learn about a new system, the same technique applies! State your hypothesis about how the system works, and then run your experiment. You build up a mental model of the system through structured experiments.
82. STACK OVERFLOW AND GOOGLE
Sometimes, it’s just easier to google the problem. Don’t think this replaces all of the 90 times a day you search Stack Overflow.
83. WHERE IS THE CREATIVITY?
Does this feel dry and mechanical? Remember: abductive reasoning is a creative process. Finding the right hypothesis and drawing connections between experiments takes creativity, ingenuity, and experience. More people helps (Mobbing ftw!)
84. OTHER THINGS
• Knowledge Acquisition
• When Not To Use The Scientific Method
• Where Is The Creativity?
87. PAIRING AND MOBBING
The best way to teach this is to demonstrate it in action. When a problem happens, get everyone in a room and walk through the process. Use a big whiteboard or sticky notes on a wall to record your notes.
88. TOYOTA KATA
89. TROUBLESHOOTING KATA QUESTIONS
Questions About The Problem
1. What do we know about the problem?
2. Is this enough to form hypotheses?
3. What are our hypotheses? Are we missing any?
4. Which hypothesis is most likely to advance us towards a solution?
Questions For The Experiment
1. What experiment can we run to test our hypothesis?
2. What do we expect to see?
3. What were the results? Did we see what we expected?
4. What have we learned about the system?
90. TEACHING THIS
• Pairing and Mobbing is an excellent way to share this process and discipline.
• Use the Troubleshooting Kata questions to start people thinking in a scientific way.
97. BONUS! EXPERIMENTAL MINDSET
Once you get into the habit of thinking about things in terms of experiments, you will find it is applicable everywhere. Being more explicit about experimentally testing what you learn and believe makes you a more powerful thinker.
100. USEFUL REFERENCES
“Debugging With Scientific Method” - @stuarthalloway talk at Clojure/conj - https://www.youtube.com/watch?v=FihU5JxmnBg
“The Scientific Method of Troubleshooting: A FutureTalk with Blithe Rocher” - https://blog.newrelic.com/2015/08/19/futuretalk-scientific-method-blithe-rocher/
“Scientific Debugging” - http://yellerapp.com/posts/2014-08-11-scientific-debugging.html
Game Coding Complete - Mike McShaffry
“Scientific debugging: Finding out why your code is buggy” - http://www.embedded.com/design/debug-and-optimization/4418635/Scientific-debugging--Finding-out-why-your-code-is-buggy---Part-1
“Each necessary, but only jointly sufficient” - http://www.kitchensoap.com/2012/02/10/each-necessary-but-only-jointly-sufficient/
Implementation Patterns - Kent Beck
101. CREDITS
“Pair Programming” - Lisamarie Babik - https://commons.wikimedia.org/wiki/File:Pair_programming_1.jpg
“Sherlock” - http://today.uconn.edu/wp-content/uploads/2014/07/sherlock-holmes-basil-rathbone.jpg
“Life Is A Mastermind Game” - https://adsoftheworld.com/taxonomy/brand/mastermind
The Four Stages of Competence - https://focuspocusnow.files.wordpress.com/2013/02/4-stages-of-competence.png
“Lab Puppy” - https://pixabay.com/en/puppy-labrador-purebred-retriever-1082141/
“Springfield Blame” - https://c2.staticflickr.com/8/7473/15650030866_f236377785.jpg
Thanks for Coming!
Catch ‘Em All!
Editor's Notes
- team asked to start working on an existing system.
- Java, only a few years old, but already becoming legacy. big enterprise framework, no tests, little documentation
- team didn’t get support - different location; different management hierarchy with different priorities.
- trying to build, deploy, run
- strong deadline and pressure
- 3 very smart developers spent 1.5 days, and they just couldn’t get it running.
we mobbed, found the problem, and got it running in 45 minutes
this started me on a path. Why were we successful when they had failed before?
We weren’t successful just because we got in a room. Collaboration isn’t a silver bullet.
I didn’t have some special knowledge of the system, or expertise the team didn’t have.
I think we were successful because of the process we used.
I think sometimes people think good troubleshooting is innate, or due to expertise.
But is effective debugging also something we could teach?
Is it possible that I could get better at it if I was more explicit about the process
Please bear with me, as this part is a little technical, but I think it’s important background.
Effectively, it is a process of choosing the hypothesis which would best explain the available evidence.
Think of abduction as creating and evaluating competing hypotheses.
Whichever hypothesis is stronger, for whatever reason, wins the abduction.
Sherlock actually used inductive or abductive reasoning.
I don’t know why Conan Doyle used the term “deduction”
I think maybe because it sounds silly to say you are “inducing” or “abducting” something?
Pause for riotous laughter.
Okay, back to the team
They had been given a code repository, and were able to build
It’s a Tomcat app. big-enterprise java, complicated configuration of the app required, mysterious imported enterprise framework.
There were four configuration files in the repo. The application wouldn’t start up with them; an error was thrown on the console.
When they asked the team that was “supporting” them, they were emailed different versions of the configuration files. A different error was shown on the console.
The team was randomly trying different combinations of the configuration files. They’d drop in a new file and try to restart it. Rinse and repeat.
Each restart of app took 10 minutes.
first thing we did was slow down
reverted to the original set of configuration files
We looked at the error on the console and logs.
Came up with hypotheses about what the possible causes of the errors could be
We then came up with a list of things we could try in order to test these hypotheses. Experiments. We then picked one and tried it.
When we ran the experiment we learned something about the system. Either way, pass or fail.
Sometimes we’d have no impact on the error, introduce a new error, and then that experiment didn’t work. We’d undo the changes and try another hypothesis.
Eventually we discovered that the configuration files had two lines that were causing problems. We found that we needed to take property values from different files to make it work. There were two problems, which is part of the reason the team couldn’t find them: in their experiments they’d change a bunch of things, fix the one error, but introduce a different error. They were running such large and unfocused experiments that they didn’t learn anything.
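A cheap way to shrink a search space like this is to diff the candidate configurations key by key, so each restart tests exactly one differing value instead of swapping whole files and confounding the results. A minimal sketch using `java.util.Properties`; the property names below are invented for illustration, not taken from the actual system:

```java
import java.util.Properties;
import java.util.TreeSet;

// Sketch: diff two Java .properties configurations key by key so that an
// experiment changes exactly one differing value at a time. The property
// names below are invented for illustration.
public class ConfigDiff {
    static TreeSet<String> differingKeys(Properties a, Properties b) {
        TreeSet<String> keys = new TreeSet<>();
        keys.addAll(a.stringPropertyNames());   // union of all keys
        keys.addAll(b.stringPropertyNames());
        TreeSet<String> diff = new TreeSet<>();
        for (String k : keys) {
            String va = a.getProperty(k);
            if (va == null || !va.equals(b.getProperty(k))) diff.add(k);
        }
        return diff;
    }

    public static void main(String[] args) {
        Properties fromRepo = new Properties();
        fromRepo.setProperty("db.url", "jdbc:one");
        fromRepo.setProperty("pool.size", "10");
        Properties emailed = new Properties();
        emailed.setProperty("db.url", "jdbc:two");
        emailed.setProperty("pool.size", "10");
        // Only the keys that actually differ are candidates for the next
        // single-variable experiment.
        System.out.println(differingKeys(fromRepo, emailed));
    }
}
```

With the differing keys listed, each 10-minute restart can test a single key.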
First, a caveat.
There’s a lot of philosophy of science, and it’s fascinating stuff. I highly advise you to read all of it.
But for the purposes of this presentation we are going to skip Kuhn’s The Structure of Scientific Revolutions.
Regardless, the scientific method has served us pretty well for the last couple of hundred years, so let’s use it.
Explain the game.
This can be solved by using the scientific method.
The problem is the hidden pegs.
Your hypothesis is a guess at the colours. You experiment by placing the colours and receiving the white and black pegs as the result of your experiment.
You narrow down to the correct solution by running experiments
I’ll come back to mastermind
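The feedback rule that makes Mastermind a chain of experiments can be written down precisely: a black peg for each guess in the right colour and position, a white peg for each right colour in the wrong position. A sketch (the colour letters are arbitrary, not from the talk):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of Mastermind feedback scoring: black pegs = right colour in the
// right position; white pegs = right colour in the wrong position.
public class MastermindScore {
    // Returns {blackPegs, whitePegs} for a guess against the hidden code.
    static int[] score(char[] code, char[] guess) {
        int black = 0;
        Map<Character, Integer> codeLeft = new HashMap<>();
        Map<Character, Integer> guessLeft = new HashMap<>();
        for (int i = 0; i < code.length; i++) {
            if (code[i] == guess[i]) {
                black++;                       // exact match: black peg
            } else {
                codeLeft.merge(code[i], 1, Integer::sum);
                guessLeft.merge(guess[i], 1, Integer::sum);
            }
        }
        int white = 0;                         // colour-only matches among the rest
        for (Map.Entry<Character, Integer> e : guessLeft.entrySet()) {
            white += Math.min(e.getValue(), codeLeft.getOrDefault(e.getKey(), 0));
        }
        return new int[] { black, white };
    }

    public static void main(String[] args) {
        int[] pegs = score("RGBY".toCharArray(), "RYGB".toCharArray());
        System.out.println(pegs[0] + " black, " + pegs[1] + " white");
    }
}
```

Each guess is an experiment; the pegs are its recorded result, and the next hypothesis is chosen from what the pegs rule out.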
Explain overview
Slide: Observe failure
behavior that you didn’t expect. In this case the team was unable to start up tomcat
First thing to do is figure out how it should behave.
people miss that step, especially if this problem is coming from a bug report.
Focus on external behavior. What are all of the things we should expect to see if it works.
Ask: What input does it take? What output should it give? And so on.
Write down the problem statement. If you can’t create a concise description you probably don’t understand it well enough.
Take time with your original problem description. Try and be as precise as you can. If you mess up the problem definition, it’s easy to get stuck, or confused.
Can we recreate the problem? If not we need to be able to in order to proceed.
Write down any observations about it.
Now collect some evidence. What actually happens? What inputs did you use? What are the outputs? Are there log files? Can we see other impacts, for example in a database or service? Spend some time digging here, because the more time you spend, the better the quality of your next step.
Think of as many possible causes for the observed behaviour as you can.
These are your hypotheses. A hypothesis is an attempt to understand and explain what is happening.
Hypotheses can be wrong, and many will be.
Write everything down!
If you can’t form clear hypotheses, you might just not know enough, and need to gather some more data.
There’s nothing wrong with using StackOverflow/Google to help form hypotheses!
Now select the hypothesis that you want to test. This doesn’t have to be the most likely, it might just be the easiest to test.
You’re looking for hypotheses that you think will give you good data, or perhaps just easy ones to run.
Pick experiments which are designed to give you as much information as possible, for the lowest effort.
Experienced developers have an expert, intuitive sense for problems. They’ve done it so much that they eliminate unlikely problems without even knowing why. Like a chess master only looking at the best choices. (Could use the quote about the difference between a master and an expert in chess: that they actually look at fewer options.)
Before you start your experiment write down what you are doing, what you expect to see, and what it means if the experiment fails.
It could be a description of the metrics you’re going to look at, code or config you’re going to change, query you’re going to run
An experiment could also just be gathering data or measurements, for example from log files, metrics, other visibility tools you have. For example “if we look in the production logs we should see this log statement before the error”
One way of proving a hypothesis is by looking at the code, or stepping through a debugger.
Again, write it down!
E.g. If I change this line in the config file, then I expect that this error should no longer appear in the log.
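One way to make “write it down” concrete is a structured notebook entry per experiment: the change made, the predicted outcome, the observed outcome. This sketch is my own shape for such an entry, not something from the talk; the sample entries are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a "lab notebook" entry for one debugging experiment: the
// change made, the predicted outcome, and the observed outcome. The field
// names and sample entries are invented for illustration.
public class ExperimentLog {
    static class Experiment {
        final String change, prediction, observed;
        Experiment(String change, String prediction, String observed) {
            this.change = change;
            this.prediction = prediction;
            this.observed = observed;
        }
        // A hypothesis survives only when the prediction matched reality.
        boolean hypothesisSurvived() { return prediction.equals(observed); }
    }

    public static void main(String[] args) {
        List<Experiment> notebook = new ArrayList<>();
        notebook.add(new Experiment(
                "revert to the original four config files",
                "original error reappears on the console",
                "original error reappears on the console"));
        notebook.add(new Experiment(
                "swap in the emailed config file wholesale",
                "app starts cleanly",
                "a new, different error"));   // prediction failed: undo the change
        for (Experiment e : notebook) {
            System.out.println((e.hypothesisSurvived() ? "kept" : "rejected")
                    + ": " + e.change);
        }
    }
}
```

A whiteboard or sticky note with the same three columns works just as well; the point is that prediction and result are both recorded before moving on.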
What happened when you performed the experiment?
For complicated debugging it helps to keep a record of the results, for example save screenshots or copies of log files.
Did the results match your prediction? If not, your hypothesis must be false (unless you made some other mistake).
Do the results suggest a new hypothesis, or refinement to existing hypotheses?
A validated hypothesis doesn’t necessarily mean you’ve fixed the problem. It might just be that you’ve learned something about the system.
If you changed something (e.g. code or config) and the results aren't what you expected, you should undo the change.
If you are able to solve a problem without learning, the problem is just going to reoccur later.
A validated hypothesis doesn’t necessarily mean you’ve fixed the problem. It might just be that you’ve learned something about the system. You might need to go through many experiment loops to learn enough to find the problem.
The scientific method is actually a structured knowledge acquisition process. Solving the problem is a happy fringe benefit.
Finally, happy day! You’ve learned enough about the system to fix the problem!
Snowboard story
Switched from skiing to snowboarding about 10 years ago.
After about 5 years it was the beginning of winter and as I put on my board at the top of the hill I realized I had no idea how to snowboard.
My body did, which way to lean, how to react, but intellectually I had no idea how to snowboard.
I decided to become conscious of what I was doing. Pay attention to how I did it.
Once I did that I could actually begin to improve!
I could experiment with different techniques, deliberately try to get better.
I quickly improved.
Still not good, but better.
When you do something for a while you become competent, but perhaps unconsciously competent
I’m not fond of this model because I think there is a fifth stage, where by becoming more conscious you can increase your competence.
You do this by paying attention to what you are doing, in order to consciously improve.
Dare I say, experiment with different ways of doing things?
Another way is teaching a class, where you have to learn in order to teach.
I know more French grammar than English.
So, I would say “I’m not a great troubleshooter, I’m just a good troubleshooter with great habits”
Okay, lets come back to MasterMind for a minute.
Imagine that you couldn’t see the clues for previous experiments. How much harder would this be?
It would be a huge amount of hubris to think we could remember all of the results of the experiments, but that’s what we do in programming all of the time. We run many experiments, but don’t record the results! Write your experiments down!
Follow the process, and be deliberate at every step.
Btw, I borrowed this metaphor from a talk Stuart Halloway gave at Clojure/conj.
bugs like this require attention to your process - just wildly hacking away will lead you to wander in circles, confused about the state of the problem and the system
Write everything down. If you are doing this on your own keep a lab notebook. If you are working with others or a team write on a whiteboard or sticky notes or Slack
Write down because: you avoid repeating a hypothesis that you’ve already discarded (or should not even consider), because you forgot that you tested it
make debugging harder because your “fixes” cause new problems
Compare to speed of build-measure-learn loop in lean startup
Speed of running experiments is key.
If it takes many many minutes (or days) to run an experiment then you will be tempted to take big steps
Sometimes you can only test your hypotheses by going through a complicated or long-lived procedure, for example starting up WebLogic. Sometimes you can only test your hypotheses by promoting to production!
Isolate the problem in a smaller app that you can run in isolation (Unit test tools like xUnit are fantastic for this)
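For example, a suspect behaviour can be copied out of the big app into a class you can run in seconds instead of waiting through a ten-minute server restart. The specific bug sketched here (whitespace around a numeric property breaking parsing) is purely hypothetical:

```java
// Sketch of pulling one suspect behaviour out into a tiny harness you can
// run in seconds, instead of a ten-minute Tomcat restart. The suspected
// bug (whitespace around a numeric property) is hypothetical.
public class IsolatedRepro {
    // The candidate fix under test: trim the raw value before parsing.
    static int parsePoolSize(String raw) {
        return Integer.parseInt(raw.trim());
    }

    public static void main(String[] args) {
        // The failing input, copied verbatim out of the real config file:
        String fromConfigFile = " 10 ";
        System.out.println("parsed pool size: " + parsePoolSize(fromConfigFile));
    }
}
```

The same harness becomes a regression test once the hypothesis is confirmed, so the fix stays proven.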
Focus on faster experiments, for example ones that only use logs from previous runs
Tools help here! Become an expert in your debugger, or in some languages, interactive consoles
In my experience you can almost always figure out how to simplify and speed up the experiment, it just takes creativity and effort. It usually pays off
For example it might suck to work in Java, but if you have to then take advantage of the very mature infrastructure!
I am surprised when I work with devs and they don’t know the cool things their interactive console can do.
Using these tools you can quickly gather data, or run many experiments in seconds, rather than minutes.
Might be same as fast
Isolating the problem
Refactoring code to simplify so that we could see what it was doing….
The team I mentioned before had this problem. They were under pressure, so they hurried, so they changed many things at once to make it “faster”.
If you skip steps or try to hurry this you will immediately slow down.
If you are a manager protect your developers from pressure (and blame) when they are debugging a critical problem.
Assign someone to communicate status to anxious management types and customers, to allow the devs time to be systematic.
Because of the way that the game works we tend to change multiple things between experiments. We always run multi-variate experiments, which makes the results of the experiments in mastermind difficult to interpret. You need to combine the results of multiple experiments in order to figure out what gave us the black or white peg. Avoid this at all costs!
Example was the team I mentioned before. They were changing multiple things at the same time, so while they fixed one error they introduced another
The most important thing when you are starting a problem is to forget what you already know!
Often we will unconsciously think “I really want it to be this cause”.
Your understanding of the system is incomplete, and your guesses as to what the issue is can confound your experimental results. The only real way to guard against this is to watch for it, or if you can, get somebody else to repeat your experiments.
In particular we have a tendency to blame others for our failure. It’s good form to never send a bug to someone else unless you have evidence that it is their problem.
Blame tools, frameworks, other teams
Your original problem statement will have some assumptions in it. Sometimes these will turn out to be just plain false. So some of your experiments should investigate those assumptions and check that they’re correct.
Assumptions – stuck on idea
Complex Failures
Complex systems fail in complex ways. Often the failures interact with each other. Assuming that there’s a single root cause is an easy route to misdiagnosis. Instead look for combinations of failures that together explain the issue at hand.
Sometimes more than 1 root cause means that you need to learn more about the system by performing experiments to learn how it works.
A tested supported hypothesis MIGHT be a dead-end.
Two failures might cancel each other out
Dangers of reductivism.
“5 whys”, for example, could hide cases where there are multiple root causes, each of which is contributing: necessary but not sufficient.
Avoid “trial and error programming”: balance the scientific method with investigation/learning so you can form good hypotheses.
Hypotheses and experiments should contribute to your understanding of the system, which informs your hypotheses!
Unfortunately, a lot of debugging uses this approach today. Turn off the machine. Reboot. Try rearranging the order of your statements.
There are things you need to watch out for.
Knowledge Acquisition
Using the scientific method for learning about a system is the best way to do it.
For example learning a language/api by writing tests against it. Structured application of the scientific method.
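A “learning test” states a hypothesis about an API, runs it, and records what was actually observed. The API studied below (`String.split`) is my example, not one from the talk, but its behaviour is a classic surprise worth pinning down:

```java
import java.util.Arrays;

// Sketch of a "learning test": state a hypothesis about an API, run it,
// and record the observation. The API under study (String.split) is an
// example chosen for illustration.
public class LearningTest {
    public static void main(String[] args) {
        // Hypothesis: "a,b,,".split(",") keeps the trailing empty fields.
        String[] parts = "a,b,,".split(",");
        // Observation: trailing empty strings are dropped by default, so
        // the hypothesis is falsified (length is 2, not 4).
        System.out.println(parts.length + " parts: " + Arrays.toString(parts));
        // Refined hypothesis: a negative limit preserves trailing empties.
        String[] all = "a,b,,".split(",", -1);
        System.out.println(all.length + " parts with limit -1");
    }
}
```

Kept as a test suite, these observations become durable, executable documentation of what you learned about the API.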
I think sometimes people think scientific process is dry and lacks creativity. It’s a mechanical process. Quite the contrary, this is a process that requires discipline and rigour, but also ingenuity and creativity.
The real problem can be very different from its appearance. For example an OutOfMemoryError that appeared as something else, or an error due to an incorrect version of a library. It can take creativity. That’s the imaginative reasoning part of Abductive Reasoning.
Mobbing has entered my toolset, and is now my default approach when there are lots of unknowns, lack of knowledge, a knowledge gap, divergence between team members, or high criticality.