Yet we get the physics of this wrong
Our mental model is not accurate enough
What else might we get wrong?
What about our mental model of software projects?
About the computational model used
! Shape is more important than size
! The idea is to model behaviour, not give precise numbers
! The same sort of bathtub-based logic is used
! A lot can be built by simulating interconnected baths (see the sketch after this list)
! Look up system dynamics, if you are so inclined
! It has been validated
! Not published but supported by research
! Makes a lot of intuitive sense
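To make the bathtub logic concrete, here is a minimal single-stock simulation in Python. The litres and rates are invented for illustration; the only point is the mechanic that system dynamics builds on, a stock integrating its flows over time:

```python
# One "bathtub": a stock that accumulates inflow minus outflow over time.
level = 0.0                  # litres in the tub (the stock)
inflow, outflow = 2.0, 1.0   # litres per minute (the flows)
dt, steps = 0.1, 100         # 10 simulated minutes in 0.1-minute steps

for _ in range(steps):
    level += (inflow - outflow) * dt   # net flow accumulates in the stock

print(f"Level after 10 minutes: {level:.1f} litres")   # -> 10.0
```

Chaining such stocks, with one tub's outflow feeding the next tub's inflow, is all the machinery the project models below need.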
The simple model
Let our base project have 100 tasks. The team is 200 people, each of whom can accomplish 0.005 tasks per week; together that is one task per week, which leads to an error-free baseline of 100 weeks.
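In code, the error-free baseline is plain arithmetic:

```python
tasks = 100               # tasks in the base project
team, rate = 200, 0.005   # people, tasks per person per week
print(tasks / (team * rate))   # -> 100.0 weeks: the error-free baseline
```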
20% mistakes, and reasonable assumptions about the additional work they cause
Whoops: merely allowing mistakes (20%) and letting them cause additional work makes the project 2.25 times longer. The relationships are of course more subtle, but too geeky to explain here. The deconstruction rate depends on how much of the project is done: it is 0 for roughly the first 50% and grows to 1 as the project progresses (in the late phase, as much effort goes into deconstruction as into rework).
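Below is a minimal stock-and-flow sketch of that rework loop. The 100 tasks, 200 people and 0.005 tasks/week come from the base project; the 30%-per-week error-discovery rate and the exact shape of the deconstruction ramp are my assumptions, so expect the right behaviour (the project stretches well beyond 100 weeks) rather than the exact 2.25 multiplier:

```python
def simulate(error_rate=0.2, testing_start=0, max_weeks=5000):
    """Weekly stock-and-flow sketch of the rework cycle (toy parameters)."""
    total = 100.0               # tasks in the base project
    capacity = 200 * 0.005      # team effort: 1 task-unit per week
    discovery = 0.3             # assumed fraction of hidden errors found per week
    todo, hidden, done, decon_todo = total, 0.0, 0.0, 0.0
    for week in range(1, max_weeks + 1):
        effort = capacity
        dismantling = min(decon_todo, effort)   # deconstruction produces nothing
        decon_todo -= dismantling
        effort -= dismantling
        attempted = min(todo, effort)
        todo -= attempted
        done += attempted * (1 - error_rate)    # work that is actually correct
        hidden += attempted * error_rate        # flawed work nobody has noticed yet
        if week >= testing_start:               # testing may start late (waterfall)
            found = discovery * hidden
            hidden -= found
            todo += found                       # flawed tasks must be redone...
            progress = done / total
            # ...and late discoveries also need work built on top dismantled:
            # the deconstruction rate is 0 below ~50% done and ramps up to 1.
            decon_todo += found * max(0.0, (progress - 0.5) / 0.5)
        if done >= total - 0.05 and hidden < 0.05:
            return week
    return max_weeks

print(simulate(error_rate=0.0))   # -> 100, the error-free baseline
print(simulate(error_rate=0.2))   # noticeably longer than 100 weeks
```

Two feedback loops do the damage: reworked tasks can themselves be flawed again, and the later an error surfaces, the more finished work has to be dismantled first.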
Team churn, it turns out, does not have a significant impact
Projects are not linear
Not on a large scale.
But agile works on much smaller timescales
What is the main assumption we have made in this modelling so far? That we test all the time. What if that is not the case? Let’s look at typical waterfall:
no testing before 30% of the project duration, which turns out to make the project about 25% longer.
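With `simulate()` from the sketch above, delaying the start of testing points the same way; week 40 is an assumed stand-in for "30% of the project duration", and only the comparison between the two runs matters:

```python
print(simulate(error_rate=0.2, testing_start=0))    # testing from day one
print(simulate(error_rate=0.2, testing_start=40))   # waterfall: testing starts late
```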
The sooner you test, the better
In agile, testing starts immediately
The importance of a skilled workforce: in the model, letting the error rate rise hurts about 1.5 times as much as lowering it by the same amount helps
Incompetence is really bad
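Again reusing `simulate()` from above, varying the error rate around the 20% baseline makes the asymmetry visible; the exact ratio depends on the assumed discovery and deconstruction parameters, not only on the 1.5 figure quoted here:

```python
for e in (0.1, 0.2, 0.3):
    print(f"error rate {e:.0%}: {simulate(error_rate=e)} weeks")
# Going from 20% to 30% errors adds more weeks than going from
# 20% to 10% removes: rising incompetence hurts more than skill helps.
```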
Agile breeds and needs competence
Can agile be done properly only by folks so competent that they would succeed regardless?
The effects of learning will be over-estimated, and the effects of bad HR under-estimated
A decreased error rate will have a much smaller impact than an increased error rate