A quick overview of the different approaches to estimating projects, and a brief overview of creating a plan using those estimates. It includes the basics of why we estimate, the concepts behind estimation techniques, and ways to estimate projects.
2. What is an ESTIMATE / ESTIMATION?
The approximate UNITS required to complete a task/feature/project.
What are estimates based on?
1. Current Knowledge
2. Size
3. Complexity
4. Dependencies
5. Unknowns / Risks
Estimates are NOT:
1. 100% accurate
2. A fixed amount of units
3. Proportional to the effort invested in estimating
3. Why Estimate?
1. Pre-Project / Proposal Level Estimation
2. During Project / Execution Level Estimation
● Project Go-No Go Decision
● Prioritization
● Buy vs Make Decision
● Overall Timeline and Tradeoffs
● Release Planning
● Sprint Planning
4. Estimation Approaches
Proposal Level Estimates:
1. Small/Large/Uncertain
2. Affinity
3. Bucket System
4. T-Shirt Sizing
5. Function Point
Release Level Estimation:
1. Ordering
2. Dot Voting
Sprint Level Estimations:
1. Story Points
2. T-Shirt Sizing
3. Planning Poker
5. Generic Principles for Agile Estimates
❖ Agile Value: Individuals and interactions over processes and tools
❖ Agile Principle: The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
Principles behind most Agile estimation approaches:
1. Inclusion of cross-team members
2. Discussion and reasoning
3. Team consensus
These include all three aspects of general estimation:
1. Expert Opinion
2. Analogy
3. Splitting
6. Story Points - Principles
1. A unit describing the RELATIVE effort needed to complete the task/story/feature at hand
2. f(Size, Risk, Complexity)
Why relative sizing?
Measuring scale suggestions (Weber's Law):
1. Fibonacci Series
2. Factor of 2
3. T-Shirt Sizing
7. Story Points - Implementation
Triangulation Based Implementation For Stories
NOTES:
1. Managing small tasks - no zero-point tasks
2. Managing R&D - spikes
3. Approach for bugs
4. Spilled-over stories
5. SP ∝ Effort = f(Size, Complexity, Risk/Unknowns) (story points can also be used to represent the value delivered by implementing a feature/task)
8. Planning Poker
Process:
1. Review Story
2. Individually Identify Story Points
3. Point Disclosure and Review With Team
4. Discussion
a. Time-constrained
b. Team inputs from the outliers among the sizings
5. Repeat steps 2-4
6. Once the team reaches consensus, agree on the points and proceed
ENHANCEMENT: SPLIT TEAM AND STORIES
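The disclosure-and-discussion loop above can be sketched in code. This is a minimal illustration, assuming a standard Fibonacci deck; the member names, votes, and the `poker_round` helper are hypothetical, not part of any planning-poker tool.

```python
# One planning-poker disclosure round: either the team converges on a
# single card, or the low/high outliers are flagged to explain their
# reasoning before a re-vote (steps 2-4 above). All data is made up.

DECK = [1, 2, 3, 5, 8, 13, 20]

def poker_round(votes):
    """votes: {member: points}. Return (consensus_points, outliers)."""
    if len(set(votes.values())) == 1:   # everyone showed the same card
        return next(iter(votes.values())), []
    low = min(votes, key=votes.get)     # smallest sizing in the round
    high = max(votes, key=votes.get)    # largest sizing in the round
    return None, [low, high]            # no consensus yet: discuss these

point, outliers = poker_round({"Asha": 5, "Ben": 8, "Carla": 5, "Dev": 13})
print(point, outliers)    # None ['Asha', 'Dev']

point, outliers = poker_round({"Asha": 8, "Ben": 8, "Carla": 8, "Dev": 8})
print(point, outliers)    # 8 []
```

In practice the "repeat" step is the team re-voting after the outliers have explained themselves, so the loop runs until the first branch returns a consensus value.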
9. Bucket / Affinity / Ordering / SLU
Scale from SMALL to LARGE: 1, 2, 3, 5, 8, 13, 20, 40, 80 ... and larger
10. Bucket System
1. Create point buckets (1, 2, 3, 5, 8, 13, 20 or XS, S, M, L, XL, XXL, XXXL)
2. Team reviews a few stories and assigns them to buckets
3. Split the remaining stories among team members
4. Each individual continues bucketing
5. Team sanity check (discussion)
a. Overall distribution of stories across buckets - if most of the stories are clustered at one end of the spectrum, validate the need to reshuffle buckets
b. Discuss stories and their placement wherever the team feels one is wrongly placed
ALTERNATE: Allow each member to review the buckets and move stories. Track stories that are moved across buckets more than once and discuss those stories separately.
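The bookkeeping behind the ALTERNATE step above can be sketched as follows. This is an illustrative helper, not a standard tool; the story name and bucket values are made up.

```python
# Bucket-system tracker: stories are placed in point buckets, and any
# story that gets re-bucketed more than once is flagged for a separate
# group discussion (the ALTERNATE step). All data is hypothetical.
from collections import defaultdict

BUCKETS = [1, 2, 3, 5, 8, 13, 20]

assignments = {}             # story -> current bucket
moves = defaultdict(int)     # story -> number of re-bucketings

def place(story, bucket):
    assert bucket in BUCKETS, "unknown bucket"
    if story in assignments and assignments[story] != bucket:
        moves[story] += 1    # a reviewer moved it to a different bucket
    assignments[story] = bucket

place("Sync Data", 5)
place("Sync Data", 8)        # one reviewer disagrees and moves it
place("Sync Data", 5)        # moved back again
to_discuss = [s for s, n in moves.items() if n > 1]
print(to_discuss)            # ['Sync Data']
```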
11. Affinity / Ordering / Small-Large-Unknown
Ordering:
1. Place stories randomly on a spectrum from Low to High size
2. Each participant may move a story one step towards Low or High
3. The process concludes when no one wants to change a story's placement
4. In case of a discrepancy (one participant moves a story up and another moves it down), discuss and conclude
Affinity (and SLU):
1. Place stories across a spectrum from Smaller <-> Larger
2. Place stories relative to one another along the spectrum
3. Discussion and rearrangement
4. Drill down the spectrum into relative buckets based on T-shirt sizes / Fibonacci
5. Discussion and confirmation
12. Other Approaches - 3-Point Estimation
3-Point Estimation:
1. Take 3 estimates
a. Optimistic = O
b. Most Likely = M
c. Pessimistic = P
2. Average them to get the final estimate
a. Simple = (O + M + P) / 3
b. Expected/Weighted = (O + 4M + P) / 6
c. Accuracy = (P - O) / 6 or (P - O) / 3
3 Points - Suggested Approach / Alternate Approach
Worksheet columns: Feature | Optimistic | Most Likely | Pessimistic | Estimate | Confidence % (Possible Variance)
Features to estimate: Login using platform creds; Login using Social Media; Bluetooth Pairing with Activity Tracker; Sync Data; Show Activity; Push Activity to Social Media
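The three formulas above can be applied directly. A minimal sketch follows; the feature figures are hypothetical and the unit (ideal hours vs. days) is whatever the team uses.

```python
# Three-point estimation: simple average, PERT-weighted average, and
# the (P - O) / 6 accuracy band from the slide. Numbers are made up.

def simple_estimate(o, m, p):
    """Plain average of Optimistic, Most likely, Pessimistic."""
    return (o + m + p) / 3

def pert_estimate(o, m, p):
    """Expected/weighted average: Most likely counts four times."""
    return (o + 4 * m + p) / 6

def accuracy(o, p):
    """Possible variance around the estimate, (P - O) / 6."""
    return (p - o) / 6

# Hypothetical figures for one feature:
o, m, p = 4, 8, 20
print(round(simple_estimate(o, m, p), 2))  # 10.67
print(round(pert_estimate(o, m, p), 2))    # 9.33
print(round(accuracy(o, p), 2))            # 2.67
```

Note how the weighted form pulls the estimate toward the most-likely value, while a wide optimistic-to-pessimistic spread shows up directly as a larger variance band.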
13. Other Approaches - Function Points
Diagram (function-point model): the User interacts with the Application; the Application reads and writes Internal Logical Files (ILF) and exchanges data with External Applications (disks/servers).
Legend:
EI - External Input
EO - External Output
EQ - External Queries
EIF - External Interface File
Function Points - Suggested Approach / Alternate
Worksheet columns (suggested): Feature | ILF | EIF | EI | EO | EQ
Worksheet columns (alternate): Feature | 3rd-Party Integration | UI | APIs
Features to estimate: Login using platform creds; Login using Social Media; Bluetooth Pairing with Activity Tracker; Sync Data; Show Activity; Push Activity to Social Media
14. Dot Voting
Best for: brainstorming and deciding on the features to be DISCUSSED
Checkpoints:
1. Ideas are proportionate
2. Proper clustering, to avoid the 'SPLIT VOTE' issue
3. Do not include must-haves
4. Avoid voting among dependent features
5. Avoid cause-effect / problem-solution features
6. Limit votes per item
Process:
1. Brainstorm ideas
2. Group/cluster ideas
3. Vote on clusters
Variants:
1. Red/Green: positive and negative votes
2. Color coding
3. Ranking
Limitations:
1. Not suitable for breakthrough ideas
2. Influence by others (solution: a secret ballot)
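The tally step of the process above can be sketched as follows. The per-person limit, cluster names, and ballots are all illustrative assumptions, not part of any standard dot-voting tool.

```python
# Dot-vote tally with a per-person vote limit: each member spends up
# to N dots on clusters, and clusters are ranked by total dots.
from collections import Counter

VOTES_PER_PERSON = 3

def tally(ballots):
    """ballots: {person: [cluster, ...]} -> clusters ranked by dots."""
    for person, picks in ballots.items():
        assert len(picks) <= VOTES_PER_PERSON, f"{person} over-voted"
    counts = Counter(c for picks in ballots.values() for c in picks)
    return counts.most_common()

ballots = {
    "Asha":  ["Sync Data", "Show Activity", "Sync Data"],
    "Ben":   ["Sync Data", "Push to Social"],
    "Carla": ["Show Activity", "Push to Social", "Sync Data"],
}
print(tally(ballots))
# [('Sync Data', 4), ('Show Activity', 2), ('Push to Social', 2)]
```

A secret-ballot variant would simply collect the `ballots` dict privately before running the tally, addressing the influence limitation noted above.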
15. Planning using Estimations
Why plan:
1. What should be developed
2. Timeframe for development
3. Team involved
Types of plans:
- Product-level plan covering the different releases
- Release-level plans - like an MVP plan
- Sprint plan
- Daily plan
Agile planning is like a 'one-hour run' compared to a marathon: the focus is to cover as much distance as possible within one hour.
16. How To Plan
3-step process:
- Identify the backlog to be planned
- Estimate the backlog
- Calculate team velocity: # of sprints = (Backlog / Velocity)
Process to determine velocity:
- Average velocity of recent past sprints
- Velocity prediction for a new project (new team, new tech stack, no previous velocity):
- Gather multiple suggestions of what can be accomplished
- Ensure coverage of all types of tasks (front end, back end, DB, API, etc.)
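The planning arithmetic above is straightforward; a small sketch makes the rounding explicit. The backlog size and past velocities are hypothetical.

```python
# Number of sprints = Backlog / Velocity, where velocity is the
# average of recent past sprints. Rounded up, since a partial sprint
# still occupies a full sprint slot. All figures are made up.
import math

def average_velocity(recent_sprints):
    """Mean story points completed over the last few sprints."""
    return sum(recent_sprints) / len(recent_sprints)

def sprints_needed(backlog_points, velocity):
    """Sprints required to burn down the estimated backlog."""
    return math.ceil(backlog_points / velocity)

velocity = average_velocity([38, 42, 40])   # last three sprints
print(velocity)                             # 40.0
print(sprints_needed(300, velocity))        # 8  (300 / 40 = 7.5, round up)
```

For a new team with no previous velocity, the same arithmetic applies, but `velocity` would come from the team's own prediction of what it can accomplish, as noted above.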
The very fact that estimates are our assumption of the approximate units (effort/cost/time) for a task implies that they are not 100% accurate. There is always a confidence factor associated with estimates.
As we progress in the project, knowledge of the product and technology increases, confidence increases, and the possibility of variance decreases.
For instance, suppose we are working on a project to build an aggregator platform. Its features could include authentication, getting lists from partner/3rd-party sources (sources A, B, C, D, ... H), and showing the best alternative.
At the start of the project we might say that getting the list from each 3rd party would be X, but considering that we have neither their APIs nor past experience integrating A-H, the estimate would be based on analogy (something we integrated in another domain, or information from dev forums), and our confidence in that estimate would be low (possibility of high variance).
As we progress and integrate partners A-C, we gain first-hand experience of partner integration. We might then revise our estimates based on the complexity and effort relative to other features. These revised estimates would have a higher confidence factor (lower variance).
It has also been observed that spending more time estimating does not necessarily yield more accurate assumptions. Rather, accuracy initially increases with estimation effort and then often tends to degrade, and no matter how much time we spend, it never reaches 100%.
Estimation done at the sprint level is far more accurate than proposal-level estimates.
Agile principles and values are the basis of most estimation approaches, which fundamentally promote interaction and team consensus for sizing.
Why relative? Because at our core, we (as a human species ;) ) are bad at estimating absolute values, but very efficient at comparison.
For example, if we have to compare a few tasks:
- Supporting FB-based login
- Supporting Google-based login
- Fetching a list and showing it in a particular order
Even with just a basic idea of how things work and no actual coding experience, we might all come to the same conclusion: the first two tasks are the same size, and the third can be two or three times as big as tasks 1/2.
Weber’s Law: "Simple differential sensitivity is inversely proportional to the size of the components of the difference; relative differential sensitivity remains the same regardless of size." In simple terms, we distinguish change based on the percentage of difference rather than the absolute value. Simplifying further with an example: if we compared two weights of 100 grams and 150 grams, we would be able to tell the difference, but with 100 vs 110 or 130 grams we might not. So in this example we could tell the difference once the weight increased by 50%. Similarly, when comparing a larger weight of 200 grams, we might not be able to differentiate 200 vs 250, but we would be able to differentiate 200 vs 300. That is why we use the Fibonacci series or a power-of-two series.
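One practical consequence of this scale choice can be sketched in code: a raw relative-effort guess is snapped up to the next value on the growing scale, so small differences at the large end collapse into the same bucket. The deck values are the usual planning cards; the `to_story_points` helper is illustrative, not a standard API.

```python
# Fibonacci-style story-point scale: successive steps grow roughly
# proportionally, matching Weber's law (we perceive % differences,
# not absolute ones). Raw sizes are rounded up to the next bucket.

FIB_SCALE = [1, 2, 3, 5, 8, 13, 20, 40, 80]

def to_story_points(raw):
    """Snap a raw relative-size guess up to the nearest scale value."""
    for bucket in FIB_SCALE:
        if raw <= bucket:
            return bucket
    return FIB_SCALE[-1]      # anything beyond the scale is "80 or larger"

print(to_story_points(4))     # 5
print(to_story_points(9))     # 13  (a 9 and a 12 land in the same bucket)
print(to_story_points(60))    # 80
```

The coarse buckets at the high end are deliberate: just as with the 200 g vs 250 g weights, the team cannot meaningfully distinguish a 60 from a 70, so the scale does not ask them to.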
Managing small tasks: there may be multiple zero-point tasks, like changing a button's color or changing copyright text, which as independent tasks are negligible but cannot be zero points, because that would imply they require no effort and add no value. Also, if we have 20 such tasks, they can consume significant team bandwidth. In such cases we can merge these tasks and size the group as a whole. This way we make the actual work/value added by the team visible even for small tasks, and we avoid reporting wrong data by sizing each one as 0 or 0.5.
Approach for bugs: as a team we are normally aware of the capacity a task would take even without sizing it. Just as we know that in a 10-day sprint we will spend approximately 12-16 hours in meetings (stand-ups, sprint planning, grooming, retros, demos, etc.) and commit based on the remaining time, even if we do not size a bug we still know its complexity, size, and tentative effort. Based on these facts, plus the bug's priority, we can take it up in the sprint without sizing it.
Approach for spikes: agile/scrum practice normally suggests that while working on the current iteration we should also provision effort for grooming and readiness for the future, such as sprint grooming sessions (~5-10%). By that logic, a spike is also a kind of grooming, in which we do a technical feasibility check. A spike is also a fixed, time-boxed activity: we say we need, for example, 6 hours to check feasibility, after which we can define the implementation approach and size the dependent story. The fact that it is time-boxed gives the team insight into their remaining capacity in the current sprint for the rest of the features.
Another point to note: when we size stories we go through the backlog and identify, say, 20 features to develop totaling around 300 points, which also indicates the value that will be delivered. Now, if we size bugs too, it can appear that the sprint is delivering additional value (which was actually bug-fixing), and that can be a false indication to business owners.
For example, if in a sprint the total commitment was 40 points and there were 3 bugs totaling 8-10 points, the business could interpret that 40 points' worth of features were delivered, whereas we covered only around 30 points of features and the rest were bugs - which can lead to a gap in understanding.
So if story points are used to track not only the features delivered but also team occupancy, then we can size bugs and spikes; but if we also map story points to business value, then they shouldn't be estimated.
Spilled-over stories: ideally they should not be re-estimated unless their comparative size/effort has changed. If we identify other unplanned complexities, we may resize them to reflect that. However, we must not resize a story's points by reasoning that out of 5 points, 2 points' worth of effort was completed and 3 points spill over.
ENHANCEMENT: SPLIT TEAM AND STORIES - if we use this approach, after establishing a baseline of 4-5 stories with the complete team, the remaining stories are divided among smaller groups, and those groups size their stories independently to speed up the process. However, each smaller group needs someone to moderate the discussion and resolve queries.
The Bucket, Affinity, Ordering, and Small-Large-Unknown approaches all follow a similar concept at a high level: a group of stories is placed relative to one another on a small-to-large spectrum and then grouped into particular sizes. The approaches differ in how the group discussion takes place to review the relative placement of stories on that spectrum.
This approach requires dwelling on architecture and design to identify how the system will interact and what the set of DB files and their values will be, which is difficult to design while estimating and can be time-consuming.
After getting a count of each of the operations, distribute them between simple, medium, and complex operations, and then size accordingly.
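The count-and-weight step above can be sketched as follows. The weight table follows the commonly cited IFPUG-style complexity weights, but treat both the weights and the example counts as illustrative; a real function-point count follows the full IFPUG rules.

```python
# Unadjusted function-point count: each counted operation (EI, EO, EQ,
# ILF, EIF) is weighted by its complexity and the results are summed.
# Weights are the commonly published IFPUG-style values (illustrative).

WEIGHTS = {
    "EI":  {"simple": 3, "medium": 4,  "complex": 6},
    "EO":  {"simple": 4, "medium": 5,  "complex": 7},
    "EQ":  {"simple": 3, "medium": 4,  "complex": 6},
    "ILF": {"simple": 7, "medium": 10, "complex": 15},
    "EIF": {"simple": 5, "medium": 7,  "complex": 10},
}

def unadjusted_fp(counts):
    """counts: {component: {complexity: number_of_items}} -> total FP."""
    return sum(
        WEIGHTS[comp][cx] * n
        for comp, by_cx in counts.items()
        for cx, n in by_cx.items()
    )

# Hypothetical count for 'Login using Social Media': 2 simple inputs,
# 1 medium output, 1 interface file for the 3rd-party identity provider.
print(unadjusted_fp({
    "EI":  {"simple": 2},
    "EO":  {"medium": 1},
    "EIF": {"simple": 1},
}))   # 2*3 + 5 + 5 = 16
```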
Hence, to reduce the complexity, we can also set up basic parameters that drive the solution: external integrations, UI-level complexity, and APIs (which would include backend DB design too).
Due to the complexity and time required to reach this level of estimation, it is a less preferred approach to project estimation.
Dot voting is not an estimation approach, but it can be used to prioritize what to discuss.
We always look at sprint/iteration and daily planning, but what we often miss is that agile also calls for revising release planning at regular intervals.