This document discusses techniques for estimating story points in Agile projects. It describes current estimation practices such as fixed story pointing based on person-hours or days, expert influence, and guesstimating, which can lead to inaccurate estimates and fail to reflect improved productivity over time. The document proposes an approach called MAGIC, which uses a story point matrix based on functional and technical analysis to measure and analyze stories, and an empirical data model built from historical project data to improve and control estimates. Templates are provided for the story point matrix and the empirical data model.
Agile Patterns: Agile Estimation
We’re agile, so we don’t have to estimate and have no deadlines, right? Wrong! This session consists of a review of the problems with estimation in projects today, followed by an overview of the concept of agile estimation and the notion of re-estimation. We’ll learn about user stories, story points, and team velocity, and how to apply them all to estimation and iterative re-estimation. We will take a look at the cone of uncertainty and how to use it to your advantage. We’ll then look at the tools we will use for Agile Estimation, including planning poker, Visual Studio Team System, and much more. This is a very interactive session, so bring a lot of questions!
For this August Scrum Breakfast, we have a new speaker: Mr. Pedro Gonzalez, Scrum Master at TINYpulse.
He will bring us an interesting topic about Agile estimation using story points, offering some tips on why relative estimates are far better than absolute ones, why we shouldn't spend too long on details, and other issues he has experienced himself with his team.
This slide deck gives an excellent overview of Agile Planning and Estimation.
It will be really helpful if presented to a Scrum/Agile team as an introduction to the activities involved in release planning, sprint planning, and estimation.
Agile Estimating & Planning by Amaad Qureshi
An introduction to Agile Estimating and how it can be used to measure the size and length of work.
Agile estimating & planning is a way of measuring the size of a piece of work and the time it takes to complete. The technique is used by Agile teams in enterprises and can be utilised in the same way by start-ups, not just for software but for all areas of the business. In this talk I will show you how estimating & planning works by:
- Writing effective user stories
- Writing tests to validate stories (acceptance criteria)
- Using story points to work out the size of a task
- Estimating using Planning Poker
- Using Story Points to calculate a team’s velocity (speed of work)
- Using a team’s velocity to calculate project length (see the sketch after this list)
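To make the last two bullets concrete, here is a minimal sketch in Python (all numbers invented) of how velocity falls out of completed sprints and is then used to project the remaining project length:

```python
import math

# Hypothetical sprint history: story points completed in each past sprint.
completed_per_sprint = [18, 22, 20, 24]

# Velocity is typically the (rolling) average of completed points per sprint.
velocity = sum(completed_per_sprint) / len(completed_per_sprint)  # 21.0

# Remaining scope in story points, e.g. summed over the product backlog.
remaining_points = 147

# Project length: sprints needed at the current velocity, rounded up.
sprints_remaining = math.ceil(remaining_points / velocity)  # 7

print(f"velocity={velocity:.1f} SP/sprint, {sprints_remaining} sprints remaining")
```

In practice teams use a rolling average over the last few sprints and re-project at every sprint boundary.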
Ever wonder why Agile teams swear by relative estimation? My teams improved sprint planning efforts by a factor of 3 once we started using relative estimation.
Without understanding Agile relative estimation, teams tend to fall back on time-based methods. This often leads them to spend far too much time on estimates that quickly become obsolete, made even more complex by all the unknowns and constantly emerging requirements of an Agile world!
“It's better to be roughly right than precisely wrong!”
~ John Maynard Keynes
The solution is simple: understand that relative estimation is only a rough order-of-magnitude estimate used to quickly organize the product backlog. This empowers your product owner (PO) to quickly make value-based trade-offs on backlog items and decide which stories the team should work on next. This gives the business the biggest bang for its buck!
PROBLEMS WITH TIME-BASED ESTIMATES
-Teams spend too much time trying to get it right
-Lack of confidence/experience can lead to people being either optimistic or pessimistic
-The timeline you are estimating may be too far in the future
-Over a long timeline, there are too many risks, unknowns, changes, and dependencies!
WHY USE RELATIVE ESTIMATION?
-Allows a quick comparison of stories in the backlog
-Allows you to select a predictable volume of work to do in a sprint
-Uses a simple arbitrary scale
-Allows PO to make trade-offs and take on the most valuable stories next
ESTIMATION TIPS
-Relative points or equivalent T-shirt sizes are used to estimate stories, leveraging the Fibonacci sequence modified for Agile (see the sketch after these tips).
-The team estimates the story, not management nor the customer.
-Story estimates account for three things: effort, complexity, and unknowns. Don’t short-sell yourself by estimating effort alone; that’s where waterfall projects face issues.
-Remember to estimate all stories, whether user stories or technical stories. Even estimate research or discovery spikes.
-Refine your backlog as a team on a continuous basis to get your stories to meet the Definition of Ready.
-Only pull stories into your sprint that are refined and estimated.
-Break large stories down into smaller slivers of value to optimize your flow.
-Don’t sweat it if you get it wrong; teams often do early on but improve over time.
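As an illustration of the first tip above, here is a small Python sketch; the scale values are the commonly used modified-Fibonacci ones, while the T-shirt mapping and the `snap_to_scale` helper are invented examples, since every team defines its own:

```python
# The "modified Fibonacci" scale commonly used in planning poker.
MODIFIED_FIBONACCI = [1, 2, 3, 5, 8, 13, 20, 40, 100]

# A rough, illustrative T-shirt equivalence (hypothetical; teams define their own).
TSHIRT = {1: "XS", 2: "S", 3: "S", 5: "M", 8: "L", 13: "XL", 20: "XXL"}

def snap_to_scale(raw_estimate: float) -> int:
    """Snap a raw relative estimate to the nearest value on the scale."""
    return min(MODIFIED_FIBONACCI, key=lambda p: abs(p - raw_estimate))

print(snap_to_scale(6))          # 5
print(TSHIRT[snap_to_scale(6)])  # M
```

The gaps in the scale are deliberate: they stop teams from arguing over false precision on larger items.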
Product Backlog - Refinement and Prioritization Techniques - Vikash Karuna
This presentation describes the important techniques used in Product Backlog refinement and prioritization in Agile development. The various techniques described here are very useful for product managers, product owners, scrum masters, and agile teams.
The goal of this presentation is to explore the most efficient way to manage the product backlog, using blitz planning, story maps (walking skeleton) and improving the quality of our stories by focusing on stronger acceptance criteria, as well as using personas. The benefit of having a better way to organize and visualize the product backlog is to improve our ability to conduct release and iteration planning, as well as produce a better product road map. By attending this session you will be better equipped to help your team and product owner work with the product backlog. As a project manager, you will be introduced to simple techniques that will help you better manage your Agile project and improve visibility to all the work.
Backlog refinement is not a Scrum event, but instead is an ongoing activity during the Sprint required to decompose, describe, estimate, and order backlog items in the Product Backlog.
This material is divided into two sections. The first section reviews the basics of backlog refinement, covering various options for conducting the activity. The second section covers tips for maintaining a healthy backlog and potential anti-patterns.
This material was presented at Agile New England in July and August 2022 as "101" introduction and "202" advanced sessions.
This presentation includes an overview of the various estimation techniques used in Agile projects. I've also put in a slide explaining the importance of business value for Agile requirements, and a simple mechanism for capacity planning, before weaving it all together to come up with a reasonably foolproof plan.
Agile is a philosophy for delivering solutions that embraces and promotes evolutionary change throughout the life-cycle of a product. Many teams and organizations have been using Agile to deliver software in a more timely manner, increase quality, and ultimately increase customer satisfaction.
These planning levels were originally described by Hubert Smits in the whitepaper "5 Levels of Agile Planning: From Enterprise Product Vision to Team Stand-up".
At the start of a project or of a major release, we always face the questions "How do we break down this big release into stories?" and "How do I move from this vision to lower-level details in user stories?". My workshop & presentation at the India Agile Week 2013 in Pune was focused on providing answers to these. This presentation provides a way to move from a high-level vision to user stories using a Story Map.
Introduction to Agile Estimation & Planning - Amaad Qureshi
Presented by Natasha Hill & Amaad Qureshi
In this session, we will be covering the techniques for estimating Epics, Features and User Stories on an Agile project, and then creating iteration and release plans from these artefacts.
Agenda
1. Why traditional estimation approaches fail
2. What makes a good Agile Estimating and Planning approach.
3. Story points vs. Ideal Days
4. Estimating product backlog items with Planning Poker
5. Iteration planning - looking ahead and estimating no more than a few weeks ahead
6. Release planning - creating a longer-term plan, typically looking 3-6 months ahead
7. Q&A
Presentation (animated) on Agile vs Iterative vs Waterfall models in SDLC.
Detailed comparison across Process, Planning, Execution and Completion.
#Cricket Analogy#
Waterfall (Test Match) vs Iterative (ODI) vs Agile (T20)
#Waterfall - Test Match format: strategic, phase by phase, like innings by innings. A game for specialists; slow and steady.
#One Day (ODI) format: a strategic approach across the first-10, middle, and slog overs. A mix of specialists and all-rounders; result oriented.
#T20 format: lively, dynamic, full of action. A game for all-rounders; changes with every over. Highly result oriented.
Agile Estimation and Conflict Management - presented by Arshiya Sultana, oGuild
Conflict management is covered as a continuation of the agile estimation technique: sometimes it is conflict that prevents us from arriving at a useful estimate. We looked at how agile coaches handle conflict, and we ran an activity to understand conflict management and its resolution.
#NoEstimates project planning using Monte Carlo simulationDimitar Bakardzhiev
Here is the text behind the slides http://www.infoq.com/articles/noestimates-monte-carlo
Here is a video I prepared in order to help people understand how to plan a release using the Monte Carlo simulation in MS Excel http://youtu.be/r38a25ak4co
And here is an Excel file to show how Monte Carlo is done http://modernmanagement.bg/data/NoEstimate_Project_Planning_MonteCarlo.xlsx
Here are the SIPs for the baseline project http://modernmanagement.bg/data/SIPs_MonteCarlo_FVR.xlsx
Here is the planning simulation in Excel http://modernmanagement.bg/data/High_Level_Project_Planning.xlsx
And this video (after the 3:00 mark) shows how to use the Excel files: http://youtu.be/GE9vrJ741WY
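For readers who do not want to open the spreadsheets, here is a minimal sketch of the underlying idea in plain Python - not the author's exact Excel model - resampling historical weekly throughput to forecast how long a fixed backlog might take:

```python
import random

# Hypothetical history: stories completed per week in past weeks.
history = [3, 5, 2, 6, 4, 3, 5, 4]
backlog = 40       # stories remaining in the release
runs = 10_000      # number of simulated futures

def simulate_weeks() -> int:
    """Burn down the backlog by resampling past weekly throughput."""
    remaining, weeks = backlog, 0
    while remaining > 0:
        remaining -= random.choice(history)
        weeks += 1
    return weeks

results = sorted(simulate_weeks() for _ in range(runs))
print("median forecast:", results[runs // 2], "weeks")
print("85th percentile:", results[int(runs * 0.85)], "weeks")
```

Reading off percentiles of the simulated distribution gives the kind of confidence-based forecast the article describes.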
The Total Cost Management (TCM) Framework of AACE International (the Association for the Advancement of Cost Engineering) is an integrated approach to portfolio, program and project management. It provides a structured, annotated process map that explains each practice area of the cost engineering field in the context of its relationship to the other practice areas, including allied professions. In other words, it is a process for applying the skills and knowledge of cost engineering. A key feature of the TCM Framework is that it highlights and differentiates the main cost management application areas: project control and strategic asset management. In this paper the focus is on project control.
In the TCM Framework, the Basis of Estimate (BOE) is characterised as the one deliverable that defines the scope of the engagement and ultimately becomes the basis for change management. When prepared correctly, any person with (capital) project experience can use the BOE to understand and assess the estimate, independent of any other supporting documentation. A well-written BOE achieves those goals by clearly and concisely stating the purpose of the estimate being prepared (i.e. cost/ effort/duration study, project options, funding, etc.), the project scope, cost basis, allowances, assumptions, exclusions, cost risks and opportunities, contingencies, and any deviations from standard practices.
A BOE document is a required component of a cost estimate, and because of its relevance a BOE document is present in the set of AACE International recommended practices (RPs). This template provides guidelines for the structure and content of a cost basis of estimate.
Although not everyone is happy with the opinion that the Software Services Industry is different from other industries, analysis of the BOE shows that the structure is applicable but needs to be adapted to match practice in Software Services. In addition, the terminology used does not reflect the activities, components, items, issues, etc. of the Software Services Industry. The tailored version, Basis of Estimate - As Applied for the Software Services Industries, provides guidelines for the structure and content of a cost basis of estimate specific to the software services industries (i.e. software development, maintenance & support, infrastructure, services, research & development, etc.).
With this BOE, a structure is provided for further standardisation of the estimation process, a more consistent use of metrics (sizing, effort, schedule, quality), transparent options for control (benchmark, audit, bid validation) and a common approach to assumptions and associated risks. (IT Confidence 2013, Rio de Janeiro, Brazil)
Estimating Story Points in Agile - MAGIC Approach
1. Estimating Story Points in Agile - MAGIC Approach
Bollapragada Venkata Marraju
bvmraju@yahoo.com | marraju@gmail.com
https://in.linkedin.com/in/marraju
2. Current Estimation Practices
There is a debate around story point estimation techniques and a growing demand for guidelines & standardization.
Fixed Story Pointing:
• 1 Story Point = 10 Person Hours
• 1 Story Point = 1 Person Working Day
Expert Influence
Guesstimate
Fallout:
• Inaccurate estimates
• No reflection of improved velocity: an hours-based assignment of points will not reflect the improved productivity
4. Agile Velocity Triangle
Velocity = Story Points (Scope) achieved per Sprint Capacity (Resources × Time)
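Read as a formula, the triangle normalizes delivered scope by the capacity that produced it. A minimal Python sketch with invented numbers:

```python
story_points_done = 42   # scope achieved in the sprint
resources = 5            # team members
days = 10                # sprint length in working days

capacity = resources * days                           # 50 person-days
velocity_per_capacity = story_points_done / capacity  # SP per person-day

print(f"{velocity_per_capacity:.2f} story points per person-day")  # 0.84
```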
5. Work vs Velocity
WORK
• Work is a defined scope, irrespective of resources & time
• Work is a measure of the business value earned or expected to be earned
• Work is estimated at the planning stage using these techniques: WBS (vertical slicing), relativity, business value
VELOCITY
• Velocity is the rate at which the resources complete work within a given time period
• Velocity is the measure of performance or productivity after execution
• Velocity is calculated after execution, based on the effort spent to complete a defined scope of work
Notes
• Work estimation, effort estimation and cost estimation are 3 different terms & types of estimation
• Velocity is a trend; burndown is a log
6. Agile Velocity Triangle
Velocity = Story Points (Scope) achieved per Sprint Capacity (Resources × Time)
Work (scope):
• Laying a slab of 1000 sqft
• Laying a road 40 ft × 5 km
• Producing 100 laptops
• Serving food for 100 people
• Developing an app for an online catalogue - ???
Cost:
• Laying a concrete slab of 1000 sqft - $10/sqft
• Laying a BT road 40 ft × 5 km - $5k/km
• Producing 100 laptops - $500/laptop
• Serving food for 100 people - $5/plate
• Delivering an app for an online catalogue - fixed-cost or earned-value basis
Time:
• Laying a concrete slab of 1000 sqft - 20 days
• Laying a BT road 40 ft × 5 km - 25 days
• Producing 100 laptops - 4 days
• Serving food for 100 people - 5 hours
• Developing an app for an online catalogue - time-boxed/effort-based
7. Estimation in Agile
SCOPE
• Story Pointing - 1, 3, 5, 8, 13
• WBS > Relativity & Complexity
• Planning phase
TIME
• Capacity Planning - Person Hours
• WBS > Resource & Scheduling
• Planning & Execution phase
COST
• Effort in Dollars - $$
• Fixed Cost / Time & Material Cost
• Budget & Billing phase
Work Estimation (Scope in Business Value / Story Points) > Effort Estimation (Work in Person Hours) > Cost Estimation (Effort in $$)
8. Proposed Solution - Approach
Estimating with the MAGIC approach: Measure, Analyze, Improve and Control without ‘guess’ work.
• Measure & Analyze using a ‘Story Point Matrix’ based on functional & technical analysis
• Improve & Control using statistical data modeling based on empirical data extracted from the agile project management tool
9. Story Pointing - Technique
Story Point Matrix (based on expert judgment):
• Create the task template for Analysis, Design, Development, Testing and Packaging
• Create a Work Breakdown Structure for the scope: Epics to Sub-Epics, Sub-Epics to Stories, slicing Stories, Stories to Tasks
• Identify and analyze the functional logic, the technical implementation, and the testing, documentation and packaging requirements
• Compare with a relatively similar type of story executed previously
• Identify the elements that are added/updated/upgraded across the layers
• Aggregate the count by functional & technical task type and assign the complexity factor
• Map the cumulative functional & technical points of the story to the range in the ‘Story Point Reference Table’ to size it with the appropriate story point
• Resources and hours are not considered; based only on functional and technical analysis
Empirical Data Model (based on empirical data):
• Identify the resources, skill/expertise, technology, tools and complexity
• Look at the empirical data: draw the frequency histogram (with ±3 SD) for completed story points vs actual hours
• Point the story based on the estimated hours that fall within +1 SD of the mean in the histogram
• Resources and hours are considered
• The Version Report (in Jira), which projects the expected completion date of the project, is based on empirical data for the completed stories and the hours spent - which is nothing but velocity
10. Story Point Matrix
Step#1: Create the ‘Story Point Reference Table’
• Select previously completed stories of different story point sizes, at least 3 stories for each story point size
• Create a WBS for each of those stories by vertical slicing (as shown in the next slide)
• Identify the number (count) of elements/interfaces/objects/components/TCs created/updated/upgraded for each of those tasks
• Aggregate the count by functional & technical task types and assign the complexity factor
• Take the total of the cumulative functional & technical points
• Repeat the above steps for all the selected stories
• Prepare the ‘Story Point Reference Table’ by defining ranges of cumulative functional & technical points per story point
Step#2: New Story#
• Create a similar story point matrix for the new story and map its cumulative functional & technical points to the range in the ‘Story Point Reference Table’ to size it with the appropriate story point
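A minimal Python sketch of Step#1 (story scores invented): group previously completed stories by their known story point size and derive the score ranges that become the ‘Story Point Reference Table’:

```python
from collections import defaultdict

# Hypothetical data: (known story point size, cumulative func & tech points).
completed = [(1, 6), (1, 8), (1, 9),
             (2, 14), (2, 22), (2, 27),
             (3, 33), (3, 41), (3, 48)]

by_size = defaultdict(list)
for sp, score in completed:
    by_size[sp].append(score)

# Reference table: min..max of observed scores per story point size.
reference_table = {sp: (min(v), max(v)) for sp, v in sorted(by_size.items())}
print(reference_table)  # {1: (6, 9), 2: (14, 27), 3: (33, 48)}
```

In Step#2 a new story is scored the same way and placed in whichever range its total falls into.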
11. Story Point Matrix - Example
Story Point Reference Table:
Cum Func & Tech Points Range -> Story Point
0-10 -> 1 SP
10-30 -> 2 SP
30-50 -> 3 SP
50-80 -> 5 SP
80-130 -> 8 SP
>130 -> 13 SP
Work Breakdown Structure (WBS) for Story#1 - Functional & Technical Tasks
Columns: New (count a) | Update (count b) | Upgrade/Execute (count c) | Complexity Factor (cf = 0.1/0.2/0.3/0.5/0.8/1) | Cumulative Func & Tech Points (D = (a + b + c) × cf)
Rows:
• User Interface (no. of elements)
• Business Layer (no. of classes, methods, functions, etc.)
• Database Layer (no. of database objects)
• Integration - APIs/Web Services (no. of APIs/services)
• Environment Setup (no. of products installed)
• Manual Testing (no. of test cases)
• Automation Testing (no. of test scripts)
• Packaging/CM
• Documentation (no. of topics)
Footer: Total of Cumulative Func & Tech Points; Story Points (from Reference Table)
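A minimal Python sketch of the matrix arithmetic (layer names from the slide; counts and complexity factors invented): each row contributes (a + b + c) × cf, and the total is looked up in the reference table:

```python
# Per layer: (new a, updated b, upgraded/executed c, complexity factor cf).
rows = {
    "User Interface": (4, 2, 0, 0.2),
    "Business Layer": (3, 5, 0, 0.5),
    "Database Layer": (1, 2, 0, 0.3),
    "Manual Testing": (6, 0, 4, 0.1),
}

# Cumulative functional & technical points: D = (a + b + c) * cf per row.
total = sum((a + b + c) * cf for a, b, c, cf in rows.values())

# Story Point Reference Table from the slide: (upper bound, story points).
REFERENCE = [(10, 1), (30, 2), (50, 3), (80, 5), (130, 8)]
story_points = next((sp for upper, sp in REFERENCE if total <= upper), 13)

print(f"cumulative points = {total:.1f} -> {story_points} SP")  # 7.1 -> 1 SP
```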
12. Empirical Data Model
Step#1: Create the Frequency Histogram
• Extract the data for the completed stories from the agile project management tool after the completion of the project
• Group the stories by story point size
• Prepare the frequency histograms by story point (1, 2, 3, 5, 8) and by release version (9.x, 10.x, etc.), with Hours on the X-axis and story count (no. of stories) on the Y-axis
• Take the bin range for Hours at ±3 SD from the average (mean)
• Mark the mean and the hours at which the frequency peaks in the histogram - the average number of hours taken to complete the maximum number of stories
Example - Frequency of 3 SP stories # v10.x:
Standard Deviation | Bin/Range in Hours (±3 SD) | Frequency
-3 SD | -50.508989 | 0
-2 SD | -25.437739 | 0
-1 SD | -0.3664885 | 0
Average (Mean) | 24.704762 | 134
+1 SD | 49.776012 | 53
+2 SD | 74.847263 | 12
[Histogram: Actual Hours Spent to Complete Stories (X-axis) vs No. of Completed Stories (Y-axis); the frequency peaks at 134 stories around the mean]
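A minimal sketch of the binning described above, using NumPy (hours invented): bin edges sit at mean ± k·SD, and completed-story hours are counted into them:

```python
import numpy as np

# Hypothetical actual hours for completed 3-point stories in one release.
hours = np.array([20, 25, 18, 30, 45, 22, 60, 28, 35, 24, 90, 26])

mean, sd = hours.mean(), hours.std()
edges = mean + sd * np.arange(-3, 4)  # bin edges at mean - 3SD ... mean + 3SD

counts, _ = np.histogram(hours, bins=edges)
for k, c in zip(range(-3, 3), counts):
    print(f"({k:+d}SD, {k + 1:+d}SD]: {c} stories")
```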
13. Empirical Data Model
Step#2: Estimate the new Story# based on the Frequency Histogram
• List the tasks of the new Story# and identify the resources and hours required for the delivery of the Story#, as shown in the ‘Task Table’
• Map the ‘total estimated hours’ of the new Story# from the ‘Task Table’ to the matching frequency histogram in which it falls within the range of +1 SD of the mean
• Take that as the story point for the new Story#
Task Table for Story# (columns: Task | Resources | Hours):
• Analysis Task
• Design Task
• Development Task
• Database Task
• Testing Task
• CM Task
• Documentation Task
• Total Estimated Hours
• Story Point (from empirical data in Step#1)
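A minimal Python sketch of Step#2 (the per-size means and SDs are invented, chosen to echo the hour ranges on the ‘Matrix - Example’ slide; one plausible reading of “within +1 SD of the mean” is the band [mean, mean + 1 SD]):

```python
# Per story-point size: (mean, SD) of actual hours, read off the
# empirical frequency histograms (hypothetical values).
HISTOGRAMS = {1: (17.5, 7.5), 2: (37.5, 12.5), 3: (62.5, 12.5), 5: (87.5, 12.5)}

def point_story(estimated_hours: float):
    """Return the SP size whose [mean, mean + 1SD] band holds the estimate."""
    for sp, (mean, sd) in HISTOGRAMS.items():
        if mean <= estimated_hours <= mean + sd:
            return sp
    return None  # outside every band: re-estimate or split the story

total_estimated_hours = 70  # from the new story's Task Table
print(point_story(total_estimated_hours))  # 3
```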
14. Empirical Data Model
Example: extracted the data from the agile project management tool for a solution suite based on the criteria below and plotted the frequency histograms:
• Issue Type - Stories
• Status - Completed
• Release Version - 9.x & 10.x
• Frequency histograms plotted for 1, 2, 3 & 5 story point stories
15. Empirical Data Model - Histograms for 1 & 2 story point stories
[Four frequency histograms, Hours (X-axis) vs No. of Stories (Y-axis): Frequency for 1 SP # 9.x (peak 177 stories at the mean bin, then 68 and 15), Frequency for 1 SP # 10.x (peak 94, then 36 and 17), Frequency for 2 SP # 9.x (22, then peak 119, 61 and 10), and Frequency for 2 SP # 10.x (peak 67, then 34 and 7)]
16. Empirical Data Model - Histograms for 3 & 5 story point stories
[Four frequency histograms, Hours (X-axis) vs No. of Stories (Y-axis): Frequency for 3 SP # 9.x (peak 257 stories at the mean bin, then 103 and 34), Frequency for 3 SP # 10.x (peak 134, then 53 and 12), Frequency for 5 SP # 9.x (30, then peak 141, 69 and 18), and Frequency for 5 SP # 10.x (30, then peak 90, 56 and 14)]
17. Matrix - Example
Story Point | Range of Cum Func & Tech Points (from the Story Point Reference Table) | Range of Actual Hours Spent to Complete Stories (from the Empirical Data Model)
1 SP | 0-10 | 10-25
2 SP | 10-30 | 25-50
3 SP | 30-50 | 50-75
5 SP | 50-80 | 75-100
8 SP | 80-130 | -
13 SP | >130 | -
18. Finale - Technique Recommendation
Suitability | Story Point Matrix Estimation | Empirical Data Model Estimation
New Product | Yes | No
New Team | Yes | No
New Functionality | Yes | No
New Technology/POC | Yes | No
Existing Product | Yes | Yes
Same Team | Yes | Yes
Same Code Base | Yes | Yes
Same Technology | Yes | Yes
PMG/FA/BA Availability | Must | Depends
Definition of Ready | Required | Depends
Definition of Done | Required | Required
Recommendation: use the Story Point Matrix for regular story point estimation, by measuring and analyzing the functional and technical tasks of the story. Use the Empirical Data Model for retrospection, reviewing the team’s performance on story sizing after project completion, and as a reference to improve and control.
20. References
From Mike Cohn’s (Mountain Goat Software) blog:
• Story Points Are Still About Effort - http://www.mountaingoatsoftware.com/blog/story-points-are-still-about-effort
• Seeing How Well a Team’s Story Points Align from One to Eight - http://www.mountaingoatsoftware.com/blog/seeing-how-well-a-teams-story-points-align-from-one-to-eight
• How Do Story Points Relate to Hours? - http://www.mountaingoatsoftware.com/blog/how-do-story-points-relate-to-hours