End-to-End Quality Approach: 14 Levels of Testing - Josiah Renaudin
In 2015, the Standard & Poor’s Ratings IT team set out an ambitious objective: to tighten the process and controls around the quality of code deployed to production. Based on internal cost-of-quality assessments, and supporting both agile and waterfall internal engineering processes, distinct testing levels were identified to help push quality left and root out the underlying causes of defects as early as possible. The ‘14 Levels of Testing’ were defined to collaboratively span organizational functions, establish quality expectations, and help track progress towards the goal of eliminating defects. Adrian Thibodeau and Chintan Pandya review their 14 Levels of Testing and focus specifically on sharing the processes and tools employed to help govern the delivery of quality. Adrian and Chintan discuss metrics and dashboards, defect lifecycle management, their home-grown QA Workflow Portal, testing vendor SLAs and contracts, and facilitating UAT best practices.
The certification for Foundation Level Extension – Agile Tester is designed for professionals who are working within Agile environments. It is also for professionals who are planning to start implementing Agile methods in the near future, or are working within companies that plan to do so.
Managing Kubernetes Cost and Performance with NGINX & Kubecost - NGINX, Inc.
Kubecost and NGINX have recently partnered to provide a more comprehensive solution for managing cost and performance when deploying Kubernetes. The Kubecost platform helps organizations optimize and monitor their Kubernetes costs, while NGINX is a leading open source web server, reverse proxy, and Ingress controller. Together, they offer a powerful combination of cost optimization and application delivery capabilities, enabling you to gain greater visibility into your Kubernetes environments and achieve better performance and efficiency.
On-Demand Link https://www.nginx.com/resources/webinars/managing-kubernetes-cost-performance-with-nginx-kubecost/
Unit-I Conventional Software Management: The waterfall model, conventional software management performance.
Evolution of Software Economics: Software economics, pragmatic software cost estimation.
Improving Software Economics: Reducing software product size, improving software processes, improving team effectiveness, improving automation, achieving required quality, peer inspections.
10 Lectures
Unit-II The Old Way and the New: The principles of conventional software engineering, principles of modern software management, transitioning to an iterative process.
Life Cycle Phases: Engineering and production stages; inception, elaboration, construction, and transition phases.
Artifacts of the Process: The artifact sets, management artifacts, engineering artifacts, programmatic artifacts.
Model-Based Software Architectures: A management perspective and a technical perspective.
10 Lectures
Unit-III Workflows of the Process: Software process workflows, iteration workflows.
Checkpoints of the Process: Major milestones, minor milestones, periodic status assessments.
Iterative Process Planning: Work breakdown structures, planning guidelines, cost and schedule estimating, the iteration planning process, pragmatic planning.
10 Lectures
Unit-IV Project Organizations and Responsibilities: Line-of-business organizations, project organizations, evolution of organizations.
Process Automation: Automation building blocks, the project environment.
10 Lectures
Unit-V Project Control and Process Instrumentation: The seven core metrics, management indicators, quality indicators, life-cycle expectations, pragmatic software metrics, metrics automation.
Tailoring the Process: Process discriminants.
10 Lectures
Unit-VI Future Software Project Management: Modern project profiles, next-generation software economics, modern process transitions.
10 Lectures
Build and Deploy Cloud Native Camel Quarkus routes with Tekton and Knative - Omar Al-Safi
In this talk, we will leverage cloud-native stacks and tools to build Camel Quarkus routes natively using GraalVM native-image on a Tekton pipeline, and deploy these routes to a Kubernetes cluster with Knative installed. We will dive into the following topics in the talk:
- Introduction to Camel
- Introduction to Camel Quarkus
- Introduction to GraalVM Native Image
- Introduction to Tekton
- Introduction to Knative
- A demo showing how to deploy a Camel Quarkus route end to end, including the following steps:
  - A look at the whole deployment pipeline for cloud-native Camel Quarkus routes
  - Building Camel Quarkus routes with GraalVM native-image on a Tekton pipeline
  - Deploying Camel Quarkus routes to a Kubernetes cluster with Knative
Targeted audience: users with basic Camel knowledge.
Continuous Integration vs Continuous Delivery vs Continuous Deployment
I hope you now understand the difference between Continuous Integration, Continuous Delivery, and Continuous Deployment. As I mentioned above, these are important practices that need to be implemented to get the full benefits of DevOps.
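The distinction between the three practices can be sketched in a few lines of code. This is a minimal illustration, not any specific CI/CD tool: every change is built and tested (integration), a passing build is always releasable but a human approves the release (delivery), and the release happens automatically (deployment).

```python
def run_pipeline(tests_pass: bool, practice: str, manual_approval: bool = False) -> str:
    """Return the furthest stage a change reaches under the given practice."""
    if not tests_pass:
        return "build failed"            # CI stops bad changes immediately
    if practice == "integration":
        return "merged to mainline"      # build + test on every commit
    if practice == "delivery":
        # releasable artifact, but a human pushes the button
        return "deployed to production" if manual_approval else "awaiting approval"
    if practice == "deployment":
        return "deployed to production"  # fully automatic release
    raise ValueError(f"unknown practice: {practice}")

print(run_pipeline(True, "delivery"))    # awaiting approval
print(run_pipeline(True, "deployment"))  # deployed to production
```

The only difference between the last two branches is who triggers the production release: that single approval gate is what separates continuous delivery from continuous deployment.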
It's a long journey to understand SCM and utilise all of its benefits. I hope you enjoyed today's article as well.
Do you want to run a #DevSecOps project and don't know where to start?
In these pages we are not going to give you the magic recipe or the ultimate #DevSecOps methodology, because that does not exist. Instead, this e-book is meant to be part therapy and part early warning. "The stories presented are not a roadmap. What they do is acknowledge failure as part of the DevSecOps community's knowledge base."
Real learning comes through failure. When something goes wrong, we have to dig in, experiment, and hack, and our minds become more open to outside input.
Network Reliability Engineering and DevNetOps - Presented at ONS March 2018 - James Kelly
We introduce NRE and DevNetOps with inspiration from SRE and DevOps. We go into some detail on what it means to be a Network Reliability Engineer (NRE) and to implement a DevNetOps pipeline. We cover the journey from manual and basic automation in NetOps to NRE, both generically and with the example of the journey at Riot Games. In closing, we flip the narrative and look at networking in traditional DevOps and software engineering as well, to contrast networking's role in DevOps vs. DevNetOps and NRE.
An introduction to software engineering, based on the first chapter of "A (Partial) Introduction to Software Engineering Practices and Methods" by Laurie Williams.
Automation of Release and Deployment Management - Maveric Systems
This presentation highlights why automation platforms for application release and deployment are becoming increasingly vital for global enterprises and explores the specific requirements of such a platform in order for it to prove beneficial, effective and offer a substantial return on investment.
What is Jenkins | Jenkins Tutorial for Beginners - Edureka!
****** DevOps Training : https://www.edureka.co/devops ******
This DevOps Jenkins tutorial on what Jenkins is (Jenkins Tutorial Blog Series: https://goo.gl/JebmnW) will help you understand what Continuous Integration is and why it was introduced. The tutorial also explains in detail how Jenkins achieves Continuous Integration, and includes a hands-on session around Jenkins, by the end of which you will learn how to compile code that is present in GitHub, review that code, and analyse the test cases present in the GitHub repository. The hands-on session also explains how to create a build pipeline using Jenkins and how to add Jenkins slaves.
The hands-on session is performed on a 64-bit Ubuntu machine on which Jenkins is installed.
To learn how Jenkins can be used to integrate multiple DevOps tools, watch the video titled 'DevOps Tools', by clicking this link: https://goo.gl/up9iwd
Check our complete DevOps playlist here: http://goo.gl/O2vo13
Facebook: https://www.facebook.com/edurekaIN/
Twitter: https://twitter.com/edurekain
LinkedIn: https://www.linkedin.com/company/edureka
This paper examines the benefits of using open source software, along with some of the potential risks, to help you determine whether your company should turn to an open source solution, a commercial application, or a combination of both.
Personalisation in search is an increasingly important aspect of the digital marketing landscape, allowing advertisers to tailor their ads to ever more complex audiences.
Do bloggers understand communities? Do marketers know how to tap the social potential of bloggers? At NapoleonCat.com we study the activity of the biggest Polish bloggers and analyse thousands of smaller blogs, vlogs, and microblogs. We know where their communities are and which ones are active. We run the largest Polish study of the blogosphere from a social media perspective: www.blogerstar.pl
Contact us: oskar@napoleoncat.com
EMA's perspective on enabling development and QA teams with high quality tools that deliver visibility to WMQ messages. Nastel's "freemium" AutoPilot® On-demand for WebSphere MQ gives these teams access to a production-grade MQ diagnostics solution using a web browser, and without impacting production systems.
Agile Project Management Methods of IT Projects - Glen Alleman
Agile project management methodologies used to develop, deploy, or acquire information technology systems have begun to enter the vocabulary of modern organizations, much as lightweight and agile manufacturing and business management processes have over the past few years. This chapter is about applying Agile methods in an environment that may be more familiar with high-ceremony project management methods, methods that might be considered heavyweight in terms of today's agile vocabulary.
The notion of managing software as a precision engineering operation remains an elusive target in the software industry, and that may remain the idealised view of the industry for a long while, partly because of the nature of software and partly because the current mainstream methodologies do not reinforce value-maximisation in the process.
DevOps Shifting Software Engineering Strategy: A Value-Based Perspective - iosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double-blind peer-reviewed international journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publication of high-quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
IBM Rational Software Conference 2009: Modeling, Architecture & Construction ... - Kathy (Kat) Mandelstein
Track Keynote for the Modeling, Architecture & Construction Track at the IBM Rational Software Conference 2009
Software development is a business process requiring many different talents to implement effectively. The software architect and the software developer are key roles for successful software delivery. The architect is responsible for turning requirements into analysis and design models built on a sound architecture. Software architecture is crucial for designing reliable, flexible, and maintainable software systems, and helps communicate the high-level design to the various stakeholders at a level of detail that is meaningful to them. It also allows developers to create software systems that enable reuse and integration with legacy and third-party systems. Developers create the reality of the architecture ideas and models through building, modernizing, extending, integrating, and deploying software. Sessions in this track explore the benefits of a well-architected system, present the tips and techniques of organizations that have made software architecture an important part of their software delivery process, and show how both visual and code-centric development can help organizations adopt the right paradigm for their particular development needs. This track is for architects and developers interested in best practices and the latest innovations in methodology and tools for supporting architectural design, discovery, and control, and software construction and assembly.
Four key strategies for enabling innovation in the age of smart - IBM Rational software
On a smarter planet, intelligence is infused into the products, systems and processes that comprise the modern world. These include the delivery of services; the development, manufacturing, buying and selling of physical goods; and the way people actually work and live. Nowhere may this transformation be more evident than in the creation of smarter products.
Application and Project Portfolio Management is one of the key tools for senior IT executives, helping them keep all their projects and applications aligned with overall business objectives.
DMT-2467 Like the Features in Rational DOORS 9? Come Check Them Out in DOORS... - IBM Rational software
Interconnect 2015,
DMT-2467 Like the Features in Rational DOORS 9? Come Check Them Out in DOORS Next Generation!
By:
Paul Strachan (IBM)
Alex Ivanov (Raytheon)
Yianna Papadakis-Kantos (IBM)
Slides used during the DAG-2848 workshop at InterConnect 2017. The objective of this workshop is to demonstrate, through an interactive, hands-on experience, the power of IBM Rational Team Concert to support agile projects and facilitate the adoption of the IBM DevOps approach. By going through the exercises, you play different scrum roles to focus on activities that are helpful to agile teams (continuous planning and collaborative development). Whether you are involved in an agile project or you plan to start an agile initiative soon, attend this workshop to see how Rational Team Concert can help your team be more collaborative and more productive in your lean and agile initiatives. (No development skills are needed to complete this workshop.)
Essentials of UrbanCode Deploy 6.1 is an introductory course about the product. This slide set introduces the key aspects of the course, such as objectives and agenda, and also gives a solid product introduction.
IBM Innovate is now IBM InterConnect. Share your DevOps, agile, engineering or development expertise by submitting a speaker proposal: https://www-950.ibm.com/events/tools/interconnect/2015ems/
Factors to consider when starting a brand-new requirements management project... - IBM Rational software
Some preparation work is required before starting a requirements management project. Ask the right questions, then capture decisions in a requirements plan. Implement the requirements plan in IBM Rational DOORS Next Generation.
Connector Corner: Automate dynamic content and events by pushing a button - DianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
JMeter webinar - integration with InfluxDB and Grafana - RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
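Under the hood, JMeter's Backend Listener ships sampler metrics to InfluxDB using InfluxDB's line protocol (measurement, comma-separated tags, fields, nanosecond timestamp). The sketch below builds such a line by hand; the tag and field names are illustrative, not JMeter's exact metric set.

```python
def to_line_protocol(measurement: str, tags: dict, fields: dict, timestamp_ns: int) -> str:
    """Format one point in InfluxDB line protocol:
    measurement,tag1=v1,tag2=v2 field1=v1,field2=v2 timestamp"""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    # Integers carry an 'i' suffix in line protocol; floats are written bare.
    field_str = ",".join(
        f"{k}={v}i" if isinstance(v, int) else f"{k}={v}"
        for k, v in sorted(fields.items())
    )
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"

line = to_line_protocol(
    "jmeter",
    {"application": "webapp", "transaction": "login"},
    {"count": 12, "avg": 143.5},
    1700000000000000000,
)
print(line)
# jmeter,application=webapp,transaction=login avg=143.5,count=12i 1700000000000000000
```

Grafana then queries these points from InfluxDB to render the live dashboards shown in the webinar.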
UiPath Test Automation using UiPath Test Suite series, part 3 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
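The deployment bill of materials idea can be sketched in a few lines. This is an illustrative toy, not OpsMx's implementation: record the digest of every artifact at deploy time, then let a "deployment firewall" admit only artifacts whose digests match the recorded set.

```python
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 digest of an artifact's bytes."""
    return hashlib.sha256(data).hexdigest()

def build_dbom(artifacts: dict) -> dict:
    """Capture name -> digest for everything being deployed (the DBOM)."""
    return {name: digest(blob) for name, blob in artifacts.items()}

def firewall_allows(dbom: dict, name: str, blob: bytes) -> bool:
    """Admit an artifact only if its digest matches the recorded DBOM entry."""
    return dbom.get(name) == digest(blob)

dbom = build_dbom({"app.jar": b"release-1.4.2"})
print(firewall_allows(dbom, "app.jar", b"release-1.4.2"))  # True
print(firewall_allows(dbom, "app.jar", b"tampered"))       # False
```

A real deployment firewall would also check vulnerability data and policy for each entry, but the core mechanism is the same: nothing reaches production that is not accounted for in the recorded bill of materials.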
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work, along with a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and on application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview - Prayukth K V
The IoT and OT threat landscape report was prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence gathering facilities spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on countries – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
PHP Frameworks: I want to break free (IPC Berlin 2024) - Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... - Ramesh Iyer
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. However, fostering a culture of innovation takes much work: it takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
GraphRAG is All You Need? LLM & Knowledge Graph - Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
"Impact of front-end architecture on development cost", Viktor Turskyi - Fwdays
I have heard many times that architecture is not important for the front-end. I have also seen, many times, how developers implement features on the front-end just by following the standard rules of a framework, thinking that this is enough to successfully launch the project, and then the project fails. How can you prevent this, and which approach should you choose? I have launched dozens of complex projects, and during the talk we will analyze which approaches have worked for me and which have not.
Neuro-symbolic is not enough, we need neuro-*semantic* - Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers, without pulling teeth or pulling your hair out. Practical tips and strategies for successful relationship building that leads to closing the deal.
Improving software economics - Top 10 principles of achieving agility at scale
Improving Software Economics
Top 10 Principles of Achieving Agility at Scale
Whitepaper, May 2009
Walker Royce
Vice President, IBM Software Services, Rational
Contents

From software development to software delivery
The move to agility
Top 10 Principles of Conventional Software Management
Top 10 Principles of Iterative Software Management
Reducing uncertainty: the basis of best practice
Achieving “Agility at Scale”: Top 10 principles of agile software delivery
A framework for reasoning about improving software economics
Conclusion

From software development to software delivery

The world is becoming more dependent on software delivery efficiency, and world economies are becoming more dependent on producing software with improved economic outcomes. What we have learned over decades of advancing software development best practice is that software production involves more of an economics than an engineering discipline. This paper provides a provocative perspective on achieving agile software delivery and the economic foundations of modern best practices.

Improvement in software lifecycle models and software best practices has been a long slog that accelerated in the 1980s as the engineering roots of software management methods continued to fail in delivering acceptable software project performance. IBM’s Rational team has partnered with hundreds of software organizations and participated in thousands of software projects over the last twenty-five years. Our mission has been twofold: first, to bring software best practices to our customers, and second, to participate directly on their diverse projects to learn the patterns of success and failure so that we could differentiate which practices were best, and why. The Rational team didn’t invent iterative development, object-oriented design, UML, agile methods, or the best practices captured in the IBM® Rational® Unified Process. The industry evolved these techniques, and we built a business out of synthesizing the industry’s experience and packaging lessons learned into modern processes, methods, tools, and training. This paper provides a short history of this transition by looking at the evolution of our management principles. It presents our view of the Top 10 principles in managing an industrial-strength software organization and achieving agility at any scale of business challenge.
Most organizations that depend on software are struggling to transform their
lifecycle model from a development focus to a delivery focus. This subtle
distinction in wording represents a dramatic change in the principles that are
driving the management philosophy and the governance models. Namely, a
“software development” orientation focuses on the various activities required
in the development process, while a “software delivery” orientation focuses
on the results of that process. Organizations that have successfully made this
transition — perhaps thirty to forty percent by our estimate — have recognized
that engineering discipline is trumped by economics discipline in most software-
intensive endeavors.
3. Improving Software Economics
Page 3
Table 1 provides a few differentiating indicators of successfully making the transformation from conventional engineering governance to more economically driven governance.
Table 1: Differentiating conventional engineering governance from economically driven governance
Success rates in applying engineering governance (a.k.a. waterfall model management) have been very low; most industry studies assess the success rate at ten to twenty percent. Where waterfall model projects do succeed, one usually finds that the project has been managed with two sets of books. The front-office books satisfy the external stakeholders that the engineering governance model is being followed, and the back-office books, where more agile techniques are employed with economic governance, satisfy the development team that they can predictably deliver results in the face of the uncertainties. The results of the back-office work get fed back to meet the deliverables and milestones required for the front-office books. “Managing two sets of books” has been expensive, but it is frequently the only way for developers to deliver a satisfactory product while adhering to the stakeholder demand for engineering governance.
Advanced organizations have transitioned to more efficiently managing only
one set of honest plans, measures, and outcomes. Most organizations still
manage some mixture of engineering governance and economic governance to
succeed.
Let’s take a minute to think about engineering vs. economics governance — i.e., precise up-front planning vs. continual course correction toward a target goal — in terms even those outside the software industry can relate to. This may be a thought-provoking hypothesis: Software project managers are more likely to succeed if they use techniques similar to those used in movie production, compared to those used in conventional engineering projects, like bridge construction.1,2 Consider this:
• Most software professionals have no laws of physics, or properties of
materials, to constrain their problems or solutions. They are bound
only by human imagination, economic constraints, and platform
performance once they get something executable.
• Quality metrics for software products have few accepted atomic units.
With the possible exception of reliability, most aspects of quality are
very subjective, such as responsiveness, maintainability and usability.
Quality is best measured through the eyes of the audience.
• In a software project, you can seemingly change almost anything at
any time: plans, people, funding, requirements, designs, and tests.
Requirements — probably the most misused word in our industry —
rarely describe anything that is truly required. Nearly everything is
negotiable.
These three observations are equally applicable to software project
managers and movie producers. These are professionals who regularly
create a unique and complex web of intellectual property bounded only
by a vision and human creativity. Both industries experience a very low
success rate relative to mature engineering enterprises.
The last point above is worth a deeper look. The best thing about software is that it is soft (i.e., relatively easy to change), but this is also its riskiest attribute. In most systems, the software is where we try to capture and anticipate human behavior, including abstractions and business rules. Most software does not deal with natural phenomena where laws of physics or materials provide a well-understood framework. Hence, most software is constrained only by human imagination; the quality of software is judged more like a beauty contest than by precise mathematics and physical tolerances. If we don’t carefully manage software production, we can lull ourselves into malignant cycles of change that result in massive amounts of scrap, rework, and wasted resources.
With the changeability of software being its greatest asset and greatest
risk, it is imperative that we measure software change costs and qualities
and understand the trends therein. The measure of scrap and rework is
an economic concern that has long been understood as a costly variable
in traditional engineering, as in the construction industry. While in the
software industry we commonly blow up a product late in the lifecycle and
incur tremendous scrap and rework to rebuild its architecture, we rarely
do this in the construction industry. The costs are so tangibly large,
and the economic ramifications are dire. In software, we need to get an
equally tangible understanding of the probable economic outcomes.
A lesson that the construction industry learned long ago was to
eliminate the risk of reinventing the laws of construction on every
project. Consequently, they enforced standards in building codes,
materials, and techniques, particularly for the architectural engineering
aspects of structure, power, plumbing, and foundation. This resulted
in much more straightforward (i.e., predictable) construction with
innovation mostly confined to the design touches sensed by its human
users. This led to guided economic governance for the design/style/
usability aspects with standardization and engineering governance
driving most of the architecture, materials, and labor. When we innovate
during the course of planned construction projects with new materials,
new technology, or significant architectural deviations, it leads to the
same sorts of overruns and rework that we see in software projects. For most products, systems, and services, you want to standardize where you can and not reinvent.
Economic discipline and governance are needed to measure the risk and variance of the uncertain outcomes associated with innovation. Most software organizations undertake a new software project by permitting their most trusted craftsmen to reinvent software capabilities over and over. Each project and each line of business defends the reasons why their application is different, thereby requiring a custom solution, without being precise about what is different. Encumbered with more custom-developed architectures and components than reused ones, they end up falling back on the waterfall model, which is easy to understand. But this approach is demonstrably too simplistic for uncertain endeavors like software.
The software industry has characterized new and improved software lifecycle models using many different terms, such as: spiral development, incremental development, evolutionary development, iterative development, and agile development. In spirit, these models have many things in common, and, as a class, they represent a common theme: anti-waterfall development. However, after 20-30 years of improvement and transition, the waterfall model mindset is still the predominant governance process in most industrial-strength software development organizations. By my estimation, more than half of the software projects in our industry still govern with a waterfall process, particularly organizations with mature processes. Perhaps geriatric could be used as an explicit level of process maturity, one that should be recognized in software maturity models to help organizations identify when their process has become too mature and in need of a major overhaul.
The move to agility
We have learned many best practices as we evolved toward modern agile delivery methods. Most of them we discovered years ago as we worked with forward-looking organizations. At IBM, we have been advancing techniques largely from the perspective of industrial-strength software engineering, where scale and criticality of applications dominate our governance and management methods. We were one of the pioneers of agile techniques like pair programming3 and extreme programming,4 and IBM now has a vibrant technical community with thousands of practitioners engaged in agile practices in our own development efforts and our professional services. Many pioneering teams inside and outside of IBM have advanced these best practices from smaller scale techniques, commonly referred to as “agile methods,” and these contributions were developed separately in numerous instances across the diverse spectrum of software domains, scales, and applications.
For years, we have worked to unite the agile consultants (i.e., small
scale development camps) with the process maturity consultants (i.e.,
industrial strength software development camps). While these camps
have been somewhat adversarial and wary of endorsing one another, both
sides have valid techniques and a common spirit, but approach common
problems with a different jargon and bias. There is no clear right or
wrong prescription for the range of solutions needed. Context and scale
are important, and every nontrivial project or organization needs a mix
of techniques, a family of process variants, common sense, and domain
experience to be successful.
In Software Project Management,5 I introduced my Top 10 Principles
of managing a modern software process. I will use that framework to
summarize the history of best-practice evolution. The sections that follow
describe three discrete eras of software lifecycle models by capturing the
evolution of their top 10 principles. I will denote these three stages as:
1) conventional waterfall development
2) transitional iterative development
3) modern agile delivery
I will only describe the first two eras briefly since they have been covered
elsewhere in greater detail and their description here is only to provide
benchmarks for comparison to the top 10 principles of a modern agile
delivery approach.
Figure 1 provides a project manager’s view of the process transition that the industry has been marching toward for decades. Project profiles representing each of the three eras plot development progress versus time, where progress is defined as percent executable — that is, demonstrable in its target form. Progress in this sense correlates to tangible intermediate outcomes, and is best measured through executable demonstrations. The term “executable” does not imply complete, compliant, nor up to specifications; but it does imply that the software is testable. The figure also describes the primary measures that were used to govern projects in these eras and introduces the measures that we find to be most important moving forward to achieve agile software delivery success.
Conventional waterfall projects are represented by the dotted line profile in
Figure 1. The typical sequence for the conventional waterfall management
style when measured this way is:
1. Early success via paper designs and overly precise artifacts,
2. Commitment to executable code late in the life cycle,
3. Integration nightmares due to unforeseen implementation issues
and interface ambiguities,
4. Heavy budget and schedule pressure to get the system working,
5. Late shoe-horning of suboptimal fixes, with no time for redesign,
and
6. A very fragile, expensive-to-maintain product, delivered late.
The figure plots development progress (percent coded) versus project schedule for each era, with the conventional profile showing excessive scrap and rework late in the schedule, and contrasts each era’s foundations and governing measures:

Waterfall governance (custom development; proprietary tools/methods). Waterfall measures: dishonest earned values, activity/milestone completion, code/test production, requirements-design-code traceability, inspection coverage, actuals vs. static plan.

Iterative development (middleware components; mature commercial tools). Iterative trends: honest earned value, release content over time, release quality over time, prioritized risk management, scrap/rework/defect trends, actuals vs. dynamic plans.

Agile software delivery (reuse, SOA, EA; collaborative environments). Agile econometrics: accurate net present value, reuse/custom asset trends, release quality over time, variance in estimate to complete, release content/quality over time, actuals vs. dynamic plans.

Figure 1: Improved project profiles and measures in transitioning to agile delivery processes
Most waterfall projects are mired in inefficient integration and late
discovery of substantial design issues, and they expend roughly 40
percent or more of their total resources in integration and test activities,
with much of this effort consumed in excessive scrap and rework during
the late stages of the planned project, when project management had
imagined shipping or deploying the software. Project management
typically reports a linear progression of earned value up to 90 percent
complete before reporting a major increase in the estimated cost of
completion as they suffer through the late scrap and rework.
In retrospect, software earned value systems based on conventional activity, document, and milestone completion are not credible since they ignore the uncertainties inherent in the completed work. Here is a situation for which I have never seen a counter-example: A software project that has a consistently increasing progress profile is certain to have a pending cataclysmic regression.
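The symptom just described, reported earned value rising steadily while demonstrable progress lags, can be sketched numerically. The following is an illustrative check only, not anything from the paper; the progress series, the function name, and the gap threshold are all invented for this sketch:

```python
# Illustrative sketch (invented data): compare activity-based earned value
# against demonstration-based progress (percent executable). A project whose
# reported earned value outruns demonstrable progress is showing the
# "pending cataclysmic regression" symptom described above.

def regression_risk(reported, demonstrated, gap_threshold=0.25):
    """Return the periods where reported earned value exceeds
    demonstrated (executable) progress by more than the threshold."""
    assert len(reported) == len(demonstrated)
    return [i for i, (r, d) in enumerate(zip(reported, demonstrated))
            if r - d > gap_threshold]

reported     = [0.10, 0.30, 0.50, 0.70, 0.90]   # linear "percent complete" claims
demonstrated = [0.05, 0.10, 0.15, 0.30, 0.45]   # percent executable in target form

print(regression_risk(reported, demonstrated))  # -> [2, 3, 4]
```

The point of measuring progress as percent executable is exactly to make this divergence visible before the late-lifecycle regression arrives.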
The iterative management approach represented by the middle profile in
Figure 1 forces integration into the design phase through a progression
of demonstrable releases, thereby exposing the architecturally significant
uncertainties to be addressed earlier where they can be resolved
efficiently in the context of lifecycle goals. Equally as critical to the
process improvements are a greater reliance on more standardized
architectures and reuse of operating systems, data management systems,
graphical user interfaces, networking protocols, and other middleware.
This reuse and architectural conformity contributes significantly to
reducing uncertainty through less custom development and precedent
patterns of construction. The downstream scrap and rework tarpit
is avoidable, along with late patches and malignant software fixes.
The result is a more robust and maintainable product delivered more
predictably with a higher probability of economic success. Iterative
projects can deliver a product with about half the scrap and rework
activities as waterfall projects by re-factoring architecturally significant
changes far earlier in the lifecycle.
Agile software delivery approaches start projects with an ever increasing
amount of the product coming from existing assets, architectures, and
services, as represented in the left hand profile. Integrating modern best
practices and a supporting platform that enables advanced collaboration
allows the team to iterate more effectively and efficiently. Measurable
progress and quality are accelerated and projects can converge on
deliverable products that can be released to users and testers earlier. Agile
delivery projects that have fully transitioned to a steering leadership style
based on effective measurement can optimize scope, design, and plans
to reduce this waste of unnecessary scrap and rework further, eliminate
uncertainties earlier, and significantly improve the probability of win-win
outcomes for all stakeholders.
Note that we don’t expect scrap and rework rates to be driven to zero, but
rather to a level that corresponds to healthy discovery, experimentation,
and production levels commensurate with resolving the uncertainty of the
product being developed.
Table 2 provides one indicative benchmark of this transition. The resource
expenditure trends become more balanced across the primary workflows of
a software project as a result of less human-generated stuff, more efficient
processes (less scrap and rework), more efficient people (more creative
work, less overhead), and more automation.
Table 2: Resource expenditure profiles in transitioning to agile delivery processes
Top 10 Principles of Conventional Software Management
Most software engineering references present the waterfall model6 as the
source of the “conventional” software management process, and I use
these terms interchangeably. Years ago, I asserted the top 10 principles
of the conventional software process to capture its spirit and provide a
benchmark for comparison with modern methods.
The interpretation of these principles and their order of importance
are judgments that I made based on experiences from hundreds of
project evaluations, project diagnoses performed by the Rational team,
and discussions with Winston Royce, one of the pioneers in software
management processes. My father is well-known for his work on the
waterfall model, but he was always more passionate about iterative and
agile techniques well before they became popular.7
Top 10 Management Principles of Waterfall Development
1. Freeze requirements before design.
2. Forbid coding prior to detailed design review.
3. Use a higher order programming language.
4. Complete unit testing before integration.
5. Maintain detailed traceability among all artifacts.
6. Thoroughly document each stage of the design.
7. Assess quality with an independent team.
8. Inspect everything.
9. Plan everything early with high fidelity.
10. Control source code baselines rigorously.
Conventional software management techniques typically follow a
sequential transition from requirements to design to code to test
with extensive paper-based artifacts that attempt to capture complete
intermediate representations at every stage. Requirements are first
captured in complete detail in ad hoc text and then design documents
are fully elaborated in ad hoc notations. After coding and unit testing
individual code units, they are integrated together into a complete system.
This integration activity is the first time that significant inconsistencies
among components (their interfaces and behavior) can be tangibly
exposed, and many of them are extremely difficult to resolve. Integration
— getting the software to operate reliably enough to test its usefulness
— almost always takes much longer than planned. Budget and schedule
pressures drive teams to shoehorn in the quickest fixes. Re-factoring the
design or reconsideration of requirements is usually out of the question.
Testing of system threads, operational usefulness, and requirements
compliance gets performed through a series of releases until the software
is judged adequate for the user. More than 80 percent of the time, the end
result is a late, over-budget, fragile, and expensive-to-maintain software
system.
Hindsight from thousands of software project post-mortems has
revealed a common symptom of governing a software project with an
engineering management style: the project’s integration and test activities
require an excessive expenditure of resources in time and effort. This
excessive rework is predominantly a result of postponing the resolution
of architecturally significant issues (i.e., resolving the more serious
requirements and design uncertainties) until the integration and test
phase. We observed that better performing projects would be completed
with about 40 percent of their effort spent in integration and test.
Unsuccessful projects spent even more. With less than one in five projects succeeding, better governance methods were imperative.
One of the most common failure patterns in the software industry is to develop a five-digits-of-precision version of a requirement specification (or plan) when you have only a one-digit-of-precision understanding of the problem. A prolonged effort to build precise requirements or a detailed plan only delays a more thorough understanding of the architecturally significant issues — that is, the essential structure of a system and its primary behaviors, interfaces, and design trade-offs. How many frighteningly thick requirements documents or highly precise plans (i.e., inchstones rather than milestones) have you worked on, perfected, and painstakingly reviewed, only to completely overhaul these documents months later?
The single most important lesson learned in managing software projects with the waterfall model was that software projects contain much more uncertainty than can be accommodated with an engineering governance approach. This traditional approach presumes well-understood requirements and straightforward production activities based on mature engineering precedent.
Top 10 Principles of Iterative Software Management
In the 1990s, Rational Software Corporation began evolving a modern process framework to more formally capture the best practices of iterative development. The primary goal was to help the industry transition from a “plan and track” management style (the waterfall model) to a “steering” leadership style that admitted uncertainties in the requirements, design, and plans.
The software management approach we evolved led to producing the architecture first, then usable increments of partial capability, and only then worrying about completeness. Requirements and design flaws are detected and resolved earlier in the life cycle, avoiding the big-bang integration at the end of a project by integrating in stages throughout the project life cycle. Modern, iterative development enables better insight into quality because system characteristics that are largely inherent in the architecture (e.g., performance, fault tolerance, adaptability, interoperability, maintainability) are tangible earlier in the process, where issues are still correctable without jeopardizing target costs and schedules. These techniques attacked major uncertainties far earlier and more effectively. Here are my top 10 principles of iterative development from the 1990s and early 2000s era:
Top 10 Management Principles of Iterative Development
1. Base the process on an architecture-first approach.
2. Establish an iterative lifecycle process that confronts risk early.
3. Transition design methods to emphasize component-based development.
4. Establish a change management environment.
5. Enhance change freedom through tools that support round-trip engineering.
6. Capture design artifacts in rigorous, model-based notation.
7. Instrument the process for objective quality control and progress assessment.
8. Use a demonstration-based approach to assess intermediate artifacts.
9. Plan intermediate releases in groups of usage scenarios with evolving levels
of detail.
10. Establish a configurable process that is economically scalable.
Whereas conventional principles drove software development activities to
overexpend in integration activities, these modern principles resulted in less
total scrap and rework through relatively more emphasis in early lifecycle
engineering and a more balanced expenditure of resources across the core
workflows of a modern process.
The architecture-first approach forces integration into the design phase, where the most significant uncertainties can be exposed and resolved. The early demonstrations do not eliminate the design breakage; they just make it happen when it can be addressed effectively. The downstream scrap and rework is significantly reduced along with late patches and sub-optimal software fixes, resulting in a more robust and maintainable design.
Interim milestones provide tangible results. Designs are now “guilty until
proven innocent.” The project does not move forward until the objectives
of the demonstration have been achieved. This does not preclude the
renegotiation of objectives once the milestone results permit further
refactoring and understanding of the tradeoffs inherent in the requirements,
design, and plans.
Figure 2 illustrates the change in measurement mindset when moving from
waterfall model measures of activities to iterative measures of scrap and
rework trends in executable releases. The trends in cost of change9 can
be observed through measuring the complexity of change. This requires a
project to quantify the rework (effort required for resolution) and number
of instances of rework. In simple terms, adaptability quantifies the ease of
changing a software baseline, with a lower value being better. When changes
are easy to implement, a project is more likely to increase the number of
changes, thereby increasing quality. With the conventional process and
custom architectures, change was more expensive to incorporate as we
proceeded later into the life cycle. Waterfall projects that measured such trends tended to see the cost of change increase as they transitioned from testing individual units of software to testing the larger, integrated system.
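As a sketch of how such an adaptability metric might be tracked, the snippet below computes average rework hours per change for a series of releases. All release data here is invented for illustration; it is not from the projects the text describes.

```python
# Hypothetical release data: total rework effort and number of change
# instances observed per release (invented values, illustration only).
releases = [
    ("R1", 400, 10),   # early release: few, expensive design changes
    ("R2", 360, 18),
    ("R3", 300, 30),   # later release: many, cheap implementation changes
]

for name, rework_hours, change_count in releases:
    # Adaptability: average effort to resolve one change; lower is better.
    adaptability = rework_hours / change_count
    print(f"{name}: {adaptability:.1f} hours/change")
```

In a healthy iterative project this number falls or stabilizes release over release; as the text above notes, waterfall projects that measured it tended to see it rise late in the life cycle.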
This is intuitively easy to understand, since unit changes (typically implementation issues or coding errors) were relatively easy to debug and resolve, while integration changes (design issues, interface errors, or performance issues) were relatively complicated to resolve.
A discriminating result of a successful transition to a modern iterative process with an architecture-first approach is that the more expensive changes are discovered earlier, when they can be efficiently resolved, and get simpler and more predictable as we progress later into the life cycle. This is the result of attacking the uncertainties in architecturally significant requirements tradeoffs and design decisions earlier. The big change in an iterative approach is that integration activities mostly precede unit test activities, thereby resolving the riskier architectural and design challenges prior to investing in unit test coverage and complete implementations. This is the single most important measure of software project health. If you have a good architecture and an efficient process, the long-accepted adage, "The later you are in the life cycle, the more expensive things are to fix," does NOT apply.10
Successful steering in iterative development is based on improved
measurement and metrics extracted directly from the evolving sequence
of executable releases. These measures, and the focus on building the
architecture first, allow the team to explicitly assess trends in progress
and quality and systematically address the primary sources of uncertainty.
The absolute measures are useful, but the relative measures (or trends) of
how progress and quality change over time are the real discriminators in
improved steering, governance, and predictability.
[Figure 2 compares measured scrap/rework costs (hours per change) for project releases over time. The waterfall baseline change profile shows late integration resulting in escalating change costs: relatively cheap implementation and unit test changes early, followed by expensive integration and maintenance changes late. The iterative/agile baseline change profile shows continuous integration and a sound architecture producing change cost reductions over time, with design changes resolved early in demo releases and later implementation and maintenance changes becoming cheaper.]
Figure 2: The discriminating improvement measure: change cost trends
Balancing innovation with standardization is critical to governing the cost of iterating, as well as governing the extent to which you can reuse assets versus developing more custom components. Standardization through reuse can take on many forms, including:
• Product assets: architectures, patterns, services, applications, models,
commercial components, legacy systems, legacy components
• Process assets: methods, processes, practices, measures, plans,
estimation models, artifact templates
• People assets: existing staff skills, partners, roles, ramp-up plans,
training
• Platform assets: schemas, commercial tools, custom tools, data sets,
tool integrations, scripts, portals, test suites, metrics experience
databases
While this paper is primarily concerned with the practice of reducing uncertainty, there is an equally important practice of reusing assets based on standardization. The value of standardizing and reusing existing architectural patterns, components, data, and services lies in the reduction in uncertainty that comes from using elements whose function, behavior, constraints, performance, and quality are all known.

The cost of standardization and reuse is that it can constrain innovation. It is therefore important to balance innovation and standardization, which requires emphasis on economic governance to reduce uncertainty; but that practice is outside the scope of this paper.
Reducing uncertainty: The basis of best practice
The top 10 principles of iterative development resulted in many best practices, which are documented in the Rational Unified Process.11 The Rational Unified Process includes practices for requirements management, project management, change management, architecture, design and construction, quality management, documentation, metrics, defect tracking, and many more. These best practices are also context dependent. For example, a specific best practice used by a small research and development team at an ISV is not necessarily a best practice for an embedded application built to military standards. After several years of deploying these principles and capturing a framework of best practices, we began to ask a simple question: "Why are these best? And what makes them better?"

IBM research and the IBM Rational organization have been analyzing these questions for over a decade, and we have concluded that reducing uncertainty is THE recurring theme that ties together techniques that we call best practices. Here is a simple story that Murray Cantor composed to illustrate this conclusion.
Suppose you are the assigned project manager for a software product that
your organization needs to be delivered in 12 months to satisfy a critical
business need. You analyze the project scope and develop an initial plan
and mobilize the project resources estimated by your team. They come back
after running their empirical cost/schedule estimation models and tell you
that the project should take 11 months. Excellent! What do you do with
that information? As a savvy and scarred software manager, you know that
the model’s output is just a point estimate and simply the expected value
of a more complex random variable, and you would like to understand the
variability among all the input parameters and see the full distribution of
possible outcomes. You want to go into this project with a 95 percent chance
of delivering within 12 months. Your team comes back and shows you the
complete distribution illustrated as the “baseline estimate” at the top of
Figure 3. I’ll describe the three options shown in a moment.
[Figure 3 shows four schedule-outcome distributions over time: a baseline estimate whose tail extends past the 12-month target; Option 1, expanding the schedule to 15 months; Option 2, reducing scope so the distribution fits within 12 months; and Option 3, reducing variance by eliminating sources of uncertainty so the distribution narrows within 12 months.]
Figure 3: A baseline estimate and alternatives in dealing with project management constraints.
22. Improving Software Economics
Page 22
Examining the baseline estimate, you realize that about half of the outcomes will take longer than 12 months and you have only about a 50 percent chance of delivering on time. The reason for this dispersion is the significant uncertainty in the various input parameters, reflecting your team's lack of knowledge about the scope, the design, the plan, and the team capability. Consequently, the variance of the distribution is rather wide.12
Now, as a project manager there are essentially three paths that you can take; these are also depicted in Figure 3:
1. Option 1: Ask the business to move out the target delivery date to 15 months to ensure that 95 percent of the outcomes complete in less time than that.
2. Option 2: Ask the business to re-scope the work, eliminating some of the required features or backing off on quality so that the median schedule estimate moves up by a couple of months. This ensures that 95 percent of the outcomes complete in 12 months.
3. Option 3: This is the usual place we all end up, and the project managers that succeed work with their team to shrink the variance of the distribution. You must address and reduce the uncertainties in the scope, the design, the plans, the team, the platform, and the process. The effect of eliminating uncertainty is less dispersion in the distribution and consequently a higher probability of delivering within the target date.
The first two options are usually deemed unacceptable, leaving the
third option as the only alternative — and the foundation of most of
the iterative and agile delivery best practices that have evolved in the
software industry. If you examine the best practices for requirements
management, use case modeling, architectural modeling, automated code
production, change management, test management, project management,
architectural patterns, reuse, and team collaboration, you will find
methods and techniques to reduce uncertainty earlier in the life cycle. If
we retrospectively examine my top 10 principles of iterative development,
one can easily conclude that many of them (specifically 1, 2, 3, 6, 8, and
9) make a significant contribution to addressing uncertainties earlier. The
others (4, 5, 7 and 10) are more concerned with establishing feedback
control environments for measurement and reporting.
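The leverage of Option 3 can be sketched numerically. The simulation below is a hypothetical illustration, using a lognormal schedule distribution with invented parameters: shrinking the spread of the estimate, without moving its 11-month median, lifts the probability of delivering within 12 months from roughly a coin flip to the 95 percent range the story targets.

```python
import math
import random

random.seed(42)

def on_time_probability(median_months, sigma, target=12.0, trials=100_000):
    """Monte Carlo estimate of P(delivery <= target) for a lognormal
    schedule with the given median and log-space spread sigma."""
    mu = math.log(median_months)
    hits = sum(1 for _ in range(trials)
               if random.lognormvariate(mu, sigma) <= target)
    return hits / trials

wide = on_time_probability(11.0, sigma=0.40)    # high uncertainty
narrow = on_time_probability(11.0, sigma=0.05)  # uncertainty reduced

print(f"wide distribution:   P(on time) ~ {wide:.2f}")
print(f"narrow distribution: P(on time) ~ {narrow:.2f}")
```

The median (the point estimate the team reported) is unchanged in both runs; only the variance differs, which is exactly the lever Option 3 pulls.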
It was not obvious to me that the purpose of these principles was also to reduce uncertainty until I read Douglas Hubbard's book How to Measure Anything,13 where I rediscovered the following definition:

Measurement: A set of observations that reduce uncertainty where the result is expressed as a quantity.

Voila! The scientific community does not look at measurement as completely eliminating uncertainty. Any significant reduction in uncertainty is enough to make a measurement valuable. With that context, I concluded that the primary discriminator of software delivery best practices is that they effectively reduce uncertainty and thereby increase the probability of success, even if success is defined as cancelling a project earlier so that wasted cost is minimized. What remains to be assessed is how much better these practices work in various domains and how we best instrument them. IBM research continues to invest in these important questions.
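Hubbard's definition can be illustrated with a toy example: a modest number of observations does not eliminate uncertainty about a quantity, but it narrows the plausible range dramatically, and that narrowing is the measurement. The defect rate and sample size below are invented.

```python
import math
import random

random.seed(7)

true_defect_rate = 0.3   # unknown to the observer; any rate in [0, 1] is plausible
n = 50                   # a modest number of observations

# Sample n work items and record the observed defect fraction.
observed = sum(random.random() < true_defect_rate for _ in range(n)) / n

# A rough 95% interval: the observations reduce, but do not eliminate,
# the uncertainty -- and per Hubbard, that reduction is enough to be valuable.
stderr = math.sqrt(observed * (1 - observed) / n)
low, high = observed - 2 * stderr, observed + 2 * stderr
print(f"after {n} observations: rate plausibly in ({low:.2f}, {high:.2f})")
```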
Achieving “Agility at Scale”: Top 10 principles of Agile software delivery
After ten years of experience with iterative development projects, we have drawn on the experience of hundreds of projects to update our management principles. The transitional mix of disciplines promoted in iterative development needs to be updated to the more advanced economic disciplines of agile software delivery. What follows are my proposed top ten principles for achieving agile software delivery success.
Top 10 Management Principles of Agile Software Delivery
1. Reduce uncertainties by addressing architecturally significant decisions
first.
2. Establish an adaptive lifecycle process that accelerates variance
reduction.
3. Reduce the amount of custom development through asset reuse and
middleware.
4. Instrument the process to measure cost of change, quality trends, and
progress trends.
5. Communicate honest progressions and digressions with all stakeholders.
6. Collaborate regularly with stakeholders to renegotiate priorities,
scope, resources, and plans.
7. Continuously integrate releases and test usage scenarios with
evolving breadth and depth.
8. Establish a collaboration platform that enhances teamwork among
potentially distributed teams.
9. Enhance the freedom to change plans, scope and code releases
through automation.
10. Establish a governance model that guarantees creative freedoms to
practitioners.
Successfully delivering software products in a predictable and profitable manner requires an evolving mixture of discovery, production, assessment, and a steering leadership style. The word "steering" implies active management involvement and frequent course-correction to produce better results. All stakeholders must collaborate to converge on moving targets, and the principles above delineate the economic foundations necessary to achieve good steering mechanisms.
Three important conclusions that can be derived from these principles and
practical experience are illustrated in Figure 4.
[Figure 4 makes three statements. An estimated target release date is not a point in time; it is a probability distribution. Scope is not a requirements document; it is a continuous negotiation, evolving from a coarse vision to architecturally significant evaluation criteria, to primary test cases, to a complete acceptance and regression test suite. A plan is not a prescription; it is an evolving, moving target, with the actual path and precision of the scope/plan converging from an uncertain initial state toward the stakeholder satisfaction space.]
Figure 4: The governance of Agile software delivery means managing uncertainty and variance through steering
In a healthy software project, each phase of development produces an increased level of understanding in the evolving plans, specifications, and completed solution, because each phase furthers a sequence of executable capabilities and the team's knowledge of competing objectives. At any point in the life cycle, the precision of the subordinate artifacts should be in balance with the evolving precision in understanding, at compatible levels of detail and reasonably traceable to each other.
The difference between precision and accuracy in the context of software
management is not trivial. Software management is full of gray areas,
situation dependencies, and ambiguous tradeoffs. Understanding the
difference between precision and accuracy is a fundamental skill of good
software managers, who must accurately forecast estimates, risks, and
the effects of change. Precision implies repeatability or elimination of
uncertainty. Unjustified precision — in requirements or plans — has proved
to be a substantial yet subtle recurring obstacle to success. Most of the time,
this early precision is just plain dishonest and serves to provide a counterproductive façade for portraying illusory progress and quality. Unfortunately,
many sponsors and stakeholders demand this early precision and detail
because it gives them (false) comfort of the progress achieved.
Iterative development processes have evolved into more successful agile delivery processes by improving the navigation through uncertainty with balanced precision. This steering requires dynamic controls and intermediate checkpoints, whereby stakeholders can assess what they have achieved so far, what perturbations they should make to the target objectives, and how to re-factor what they have achieved to adjust and deliver those targets in the most economical way. The key outcome of these modern agile delivery principles is increased flexibility, which enables the continuous negotiation of scope, plans, and solutions for effective economic governance.

Figure 5 provides another example of this important metric pattern. What this figure illustrates is the tangible evolution of a quality metric (in this case, the demonstrated mean time between failure for the software embedded in a large-scale command and control system).14
Whereas the conventional process would have to deal speculatively with this critical performance requirement for most of the lifecycle, the project that employs a modern agile delivery approach eliminates the uncertainty in achieving this requirement early enough in the project's schedule that the team can effectively trade off remaining resources to invest in more run-time performance, added functionality, or improved profit on system delivery. This sort of reduction in uncertainty provides significant economic leverage for all stakeholders.
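As a sketch of how such a demonstrated quality metric might be tracked, the snippet below computes demonstrated MTBF per release from test hours and observed failures. All numbers are invented for illustration; they are not from the command and control system the text references.

```python
# Hypothetical test logs: (release, hours of test operation, failures).
test_log = [
    ("R1", 500, 25),
    ("R2", 800, 20),
    ("R3", 1200, 10),
]

for release, hours, failures in test_log:
    # Demonstrated MTBF: observed operating hours per failure.
    mtbf = hours / failures
    print(f"{release}: demonstrated MTBF = {mtbf:.0f} hours")
```

A rising trend toward the allocated MTBF, release over release, is the kind of tangible quality evolution Figure 5 depicts; a flat or falling trend is an early, actionable warning rather than a late surprise.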
[Figure 5 contrasts two profiles of measured progress and quality (demonstrated MTBF versus the software MTBF allocation) over a project's schedule. Under waterfall development, late quality and performance insight constrains flexibility to make tradeoffs: speculative quality requirements, unpredictable cost/schedule performance, late shoehorning of suboptimal changes that impact quality, and risk and uncertainty reduction delayed until too late in the project life cycle, with the requirements/design baseline frozen early and the first indications of performance challenges arriving late. Under agile delivery, continuous quality and performance insight allows flexibility in trading off cost, quality, and features: releases demonstrate the qualities that matter, quality progressions and digressions are visible, requirements are verified and negotiated early with design refactoring, critical sources of variance in cost to complete are reduced, and flexibility in late resource investments increases.]
Figure 5: Reduced uncertainty in critical quality requirements improves the variance in the cost to complete and adds flexibility in downstream resource investments.
I have observed four discriminating patterns that are characteristic of successful agile delivery projects. These patterns represent a few "abstract gauges" that help the steering process to assess scope management, process management, progress management, and quality management. My hunch is that most project managers certified in traditional engineering project management will react negatively to these notions, because they run somewhat counter to conventional wisdom.
1. Scope evolves: Solutions evolve from stakeholder needs, and stakeholder needs evolve from available solution assets. [Anti-pattern: Get all the requirements right up front.] This equal and opposite interaction between user need and solution is the engine for iteration that is driving more and more asset-based development. We just don't build many applications dominated by custom code development anymore. A vision statement evolves into interim evaluation criteria, which evolve into test cases and finally detailed acceptance criteria. Scope evolves from abstract and accurate representations into precise and detailed representations as stakeholder understanding evolves (i.e., uncertainty is reduced).
2. Process rigor evolves: Process and instrumentation evolve from
flexible to rigorous as the lifecycle activities evolve from early,
creative tasks to later production tasks. [Anti-pattern: Define the
entire project’s lifecycle process as light or heavy.] Process rigor
should be much like the force of gravity: the closer you are to a
product release, the stronger the influence of process, tools, and
instrumentation on the day-to-day activities of the workforce. The
farther you are from a release date, the weaker the influence. This
is a key requirement to be fulfilled by the development platform
with automation support for process enactment if practitioners are to perceive a lifecycle process that delivers ‘painless governance’.
3. Progress assessment is honest: Healthy projects display a sequence of
progressions and digressions. [Anti-pattern: consistently progressing
to 100 percent earned value as the original plan is executed, without
any noticeable digression until late in the life cycle]. The transition
to a demonstration-driven life cycle results in a very different project
profile. Rather than a linear progression (often dishonest) of earned value,
a healthy project will exhibit an honest sequence of progressions and
digressions as they resolve uncertainties, re-factor architectures and scope,
and converge on an economically governed solution.
29. Improving Software Economics
Page 29
4. Testing is the steering mechanism: Testing of demonstrable releases is a full lifecycle activity, and the cost of change in software releases improves or stabilizes over time. [Anti-pattern: testing is a subordinate, bureaucratic, late lifecycle activity and the cost of change increases over time]. Testing demands objective evaluation through execution of software releases under a controlled scenario with an expected outcome. In an agile delivery process that is risk-driven, integration testing will mostly precede unit testing and result in more flexibility in steering with more favorable cost of change trends.
With immature metrics and measures, software project managers are still overly focused on playing defense and struggling with subjective risk management. With further advances in software measurement and collaborative platforms that support process enactment of best practices and integrated metrics collection and reporting, we can manage uncertainty more objectively. Software project managers can invest more in playing offense through balancing risks with opportunities, and organizations can better exploit the value of software to deliver better economic results in their business.
A framework for reasoning about improving software economics
Today’s empirical software cost estimation models (like COCOMO II, SEER, QSM SLIM, and others) allow users to estimate costs to within 25-30 percent on three out of four projects.15 This level of unpredictability in the outcome of software projects is a strong indication that software delivery and governance clearly require an economics discipline that can accommodate high levels of uncertainty. These cost models include dozens of parameters and techniques for estimating a wide variety of software development projects. For the purposes of this discussion, I will simplify these estimation models into a function of four basic parameters:
1. Complexity. The complexity of the software is typically quantified in units of human-generated stuff and its quality. Quantities may be assessed in lines of source code, function points, use-case points, or other measures. Qualities like performance, reuse, reliability, and feature richness are also captured in the complexity value. Simpler and more straightforward applications will result in a lower complexity value.
2. Process. This process exponent typically varies in the range 1.0 to 1.25 and characterizes the governance methods, techniques, maturity, appropriateness, and effectiveness in converging on wins for all stakeholders. Better processes will result in a lower exponent.
3. Teamwork. This parameter captures the skills, experience, motivations, and know-how of the team, along with its ability to collaborate toward well-understood and shared goals. More effective teams will result in a lower multiplier.
4. Tools. The tools parameter captures the extent of process automation, process enactment, instrumentation, and team synchronization. Better tools will result in a lower multiplier.
The relationships among these parameters in modeling the estimated effort can be expressed as follows:
Resources = (Complexity)^(Process) * (Teamwork) * (Tools)
By examining the mathematical form of this equation and the empirical data
in the various models and their practical application across thousands of
industry projects, one can easily demonstrate that these four parameters are
in priority order when it comes to the potential economic leverage. In other
words, a 10 percent reduction in complexity is worth more than a 10 percent
improvement in the process, which is worth more than a 10 percent more
capable team, which is worth more than a 10 percent increase in automation.
In practice, this is exactly what IBM services teams have learned over
the last twenty-five years of helping software organizations improve their
software development and delivery capability.
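As a minimal sketch of this simplified model, the function below evaluates the effort equation for hypothetical parameter values (not calibrated estimates) and illustrates the diseconomy of scale implied by a process exponent above 1.0: doubling the complexity more than doubles the effort.

```python
# Minimal sketch of the simplified cost model discussed above.
# All parameter values are hypothetical, chosen only to show the
# model's shape, not to estimate any real project.

def estimated_effort(complexity, process, teamwork, tools):
    """Resources = Complexity^Process * Teamwork * Tools.

    complexity: size/quality measure (e.g., KSLOC or function points)
    process:    exponent, typically 1.0-1.25 (lower is better)
    teamwork:   multiplier (lower is better)
    tools:      multiplier (lower is better)
    """
    return (complexity ** process) * teamwork * tools

small = estimated_effort(100, 1.10, 1.0, 1.0)
large = estimated_effort(200, 1.10, 1.0, 1.0)

# With a process exponent above 1.0, doubling the size more than
# doubles the effort: the diseconomy of scale the text describes.
print(f"effort at size 100: {small:.1f}")
print(f"effort at size 200: {large:.1f} ({large / small:.2f}x)")
```

This exponential coupling is why the text treats complexity reduction as the highest-leverage (and hardest) improvement: it attacks the base of the exponent rather than a linear multiplier.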
We have been compiling best practices and economic improvement experiences for years. We are in the continuing process of synthesizing this experience into more consumable advice and valuable intellectual property in the form of value traceability trees, metrics patterns, benchmarks of performance, and instrumentation tools to provide a closed-loop feedback control system for improved insight and management of the econometrics introduced earlier. Figure 6 summarizes the rough ranges of productivity impact and timeframes associated with many of the more common initiatives that IBM is investing in and delivering every day across the software industry. The impact on productivity typically affects only a subset of project and organization populations; these initiatives require savvy tailoring to put them into a specific context. As the scale of an organization grows, the impacts dampen, predominantly because of standard inertia, i.e., resistance to change.

We have been careful to present ranges and probability distributions to ensure that it is clear that "your mileage may vary." The key message from Figure 6 is that there is a range of incremental improvements that can be achieved, and there is a general hierarchy of impact. The more significant improvements, like systematic reduction in complexity and major process transformations, also require the more significant investments and time to implement. These tend to be broader organizational initiatives. The more incremental process improvements, skill improvements, and automation improvements targeted at individual teams, projects, or smaller organizations are more predictable and straightforward to deploy.
[Figure 6 organizes common improvement initiatives under the four parameters of the cost model, Resources = (Complexity)^(Process) * (Teamwork) * (Tools):
- Complexity (human-generated stuff, quality/performance, scope): increased flexibility by reducing complexity requires much culture change, costs 25%-50% of per-person-year costs over a timeframe of years, and yields impacts of 2x-10x. Example initiatives: service-oriented architecture, middleware reuse, reuse success, packaged applications, scope management, architectural breakthroughs.
- Process (methods/maturity, agility, metrics/measures): improving process requires some culture change, costs 10%-35% of per-person-year costs over months, with impacts of 25%-100%. Example initiatives: process rightsizing, agile governance, variance reduction, best practice deployment, project management, process maturity advancement.
- Teamwork (skills/experience, collaboration, motivation): improving teamwork is predictable, costs 5%-10% of per-person-year costs over weeks, with impacts of 15%-35%. Example initiatives: collaborative development platforms, geographically distributed development, best practices and processes, training, reinforced skills and practices.
- Tools (automation, integration, process enactment): automating more is very predictable, costs under 5% of per-person-year costs over days to weeks, with impacts of 5%-25%. Example initiatives: code quality scanning, change management automation, test management automation, build management, metrics and reporting, analysis/design automation, requirements management in tools and automation.]
Figure 6: A rough overview of expected improvements for some best practices
The main conclusion that one can draw from the experience captured in
Figure 6 is that improvements in each dimension have significant returns
on investment. The key to substantial improvement in business performance
is a balanced attack across the four basic parameters of the simplified
software cost model: reduce complexity, streamline processes, optimize team
contributions, and automate with tools. There are significant dependencies
among these four dimensions of improvement. For example, new tools enable
complexity reduction and process improvements; size reduction leads to
process changes; collaborative platforms enable more effective teamwork;
and process improvements drive tool advances. At IBM, and in our broad
customer base of software development organizations, we have found that
the key to achieving higher levels of improvements in teamwork, process
improvement, and complexity reduction lies in supporting and reinforcing
tooling and automation.
Deploying best practices and changing cultures is more straightforward when you can systematically transform ways of working. This is done through deployment of tangible tools, which automate and streamline the best practices and are embraced by the practitioners, because these tools increase the practitioner's creative time spent in planning, analysis, prototyping, design, refactoring, coding, testing, and deploying, while they decrease the time spent on unproductive activities such as unnecessary rework, change propagation, traceability, progress reporting, metrics collection, documentation, and training.
I realize that listing training among the unproductive activities will raise
the eyebrows of some people. Training is an organizational responsibility,
not a project responsibility. Any project manager who bears the burden
of training people in processes, technologies, or tools is worse off than a
project manager with a fully trained work force. A fully trained work force
on every project is almost never possible, but employing trained people is
always better than employing untrained people, other things being equal. In
this sense, training is considered a non-value-added activity. This is one of
the fundamental dilemmas that organizations face as they try to improve in
any one of the four dimensions. The overhead cost of training their teams
on new things is a significant inhibitor to project success; this cost explains
many managers’ resistance to any new change initiative, whether it regards
new tools, practices, or people.
In making the transition to new techniques and technologies, there is always apprehension and concern about failing, particularly by project managers who are asked to make significant changes in the face of tremendous uncertainty. Maintaining the status quo and relying on existing methods is usually considered the safest path. In the software industry, where most organizations succeed on less than half of their software projects, maintaining the status quo is not a safe bet. When an organization does decide to make a transition, two pieces of conventional wisdom are usually offered by both internal champions and external change agents: (1) Pioneer any new techniques on a small pilot program. (2) Be prepared to spend more resources (money and time) on the first project that makes the transition.
In my experience, both of these recommendations are counterproductive.
Small pilot programs have their place, but they rarely achieve any paradigm
shift within an organization. Trying out a new little technique, tool, or
method on a very rapid, small-scale effort – less than three months, say, and
with just a few people – can frequently show good results, initial momentum,
or proof of concept. The problem with pilot programs is that they are almost
never considered on the critical path of the organization. Consequently, they
do not merit “A” players, adequate resources, or management attention. If
a new method, tool, or technology is expected to have an adverse impact
on the results of the trailblazing project, that expectation is almost certain
to come true. Why? Because software projects almost never do better than
planned. Unless there is a very significant incentive to deliver early (which
is very uncommon), projects will at best steer their way toward a target date.
Therefore, the trailblazing project will be a non-critical project, staffed with
non-critical personnel of whom less is expected. This adverse impact ends up
being a self-fulfilling prophecy.
The most successful organizational paradigm shifts I have seen resulted
from similar sets of circumstances: the organizations took their most critical
project and highest caliber personnel, gave them adequate resources, and
demanded better results on that first critical project.