This experience report, by a project’s technical architect, details the adoption of Agile methods across several teams after one high-profile success. The organisation had a long history of waterfall development and a clearly defined remit for technical architects. Years of refinement had led to a set of techniques which contradicted many of the ideals held by Agile practitioners. The author’s challenge was to maintain agility and fulfil responsibilities inherited from waterfall processes without reverting to the conventional practices that ultimately lead to the architect’s ivory tower.
Ciklum, the European leader in IT nearshoring for Small and Medium Enterprises (SMEs), presents nearshore Agile development as a relatively new yet effective Outsourcing 2.0 trend, better able to meet the challenging requirements of today's high-tech environment than traditional offshore waterfall development.
This document provides an overview of software project management. It discusses the definition of a project and software project management. Key aspects include controlling the development process, achieving closure by meeting deadlines and budgets, and producing a cost-effective result. The document also covers the roles and responsibilities of a project manager, the project life cycle including initiation, planning, execution, and closure, and different software development life cycle models. Overall, the document serves as an introduction to the concepts and processes involved in software project management.
DevOps - The Future of Application Lifecycle Automation, by Gunnar Menzel
Development to Operations (DevOps) will have a profound impact on the global IT sector in the near future. Recognizing DevOps’ full potential, IT vendors have been agile in providing new products and services under the label “DevOps inside” at an ever-increasing pace. However, with the growth in product choices, conflicting definitions and competing services, customers often encounter confusion when making complex purchase decisions. They are often unsure how to deploy DevOps and get the most out of the solution.
Without delving deep into DevOps, the whitepaper tries to answer the following key questions:
What is DevOps?
What is DevOps trying to achieve?
How will DevOps achieve this?
How best to make use of the new developments?
Its aim is to help the reader:
Understand the DevOps concepts
Understand its current value and limitations
Tailoring your SDLC for DevOps, Agile and more, by Jeff Schneider
MomentumSI encourages tailoring an SDLC based on industry best practices and philosophies. The document discusses incorporating practices from Scrum, Test Driven Development, Feature Driven Development, Lean Software Development, Agile Manifesto, Extreme Programming, DevOps, Enterprise SOA Manifesto, Harmony SOA Tenets, OpenUP, Enterprise Unified Process, BABOK, ITIL, PMBOK, and COBIT. The tailored SDLC should provide traceability back to these influences while serving the specific needs of the organization.
How to become a great DevOps Leader, an ITSM Academy Webinar, by ITSM Academy, Inc.
Presenter: Mustafa Kapadia, Service Line Leader, IBM
The ideal DevOps Leader is a tactical or strategic individual who helps design, influence, implement or motivate the cultural transformation proven to be a critical success factor in DevOps adoption. The most successful DevOps leaders understand the human dynamics of cultural change and are equipped with practices, methods, and tools to engage people across the DevOps spectrum. We will explore the role of the DevOps Leader in more detail.
SPE London 'Geomechanics: Quo Vadis?' Event Talk - 27 Oct 2015, by Glen Burridge
This document summarizes the results of interviews and questionnaires with 28 contributors from various disciplines within the oil and gas industry regarding the current and future state of geomechanics. Key findings include:
1) Geomechanics is not fully integrated into exploration, planning, and assurance workflows.
2) Barriers to adoption include lack of standards, siloed teams, and perceptions of geomechanics as reactive rather than proactive.
3) Widespread knowledge sharing of case studies and establishing geomechanics strategies within companies is needed for it to realize its full potential in areas like drillability and reservoir performance.
The document discusses a software development process designed for small projects. It begins by outlining some of the challenges small projects face, such as having fewer team members and more external dependencies. It then describes the authors' process, which integrates portions of iterative and incremental development models with quality assurance and measurement processes. The goal is to produce high quality results on time with less overhead than typical processes designed for large projects. Key aspects of the process include its iterative nature, use of inspections to ensure quality, and measurements to support process improvement.
This document discusses transitioning from a traditional project-based software development model to a product-focused model. It outlines changes needed across several dimensions, including role definitions, taxonomy, culture, talent, funding models, and leadership. It then provides an example case study of transforming a bank's IT organization by starting with an experimental product team, establishing alignment between business and technology, shifting to a product taxonomy, moving to outcome-based funding, empowering teams, and focusing on cultural transformation. The goal is to transition from a "black box IT" project model to an agile, lean, product-oriented approach over 9-12 months.
This document discusses transition management strategies and processes for a project manager taking over a high-risk project midway. It describes a case study of a project manager transitioning into a large, complex project for a bank that was facing challenges. Upon initial review, the new project manager found issues with status reporting, tracking, quality processes, stakeholder communication, and change control. A methodology is proposed for thoroughly reviewing project scope, estimates, plans, task allocation, stakeholder views, and contracts to establish an accurate baseline for transition.
Business Value of Agile Testing: Using TDD, CI, CD, & DevOps, by David Rico
Presentation on the "Business Value of Agile Testing: Using Test Driven Development, Continuous Integration, Continuous Delivery, & DevOps," which are highly disciplined contemporary new product development (NPD) approaches for rapidly building high-quality, information-technology-intensive systems. It identifies the motivation for agile methods, provides a brief introduction to them, describes their fundamental mechanics, and surveys their benefits as reported by major industry studies (including rarely seen, late-breaking economic data and results from the top consulting firms). It defines agile testing and introduces basic and advanced agile testing practices, strategies, metrics, outcomes, costs and benefits, cost of quality, and statistical performance data. It also introduces basic and advanced agile scaling practices, case studies of enterprise-level agile testing, Continuous Delivery, and DevOps at major Internet firms, and common agile testing tools and automation suites. It closes with a summary of agile testing adoption rates, common barriers to agile testing, organizational change models for agile testing, and the benefits of agile testing.
This document discusses extending agile methodologies to large, distributed projects. It argues that with some modifications, agile practices can be applied successfully to complex projects. Some key extensions discussed are establishing an agile architecture team, using "super leads" to oversee multiple agile teams, and emphasizing light-weight documentation. The advantages of taking an agile approach to large projects include gaining an early market edge, improving quality through incremental releases, better managing risks, and ensuring the delivered product meets customer needs.
The document discusses how agile development teams can take an agile approach to documentation in complex environments that require documentation for maintenance teams or auditors. It notes that while lightweight documentation works for development, more is needed for maintenance and audits. The author advocates treating documentation as separate deliverables produced during iterations to close gaps. By taking a just-in-time approach and identifying reusable elements, agile teams can generate necessary documentation without compromising agile principles.
DevOps is a practical field that focuses on delivering business value as efficiently as possible. DevOps encompasses all the flows from code through testing environments to production environments. It stresses the cooperation between different roles, and how they can work together more closely, as the roots of the word imply—Development and Operations.
This material covers adopting DevOps using a seven-domain model. It shares lessons on how to adopt DevOps and lays out core considerations for planning, building and executing it. It also discusses methods for measuring readiness, efficiency, return and maturity, and describes the transformation process, including continuous release, continuous validation and a well-established feedback management mechanism.
Understand the concept of DevOps by employing the DevOps Strategy Roadmap Lifecycle PowerPoint Presentation Slides Complete Deck. Describe how DevOps differs from traditional IT with these content-ready PPT themes. The slides also help to discuss DevOps use cases in the business, its roadmap, and its lifecycle. Explain the roles, responsibilities, and skills of DevOps engineers by utilizing this visually appealing slide deck. Demonstrate the DevOps roadmap for implementation in the organization with the help of a thoroughly researched PPT slideshow. Describe the characteristics of cloud computing, its benefits, and risks with the aid of this PPT layout. Utilize this easy-to-use DevOps transformation strategy PowerPoint slide deck to showcase the differences between cloud and traditional data centers. This ready-to-use PowerPoint layout also discusses the roadmap to integrate cloud computing in business. Highlight the usage of cloud computing and deployment models with the help of attention-grabbing DevOps implementation roadmap PowerPoint slides. https://bit.ly/3eFxYYr
Why is DevOps essential for FinTech development, by Nimble AppGenie
DevOps brings significant opportunities for FinTech organizations to speed up time to market. Most FinTech development companies are familiar with Agile development methodologies but have not yet adopted DevOps.
At Nimble AppGenie, our FinTech development teams are well versed in DevOps methodologies; building products faster and more efficiently this way has become our standard practice.
More organizations are recognizing the many benefits that Agile delivers.
As organizations start embracing the approach, there are gaps in understanding about what it is, what it involves and what value it brings.
What is Agile Development is the first in a series of Agile eBooks from Intelliware Development, intended to help eliminate those gaps.
Development to Operations (DevOps) is driving a profound impact on the global IT sector. IT vendors that realize DevOps’ full potential are more agile in providing new products and services under the label “DevOps inside” at an ever-increasing pace. With the growing number of product choices, conflicting definitions and competing services, you may encounter confusion when making complex decisions, delaying time to market. At times you may be unsure how to deploy DevOps and get the most out of the solutions and tools available. Are you looking to cut through the DevOps "fog"?
Learn new and trending innovations through the success of others during this informative session, and about tools and practices in the VMware world that will lead you to competitive advantage.
The document discusses the use of four maps - outcome map, value stream map, dependency map, and capability map - to help organizations implement DevOps. It describes each map and the order they should be used in. The outcome map defines desired outcomes, the value stream map identifies flow constraints, the dependency map visualizes external needs, and the capability map measures internal needs. Using these maps helps provide clarity of purpose, identify gaps, and prioritize improvements to establish a continuous flow towards delivering value to customers.
Mainframe DevOps: A Zowe CLI-enabled Roadmap, by DevOps.com
The Zowe open source framework, hosted by the Linux Foundation's Open Mainframe Project, is often referred to as a Swiss Army knife for mainframe modernization, but where to begin? This session, which is based on findings from numerous Design Thinking workshops, will help DevOps champions and mainframe leaders jumpstart their modernization journeys.
We’ll explore a few high-value use cases like plugging into enterprise CI/CD pipelines and incorporating off-platform tools like code quality. And by addressing practical considerations like Zowe installation, set-up and support, this session will equip attendees with the information they need to become mainframe DevOps mobilizers.
Use of three simple measurements to aid with improving software delivery. Includes real-world data and a case study from three geographically distributed teams.
An updated version of Simple Measurements as delivered at the CT-SPIN group in 2012.
DevOps Culture transformation in Modern Software Delivery, by Najib Radzuan
DevOps culture aims to shorten development cycles and enable continuous delivery of software through practices that combine software development and IT operations. This presentation discusses how digital transformation requires changes to applications, infrastructure, and processes. It defines DevOps and outlines the DevOps process and tools used. Challenges of adopting DevOps culture include overcoming resistance to change and lack of collaboration between teams. The benefits of DevOps include rapid innovation, faster time-to-market, and improved customer focus. Adopting DevOps requires improving skills, evaluating processes and tools, and starting with small changes.
Building a Compelling Business Case for Continuous Delivery, by XebiaLabs
Increasingly, companies strive to deliver better customer experiences by delivering higher quality software, faster. Building a business case for faster delivery is often essential to gaining the support of the organization. Successful business cases for Continuous Delivery (CD) improvements span Development, Operations, and the Business, and seek to simplify, improve, and streamline the application delivery process through standardization and automation.
Hear from Kurt Bittner, Principal Analyst at Forrester, and Andrew Phillips, VP of Product Management at XebiaLabs, in a webinar that will help you understand how to create a successful business case for CD, the potential return on investment, how to measure the benefits and how to track these benefits over time. The webinar will highlight:
How to identify opportunities for improvement in your value delivery stream
How to estimate the value of these improvements that reduce cycle time by removing bottlenecks and barriers to delivery
How CD can reduce the cost of compliance and the cost associated with security risks
How to estimate the value CD creates by growing or accelerating revenues
Examples of the benefits organizations have achieved through CD
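The value estimates in the bullets above reduce to simple arithmetic over release frequency, per-release value, and the cost of the CD investment. The following is a minimal sketch of that kind of back-of-the-envelope model; every number and variable name is hypothetical, not data from the webinar.

```python
# Back-of-the-envelope model for a Continuous Delivery business case.
# All inputs below are illustrative assumptions, not figures from the source.

def cd_business_case(releases_per_year_before, releases_per_year_after,
                     value_per_release, annual_cd_cost):
    """Estimate the annual net benefit and ROI of releasing more often.

    value_per_release: revenue or cost-avoidance attributed to shipping
    one additional release (features reaching customers sooner).
    """
    extra_releases = releases_per_year_after - releases_per_year_before
    gross_benefit = extra_releases * value_per_release
    net_benefit = gross_benefit - annual_cd_cost
    roi = net_benefit / annual_cd_cost
    return net_benefit, roi

# Example: quarterly releases become monthly; each extra release is worth
# $50k; the CD tooling and automation effort costs $200k a year.
net, roi = cd_business_case(4, 12, 50_000, 200_000)
print(f"net benefit: ${net:,}, ROI: {roi:.0%}")  # net benefit: $200,000, ROI: 100%
```

A real business case would add the compliance and security-risk cost reductions from the bullets above as further terms in the gross benefit, but the structure stays the same.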
This is the presentation that I presented with Ruth Willenborg that provides a review of IBM's DevOps strategy as well as the roadmap for recently developed capabilities and future directions.
The document provides an overview of Agile basics including:
- What Agile is and its iterative, incremental approach to software delivery
- The origins of Agile in the 1990s and its formalization in 2001 with the Agile Manifesto
- The four values and twelve principles of the Agile Manifesto which emphasize individuals, collaboration, customer feedback, and responding to change
The document discusses how agile project management differs from traditional project management and the impact it has on various roles, including project managers. It outlines three key points:
1) Agile project management focuses on time and cost constraints rather than scope, uses product backlogs and velocity to schedule delivery, and emphasizes frequent delivery.
2) The role of the project manager changes to focus on facilitation, coaching, and team building rather than traditional planning and control.
3) To be successful with agile, project managers must acquire new skills like progressive planning, collaborative decision making, and servant leadership.
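The backlog-and-velocity scheduling mentioned in point 1 comes down to a simple forecast: remaining backlog size divided by observed velocity gives the number of sprints to delivery. A minimal sketch, with all numbers hypothetical:

```python
import math

def sprints_to_deliver(backlog_points, velocity_per_sprint):
    """Forecast how many sprints remain to clear the backlog,
    given the team's observed velocity (points completed per sprint)."""
    return math.ceil(backlog_points / velocity_per_sprint)

# Example: a 120-point product backlog, with a team averaging
# 25 points per sprint, forecasts delivery in 5 sprints.
print(sprints_to_deliver(120, 25))  # 5
```

In practice teams quote a range by running the same calculation with their slowest and fastest recent velocities, which is what makes the schedule adaptive rather than fixed.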
The document describes a B2B cloud application project for sales and distribution of finished goods in the textile industry. The project aims to create a cloud application that allows registered users to search products, maintain secure accounts, and contact administrators. An iterative waterfall model was selected for development due to its ability to iterate between phases to resolve errors. The project effort is estimated at 2.4 person-months, with a development time of 3.5 months. Cost will be estimated based on a productivity rate factoring in project size and number of personnel.
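The effort and schedule figures quoted above (2.4 person-months, 3.5 months) are consistent with the basic COCOMO organic-mode formulas applied to an estimated size of 1 KLOC; the sketch below shows that calculation. The 1 KLOC size is an inference from the numbers, not something the document states.

```python
def basic_cocomo_organic(kloc):
    """Basic COCOMO, organic mode: effort in person-months,
    schedule in elapsed months, average staffing level."""
    effort = 2.4 * kloc ** 1.05        # person-months
    schedule = 2.5 * effort ** 0.38    # development time in months
    staff = effort / schedule          # average personnel needed
    return effort, schedule, staff

# An estimated size of 1 KLOC reproduces the figures in the abstract.
effort, schedule, staff = basic_cocomo_organic(1.0)
print(f"effort: {effort:.1f} PM, schedule: {schedule:.1f} months, "
      f"staff: {staff:.2f}")  # effort: 2.4 PM, schedule: 3.5 months, staff: 0.69
```

The final cost step the abstract mentions would then multiply effort by a per-person-month rate adjusted for the productivity factors it lists.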
This document discusses moving IT organizations from project-level agility to enterprise-wide agility. It outlines the history and maturation of agile practices at the project level over the past 20 years. However, true agility now requires addressing the entire application portfolio and IT enterprise through practices like COSM that span projects, applications, and the enterprise. COSM integrates agile development with portfolio management, architecture, and governance to achieve adaptive and aligned IT.
1) Agile project management focuses on time and cost constraints rather than scope, uses product backlogs and velocity to schedule delivery, and emphasizes frequent delivery.
2) The role of the project manager changes to focus on facilitation, coaching, and team building rather than traditional planning and control.
3) To be successful with agile, project managers must acquire new skills like progressive planning, collaborative decision making, and servant leadership.
The document describes a B2B cloud application project for sales and distribution of finished goods in the textile industry. The project aims to create a cloud application that allows registered users to search products, maintain secure accounts, and contact administrators. An iterative waterfall model was selected for development due to its ability to iterate between phases to resolve errors. The project effort is estimated at 2.4 person-months, with a development time of 3.5 months. Cost will be estimated based on a productivity rate factoring in project size and number of personnel.
This document discusses moving IT organizations from project-level agility to enterprise-wide agility. It outlines the history and maturation of agile practices at the project level over the past 20 years. However, true agility now requires addressing the entire application portfolio and IT enterprise through practices like COSM that span projects, applications, and the enterprise. COSM integrates agile development with portfolio management, architecture, and governance to achieve adaptive and aligned IT.
Software Project Management: Project SummaryMinhas Kamal
Software Project Management: ResearchColab- Project Summary (Document-13)
Presented in 4th year of Bachelor of Science in Software Engineering (BSSE) course at Institute of Information Technology, University of Dhaka (IIT, DU).
Project management involves planning, directing, and controlling resources to complete projects on time and within budget. A key part of project management is work breakdown structure (WBS), which divides work into smaller tasks assigned to organizational units. Critical path method (CPM) and program evaluation and review technique (PERT) are used to schedule projects by identifying the longest sequence of tasks on the critical path that determine the project's duration.
The document discusses applying user experience (UX) design principles in agile software development projects. It covers agile principles, the scrum process, and lean UX design processes. It also compares the agile methodology to the traditional waterfall process, noting that agile often produces better results by emphasizing collaboration, adapting to change, and frequent delivery of working software.
Agile methods promise to deliver projects quicker so that benefits can be realized sooner; and you can use agile techniques for delivering packaged software too...
Agile projects are for delivering packaged software tooDavid Harmer
How we use agile methods and "Use Cases" to deliver projects more effectively. We contend that the coding and configuration required by packaged systems is comparable to development, making their implementation amenable to agile techniques. Here we explain how and why.
This document discusses architecture in agile projects. It covers how agile methods like Scrum incorporate architecture through iterative development and continuous delivery. It also discusses balancing upfront architecture work with flexibility through methods like Architecture Tradeoff Analysis and attribute-driven design. A case study shows how one project used agile practices like continuous experimentation, refactoring, and incremental improvements to develop a complex system architecture.
This document discusses factors that contribute to project complexity and influence project success or failure. It introduces the Darnall-Preston Complexity Index, which evaluates projects based on their internal attributes, external attributes, technological complexity, and environmental attributes. Specifically, it examines how a project's size, duration, resource availability, clarity of objectives/scope, stakeholder agreement, technological newness, legal/cultural/ecological issues can increase complexity if not properly managed. Managing these complexity factors requires selecting the right project manager with the appropriate skills for the given project profile.
The document discusses the agile approach to software development. It defines agile as an iterative development method where requirements evolve through collaboration between cross-functional teams. The key principles of agile include satisfying customers, welcoming changing requirements, frequent delivery, collaboration between business and development, trusting motivated individuals, face-to-face communication, working software as a measure of progress, sustainable development, and continuous improvement. The impact of agile is on people taking cross-functional roles, flexible processes over documentation, and delivering working versions of software that can adapt to changes.
BT, a large telecommunications company, was struggling with long software development cycles that exceeded 12 months. This prevented them from being competitive in the fast-paced telecom market. They implemented agile methods like Scrum and shifted to a 90-day delivery cycle focused on high value requirements. This substantially reduced delivery times while increasing business value. Faster delivery cycles with collaborative cross-functional teams improved communication and allowed them to adapt more effectively to changing needs. Now BT delivers solutions in a more timely manner that better meets business needs.
The document discusses agile development models as an alternative to traditional waterfall models. It describes how agile models use iterative development with short cycles to facilitate adapting quickly to changing requirements. Several specific agile methods are listed such as Scrum, Extreme Programming, and Lean Development. The key principles of agile development are close customer collaboration, preference for working software over documentation, frequent delivery of software increments, and ability to accommodate changing requirements.
This document provides a lessons learned report for a project to implement Oracle <Client Name> for a client to support their recruiting and onboarding processes. The summary identifies strong communication, managing scope changes, user testing, and involvement of local and corporate resources as success factors. Primary challenges included effective communication with corporate, project documentation management, resource commitment, and project management. Recommendations include encouraging strong communication between globally dispersed teams, having consistent user participation throughout the project, and managing scope, risks, and issues in a timely manner.
A fair analysis of the Agile Methodology. A quick comparison of Agile and Waterfall to clear up misconceptions about the two. Scalability is a major issue with Agile and is worth considering if you're not a large software company.
Interaction Room - Creating Space for Developments (Software Projects)adesso Turkey
The Interaction Room serves several purposes:
1) The focus on mission-critical aspects
2) Identification and elimination of risks associated with intuitive visualization methods at an early stage
3) Improving teamwork and the establishment of joint project responsibility between the IT and specialist departments.
The Interaction Room makes the relationships between processes, data and the application environment transparent and creates the basis for efficient decision-making processes. It is a method which steers the interest of those involved in the project’s progress and contributes to ensuring that all participants continuously work on the vision of the software that is being developed. The Interaction Room is not a theoretical concept but has proven itself in the business environment, as can be seen in successful projects in which the Interaction Room has already been used effectively.
(1) The document discusses how operational readiness is often underestimated during project development, leading to issues during commissioning and start-up that cost owners millions. (2) It advocates for establishing an operational readiness team early in projects to provide input on design decisions and ensure deliverables like manuals and training are on track. (3) Jacobs Consultancy offers operational readiness assurance services to help owners implement best practices through all project phases to smooth the transition to operations.
As more organizations begin to adopt agile on multiple, interdependent teams, how do we ensure that the success within a team can translate to success at the enterprise level?
Presented by: Sanjiv Augustine, President of LitheSpeed
Agile Methods Experience Report (Andrew Rendell, Valtech)
Table of Contents

Abstract ............................................................... 3
1. Introduction ........................................................ 4
2. Initial Agile adoption .............................................. 5
3. The post-launch doubt and scaling up ................................ 5
4. Redefining the Approach ............................................. 6
4.1. Reflecting on code structure ...................................... 7
4.2. Providing direction and rigorous checkpoints ...................... 8
4.3. Structuring the application architecture to promote good governance 9
4.4. Key measurement tool: testing .................................... 10
4.5. Documentation that is ‘good enough’ .............................. 11
4.6. Delegation ....................................................... 12
5. Conclusion ......................................................... 14
6. References ......................................................... 14
7. Valtech Contact Details ............................................ 15
Abstract
This experience report, by a project’s technical architect, details the adoption of Agile methods
across several teams after one high profile success. The organization had a long history of
waterfall development and a clearly defined remit for technical architects. Years of refinement
had led to a set of techniques which contradicted many of the ideals held by Agile practitioners.
The author’s challenge was to maintain agility and fulfill responsibilities inherited from waterfall
processes without reverting to the conventional practices that ultimately lead to the architect’s
ivory tower.
1. Introduction
The T-Mobile International Mobile Portals and Content Delivery Group had developed a set of
expectations around the responsibilities of a role which would be widely recognized as a
'Technical Architect'. The exact remit of a technical architect is subject to
much debate and differs widely between organizations and even projects. This report does not
aim to produce a definition for the technical architect, Agile or otherwise, but in the context of this
particular client engagement the role involved having: a wide remit over the implementations
delivered by several different teams; end to end technical responsibility; delivery of a consistently
efficient implementation that is fit for purpose.
In organizations where a waterfall process is in place it has been the author's experience that the
technical architect is unlikely to be involved in the hands-on aspects of delivering solutions. Their
role is often very closely coupled with design documentation.
Often the delivery process has been structured to include quality gates where the deliverable for
the next stage is documented and then reviewed by the technical architect. The review of design
documentation is the primary tool available to the architect. This had been the case at T-Mobile.
The requirement to design, document and review everything upfront as a way to reduce risk is
one that is obviously eschewed by the Agile movement as being ineffective and providing only an
illusion of control.
In addition, making a documentation quality gate the mechanism by which the technical architect
manages the implementation paradoxically reduces their effectiveness by:
- Isolating them from the technical implementation for which they are supposedly
  responsible.
- Reducing their technical ability by taking them away from the technology that made them
  great candidates for the role in the first place.
- Supporting the fallacy that the technical architect is the all-knowing center of the technical
  universe.
Many technical architects on waterfall projects do an excellent job. The author's opinion is that
this is achieved in spite of rather than because of the design review quality gates. Personal
experience is that many technical architects feel disenfranchised because their key skills honed
through many years of education and hard project work are no longer put to good use.
Conversely, many developers perceive the technical architect as being somebody they rarely
interact with, who has little idea of how the software is being put together. In many teams the
author has observed the architect as being regarded as at best a poorly informed individual,
dabbling at the periphery and at worst, a dangerous impediment to progress whose attention
must be avoided at all costs.
The technical architect role is usually a logical career progression for developers. The organization's
need for an individual to fulfill the responsibilities of the architect has not diminished [1] even if
the tools employed in the past have sometimes failed to add value. The challenge is: how can the
responsibilities of a technical architect be fulfilled without introducing practices which reduce the
agility, and therefore the effectiveness, of the team? Valtech has demonstrated that by changing
the techniques and attitudes of the architect it is possible to meet this goal. This experience
report details practices employed by a technical architect and his team across a body of work
consisting of several projects. These practices are evaluated in retrospect to measure their
effectiveness.
2. Initial Agile adoption
In the summer of 2007 a marketing initiative for a new mobile portal was proposed. The adage
'necessity is the mother of invention' applied to this project. The high profile and reduced time-
scales (twelve weeks rather than six months from initiation to go-live) meant that light-weight
technologies and Agile practices had to be used rather than the incumbent document-centric
waterfall processes. This was very much a tactical Agile adoption. Failure was not an option and
the focus was on effective delivery rather than best practice. The project was a high profile
success in a very short time scale. The delivery date was met, to the hour. The team had proved
that a number of Agile techniques were highly beneficial and instilled confidence throughout the
group that more comprehensive adoption was not only possible but desirable.
3. The post launch doubts and scaling up
After the initial euphoria of launch there were doubts expressed by senior members of the
management team. There were concerns that not all of the old practices should have been
discarded. Of particular concern was the lack of accessible documentation to allow maintenance
of the platform, especially if the development team was cycled. This resulted in pressure to revert
to some of the original document centric processes.
A victim of their own success, the team was now required to deliver more features to the same
high levels of quality in the same aggressive time-scales. Given increasing scope and fixed
delivery dates the only solution was to increase headcount. The team was grown and
reorganized into two separate groups with different functional responsibilities in the same
platform. The technical architect retained his position across the team.
The two groups were seated in different locations. Other than an hour long weekly team meeting
there was increasingly little interaction between the two groups. This led to the creation of silos
where developers in one group knew very little about what was happening in the other.
The new organization began to deliver quickly and was generally viewed as a success but
internally several disturbing issues were surfacing:
- Code style was diverging. It was very easy to see which team had produced any one piece of
  code as they were radically different.
- Common implementation patterns that had been very well understood were not being applied
  consistently. This led to a number of situations where the application, which had previously
  been predictable, now behaved in an unpredictable and inconsistent fashion.
- Technical debt was increasing. A key development principle was that the code base should be
  under constant rationalization to remove duplication and redundancy and increase reuse and
  consistency. This principle allowed the team to build fast, prove or disprove a feature and
  then refactor to pay back the technical debt. A technical backlog was maintained and tasks
  were regularly executed from this backlog. The net effect should have been a reduction in
  lines of source but analysis showed that as quickly as code was being refined, new code was
  being created. Code was still being produced quickly but the debt was not being repaid. The
  team was moving from a proactive refactoring regime into constant firefighting mode with
  decreasing feature delivery.

During this period the architect was made less effective by two critical factors:
- The architect was on the critical path for code delivery on one of the projects with the same
  expected ideal engineering hours capacity as any developer.
- The architect's remit was well understood but the mechanisms by which that remit would be
  achieved were not. This was one of the traits of the conventional ivory tower architect:
  responsibility with no clear mechanism of control.
4. Redefining the approach
The technical architect, who had been aware of these issues for several months but had been
unable to correct them, not least because of his own development commitments, determined that
the situation required immediate and fundamental intervention. At this point external events
required the teams to be reorganized into a number of different projects.
The technical architect took this opportunity to reorganize technical governance. The new regime
would allow scaling of development capacity through delegation and empowerment. To ensure
quality and consistency the regime would gather and analyse empirical evidence. The architect
determined that it was impossible to maintain the commitments of being a full time developer and
fulfill the technical architect's remit.
New techniques minimized the architect's isolation from the implementation, without the
unachievable requirement that the architect write key code modules.
The following sections describe some of the main techniques employed in this new technical
governance regime which kept the architect 'out of the ivory tower'.
4.1. Reflecting on code structure
During the initial, problematic, scaling up of the team, development patterns and priorities had
been communicated. These were not always followed. In the new approach, after communicating
the intent, the realization was evaluated for compliance. This took the form of detailed code
reviews during the second iteration of the new projects as the body of code began to increase.
On a clean workstation, using only the instructions on the wiki, the architect built a development
environment. This included configuring the Eclipse IDE and Maven as well as checking out code
and setting up development application server instances. This was an essential first test of the
stability and accessibility of the code base.
The architect used a combination of Eclipse's powerful code navigation tools and the acceptance
and unit tests to traverse the application. After identifying the classes involved in particular user
goals, a UML tool reverse engineered the code into a set of class diagrams. The architect used
Eclipse and the UML tool to determine the associations between the implementation classes and
their tests. The architect then examined the implementation of the unit tests and made brief
passes of the code to determine the responsibilities of each class. This exercise indicated
whether standard patterns and agreed libraries were in use. Importantly it articulated the class
cohesion and structure.
Issues were identified around encapsulation and cohesion. Anti-patterns in test classes which
indicated issues in the implementation were noted and verified. The exercise produced a list of
issues to be corrected. The architect annotated the code in several places with FIXME and
TODO and finally produced a UML class diagram with notes showing the class structure in use.
This was added to the wiki. The exercise formed the basis for several improvement points at the
next retrospective and allowed the architect to provide positive feedback on the implementation
based on real evidence.
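The annotations left during such a review might look like the following Java fragment. This is a hypothetical illustration: the class name and the specific encapsulation issue are invented for this sketch, not taken from the T-Mobile code base.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration of review annotations; the class and the issue
// flagged are invented, not taken from the actual code base.
public class SessionCache {

    // FIXME (review): exposing the map directly breaks encapsulation;
    // callers can mutate it without going through any eviction policy.
    public final Map<String, String> entries = new HashMap<>();

    // TODO (review): replace direct field access with get/put methods
    // that enforce a size limit, per the Development Principles.
    public String lookup(String key) {
        return entries.get(key);
    }
}
```

Because the annotations live in the source itself they surface in the IDE's task list, so the review findings stay visible to the whole team rather than in a separate report.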
This exercise had several positive outcomes:
- The architect's confidence that the team was following the correct and consistent set of
  patterns was firmly established. The architect also gained valuable familiarity with the
  code base. The issues that were spotted were easy to correct at this stage of the project.
  If left they may well have spawned a large number of similar features which would have
  increased the technical debt.
- The developers' confidence was boosted. They were now sure that they were interpreting
  the development guidelines correctly and had been publicly credited as such. The next
  retrospective recorded that the developers regarded the code review as being one of the
  positive features of the sprint. The architect going through code leaving TODO
  annotations etc. increased the sense of common code ownership.
- It made everybody more aware that the source was not an opaque artefact whose
  functionality was the only facet that would be observed.
One problem as the code base increased was choosing which part of the application to inspect.
One technique that proved effective was simply to conduct an exercise in the retrospective where
each developer named their most complex or cleverest code module. These modules became
candidates for detailed inspection.
This mechanism of code review did require a significant investment in time. These reviews were
only conducted a handful of times over several months. The prohibitive cost of these detailed
code reviews meant that more commonly developers were invited to use a whiteboard to talk the
architect and a number of their peers through the interactions and class structure of a particular
section of the code. The objective was much the same as the detailed code inspection but also
served to educate a wider audience. Whilst it was comparable in cost to the project in man-hours,
the cost to any one individual, critically the architect, was reduced. For example, a white-board
session would require two hours of preparation by a developer and then only one hour of
attendance from the architect, two other developers and the presenting developer. The cost is six
hours to the project but only one hour is taken from the architect's diary. A more effective but
costlier code inspection might easily cost six hours of the architect's time.
White-board sessions were less useful than code inspections: they did not bring the architect and
other developers into close contact with the actual code. The format often led to a level of
abstraction (consciously or not) being introduced by the presenter in order to communicate with
the audience. Occasionally it appeared to reduce the sense of common code ownership as one
individual became the recognized expert.
4.2. Providing direction and rigorous checkpoints
The architect created a Development Principles wiki page which was then presented to the team
in an interactive session. The principles were deliberately not generic points that could be
universally applied on any project. Instead, these principles were derived from the best working
practices used on the T-Mobile portal application. They were very specific and easy to apply. The
principles covered a wide range of topics from TDD and patterns for concurrency to policies
regarding the team’s attitude to broken builds. They also contained sections that related to
common functional requirements such as error handling.
Reviews and retrospectives supported the view that these principles had a positive effect
encouraging a consistent approach.
In keeping with the approach of gathering empirical evidence over reliance on documentation,
the Development Principles were supported by an audit driven by an Architecture Checklist.
Like the Development Principles, the Architecture Checklist was developed specifically for this suite
of applications. The temptation to try and make a generic tool which could be widely reused was
resisted.
The audit of the system using the checklist was performed by the architect and technical lead
paired at a workstation. The code was checked out clean and the IDE and test framework were
used to check various points. In previous projects the author had experienced audit exercises
driven by a review of design documentation followed by an interview. This required overly time
consuming preparation, was stressful and less effective than inspecting the code and running
tests.
Since the checklist was developed for this application suite, most points were pertinent. Each
check was phrased as a question where a given response would sometimes indicate that a more
detailed section was applicable. Not every question was answerable by a simple yes or no;
instead where appropriate the reviewers recorded a written answer. This formed part of the
documentation and most importantly stimulated deeper inspection.
The audit exposed the architect and technical lead (who may or may not have been a senior
developer depending on the project) to the code. It was not a foolproof tool and obviously did not
detect all errors. It did expose issues that would have been show-stoppers later in the application
lifecycle. For example, an integration module was found to report a network connectivity error to the
operators in exactly the same way as an unexpected response from an external system. These
errors required different escalation routes (the former to the network team, the latter to the owner
of the integrated system). This was stipulated by the Development Principles but had been
missed during development. The audit picked up these sorts of issues which previously had only
been uncovered in UAT or production.
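In code, this principle might be realized by modelling the two failure modes as distinct exception types so each can be reported with its own escalation route. The following is a hypothetical sketch; the class names and team names are invented for illustration, not the project's actual code.

```java
// Hypothetical sketch: separate exception types allow operators to see
// distinct escalation routes, as the Development Principles required.
// All names are illustrative.
class NetworkConnectivityException extends RuntimeException {
    NetworkConnectivityException(String message) { super(message); }
}

class UnexpectedResponseException extends RuntimeException {
    UnexpectedResponseException(String message) { super(message); }
}

public class EscalationRouter {
    // Maps a failure to the team that should investigate it: connectivity
    // problems go to the network team, malformed responses go to the
    // owner of the integrated system.
    public static String routeFor(RuntimeException failure) {
        if (failure instanceof NetworkConnectivityException) return "network-team";
        if (failure instanceof UnexpectedResponseException) return "integrated-system-owner";
        return "development-team";
    }
}
```

The point of the design is that the escalation decision is made where the failure is understood, rather than collapsing every error into one indistinguishable report.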
The cost of the audit was high. It was expensive to develop and maintain and required significant
input from project members whose time was heavily in demand. Scheduling was difficult and
audits were often delayed, which increased risk to the project. The audits were enormously
beneficial and fully justified their high cost. They provided valuable empirical evidence and
reduced the architect's isolation from the implementation.
4.3. Structuring the application architecture to promote good governance
It was found that restructuring the application architecture was a contentious but effective tool to
improve technical governance.
The application was designed from inception with a clear modular structure with loose coupling.
Events had demonstrated that it was still possible to build components that violated
encapsulation by the corruption of shared services or simply by consuming all the CPU or
memory allocated.
When the new projects were initiated the assumption was that they would all be extending the
existing application. The architect determined that this made the technical governance more
difficult. Towards the end of the first iteration he proposed a departure from this architecture. The
monolith would be replaced by several discrete applications. Where the same code was required in
more than one platform this was moved into shared libraries (distributed and controlled via
Maven) which individual applications could branch if required.
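Under this arrangement each application would consume a shared library as an ordinary Maven dependency, along these lines (the coordinates are hypothetical, for illustration only):

```xml
<!-- Hypothetical coordinates for one of the shared libraries -->
<dependency>
  <groupId>com.example.portal</groupId>
  <artifactId>portal-shared-services</artifactId>
  <version>1.4.2</version>
</dependency>
```

Because the library is versioned in the repository rather than copied between code bases, an application needing divergent behaviour can branch the library at a known version instead of modifying shared code in place.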
This move had a significant positive effect. The teams were decoupled in the same way as their
applications. The silos that had been in effect previously were recreated but with clean interfaces
which could be easily policed. Developers now had freedom to innovate rather than a license to
interfere and disrupt.
Although their deployment workload had been increased, the operations team was supportive
because they were given better performance testing guarantees.
One of the failures late in 2008 had been caused by a presentation layer module consuming
unacceptable levels of CPU capacity. Since all modules ran in the same container it took over a
week of repeated tests to ascertain that the complex integration modules, obvious candidates for
extreme CPU use, were not at fault. The updated architecture allowed each component to be
load tested in isolation. This meant that issues were identified with fewer test cycles.
Simplification and encapsulation of the implementation directly led to more effective architectural
governance without imposing onerous processes. Although the initial emotional response to this
change was that it would be very costly, in retrospect, even though it was initiated in the second
iteration, it still only required an additional ten days of development time. The reduction in
regression testing alone saved many times that effort.
4.4. Key measurement tool: testing
The technical architect had always placed a high value on a test first strategy and the adoption of
TDD at T-Mobile was the subject of an Agile 2008 Experience Report [3]. All projects had a
reasonable unit test coverage (60% lowest to 85% highest). Unit tests were written by and were
mostly for the benefit of the developers.
A second class of tests, labelled acceptance tests, was closely aligned with the user goals of the
system and was intended to be developed in conjunction with the proxies for the business
stakeholders (organizational issues precluded the direct involvement of the stakeholders
themselves).
On some projects resource constraints meant that the acceptance tests were often created by
the developers without the involvement of other participants. Whilst these tests still had
significant value, an opportunity for the architect and proxy stakeholders to verify that the system
was fit for purpose as they understood it was missed. As the tests were developed solely by
engineers the technical complexity of the code inhibited comprehension by proxy stakeholders.
To mitigate the above issues the architect initiated the development of a new class of
tests, referred to as use case tests, which ran against the application fully deployed. These tests
were supported by a simple framework which aimed to make the tests themselves resemble the
language of the interface specifications, i.e. the tests were expressed in a language that the
stakeholder proxies could understand.
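A minimal sketch of the kind of fluent wrapper that can make such tests read like the interface specification follows. The framework, method names and the stubbed transport are all hypothetical, standing in for the project's real framework and its calls against the deployed application.

```java
// Hypothetical sketch of a use case test DSL. In the real suite each step
// would drive the fully deployed application; here the transport is stubbed,
// because the shape of the test language is the point, not the plumbing.
public class UseCaseDsl {
    private final StringBuilder transcript = new StringBuilder();
    private String lastStatus = "";

    public UseCaseDsl givenSubscriber(String msisdn) {
        transcript.append("GIVEN subscriber ").append(msisdn).append('\n');
        return this;
    }

    public UseCaseDsl whenRequestingPage(String page) {
        transcript.append("WHEN requesting ").append(page).append('\n');
        lastStatus = "200 OK"; // stub for the HTTP call against the platform
        return this;
    }

    public UseCaseDsl thenResponseIs(String expected) {
        if (!lastStatus.equals(expected)) {
            throw new AssertionError("expected " + expected + " but was " + lastStatus);
        }
        transcript.append("THEN response is ").append(expected).append('\n');
        return this;
    }

    public String transcript() {
        return transcript.toString();
    }
}
```

A test written against such a wrapper reads as given/when/then steps, which is what allows stakeholder proxies to follow it without wading through engineering detail.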
These tests became a powerful tool for measuring completeness against functional
requirements. The construction of this test suite highlighted several areas where the
implementation had diverged from the published API. These tests also provided a seed for the
creation of load tests (using JMeter).
The technical architect helped construct and run these tests rather than relying on a report from
others. This practice was prioritized as another mechanism to reduce the distance between the
architect and the implementation [1].
Definition of load testing profiles (i.e. agreeing what constituted 100% load), detailed review and
coordination of load test execution were key responsibilities of the architect. The cost of these
activities was extremely high but entirely justified by the direct exposure it gave the architect to
the non-functional aspects of the system. These aspects are fundamental in delivering the
architectural remit. It was found that the architect and technical lead would be forced to
concentrate solely on load testing for long periods. This came at the cost of ignoring the
demands of the other projects during these times. It was a significant issue if these load tests
occurred at the same time as other critical activities (such as retrospectives or sprint planning) for
other projects.
JMeter and use case test execution gave the technical architect a high level of confidence that
the applications were fit for purpose based on empirical evidence and real experience rather than
documentation.
4.5. Documentation that is 'good enough'
In keeping with Agile values, the technical architect was determined not to expend valuable effort
producing documentation with no clear purpose when that effort could be better used to bring
delivery closer. At the same time the architect wanted to ensure that where documentation was
genuinely required it was fit for purpose.
One project delivered a set of web services. A comprehensive, example based, API specification
was produced. This document was as formal as any document delivered by waterfall projects at
the client. This document was identified as being critical to success and therefore justified its high
cost in man hours to write and review.
Previously a design document for each module had been mandated. This rule was discarded.
Instead, key areas were identified by the architect or the team as being important or complex
enough to justify some form of design review and capture. White board sessions were led by the
appropriate developer.
These were captured using digital cameras and uploaded to the wiki along with a brief summary
of any conclusions or activities to complete. Where an area was identified as particularly
important the architect had formal UML diagram production added to the sprint backlog. This
ensured key documentation was completed, its cost was visible and that cost was not absorbed
into the development activity. These documentation tasks had specific goals, e.g. communicate
the lifecycle of an object through a state diagram such that the design can be verified against use
cases. The UML diagrams were held in a single, source controlled, highly accessible UML
repository.
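The sort of object lifecycle such a state diagram captured can be sketched as an enum with explicit legal transitions, so the design can be checked against the use cases. The states here are invented for illustration and do not come from the actual portal.

```java
// Hypothetical sketch of a lifecycle of the kind a state diagram documented:
// making the legal transitions explicit lets the design be verified against
// the use cases. The states are illustrative only.
public enum ContentState {
    DRAFT, PUBLISHED, WITHDRAWN;

    public boolean canTransitionTo(ContentState next) {
        switch (this) {
            case DRAFT:     return next == PUBLISHED;
            case PUBLISHED: return next == WITHDRAWN;
            default:        return false; // WITHDRAWN is terminal
        }
    }
}
```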
Given the client's history of a document centric process the new approach to documentation was
always going to be contentious. This was especially true when the development process had
some sort of interaction with the wider organization. A security audit was performed several
months into one project's life by an external group who had been informed that they were dealing
with an Agile team. Due to some inter-programme communication issues, they were supplied with
only a couple of PowerPoint slides. This met all their preconceptions of Agile. The auditors
were surprised when the architect was able to
supply on demand a number of succinct and appropriate documents. These were generated from
the UML repository or copied from the wiki but were exported into a company standard document
repository to comply with versioning and accessibility rules. This demonstrated to the security
auditors that the project was as rigorous as any of its waterfall peers.
4.6. Delegation
There is always going to be a point where it is impossible to achieve any more development
throughput without adding more people to the equation. It is the familiar pattern of horizontal
scalability eventually outperforming vertical scalability.
Agile projects empower developers. Empowerment requires delegation. To be able to delegate
tasks you need to have a team which is fit for purpose [4]. As part of the client's rigorous
selection procedure the architect performed a technical assessment of all new joiners with a
development remit. In the course of his career the architect had observed many interviews being
concluded using emotional rather than empirical methods.
The technical architect developed a set of case study driven interviews customized for the T-
Mobile project. This meant that the candidate could be exercised using the project's working
practices and technologies. Interviewees were expected to run white board design sessions
based on common problems the project faced or use TDD using Maven and Eclipse. The
interviews were designed to give candidates a vehicle to demonstrate their abilities rather than
trying to trip them up.
It was the architect's opinion that this was a critical factor in assembling a strong team whose
practical ability had been proven before they started the project. This enabled a high degree of
immediate delegation and empowerment. The technical architect was keen to allow developers
to take the lead in producing the solution with minimal guidance. This allowed best practices to
be developed by individuals and then adopted across the teams.
Ivory tower architects often concentrate on technology rather than people. This stops them
delegating and therefore impedes development scalability.
5. Conclusions
- An architect must reduce their isolation from the implementation by being closely involved
  with high value technical activities such as load testing and code reviews.
- An architect can become isolated from parts of the system if they cannot find time to
  cover all areas because they are attempting to also be a full time developer.
- Relying solely on documentation as a tool for technical governance is not an effective
  strategy but there are documents (which may be on the wiki or in the UML repository)
  which are essential to the architect.
- Learning to employ 'soft' skills becomes as important as technical acumen because
  without excellent communication and effective delegation you cannot scale up.
- Automated, well written tests are the architect's best mechanism to gather empirical
  evidence of compliance with governance and fitness for purpose.
- The best techniques cannot be applied every time because of cost. Choose the most
  important areas for the high cost activities and use less effective but cheaper methods for
  others.
- Activities that made the developers aware that the architect was observing the source
  increased the perception of common code ownership and invigorated developers to
  maintain high levels of code quality.
6. References
[1] V. Hazrati, The Shiny New Agile Architect, 2008
[2] J. McGovern, A Practical Guide to Enterprise Architecture, 2003
[3] A. Rendell, Pragmatic and effective Test Driven Development, 2008
[4] S. McConnell, Rapid Development, 1996