This is a high-level summary of software build and release best practices. It is targeted at a general audience and is not meant to be product specific.
Objectives - first, to introduce you to a compiled summary of industry build and release best practices, as gathered from customer experiences, subject matter experts within BuildForge, and from observing common industry practices. Adopting best practices can lead to repeatable processes that yield high-performance, high-quality build and release results. You may already be practicing some of these best practices today in some form or another; others you may not be.
The biggest comment we have received from customers is that there is not enough collaboration or best-practices content around build and release management; a "body of knowledge" is missing. Other vertical practices within the software development lifecycle, such as requirements management, configuration management, and test, have mature best practices, and many commercial tools, publications, and analyst reports cover those specific practices. But build and release has been noticeably absent - until recently. BuildForge was the focus of some of the first independent analyst studies covering the build and release market segment within application software development. Most software development shops still have home-grown solutions to address build and release processes - rudimentary, not scalable, and hard to keep up to date. We believe the material today will be useful for all types of audiences, regardless of their role on a software development team. We have copies of analyst reports from Forrester, IDC, and Hurwitz that we can share with you - visit our website.
We encourage a phased approach to any type of adoption or implementation that can have an impact on the software development organization. The ordering is not strict - we chose an ordering that allows organizations to pick the "low-hanging fruit" first, the shorter-duration tasks that can show immediate value and success. Each company can decide which ones to focus on first depending on its current situation and objectives. Some of the best practices here have implicit dependencies on others - for example, automation is a foundation for achieving other best practices later. This summary version of our build and release best practices defines what these practices are, why they are important, and how they can provide value to a software team. Getting into the "how" of reaching these objectives is outside the scope of what we have planned for today, but we would be happy to talk to you or your organization further if you are interested in having us assist you in reaching any or all of these best practices! Let's talk about each of these in a bit more detail...
Reproducibility - use the "time machine" analogy.
Automation and Integration - automate the entire process, not just parts of it; manual steps are inherently error-prone.
Abstraction - no hard-coded build systems; we want to distill the build logic present; optimally, hardware should be "disposable" without impacting the builds.
Optimize processes and architecture design - focus on maintainability, scalability, and resilience; break large monolithic tasks and applications into smaller, modular pieces.
Build Acceleration - can benefit iterative and agile methodologies; may require adopting some of the other best practices first to lay the groundwork.
Centralized Collaboration - essential for any team-oriented project; facilitate communication and visibility, tie in remote teams, yet enforce policies and keep people from stepping on each other.
Build Early and Often - building an application early and frequently, especially in the initial design and construction phases, is essential to understand the "health" of the software and uncover problems early. Infrequent builds, or waiting until the end of the project to assemble and compile all the pieces together, is a recipe for disaster.
Link to Deployment Environments - often build and deployment tasks are not closely tied within software organizations. Do not treat deployment as throwing the deliverables "over the wall" for someone else to deal with.
Business Objectives - show your peers, your management, your company, and your shareholders the value your software organization brings. Leverage reports and metrics gathered from the build and release system to confirm how the software team has contributed to the company's business objectives, whether financial, customer-satisfaction, quality, or compliance-related, or a combination of these. Alignment of IT/engineering tasks to business objectives is important for any company.
Time machine analogy - go back in time to obtain the exact same steps, the exact source code, tools, environments, and servers that were used to produce a deliverable. Go back to any time: 1 hour, 1 week, 1 month, 1 year. You may need to reproduce something that was delivered to internal groups (test, QA, business users) or to external audiences (paying customers, partners, etc.). You need to identify all dependencies used to build a release.
More time is spent trying to re-create the environment than actually fixing the problem (response time) - see the Standish reference. This is a symptom of larger issues (lack of process, discipline, etc.), and customer satisfaction is negatively affected by slow response time to reported issues. A common remedy for poor reproducibility is to make the customer upgrade - highly disruptive to them, and often a source of customer discontent. For a lot of companies, upgrading a piece of enterprise software is a big deal: it needs approvals, requires scheduling others' time, and can mean coming in over the weekend. Have you forced customers to upgrade to resolve an issue they encountered in the past? If so, you may be a victim of not following this best practice - or at least your customer was! Other symptoms:
Inconsistent application behavior during development cycles delays testing and QA efforts.
Uncertainty about which code base to start from to do maintenance work or perform special fixes on past releases.
Unintended code or regression defects going to customers (when patching from an unknown point in time) - "A bug that you sent us an earlier patch for suddenly re-appeared in this latest patch!"
Failed audits - you cannot trace a deliverable back to its original components or show everything that touched or was associated with the deliverable.
Many process improvement frameworks specify reproducibility as a requirement (ISO 9000-3, etc.).
Spend less time trying to re-create a reported issue, and more time fixing the issue! Virtual operating system technologies like VMware and Virtual PC are being adopted by many companies to aid in providing reproducibility. Long term, make compliance management objectives part of your everyday process, not an unscheduled exception that has you scrambling for data and wasting time and resources.
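To make the "time machine" concrete, a build can snapshot everything it consumed so it can be reproduced later. The sketch below is a minimal illustration, assuming hypothetical label, tool, and environment values; a real system would pull these from the SCM and build servers.

```python
import json

# A sketch of the "time machine": snapshot everything a build consumed so it
# can be reproduced later. Labels, tool names, and versions are illustrative.
def snapshot_build(source_label, tools, environment):
    """Serialize the exact inputs of a build: the SCM label/baseline to
    check out again, the exact tool versions, and the environment used."""
    return json.dumps({
        "source_label": source_label,
        "tools": tools,
        "environment": environment,
    }, sort_keys=True)

snap = snapshot_build(
    "REL_2_1_PATCH3",
    {"gcc": "4.1.2", "make": "3.81"},
    {"PATH": "/usr/local/bin:/usr/bin"},
)
restored = json.loads(snap)  # a year later: rebuild from exactly these inputs
```

Storing this record alongside the deliverable is what lets you "go back" 1 hour or 1 year and rebuild from identical inputs.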
You cannot achieve full automation of your processes if you must still manually interact with tools and systems that support the software lifecycle. Automation and integration are inextricably linked. Don’t just automate one subset of processes that the software team performs- automate them all.
Ask yourself these questions:
Do engineers on your team request system builds via an email message?
Do people often ask what the state of a build is?
If someone comes into work late, or has to leave for an unscheduled event, does the software build and release cycle come to a halt?
Is there often miscommunication and error around which build groups should be testing, releasing, or integrating with?
Are people often waiting around for the results of another team during the software project?
Are handoffs between team members often inconsistent, incomplete, and unscheduled?
Is there no coordination across the different ALM tools in use on a particular project?
Is your software process undocumented, such that it would be very difficult for someone else to assume the build and release tasks?
Are your tools not integrated?
If the answer to any of these was "yes", then this is a best practice you should consider adopting!
Speeding up efforts that used to be manual provides quicker cycles and greater productivity; eliminating the human component of manual work also contributes to quality, since manual efforts are inherently error-prone. More “turns of the crank” to produce build deliverables means more chances to uncover potential errors as well, which contributes to higher quality.
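End-to-end automation means every step runs in sequence without a human in the loop, and a failure stops the line rather than silently propagating. The sketch below uses hypothetical step functions as stand-ins; real steps would invoke your SCM, compiler, and test tools.

```python
# A minimal sketch of end-to-end pipeline automation. The step functions are
# illustrative stand-ins for real SCM, compile, test, and packaging commands.
def checkout():
    return "checked out"

def compile_step():
    return "compiled"

def run_tests():
    return "tests passed"

def package():
    return "packaged"

def run_pipeline(steps):
    """Run every step in order with no manual hand-offs; stop at the
    first failure so downstream steps never run against a broken build."""
    log = []
    for step in steps:
        try:
            log.append((step.__name__, step()))
        except Exception as exc:
            log.append((step.__name__, "FAILED: %s" % exc))
            break  # fail fast instead of continuing with a broken build
    return log

log = run_pipeline([checkout, compile_step, run_tests, package])
```

The returned log doubles as a status record, so nobody has to email around asking what state the build is in.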
A lot of companies have build scripts "hard-coded" to particular servers, with data integral to the build and release processes residing on a single piece of hardware. Hardware resources for building should be thought of as "disposable": losing or replacing a piece of hardware should not jeopardize software processes that are critical to a project. Consider the analogy of a valet driver and specific vehicle manufacturers: the process and steps of parking customers' cars, retrieving them, and returning them on demand are independent of the make and model of the actual car involved. A valet driver should not need specific steps for each different car model.
Inflexible environments - can't use different machines or different environments without a lot of work
Errors when running a build on the wrong machine
Poor utilization of hardware
Expensive-to-maintain build system - a handful of people have to do all the change requests
Hard-coded values spread across many build scripts, difficult to track
Bringing up a new project or new site is very long and painful
Build processes impacted by unavailable/down machines
No consistency in how builds are executed across machines
Lack of documentation for the project environment
Must manually configure servers when trying to build projects; updating the server environment manually is painful
Adding new machines into the build environment is painful
Risk of jeopardizing the build process by updating a server
Hardware failure can result in loss of build metadata
Flexibility means being able to better handle one-off requests, unplanned events, and emergencies, and being better able to respond to business conditions
Abstraction results in cleaner build logic that is easier to maintain
Benefits teams that need to perform distributed or multi-platform builds
Centralized environment administration instead of assets spread out across several machines
Reduced risk - hardware failures do not result in loss of build data (environment variables, paths, Windows registry keys, patches - anything you can't easily place under version control)
Ability to establish build benchmarks
More efficient use of resources (improved server management, including trend analysis)
Re-use benefits: standard tasks can be called from different project builds; faster setup time - re-use project libraries vs. writing from scratch
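One common way to make hardware "disposable" is to resolve machine-specific values at run time instead of baking them into scripts. The sketch below illustrates the idea; the variable names (BUILD_ROOT, JDK_HOME) and defaults are assumptions for the example, not a standard.

```python
import os

# A sketch of abstracting machine-specific values out of build scripts.
# The setting names (BUILD_ROOT, JDK_HOME) and defaults are illustrative.
DEFAULTS = {"BUILD_ROOT": "/var/builds", "JDK_HOME": "/opt/jdk"}

def build_settings(overrides=None):
    """Resolve build settings: explicit overrides win, then the process
    environment, then portable defaults. The script names no specific
    host, so any pooled server can run it."""
    settings = dict(DEFAULTS)
    for key in DEFAULTS:
        if key in os.environ:
            settings[key] = os.environ[key]
    settings.update(overrides or {})
    return settings

settings = build_settings({"JDK_HOME": "/opt/jdk17"})
```

Because nothing is tied to one server, replacing a failed machine becomes a configuration change rather than a rewrite of build scripts.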
Getting the basics in place is a good start toward providing process automation, repeatability, visibility, and capture of environment metadata for reporting and metrics. The next logical step for organizations is to begin to optimize what they already have, and look beyond their current tools to see other areas that are candidates to help improve the overall software build and delivery process. Typically, optimization is supported by first having an underlying build and release automation framework in place.
Adopting a modular architecture reduces complexity, allows greater flexibility, reduces unnecessary builds, and facilitates re-use. "Even the smallest changes to our code result in the entire application always being re-built." Consolidating duplicate script code and environment data allows you to make a change or update in one place, vs. tracking down dozens of unknown locations. Consistent naming conventions allow better sharing of team resources and faster troubleshooting efforts: people can come up to speed quickly when moved between projects, common terminology reduces mis-communications, and it is easier to search and sort on data and tie into other systems (example: build tag IDs mapped to source code labels or baselines).
Optimizing your overall architecture provides you with a more resilient system that can better meet and adapt to your future business needs Don’t be afraid to take a good hard look at existing software processes and practices within your organization and ask if there is a better way!
Once you have a basic build management framework/infrastructure in place, look to accelerate existing tasks that may be performed in a manual or serial manner. Don't limit your focus to just compiles - compiling code is just one of many tasks that are part of an end-to-end build process for delivering software. In many organizations, for example, the testing effort takes longer than the actual compile of an application. Support for parallel execution and efficient use of server pools are basic capabilities for achieving build acceleration.
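Parallel execution is the simplest form of acceleration: independent tasks run side by side on a worker pool instead of one after another. A minimal sketch, where the task names and sleep calls are illustrative stand-ins for real compile and test steps:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# A sketch of parallel task execution across a worker pool. The task names
# and sleep durations are stand-ins for real compile/test steps.
def build_task(name, seconds):
    time.sleep(seconds)  # stand-in for real work
    return name

def run_parallel(tasks, workers=4):
    """Submit independent tasks to a pool and collect results in order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(build_task, n, s) for n, s in tasks]
        return [f.result() for f in futures]

start = time.time()
results = run_parallel([("module-a", 0.2), ("module-b", 0.2), ("docs", 0.2)])
elapsed = time.time() - start  # roughly 0.2s in parallel vs. 0.6s serially
```

The same pattern extends to a pool of build servers: the framework dispatches tasks to whichever machine is free rather than queueing everything on one box.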
The truth is that within the software development lifecycle, there are many consumers that rely upon the results produced by a build:
Integration teams rely upon compiled applications to ensure interfaces and run-time environments are successful
Functional testing and quality assurance cannot begin without a delivered application
Document writers cannot capture screen snapshots, review on-line or context-sensitive help, or validate documented behavior without a compiled application
Project requirements cannot be validated if the application is not built with the latest changes
Status for outstanding or new defects and failed test cases cannot be communicated until a build is received
You cannot release a software product that has not been built and been through all the necessary internal steps and processes first
By accelerating your overall build process, these stakeholders can get to the activities they need to complete sooner, reducing the overall delivery cycle. More frequent builds resulting from applying build acceleration techniques translate to higher-quality deliverables and a more accurate representation of the status and health of the project application under development.
There are many benefits to achieving build acceleration as outlined in this slide, if teams are willing to do the work and adopt basic build and release best practices that support this objective
Effective and manageable collaboration is one of the largest challenges global software teams face in trying to work together. We believe a centralized approach to metadata storage, process, and access that also supports real-time visibility and status provides the best solution, vs. having teams utilize redundant and unconnected systems that require manual synchronization efforts, with no single interface to understand the true status across all projects.
This slide pretty much speaks for itself
If you feel that your global software teams are operating in a chaotic, unpredictable environment, with missed hand-offs, lots of team members stepping on one another, with no visibility across the different distributed teams, then this is a best practice you should think seriously about adopting!
The build process is the one activity across the software development lifecycle that provides the single best measurement of the true health of a software project, so performing this activity frequently is key for successful delivery of software applications. Code and design reviews may miss issues that are uncovered during the build process. Test cases cannot be performed to exercise and verify areas of an application that are not built. The build tells all. It is important to implement some type of automated continuous integration build solution, and to be sure a top-down build "skeleton" is in place early so all parts of the product are being built during software development, including packaging, installation, and documentation components.
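The core of a continuous integration trigger is simple: detect that the source has changed and fire the full top-down build. The sketch below is a minimal illustration using an in-memory snapshot; a real system would query the SCM for the latest revision instead of hashing file contents itself.

```python
import hashlib

# A sketch of a continuous-integration trigger: rebuild only when the source
# fingerprint changes. The in-memory snapshot is illustrative; a real system
# would ask the SCM for the latest revision instead.
def fingerprint(files):
    digest = hashlib.sha256()
    for name in sorted(files):  # stable ordering -> stable fingerprint
        digest.update(name.encode())
        digest.update(files[name].encode())
    return digest.hexdigest()

def maybe_build(files, last_fp, build):
    """Fire the full top-down build (packaging, install, and docs
    components included) whenever the fingerprint has changed."""
    fp = fingerprint(files)
    if fp != last_fp:
        build()
    return fp

builds = []
snapshot = {"main.c": "int main(void) { return 0; }",
            "pkg/install.sh": "echo install"}
fp1 = maybe_build(snapshot, None, lambda: builds.append("build"))
fp2 = maybe_build(snapshot, fp1, lambda: builds.append("build"))  # unchanged
```

Running this check continuously is what keeps the build "skeleton" exercised every day instead of at the end of the project.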
As noted earlier, there are many consumers within the software development lifecycle that rely upon the results produced by a build. Some sample questions whose answers provide the true status of a project:
Is the application performing as designed?
Is something broken or missing today?
Did a previous fix not get applied properly in this build?
Are performance issues surfacing?
Are all the required install components being included?
Does the documentation match the behavior of the application being described?
There is a direct correlation between a higher frequency of project builds and project success!
Historically, the software handoff from engineering to different staged environments, and even to production/operations teams, has been difficult and error-prone. Typically, engineering teams do not push software assets directly to production environments; deployment in this example describes "promotions" between software stages or phases, such as integration, test, acceptance, and pre-production.
There is a large amount of pain and confusion in many software organizations when it comes time for the engineering/development team to hand off software deliverables. The target environment may be a staged testing environment, an IT production environment, a website, or an application server. Organizations need the ability, for a given software artifact or deliverable in the production environment, to trace back its origins and produce an audit trail that may contain the following information:
Which versions of source code were compiled, when, and by whom
What environment variables were used
The machine(s) used to build and test the application
Who promoted the software asset
When the artifact was installed to its current location
Automation achieved in the build process can extend to deployments as well, to further increase efficiency and reduce manual errors. Incomplete transfers are a leading cause of failed deployments - missing configuration files, libraries, binaries, documentation, etc.
You should be able to take any software asset from the target deployment environment and quickly determine the history of that deliverable: trace back to the original source code versions used, the server build environment, the test activity performed, and who touched or approved it along the way.
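One way to make that traceability automatic is to generate an audit-trail "manifest" next to each deliverable at build time. The sketch below illustrates the idea; all field names and sample values are assumptions for the example, not a standard format.

```python
import json
import os
import platform
import time

# A sketch of recording an audit-trail "manifest" beside each deliverable.
# All field names and sample values are illustrative, not a standard format.
def make_manifest(artifact, source_versions, env_vars):
    return {
        "artifact": artifact,
        "source_versions": source_versions,  # e.g. SCM revision per component
        "environment": env_vars,             # variables used for the build
        "build_host": platform.node(),       # the machine that built it
        "built_by": os.environ.get("USER", "unknown"),
        "built_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }

record = json.dumps(make_manifest(
    "app-1.4.2.tar.gz",
    {"core": "rev-1021", "ui": "rev-988"},
    {"JDK_HOME": "/opt/jdk17"},
))
```

Shipping this record with the artifact means anyone in the target environment can answer "where did this come from?" without hunting through email or tribal memory.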
Software engineering and IT organizations are being measured today on the impact they have to the underlying business. It is therefore important that such software teams are able to provide reports and metrics that show their positive contributions to the overall business, and help management make important decisions about where future investment dollars should go
There has never been a truer statement: "You cannot improve what you cannot measure." Can you measure basic things about your build and release environment?
How many builds a day or week does your team do? What is the success rate?
How long, on average, is an end-to-end build process?
What hardware resources does your project utilize on the network to complete builds?
What are the trends your organization is observing with respect to build and release activity?
Which projects have higher build defect rates?
What is the additional load and activity contributed by new projects added this year?
If you can't report metrics on this basic data, how can your team objectively determine whether it is improving?
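Several of those questions reduce to simple arithmetic over a log of build records. A minimal sketch, assuming a hypothetical record layout with a status and a duration per build:

```python
# A sketch of computing basic build metrics from a history of build records.
# The record layout (status, seconds) is illustrative.
def build_metrics(records):
    """Return build count, success rate, and average end-to-end duration."""
    total = len(records)
    passed = sum(1 for r in records if r["status"] == "pass")
    avg = sum(r["seconds"] for r in records) / total if total else 0.0
    return {"builds": total,
            "success_rate": passed / total if total else 0.0,
            "avg_duration_s": avg}

history = [
    {"status": "pass", "seconds": 300},
    {"status": "fail", "seconds": 120},
    {"status": "pass", "seconds": 280},
    {"status": "pass", "seconds": 260},
]
m = build_metrics(history)  # success_rate 0.75, avg_duration_s 240.0
```

Tracking these numbers week over week is what turns "are we improving?" from a matter of opinion into a trend line.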
At the end of the day, a software team within an IT or development organization needs to be able to show how they positively contribute to the business of the company Many process improvement frameworks such as CMMI, ISO, ITIL, Six Sigma, etc. recommend continual evaluation and improvement from where you are today, and more businesses are adopting such frameworks. Providing a solid foundation for reporting metrics that can be aggregated across the software lifecycle can help teams optimize their process, and provide positive value to the underlying business.
Build and Release Best Practices Anthony Baer, IBM
Discuss common impediments for repeatable, high-performance, high quality builds and releases
Introduce you to a set of fundamental building-block "candidates" to enhance your existing build/release management system
Help you identify where you are today with respect to build and release best practices, where you want to go
Best practices are covered at a high level- getting into the “how” part is beyond the scope of this presentation.
Leveraged data and experiences from customers, the market, and internal subject matter experts to produce this content
Build and Release Management – Joining the Mainstream of Software Development
One of the least structured but most essential processes in the lifecycle
A critical link between what development creates and what ultimately gets delivered to customers
Unlike other development practices, there has been sparse collaboration or published best practices in this area
But awareness has been growing…
"Software build management increasingly impacts successful software deployments, business and IT productivity and is becoming a focus for IT organizations." - IDC
"The inability to produce consistent, accurate, and repeatable software builds creates a significant development bottleneck that makes development teams ill prepared to manage project complexities without adding additional resources." - Hurwitz
"Changing development practices and new compliance requirements have turned a spotlight on a long-neglected development life-cycle activity: build." - Forrester
Hardware consolidation- better utilize the hardware you have via pooling instead of having idle servers
Hardware consolidation reduces costs
Faster time to market, with higher quality
What You Get
6) Centralized access & collaboration for all stakeholders
Facilitate communication, visibility, sharing, and leveraging globally distributed teams without mass chaos
Essential for any team-oriented software projects
Global collaboration, yet still be able to enforce policies and keep people from stepping on each other
What is it:
6) Centralized access & collaboration for all stakeholders
Centralized access and control allows for consistent policies and processes to be practiced among all stakeholders, eliminating silos of information, which contributes to greater team productivity and more secure data.
A one-size-fits-all approach does not work with a diverse set of stakeholders
Need to support role-based access levels and tailored views
Developers, admins, testers, project leads, managers all need to collaborate, yet all have different needs and requirements
Silos of data spread out across different servers and desktops compromise security and policy requirements; there is no way to aggregate data or see the "big picture" view across teams and organizations
Without centralized control, teams cannot easily share or leverage others’ work; communication and handoffs may fail, impacting release schedules and quality
Why It’s Important
6) Centralized access & collaboration for all stakeholders
Greater participation among the software team members without compromising security or productivity
Centralized data is secure and protected, yet made available to teams based on level of access permissions granted
Better visibility, accountability across the software development lifecycle
Eliminate the human bottlenecks in the build and release organization
Self-Service capabilities, real-time data available via web
Easier to log activity from one repository location for compliance and governance objectives
Feedback loops happen faster: less time waiting on manual handoffs and notifications, more time developing, fixing, testing, optimizing
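The role-based access idea above can be sketched very simply: one central store of data, with each role granted only the actions it needs. The role names and permission sets below are illustrative, not drawn from any particular product.

```python
# A sketch of role-based access against one central store; role names and
# permission sets are illustrative, not drawn from any particular product.
PERMISSIONS = {
    "developer": {"view_build", "request_build"},
    "tester":    {"view_build", "view_test_results"},
    "manager":   {"view_build", "view_reports"},
    "admin":     {"view_build", "request_build", "edit_environment"},
}

def can(role, action):
    """Every stakeholder reaches the same central data, but sees and does
    only what their role permits."""
    return action in PERMISSIONS.get(role, set())

tester_ok = can("tester", "view_test_results")   # True
tester_edit = can("tester", "edit_environment")  # False
```

This is what lets a diverse set of stakeholders self-serve against one system without a one-size-fits-all view and without compromising security.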