04 small interventions sepg 2007




Small Organizations, Small Interventions

Viviana Rubinstein, Lic. CC, and Jorge Boria, M. Eng.
SEI Authorized Lead Appraisers, Liveware Inc.
viviana@liveware.com jorge.boria@liveware.com

Abstract: Small organizations have very limited resources. This implies that traditional approaches to SPI will probably sink before they succeed for lack of sustaining funding. This white paper shows a proven approach to institutionalizing a managed behavior and beyond, by effecting small incremental changes that are easy to install individually but that collectively achieve most of the required specific practices at ML2. The presentation addresses a niche audience that usually has great difficulty in finding applicable processes and experiences that match its needs: small organizations, or process engineers working with small organizations, but also organizations that cultivate individual dissonance in opposition to synchronicity or democratic decisions.

Introduction

Small organizations have very limited resources in general. Their competitive advantage lies in doing one thing very well, usually in a niche market. They subsist because they excel in one particular area. The skills that make them survive are linked to engineering practices, while their process and project management skills are relatively weak, since there is not much need for them in their life cycles anyway. However, more and more of them are required by their customers to show proof of a maturity level by means of a formal appraisal.

Small organizations are successful because they have a handle on some niche market. This could be a proprietary algorithm, domain knowledge, technical leadership, or very fast turnaround time for changes. In any case, it is not because their processes are well documented and perfectly executed. Their processes are simple, sound, and based on their small numbers.
They usually know who knows what and who is doing what, and they can change their code at the drop of a hat. However, these relative advantages work against them when they want to adopt a formal process. In truth, small groups have more to learn than large organizations in terms of software process. Small organizations rarely have in-house CMMI-related knowledge, as large organizations often do. Roles for project management professionals, configuration management, technical writers who can develop process manuals, quality control, and quality assurance staff who could perform process audits rarely exist in small organizations. This implies that they have to formalize what has so far been informal, and do it in a way that conforms to the CMMI. Since the CMMI requires many different skills and bodies of knowledge to implement (70 generic practices at the Managed Level, ML2, and 132 at the Defined Level, ML3), and since PP and PMC, MA, PPQA, CM, OPF and OPD, OT, IPM, RSKM and DAR demand skills rarely taught to programmers in universities
and colleges, small organizations are at a disadvantage with regard to adopting the model's practices.

Fichman and Kemerer¹ posit that adoption of a complex technology is subject to knowledge barriers. Knowledge barriers appear when adoption and use of the technology are hindered by the organizational learning effort required to obtain the necessary knowledge and skills. Specifically, they hypothesize that organizations will have a greater propensity to initiate and sustain the assimilation of specific technology changes when they have a greater scale of activities over which learning costs can be spread (learning-related scale), more extensive existing knowledge related to the focal innovation (related knowledge), and a greater diversity of technical knowledge and activities (diversity). Hence, small organizations are challenged to apply the CMMI, which shows all the characteristics of a complex technology: it covers a large set of domains (large learning-related scale), it requires a significant amount of knowledge about the domains at hand (deep related knowledge), and there is large diversity among the domains.

Small organizations that have succeeded in implementing the model to the point of achieving maturity level 2 or 3 of the CMMI have taken one of three tracks. In what follows we describe the problem small organizations face in adopting the model and the three different approaches, and then we describe one of the approaches (minimalism) in greater detail. Finally, we describe one particular implementation of minimalism.
Small Organizations' Problems with Complex Technologies

Fichman and Kemerer conducted experiments that show a correlation between organizational size and the assimilation of software process innovations, but their findings suggest that "the well-established empirical link between innovation and organizational size is more likely a result of other variables that covary with size—such as scale, professionalism, education, and specialization." The problem, then, is not exclusive to the CMMI; it applies to any other technology with similar characteristics. In particular, OO modeling can be one such technology, as experiments have shown. The difficulty lies in the degree to which a technology requires the organization to innovate. It is easy for an organization to innovate if much of the required know-how already exists within the organization, or can be acquired easily or economically. In any small organization, the learning-related scale is probably small too (not much process improvement can be applied to other activities), and it probably lacks the required CMMI expertise. As stated above, the chances of a small organization being specialized are quite high.

The Horrifying Learning Curve

Small groups cannot invest their human resources in acquiring the required knowledge, not even if the cost of the training is zero. Small organizations cannot absorb much knowledge in a domain where they have no previous expertise, because it demands too much effort from their own people: the same resources that can make change happen are usually busy making the product that goes out the door.

¹ Robert G. Fichman and Chris F. Kemerer, "The assimilation of software process innovations: An organizational learning perspective."
Economies of scale work against small organizations, since the nature of the change is knowledge-bound. For example, if 1% of the employee population is required to build and help support a process in any organization, the indivisibility of the learning task makes this 1% of, say, 25 people still one person, or, in effect, 4%. In a group of ten, this rises to 10% of the resources. Similarly, if quality assurance requires 6% of the development team, in a group of 25 this means that two people will have to be dedicated to learning the processes. Moreover, if it takes six months to become proficient in a certain discipline, and that mastery is required for the success of the process improvement program, it still takes six months to learn it whether you are 1 in 10 or 1 in 1000. Hence, the same learning is a marginal cost for a large company and a significant one for a small company. Even if you do not assign the person full time to quality assurance, her learning curve is lengthened by the need to continue performing other tasks, usually directly linked to income. Learning curves, then, take longer for a multi-tasking person with no previous exposure to a discipline.

Even if the ROI can be very impressive, the organization may not survive the transition if the period to the break-even point is too long. For example, Figure 17 below shows the productivity curve for projects undergoing changes through a well-defined process improvement project. Although the final productivity is four times the original, it takes the company 18 months to reach that result. Moreover, productivity turns negative for six months. The company does not reach its original level of productivity (break even) until one full year after the SPI project starts. Undoubtedly, even if the end result is extremely desirable, the company cannot survive the transition.
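The indivisibility arithmetic above can be sketched in a few lines. This is illustrative only: the 1% and 6% staffing rates come from the text, while the helper name `overhead_fraction` is our own invention.

```python
import math

def overhead_fraction(nominal_rate: float, headcount: int) -> float:
    """Effective overhead when a fractional role must still be a whole person.

    nominal_rate: fraction of staff a role nominally requires (e.g. 0.01).
    headcount:    size of the organization.
    """
    # You cannot hire a quarter of a process engineer: round people up.
    people_needed = max(1, math.ceil(nominal_rate * headcount))
    return people_needed / headcount

# The examples from the text:
print(overhead_fraction(0.01, 25))   # 1% of 25 is still one person -> 0.04
print(overhead_fraction(0.01, 10))   # in a group of ten -> 0.1
print(overhead_fraction(0.06, 25))   # QA at 6% of 25 -> two people, 0.08
```

The same nominal rate that is a rounding error in a large company becomes a visible slice of a ten-person shop, which is the point the paragraph makes.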
[Figure 17: Productivity Loss, Repayment Period and Total Costs. A line chart of monthly productivity over 18 months, dipping well below zero before recovering.]
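The repayment dynamic of Figure 17 can be stated precisely: break-even is the first month in which the cumulative productivity delta, relative to the pre-SPI baseline, stops being negative. A minimal sketch, using an invented 18-month series shaped like the curve the text describes (negative for six months, break-even at one year):

```python
from itertools import accumulate

def break_even_month(monthly_deltas):
    """Return the first month (1-based) in which the cumulative
    productivity delta vs. the pre-SPI baseline is non-negative,
    or None if the program never repays itself within the series."""
    for month, total in enumerate(accumulate(monthly_deltas), start=1):
        if total >= 0:
            return month
    return None

# Invented series shaped like the narrative: productivity dips while
# people climb the learning curve, then rises past the old baseline.
deltas = [-5, -5, -4, -3, -2, -1, 0, 1, 2, 4, 6, 8, 10, 12, 14, 15, 15, 15]
print(break_even_month(deltas))  # -> 12, i.e. one full year
```

If the organization's funding runs out before `break_even_month` is reached, the transition fails no matter how attractive the final productivity is.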
Successful Implementations

All this implies that traditional approaches to SPI will probably sink before they succeed for lack of sustaining funding. This argument is not meant to give up on small companies, but rather to take a fresh look at the problem and suggest a different approach. To diffuse an innovation (i.e., the CMMI) in a small organization, the CMMI knowledge has to be acquired more easily, or more economically, or both. There are three ways we have seen small organizations deal with these problems, all of them focused on amortizing the cost of knowledge acquisition. If a company cannot pay for the learning curve as it is, it has to:

1. Share the costs with others,
2. Buy the knowledge as a package, or
3. Modify the learning curve.

The first approach, sharing knowledge acquisition with other similar organizations, is usually sponsored and funded by national governments in the countries where we have seen it. MOPROSOFT in Mexico, SOFTEX in Brazil, and similar programs in India, China, Argentina, and many other countries work on the same principle: find a group of companies that are willing to build a cooperative to acquire knowledge in the market. Usually this takes two forms. One is to buy existing classes that build those skills, where the cost of the class is divided amongst the participants or funded by the governmental initiative. The other is to identify and hire subject matter experts (SMEs) and share them across the participating organizations. This approach has pros and cons.

Pros:
1. There is a flourishing of such experiments going on, which should make this easy to implement;
2. It is easy to imagine and build such cooperatives;
3. It is simple to bring such a structure to a quick start, and since SME sharing breeds improvement quickly,
4. The payoff is easy to detect.

However, there are also the following cons:
1.
We have witnessed significant reticence to share anything amongst participants, making it hard to exploit economies of scale.
2. Small companies are acquired or go bankrupt quite frequently, so the loss of participants midway is common, and it increases costs for the rest.

The second approach is to buy the knowledge packaged into a software product. Referred to ironically as "Maturity Level in a Box" by facetious process consultants, in most cases the software contains the totality of the knowledge that its defined processes demand, often with the support of a workflow engine to enact them. The most expensive packages allow the user to adjust the workflow to their needs. Depending on how difficult it is to tailor the software to the organization (or, sometimes, vice versa), the packages may require support from specialized consultants for their installation. As in the previous case, there are pros and cons to such an acquisition.

Pros:
1. Very fast installation time;
2. It is relatively easy to adopt;
3. The libraries are complete by design, and they are well structured.
Cons:
1. It is too dependent on the support of consultants;
2. It could be too expensive for a small organization to buy;
3. In most cases, the organization has to adjust to the product, not the other way around;
4. The road to growth is decided by the software provider, not the organization.

The third successful approach is the focus of this paper. It is based on breaking down the learning curve, as shown below with an imaginary example of activities. We have called it minimalism, since it is structured to introduce minimal change at each intervention, solving one particular problem every time. In this approach, the skills are not taught in a large, one-time, one-size-fits-all, all-encompassing effort. Instead, just-in-time, on-the-job, just-enough training is provided in a coaching format. Every intervention deals with a particular problem. When the problem is solved, another one replaces it in the focus of the improvement team. These are the reasons for our suggested approach:

1. Small organizations cannot fund large changes. The cost of the transition could prove to be too much, whether the ROI is large or small.
2. Successful small organizations tend to have mastery of technology. Adding technology to support activities is welcomed.
3. Small organizations usually see large returns on people's time. If you free ten minutes of everyone's time every day, the payoff to the bottom line is significant.

Based on these three hypotheses, we have developed and successfully implemented an approach that we now proceed to explain in detail.

Parsimony in Process Improvement

Our approach has the following characteristics:
1. It sells the problem, not the solution;
2. It focuses on freeing resources, not on tying them down;
3. It is based on implementing processes through technological changes (the total opposite of what is recommended for most organizations);
4. Every intervention in process improvement solves a real business problem.
It is important that each intervention be limited to the problem at hand and not to implementing the model. The perception of a process-focused approach is that foreign elements are interfering with productivity by introducing unwanted changes. Solving problems, instead, is everybody's business. It follows that the intervention should respect the organizational culture. As process improvement change agents, we are there to help. We cannot achieve successful change if that change violates or attempts to transform the local culture. If such changes have to take place (and sometimes you cannot avoid them; one cannot jump a ten-foot chasm in two five-foot hops), they should be strategically presented by upper management and require a plan of their own.

So where is the CMMI? The sum total of all interventions implements the model practices, as an end point and not as a starting point. The change agents are knowledgeable about the CMMI practices, and when selecting procedures to solve process problems, they keep the model in mind. The Parsimonious Process is as follows:

1. Identify a high-priority business problem related to some development process
2. Identify actions that solve this problem or avoid its repetition
3. Implement these actions and adjust them to achieve the desired effects
4. Measure the effect of the change
5. Iterate from step 1

This approach has obvious advantages in terms of organizational change. It brings immediate results, which in turn make the next change easier to sell; hence it accelerates change adoption. Since it focuses on projects rather than process, the approach breeds growth of the company. However, it is not without cons. It requires high-end consultants who know how to frame and solve problems, with a strong interpersonal skill set. Such people are not, contrary to popular belief, a dime a dozen. Moreover, these persons have to be aware that the approach is subject to scope creep: "Since we are changing the plan template, let's include a new estimation technique. And since we are introducing new estimation techniques, we could use this to introduce the Value Added Method. And this brings me to a real nice improvement that we can also add…" You get the picture. Another possible downside of minimalism is that the whole organization might lose track of the maturity goal. This, after all, might not be a bad thing, but if the company needs the appraisal publication, it is unacceptable. For this last reason, we have found the approach difficult to sell to management focused on the level.

[Figure 18: Short Interventions with Short Repayment Periods. Monthly productivity over 18 months, with successive small interventions labeled "issue tracking," "config items," "req changes," and "size estimates," each followed by a short repayment period.]

A Case in Point

We will now describe in detail an application of the parsimonious process of our minimalist approach in a small setting. The group was a small, highly specialized software development group within the confines of an international company. The company reinforced independent thinking and rewarded individualism.
There was strong resistance to process, and the people considered the CMMI a bureaucracy-inducing model. However, the projects were experiencing problems that showed the need for changes. These changes were analyzed and prioritized by the process improvement
team. The list, shown below, was approved by the project sponsor and then implemented in many small steps.

1. Control tasks
2. Establish individual assignments
3. Define the artifacts associated with every task
4. Increase the task granularity (i.e., make the artifacts smaller)
5. Introduce simple measurements
6. Define baselines, as a concept and in practice
7. Bring configuration items under change control
8. Add requirements development tasks to the work structure
9. Define quality attributes for requirements
10. Define completion criteria and ensure they are honored
11. Build indicators from the measurements already captured
12. Generate reports based on those indicators
13. Meet regularly to analyze the indicators and act upon the results of the analysis
14. Identify risks and include all action plans in the issue tracking system and, if needed, in plans

Once the list was approved, the team decided that any change should be introduced with a tool that supported it. This decision was based on the particulars of the organizational culture. Changes that were seen as facilitating the evolution of a product were easily accepted, and the culture had no problem with tools. Adopting a tool required a mere explanation of the problem it solved. Once the implementation began, priorities changed, so that in effect the previous list changed. The activities, as they were executed, follow.

1. An issue tracking system (ITS) was introduced, and everything was considered an issue (tasks, risks, action plans, etc.).
2. Each issue required attention. Assignments were introduced through the ITS. All activities were entered in the ITS and all completion was reported through the ITS. This established a monitoring mechanism in its own right.
3. Outputs were linked to issues. We simply defined a field on the ITS issue definition screen to allow input of the associated artifact.
4.
To turn the granularity up, we started by considering tasks longer than a week "too long," then broke them up to hold them to a week or less. As a consequence, the artifacts were broken up too, so that an item was linked to an individual task.
5. We then put the "plan baseline," as defined by the collection of issues, under change management.
6. Consequently, the corresponding items were also put under configuration management.
7. We then added to the process the tasks that create the requirements related to items. Before, these requirements were captured but the activity was not planned.
8. Next, we defined quality attributes for requirements.
9. Using these attributes as a starting point, we defined completion criteria for all of the items in a project. In effect, we introduced verification. (To verify that the items were complete, the team had to test and measure against goals, or inspect or peer review an item.) Initially these were considered activities within the task of creating the items, but very soon they became issues themselves. Furthermore, the activities were identified as a sequence of QA
review, formal technical review, and approval, each with its own responsible person and outcome.
10. By now a large number of items were regularly created and managed. It was then easy to create a structure of folders with privileged access that reflected the promotions an item underwent. A folder was created for each step, so that presence in a folder could be associated with the criteria defined above. These criteria were then strictly enforced.
11. These activities soon gave birth to a set of measurements. Taken by themselves, they did not send any clear message, but combined into indicators the base measures showed the rate of progress and were a great source of information for managers. We generated our weekly and monthly reports from these indicators.
12. The reports were analyzed and acted upon in managers' meetings.
13. These meetings naturally evolved into identifying risks from all of the above. Using our ITS, we created plans to eliminate, trade, mitigate, track, and/or use contingency to manage our risks.
14. Once the projects had adopted these activities and seen their usefulness, we used the macro-activities in the ITS to define the associated processes and saved them separately.
15. For every activity and task with defined responsibilities (and by the time we got to this point, that meant all of them), we created a model of the competencies required to successfully perform the task.
16. We then used this competency model to plan and execute training for the whole team.

By following these individual steps the organization created an increasingly stable environment, with little disruption at each intervention. Every time a step was introduced, the need for it was highlighted by pointing at the as-yet unsolved problems.

Conclusions

Small organizations can avoid traditional approaches to SPI that will sink before they succeed.
By implementing change in small increments that are easy to install individually, an organization can achieve most of the required specific practices at ML2 and beyond. Small organizations, process engineers working with small organizations, and organizations that cultivate individual dissonance (the random paradigm) in opposition to synchronous or open paradigms can benefit from this proven approach, which accelerates change and avoids the typical pitfalls of the monster plan. It has the further advantage that if the consultant concludes that changes to the sequence are needed in order to accelerate change for business reasons, these can be easily implemented. Summing up, this is a very simple, step-wise approach to maturing the processes of very small organizations; it can also be applied to larger organizations that have cultural barriers to change based on their perception of process as bureaucracy.

Austin, March 2007