From Domain to Requirements: Groundwork for Product Lines & An Evaluation of SSM


Software Product Lines are increasingly being perceived as a viable way of manufacturing software and as a software development paradigm. Software Product Lines allow organizations to earn high returns on initial investments, achieve better times to market, cut costs and increase profits. The field of domain engineering becomes particularly interesting when it comes to understanding the right requirements for building a product line. A number of companies today want to adopt the paradigm of developing off of a product line; however, they do not know what they need to build for this product line, or what the right level of modularity is for the basic components, features or units, while ensuring high maintainability and scalability of the product line, so that they remain profitable over time without having to reengineer the product line itself. Many companies have tried and failed at efficiently analyzing the domain they want to build a product line for, largely because of the lack of effective guidelines for understanding the domain from the perspective of building a product line. This article looks at the problems and challenges that exist in domain engineering at the level of domain analysis and design when it comes to building the right kind of product lines. We also survey the various approaches that provide guidance for domain analysis.

This paper lays special emphasis on applying the concepts of the Software Stability Model (SSM) and the related Knowledge Map approach to analysis, architecture and design, and on how they can be applied to domain analysis and design in order to understand the requirements and foster the development of a core infrastructure for a product line. We examine how far SSM goes toward fulfilling the product line promise, what value it brings, and what challenges remain unconquered.


From Domain to Requirements: Groundwork for Product Lines

Shivanshu Singh (sks@cmu.edu)
Introduction

For a very long time, mankind has been trying to optimize the way it uses things, foster better living, and reduce the effort needed to create things that support comfortable living. If we look at how things have progressed since the middle of the 18th century and the industrial revolution, a pattern emerges, especially in the optimization of production processes that allow for more cost effectiveness, better reuse and maintainability over time. Some time around 1750 in England, the factory system of production and manufacturing was adopted, with a switch from craftsman labor based production to mechanized production systems. This allowed for faster production once a base infrastructure was ready, and better profit margins over time [1]. The figure below shows this progression.

Figure 1 - Progression of the industrial revolution, optimizing manufacturing and production from interchangeable parts (1826) to the first assembly line (1901, Olds/Ford) to automated assembly lines (1980s; the first industrial robot at GM in 1961). Images: [2, 3, 4]

With time, the world saw the introduction of the first assembly lines at the Oldsmobile and Ford auto assembly plants. This move helped the company become one of the most successful auto manufacturers. The process eventually transformed into automated assembly lines for producing automobiles, such as the one shown in the figure above, the industrial robot based assembly line at GM in the 1980s.

The idea behind these developments, which promised and delivered better yields and larger profits over time, was this: take time to invest in and build a core infrastructure that can produce components and parts that are common to all configurations of the products you want to deliver, and then, over time, use these interchangeable and interoperable parts to assemble a large number and variety of products. So instead of having to spend time and resources to build and test each individual product, test the individual components and build this common base first, and then save when assembling products, with other mechanisms in place to ensure quality, of course.

The world of software engineering, as always deriving inspiration from more mature and well-understood fields of engineering and science, was quick to recognize this. A thought emerged, initially in defense and military software efforts, that if the same cost saving approach, where a core infrastructure is first built and a number of products along various configurations and customizations are produced off of it, could be applied to software as well, it could revolutionize how software development and maintenance was done. Around the mid nineteen-nineties, a definition of the Software Product Line was formed at the SEI, CMU [5]: "A software product line (SPL) is a set of software-intensive systems that share a common, managed set of features satisfying the specific needs of a particular market segment or mission and that are developed from a common set of core assets in a prescribed way." [5]
The intent was this: to put an end to the woes of software ending up costing much more during maintenance than it cost to build in the first place, let us see whether the same approach of reuse and common-infrastructure-based production out of exchangeable components can be applied to software development as well.

The SPL Promise

It is important to understand the intent and the reasons why this approach is a promising one, and the arguments that put the software product line in a place that calls for serious consideration. The figure below shows a graph of cost over time. One line shows the costs over time without reuse, i.e. without going the software product line way, and the other shows the costs incurred over time with reuse, i.e. when going the software product line way, building with reuse in mind and using a reusable core set of assets to produce software products over time. The non-SPL approach involves costs proportional to the number of products over time. As the number of products increases, the cost also increases, with the cost per product essentially remaining more or less the same.

The software development approach oriented along Software Product Line concepts promises higher yields and lower cost per product over time as the number of products increases, thereby meaning more profits and lower maintenance costs with higher maintainability over time.

Figure 2 - The Software Product Line promise of higher profit, or lower cost per product, over time.

The argument here is that if the upfront investment of building a base of common, managed core assets is acceptable, which may not be and mostly is not the case with other 'regular' methodologies of software development, where each product is essentially developed in a standalone fashion, it can lead to better profit margins later in the game. This argument is based on the understanding that if efforts are concentrated on making this core set of assets such that they are high quality, their dependencies and invariants are well understood beforehand, and these components and their possible combinations are well tested and assured at the beginning, when the complexity involved and the impact of any changes or errors is extremely low, it saves time and money later on when the stakes are higher and the risks are bigger. So even though this approach involves a high upfront time and cost before a single product is shipped, over time, when multiple products built on top of a common set of core assets are released, the time to release per product and the cost involved aggregate into a much higher profit.

The next question that must be asked is how this can be done. What is the way to go about the Software Product Line approach? How can abstract requirements be made sense of and organized into a form that leads towards building this common set of reusable core assets, so that multiple products can be spawned off of them?

In the subsequent sections, we look into how to answer this question. We shall look further into the SPL promise, the various tools and techniques that exist to accomplish it, and the state of affairs in that space. This includes an introduction to Domain and Application Engineering. That is followed by a discussion of recent improvements, a description and analysis of a technique called the Software Stability Model and how it fulfills the SPL promise, and finally a discussion of the challenges that still need to be overcome to draw value out of the existing approaches.

Domain Engineering

The SEI's definition of Software Product Lines has two important points that define the characteristics of Software Product Lines. An SPL provides a way to design and implement several closely related systems together, which share (a) a common, managed set of features, and (b) a common set of core assets. In order to find what this common set of features and core assets is, the whole requirements analysis and development effort is divided into two main umbrellas of activities.
The first is Domain Engineering and the second, Application Engineering. Domain Engineering, in short, is concerned mainly with the process of identifying, scoping and developing (along with the means of managing and representing) the common set of features and core assets, the relationships and dependencies amongst them, the constraints and invariants, and so on [6]. The Application Engineering part is what reaps the benefits of the work done during Domain Engineering, and is largely concerned with the production of context specific and customized products off of the common features and core assets developed in the domain engineering phase. The figure here shows an overview of this process.
Figure 3 - The Domain and Application Engineering process overview

The process of converting an abstract, semi-formed understanding of the requirements for the family of products into a well defined system of core assets and commonalities starts with a big cloud of domain knowledge and the application of domain analysis techniques to it. This part involves consulting current practices and user manuals, conducting interviews, talking to existing experts in the domain, and so on. The end result of domain analysis is a set of artifacts that collectively define the domain.

This process is followed by the Domain Design and Implementation phases. Domain Design is concerned with developing the core common set of architectures and designs for the family of products that belong to this domain. The Domain Implementation part deals with actually constructing and implementing the previously identified and described reusable assets: the reusable components, domain-specific languages, generators, and a reuse infrastructure [7].

This whole process is closely mirrored on the Application Engineering end. The Application Engineering part also has three stages: Application Analysis, Design and Implementation. The Analysis part consists of identifying which reusable parts from the domain engineering phase will be used to construct the desired application in the given context. Any custom requirements that go beyond what already exists in the core infrastructure developed during the domain engineering phase are also identified and defined. The application design is where the domain design elements are reused, along with any custom parts that may be needed, to come up with the architecture and the design of the application being built. This is followed by the application implementation phase, which involves putting together the identified reusable components developed in the domain implementation phase, customized to the application's operating context, as well as implementing any required glue code and other custom application specific features and components.

Because most of the components were designed and developed once during the domain engineering phase, over time no new effort is needed to implement that core functionality when developing an application; the artifacts are simply reused. When this happens for a large number of applications over time, the benefit of avoiding the rework of reinventing the same functionality over and over again takes a bigger form and starts to be seen as higher profit margins and more cost effectiveness.

The end result of this entire exercise, as shown in the figure above, is of course a final product or family of products / applications.

Now that we understand the big picture and where the different parts of the process come in, let us take a closer look at each of them and the tools and techniques that exist in those spaces.

Domain Analysis

Coming back to the discussion of domain analysis, this is the phase where the first steps take place towards moving from a cloud of abstract, semi-complete and semi-formal understanding of a domain and the requirements for an infrastructure of reusable components, to a more concrete one. It is the activity that identifies and describes the commonalities and variability within a domain [6, 7]. The process is largely a bottom-up one. Existing forms of knowledge in the domain are studied.
Sources include existing systems in the domain, user manuals, interviews, domain experts, system handbooks, textbooks, prototyping, experiments, general knowledge and observations, etc. The commonalities are then abstracted out from these and the differences noted. This way, a map of the commonalities along with dependencies, hierarchies, variants and invariants is identified. This forms the base understanding of what the products in this domain would largely look like, with the common features they would have and other options that may exist. There are a number of analysis approaches [8]; however, some are of particular interest, and we shall look more into those in the remaining part of this section.
The end result of this exercise is a domain model, or a set of domain models, the collective view of which defines the domain as a whole. A domain model is an explicit representation of the common and the variable properties of the systems in a domain and the dependencies between the variable properties. The domain model consists of a number of views, each of which describes the domain from one angle. It generally comprises:

(a) A Domain Dictionary: This defines the terms and concepts in the domain model. This vocabulary helps in building a common language for communicating and representing the knowledge in the domain.

(b) The Context: The description of the boundary of the domain and its relation to other (close) domains. Context is important because it is the guiding line as to what to include and not include when building the common core infrastructure for a product line. If this boundary is not well understood, any product line effort can quickly spiral out of control [9], as it is nearly impossible to include everything under the sun in the common set of assets. If that is tried, it calls for an extremely large upfront investment which may not be able to deliver on the SPL promise of cost effectiveness; the time taken to realize the benefits may in that case end up being extremely long [10, 11].

(c) Concept Model: This consists of meta models that further describe the concepts that exist in the domain, expressed in some appropriate modeling format (e.g. UML diagrams), and builds on top of the views of the domain provided by the domain dictionary and the context.

(d) Feature Model: The feature model is a hierarchical decomposition of the features in the domain core infrastructure.
This is where the commonalities and variants take the shape of a formal or semi-formal representation that describes the dependencies, containment, invariants, exclusions and other such relations amongst the various features in the domain.

Let us now take a look at some of the feature modeling approaches and techniques that exist.

FODA - Feature Oriented Domain Analysis

FODA is a technique for representing the feature model of a domain using UML-like constructs. It is a semi-formal, diagrammatic means of describing the feature model of a domain. The figure below shows a simple example of a feature model for an e-commerce website. FODA was first introduced in 1990 at the SEI [12].

Figure 4 - FODA example for an e-commerce website [12]

In the FODA modeling technique, the various features of a system are represented as rectangular boxes in a hierarchical format. The features at the top contain the features below them, and each level is a further breakdown of the feature above it. The features marked with a solid circle are mandatory when choosing a particular set of features to include in an application built on the core set of features described in the feature model. The ones with hollow circles are optional features. A hollow arc between two sub-features means that exactly one of the available options must be selected, but not all; it is an XOR relationship. A solid arc indicates that at least one of the available features must be included.

FODA is a rather simple way to represent any domain's feature model and was one of the first highly useful ones to be introduced. It (in slightly modified versions) still happens to be one of the most popular means of feature model representation in use.

The figure below shows another example of the FODA technique in action. It is an excerpt of the feature model of Berkeley DB, a popular NoSQL database.
Figure 5 - Excerpt from the FODA feature model for Berkeley DB

In addition to the basic capabilities of feature models to describe the relationships between features, such as parent-child and hierarchical decomposition as well as or, xor, etc., it is also possible to indicate excludes and includes relations. These are: A requires B (the selection of A implies the selection of B), and A excludes B (A and B cannot be part of the same product).

While the FODA feature model was a great tool and a powerful semi-formal way of representing the features in a domain, there was much left to be desired. There was a need to go further and have more capabilities for representing additional information. Over time, several improvements were suggested. One was to allow the specification of cardinality in feature models, similar to what is seen in class diagrams, ER diagrams and UML notations, e.g. [0..n], [n..m], [0..4], [0..1] and so on [13]. Specifying multiplicities when saying that one feature may require at least one and up to n of some other feature gives the additional capability to be more specific. This can also help in more accurately modeling quality requirements, for example how many request handlers a particular system may need if a feature called "record user activity", for instance, is selected.

Another suggested improvement was to extend feature models with standardized attributes. The suggestion here was that if features could also have extra, non-functional properties attached to them in the form of <name: value; domain> pairs, it might be very powerful for modeling and representing (and possibly reasoning about) quality attributes, non-functional properties and the associated constraints. This way it would be possible to have measurement related information attached to the feature models. What is measurable, and when and what to measure, is a separate discussion, but the approach is still a good improvement when it comes to non-functional requirement specification and representation for the feature model of a domain.

The FODA technique and the approach to feature modeling that we have seen so far is a semi-formal one, consisting of building meta models in a UML-like, graphical and diagrammatic representation. Some researchers, however, in an attempt to make the technique more scalable, and going by the argument that pictures and diagrams may not scale spatially while textual representations may do a better job when the feature model is highly complex, called for moving towards a more formal representation of the features [14]. The intent here was also to automate the reasoning process. A formal representation of the feature model could allow for machine based automated reasoning about the various combinations that are possible. This goes towards the whole discussion of programming with requirements, and thus using a formal methods based approach when doing so.
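As a rough illustration of such a formal representation, the sketch below encodes a fragment of an e-commerce feature model as propositional constraints and enumerates the valid configurations by brute force. The feature names and constraints are invented for illustration; a real tool would hand the constraints to a SAT or constraint solver rather than enumerate.

```python
from itertools import product

# Hypothetical feature set for an e-commerce storefront.
FEATURES = ["catalog", "payment", "card", "invoice", "wishlist"]

def is_valid(cfg):
    """Check one configuration (dict feature -> bool) against the model."""
    # Mandatory features: every product includes catalog and payment.
    if not cfg["catalog"] or not cfg["payment"]:
        return False
    # XOR group under payment: exactly one of card / invoice.
    if cfg["card"] + cfg["invoice"] != 1:
        return False
    # 'requires' constraint (hypothetical): wishlist requires card payment.
    if cfg["wishlist"] and not cfg["card"]:
        return False
    return True

def valid_configurations():
    """Enumerate every valid product of this tiny feature model."""
    for values in product([False, True], repeat=len(FEATURES)):
        cfg = dict(zip(FEATURES, values))
        if is_valid(cfg):
            yield cfg

if __name__ == "__main__":
    for cfg in valid_configurations():
        print(sorted(f for f, on in cfg.items() if on))
```

Even this toy model shows the payoff of a machine-checkable representation: out of 32 raw combinations, only 3 satisfy the mandatory, XOR and requires constraints, and that check is automated rather than read off a diagram.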
Possible benefits of this improvement, or rather transformation, include a better ability to automate quality assurance and dependency / compatibility assurance for a given combination of features employed for a given set of application requirements.

CODA - Context Oriented Domain Analysis

The regular techniques for domain analysis targeted systems that were pre-understood and static in nature, at least in terms of which features would make up a system at a given point in time. Techniques like FODA did little, however, to help in feature modeling self-adaptive and dynamically changing systems. The class of systems in this space are those which react to changes in the environment and context of operation, and learn and adapt, for instance by adding or removing features.

The need to feature model such systems was addressed by a modification and improvement over FODA, called CODA, or Context Oriented Domain Analysis [15]. The figure below shows an example of a CODA feature model.

Figure 6 - CODA example and the capabilities that it introduces.

CODA is essentially FODA for context aware and adaptive systems. The focus here is more on behavioral feature variations than on hierarchical decomposition of possible features and their variability.

The figure above shows how CODA works. It introduces some new capabilities and notations that help especially when it comes to variations in behavior. There are two new ways of depicting dependencies: a solid arrow from A to B indicates that A includes B; the context and the condition can then be specified along with this dependency. The other notation is a dotted arrow, which, if from A to B, means that B is conditionally depended upon (not included) by A, given some specified condition.

A notion of priority has also been introduced. A number of possible sub-features can be numbered according to their priority of consideration. For example, as shown in the figure above, the sub-features Ignore, Redirect and AnswerMachine have priorities 1, 2 and 3 respectively. At most one of the three can be active at any given point in time, and the conditions to be met for the inclusion of each sub-feature are also given. The selection first checks whether the highest priority option's condition is met; only if it is not does consideration move to the item with the next highest priority. It is quite possible, in the case above, that none of the features is selected, if none of their conditions is satisfied at the time of feature selection.

CODA brings a number of new capabilities for representing behavioral variation and for representing self-adaptive, dynamically changing systems.

Others

There are other techniques available beyond those discussed so far. Some examples include FORM (Feature Oriented Reuse Method), which extends FODA beyond just analysis [16]. Another is FeatuRSEB (Feature Reuse-Driven Software Engineering Business) [17], which integrates the feature modeling of Feature-Oriented Domain Analysis (FODA) into the processes and work products of the Reuse-Driven Software Engineering Business (RSEB). These have not been included as part of the discussion here; however, the provided references may serve the reader well.

Domain Specific Languages

So far, the discussion of techniques for domain analysis and feature modeling has been general in nature. All these techniques focus on providing a general tool set and methodology that is not specific to any given domain and could be used in a number of places. There is, however, a different class of techniques that aim at providing the infrastructure to perform analysis, modeling and specification in specific domains. Domain Specific Languages [18], as they are called, move away from general methods to specialized methods custom made for a target domain. Domain Specific Languages go beyond just selecting optional features and allow for a much richer specification of the system and the requirements. They allow repetitions, ordering, conditions and constraints, etc. The idea here is that of 'programming' using a (formal) representation of requirements, with control structures like those in languages like C, and then generating the system from them based on the conditions specified as control structures, conditional guards, etc. [19].
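As a toy illustration of this style (an invented internal DSL, not any published tool), requirements can be written as guarded rules, and a concrete system description generated from an application context:

```python
# A tiny, hypothetical requirements 'program': each rule pairs a guard on
# the application context with the components it pulls into the product.
RULES = [
    (lambda ctx: True, ["core_engine"]),                    # unconditional core asset
    (lambda ctx: ctx["users"] > 1000, ["load_balancer"]),   # scale requirement
    (lambda ctx: ctx["region"] == "EU", ["gdpr_module"]),   # regulatory requirement
    (lambda ctx: ctx["storage"] == "nosql", ["nosql_adapter"]),
]

def generate(ctx):
    """Evaluate the guards against a context and emit the component list."""
    components = []
    for guard, parts in RULES:
        if guard(ctx):
            components.extend(parts)
    return components

if __name__ == "__main__":
    # Generating two different products from the same requirements program.
    print(generate({"users": 5000, "region": "EU", "storage": "sql"}))
    print(generate({"users": 10, "region": "US", "storage": "nosql"}))
```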
The generation can thus be controlled by certain parameters and the context of the application in question. Going more domain specific and formal has other benefits as well. A much richer and more formal specification enables better reasoning and assurance about the variations that are possible. This, however, comes with caveats, the most obvious of which is that a DSL is very specific: if there is a need to do multi-domain development, the DSL is not going to be of much help. This may even mean a lot of rework from the ground up, and situations where the tools and techniques limit the possible applications.

Domain Design and Implementation

Domain Design comes into play after the domain analysis has been done, the domain has been well understood, the requirements for the core infrastructure and possible variations laid out, and a plan for transitioning those into a design is in place. The Domain Design portion involves developing architectures that are representative of the family of products that will come out of the core infrastructure developed later. An important thing to note here is the kind of architecture to develop at this point. Generic and highly flexible architectures can be developed so that they can later be customized for the application being built. The difference between the two is that a generic architecture is a system architecture which generally has a fixed topology but supports component plug-and-play relative to a fixed, or perhaps somewhat variable, set of interfaces. A highly flexible architecture, on the other hand, is an architecture which supports structural variation in its topology, i.e. it can be configured to yield a particular generic architecture. The notion of a highly flexible architecture is necessary since a generic architecture might not be able to capture the structural variability in a domain of highly diverse systems [7].

Domain Implementation, at last, takes care of the implementation of the artifacts identified and designed during the domain analysis and design phases. This is where the core common set of assets and variations for a product line, and the base core infrastructure thereof, is actually built.
The components or parts thus developed can later be used in application engineering to quickly put together a product for delivery.

Application Engineering

The last piece of the puzzle is the application engineering phase. This is where the benefits and profits of the upfront work done and investments made during the Domain Engineering phase are realized. Since reusable, pluggable components already exist from the domain implementation phase, they can quickly be put together to form the majority of the application or product, along with other custom work as needed, to quickly deliver a product.
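Conceptually (with invented component names; this is a sketch, not any cited tool), assembling an application from the core asset base plus application specific glue might look like:

```python
# Hypothetical reusable core assets produced during domain implementation.
CORE_ASSETS = {
    "auth": "shared authentication component",
    "billing": "shared billing component",
    "reporting": "shared reporting component",
}

def assemble_product(name, reused, custom):
    """Build a product description from reused core assets plus custom parts.

    'reused' names must exist in the core asset base; 'custom' parts are the
    application specific glue written during application engineering.
    """
    missing = [r for r in reused if r not in CORE_ASSETS]
    if missing:
        raise ValueError(f"not in the core asset base: {missing}")
    return {"product": name, "reused": list(reused), "custom": list(custom)}

if __name__ == "__main__":
    # Most of the product comes from the core; only the frontend is custom.
    print(assemble_product("web-store", ["auth", "billing"], ["store-frontend"]))
```

The check against the asset base mirrors the application analysis step: anything not already in the core infrastructure is flagged and must be identified as a custom requirement.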
Figure 7 - Time to integrate COTS per product over time, and time to market per product

The graphs above are an estimate of how the time to build a product decreases over time (left) and the time to market per product also decreases over time (right) when following the reuse, or SPL based, approach. This is because most of the implementation of core functionality has already been done in the domain implementation phase, and later, when the application is being produced, the already constructed parts can be assembled and 'glued' together to create a deliverable instance of the product.

Now that all the parts of the Domain and Application Engineering process have been discussed, along with the process of building the infrastructure for a Software Product Line, the rest of this report focuses on studying the Software Stability Model approach to software development. A critical analysis of the method, with a focus on how it fulfills the SPL vision, is also presented, along with a discussion of the challenges that still exist.

Software Stability Model

The Software Stability Model (SSM) [20, 21, 22] is a new way of designing and developing software that does not follow the old school methods of software development. Instead of the application specific development approach followed by the waterfall and related development models [23, 24], the software stability model focuses on the overall picture and takes a holistic approach to developing software, designing with the ultimate goals of the software in mind rather than concentrating on the specific application. The focus is on the longevity of the software and a generic development that can be reused in any related context, thereby reducing the maintenance costs, which traditionally have accounted for more than 80% of the total cost of ownership of any enterprise class software.
SSM draws concepts from goal driven requirements modeling [25, 26] and pattern languages [27, 28]. Software stability modeling is a modeling technique that uses the concepts of 'Enduring Business Themes' (EBTs), 'Business Objects' (BOs), and 'Industrial Objects' (IOs). These are classes that are differentiated into three core layers:

Figure 8 - Software Stability Model

Enduring Business Themes (EBTs): EBTs [20, 21, 22] sit at the very core of the system and form the first layer. They represent the core knowledge of the system. They are also referred to as Stable Analysis Patterns, as they capture the actual requirements of the system. They are similar to what Goals are in a goal oriented modeling technique, but are represented as concepts rather than tangible elements. EBTs remain stable over time, i.e. they do not change if the system has to be implemented in another application scenario. From a business point of view, this means that since EBTs are the ultimate business goals, the business goals stay constant even as the way we offer the business changes over time.

Business Objects (BOs): BOs form the second layer of the system [21, 22]. These are the concrete classes of the system and are generally referred to as the 'workhorses' of the system under consideration. They are the enablers that realize the EBTs, contain the core design of the system or engine, and are likewise stable over time. They are also referred to as Stable Design Patterns.

Industrial Objects (IOs): The outermost, third layer contains the IOs [20, 21]. These represent the external applications that can be hooked to the pattern; they usually form the leaves of the system and can be replaced without affecting it. The BOs are customized and implemented through the IOs, thereby enabling the system to be customized and extended with additional functions without having to change its core. The figure above shows how the three layers stack up in SSM based software.
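The three-layer stack can be pictured in code. The following is a minimal, hypothetical sketch (the names Shipping, AnyCarrier, TruckFleet and DroneService are all invented for illustration and do not come from the SSM literature): the EBT is a stable abstract goal, the BO realizes it, and the IOs are replaceable leaves.

```python
from abc import ABC, abstractmethod

# EBT (first layer): an enduring, conceptual business goal.
class Shipping(ABC):
    """Enduring theme: goods must reach the customer."""
    @abstractmethod
    def fulfill(self, order): ...

# BO (second layer): the stable "workhorse" that realizes the EBT.
class AnyCarrier(Shipping):
    def __init__(self, transport):   # transport is the IO hook point
        self.transport = transport
    def fulfill(self, order):
        return self.transport.deliver(order)

# IOs (third layer): application-specific, replaceable leaves.
class TruckFleet:
    def deliver(self, order):
        return f"{order} delivered by truck"

class DroneService:
    def deliver(self, order):
        return f"{order} delivered by drone"

# Swapping the IO customizes the system without touching EBT or BO.
print(AnyCarrier(TruckFleet()).fulfill("order-1"))
print(AnyCarrier(DroneService()).fulfill("order-1"))
```

The point of the sketch is that the EBT and BO never change between applications; only the IO leaves are exchanged.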
A System of Patterns

All elements (EBTs and BOs) are documented according to a standard pattern template that consists of parts such as: Name, Known As, Context, Related Patterns and Conflicts, Structure, and Usage Scenarios. Each concept is thus not documented in isolation; the documentation consists of information and meta-information about the concept and the grand scheme of things into which it fits, with respect to its relationships and interdependencies with other patterns. The figure below shows such an example. The concept being documented here is AnyCorrectiveAction [29].

Figure 9 - AnyCorrectiveAction Stable Design Pattern [29]

Architectural Pattern / Knowledge Map

Two or more EBTs form an Architectural Pattern or a Knowledge Map [30, 31]. The Knowledge Map is the encapsulating entity that holds the core knowledge of the system, from the requirements to the design. The KM is the basis for the deployment of any system in an application specific scenario; any given pattern in the Knowledge Map may or may not be deployed, depending entirely upon the application being implemented.

When compared to the Domain Engineering process, the formulation and identification of EBTs and BOs as a whole and the construction of the Knowledge Map essentially complete the process of domain analysis and design. The EBTs and BOs are the common core assets, the components that can be reused and combined later, during the application engineering phase, to generate applications. The KM captures the structure, relations, dependencies and conflicts among the core elements, i.e. the EBTs and the BOs, and can be looked at as a Pattern Language of EBTs and BOs, based on which a system's blueprint can be described.

The Knowledge Map is what enables the generation of core architectures for the products produced off of the common set of assets. It allows for the generation of highly flexible architectures that are componentized at the skeleton. The figure below illustrates this.

Figure 10 - Knowledge Map enabling highly flexible base architectures that are componentized at the skeleton

So every EBT and its associated n number of BOs can be a component in itself for the base architecture, and multiple such sets can be clubbed together depending on the feature requirements of any specific application instance. The core architecture produced here is thus generic within the context of a set of an EBT and its related BOs, but also highly flexible, since these sets are modular in nature.

SSM answering the SPL question

The Industrial Objects are the application parts in any system developed along the lines of SSM.
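The composition idea behind the Knowledge Map can be sketched minimally. All names below (the EBT keys, the 'Any...' BO names, and the helper function) are hypothetical; the sketch only illustrates how sets of one EBT plus its related BOs might be selected from a catalog to form a componentized base architecture.

```python
# Hypothetical sketch: a Knowledge Map as a catalog mapping each EBT
# to its related BOs, from which a base architecture is assembled.
knowledge_map = {
    "Shipping":    ["AnyCarrier", "AnyRoute"],
    "Payment":     ["AnyAccount", "AnyTransaction"],
    "Negotiation": ["AnyParty", "AnyAgreement"],
}

def base_architecture(required_ebts):
    """Select only the EBT/BO sets a given product instance needs."""
    return {ebt: knowledge_map[ebt] for ebt in required_ebts}

# A product that needs shipping and payment, but not negotiation:
arch = base_architecture(["Shipping", "Payment"])
print(arch)
```

The design point this illustrates is the modularity claim above: each EBT plus its BOs is one pluggable component, so different subsets of the catalog yield different but structurally consistent base architectures.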
Whenever a new application is to be built, the process starts with choosing the appropriate architecture consisting of EBTs and corresponding BOs and integrating them together; this step corresponds to the reuse of artifacts built in the domain engineering phase. To this base, IOs are then hooked up. The IOs are the application specific parts, and that is how the generic base architecture is customized for the application specific context. The final result is a product built from the common set of features developed earlier, combined together, with application specific customizations added that make the system applicable in the given context of operation. Therefore, when answering the SPL question, i.e. how far SSM goes toward fulfilling the SPL based building process, it is safe to say that conceptually it does; the figure below illustrates this.

Figure 11 - SSM answering the SPL question

The idea here is that the whole process of starting out with existing knowledge about the domain (usage scenarios, existing systems, policies, and so on), identifying, documenting and building EBTs and related BOs, and then organizing the structure, dependencies and the rest in a Knowledge Map, essentially takes care of the Domain Engineering phases of Domain Analysis, Design and Implementation. The remaining activities, namely identifying the application context specific requirements for a target application instance or product, and using the KM to put together a base system while hooking up IOs to it, essentially constitute the Application Engineering part, eventually resulting in a finished, deliverable product. This process (the application engineering part, that is) can thus be repeated to produce a number of related application instances based on the common assets and knowledge represented in the KM, EBTs and BOs. In this way the SSM approach answers the SPL question.
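As a small, self-contained sketch (every name here is hypothetical), the repeatable application engineering step can be pictured as one function: reuse the common assets to assemble a base, then bind the application specific IOs for that product instance.

```python
# Hypothetical common assets produced by domain engineering:
# each EBT maps to its related, reusable BOs.
common_assets = {"Shipping": ["AnyCarrier"], "Payment": ["AnyAccount"]}

def derive_product(name, ebts, ios):
    """One run of application engineering: base assembly + IO hookup."""
    base = {ebt: common_assets[ebt] for ebt in ebts}    # reuse common assets
    return {"product": name, "base": base, "ios": ios}  # customized instance

# Repeating the step yields related products from the same assets:
store = derive_product("WebStore", ["Shipping", "Payment"], ["PayPalAdapter"])
kiosk = derive_product("Kiosk", ["Payment"], ["CardReaderDriver"])
print(store)
print(kiosk)
```

Each call is cheap because the expensive part, building `common_assets`, was done once up front; this is the repeatability that underpins the SPL economics discussed earlier.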
Challenges and Conclusions

Even though the Software Stability Model presents a complete end to end solution for building for reuse and fulfills the SPL promise conceptually, a number of challenges still remain unsolved.

The first and foremost is the question of the validity of the approach. In all of the research papers and other data gathered, there was no proof of the approach having been used in a real world situation. Sample proof of concept implementations of the process exist, along with other examples that are no more than toy implementations, but beyond that there is nothing very credible that proves the approach is worthwhile or easily adoptable. Most of the literature on the matter is also still in the research phase, and more real world demonstrations of application need to be done to build confidence in the approach.

The second question is that of having the right level of granularity in design and representation. The highly componentized nature of Knowledge Maps, all the way down to EBTs and BOs, may prove highly beneficial in some situations, but the same fine grained nature may quickly become a big overhead if the individual components are not of the 'right' size. This closely relates to the 'chatty services' anti-pattern [32] popular in the SOA world, where the business value of very small, fine grained services is lost because the communication overhead is too large.

The third issue is reconciliation with standardized names and conventions. SSM follows its own naming conventions and definitions, which helps in some ways, e.g. in more efficient organization, representation and tooling that allows for automation, but this may be a hurdle for adoption elsewhere.

Beyond these, some issues exist in the industry as a whole which need to be overcome to make this approach more popular. Questions of what constitutes complete domain knowledge, where to draw the line, and when the analysis is complete are still open. Also, a good amount of upfront investment of resources is needed to follow the SPL approach, and in a world that goes for incremental and immediate benefits, there is a need to find ways in which such incremental and quicker benefits can be realized. The solution perhaps is to educate practitioners about the benefits that come later from the upfront investments; in any case this needs to be clearly understood and communicated to give SPL a fighting chance.
In conclusion, SPL and the associated domain engineering techniques have come a long way since they were first envisioned. A number of improvements have been made, with a good amount still in the works. Some challenges still exist in this landscape, though, which need to be overcome to allow the approach and related technologies a fair chance of making an impact.

References

[1] Thomson, Ross (1989). The Path to Mechanized Shoe Production in the United States. Chapel Hill and London: The University of North Carolina Press.
[2] Available Online: line1913.jpg/440px-A-line1913.jpg
[3] Available Online:
[4] Available Online:
[5] "Software Product Lines", [Online] Available at: Carnegie Mellon Software Engineering Institute Web Site, Accessed: Dec 01, 2012.
[6] Krzysztof Czarnecki, Ulrich W. Eisenecker, Robert Gluck, David Vandevoorde, and Todd L. Veldhuizen. 1998. Generative Programming and Active Libraries. In Selected Papers from the International Seminar on Generic Programming, Mehdi Jazayeri, R. Loos, and David R. Musser (Eds.). Springer-Verlag, London, UK, 25-39.
[7] Krzysztof Czarnecki (October 1998). Generative Programming: Principles and Techniques of Software Engineering Based on Automated Configuration and Fragment-Based Component Models. PhD Thesis, Department of Computer Science and Automation, Technical University of Ilmenau.
[8] Liana Barachisio Lisboa, Vinicius Cardoso Garcia, Daniel Lucrédio, Eduardo Santana de Almeida, Silvio Romero de Lemos Meira, and Renata Pontin de Mattos Fortes. 'A systematic review of domain analysis tools', Information and Software Technology 52 (2010), 1-13, doi:10.1016/j.infsof.2009.05.001.
[9] Antony Tang, Wim Couwenberg, Erik Scheppink, Niels Aan de Burgh, Sybren Deelstra, and Hans van Vliet. 2010. SPL migration tensions: an industry experience. In Proceedings of the 2010 Workshop on Knowledge-Oriented Product Line Engineering (KOPLE 10). ACM, New York, NY, USA, Article 3, 6 pages. DOI=10.1145/1964138.1964141.
[10] Charles W. Krueger, Dale Churchett, and Ross Buhrdorf. 'HomeAway's Transition to Software Product Line Practice: Engineering and Business Results in 60 Days', Proceedings of the 12th International Software Product Line Conference, 2008, pp. 297-306.
[11] Christoph Elsner, Daniel Lohmann, and Wolfgang Schröder-Preikschat. 2011. An infrastructure for composing build systems of software product lines. In Proceedings of the 15th International Software Product Line Conference, Volume 2 (SPLC 11), Ina Schaefer, Isabel John, and Klaus Schmid (Eds.). ACM, New York, NY, USA, Article 18, 8 pages. DOI=10.1145/2019136.2019157.
[12] Kang, K.C., Cohen, S.G., Hess, J.A., Novak, W.E., and Peterson, A.S., "Feature-oriented domain analysis (FODA) feasibility study", Technical Report CMU/SEI-90-TR-021, SEI, Carnegie Mellon University, November 1990.
[13] Czarnecki, K., Helsen, S., and Eisenecker, U., "Staged configuration using feature models", Proceedings of the Third International Conference on Software Product Lines (SPLC 04), volume 3154 of Lecture Notes in Computer Science. Springer Berlin/Heidelberg, August 2004.
[14] D. Benavides, P. Trinidad, and A. Ruiz-Cortés. "Automated Reasoning on Feature Models". 17th Conference on Advanced Information Systems Engineering (CAiSE05). Porto, Portugal. 2005.
[15] Brecht Desmet, Jorge Vallejos, Pascal Costanza, Wolfgang De Meuter, and Theo D'Hondt. 2007. Context-oriented domain analysis. In Proceedings of the 6th international and interdisciplinary conference on Modeling and using context (CONTEXT07), Boicho Kokinov, Daniel C. Richardson, Thomas R. Roth-Berghofer, and Laure Vieu (Eds.). Springer-Verlag, Berlin, Heidelberg, 178-191.
[16] Kyo C. Kang, Sajoong Kim, Jaejoon Lee, Kijoo Kim, Gerard Jounghyun Kim, and Euiseob Shin. FORM: A feature-oriented reuse method with domain-specific reference architectures. Annals of Software Engineering, 1998, vol. 5, pp. 143-168.
[17] Griss, Martin L., Favaro, John, and Alessandro, Massimo D. Integrating feature modeling with the RSEB. Proceedings, Fifth International Conference on Software Reuse, pp. 76-85.
[18] Markus Völter: DSLs for Product Lines: Approaches, Tools, Experiences. SPLC 2011: 353.
[19] Markus Völter, Eelco Visser: Product Line Engineering Using Domain-Specific Languages. SPLC 2011: 70-79.
[20] Fayad, M. E., & Altman, A. (2001, September). Introduction to Software Stability. Communications of the ACM, 44 (9).
[21] Fayad, M. E. (2002, January). Accomplishing Software Stability. Communications of the ACM, 45 (1).
[22] Fayad, M. E. (2002, March). How to Deal with Software Stability. Communications of the ACM, 45 (3).
[23] Boehm, B. (1986). A spiral model of software development and enhancement. SIGSOFT Software Engineering Notes, 11 (4), 14-24.
[24] Weisert, C. (2003, February 8). Waterfall methodology: there's no such thing! Retrieved February 2010, from http://www.
[25] A. van Lamsweerde. Goal-Oriented Requirements Engineering: A Guided Tour. 5th IEEE International Symposium on Requirements Engineering, Toronto, August 2001, pp. 249-263.
[26] A. van Lamsweerde. Goal-Oriented Requirements Engineering: A Roundtrip from Research to Practice. Proceedings of RE'04, 12th IEEE Joint International Requirements Engineering Conference, Kyoto, Sept. 2004, pp. 4-8.
[27] Alexander, C. (1977). A Pattern Language: Towns, Buildings, Construction. USA: Oxford University Press. ISBN 978-0-19-501919-3.
[28] "Anatomy of a Pattern Language Parts and links." [Available Online] Accessed on Mar 12, 2013 from
[29] Shivanshu K. Singh and M. E. Fayad. 'The AnyCorrectiveAction Stable Design Pattern', in Proceedings of the 17th Conference on Pattern Languages of Programs (PLoP) 2010, in conjunction with SPLASH 2010, Reno/Tahoe, Nevada, USA, October 16-18, 2010.
[30] Sanchez, H. A. (2006). Building Systems Using Patterns: Creating Knowledge Maps. Masters Thesis, San Jose State University, Department of Computer Engineering, San Jose.
[31] Fayad, M. E., Sanchez, H. A., & Singh, S. K. (2010). Knowledge Maps - Fundamentally Modular Approach to Software Architecture, Design, Development and Deployment. 19th Annual Conference on Software Engineering and Data Engineering (pp. 127-133). San Francisco: International Society for Computers and Their Applications.
[32] Luba Cherbakov, Mamdouh Ibrahim, and Jenny Ang. SOA antipatterns: The obstacles to the adoption and successful realization of Service-Oriented Architecture (2005, November). Available Online at /developerworks/webservices/library/ws-antipatterns/, accessed Mar 12, 2013.