The document describes a 5-step algorithm for modularizing product architectures using dendrograms. The algorithm groups functions into modules based on their inputs and outputs. It defines a distance metric between modules based on the distances between their inputs and outputs. The distances are used to cluster the modules into a hierarchical dendrogram that can identify common modules across products for platform development. The algorithm is applied as an example to modularize four different products.
INTERNATIONAL CONFERENCE ON ENGINEERING DESIGN
ICED 03, STOCKHOLM, AUGUST 19–21, 2003
MODULARIZING PRODUCT ARCHITECTURES USING DENDROGRAMS
Katja Hölttä, Victor Tang, and Warren P. Seering
Abstract
Finding common modules across products, whether to platform a product family or to identify a common module for joint development with a partner, can be challenging. At present there are no repeatable methods for grouping functions into modules and for choosing among module candidates to form a good platform. We have developed a five-step algorithm that accomplishes this grouping task and creates a dendrogram. The algorithm is based on a metric, a distance function, which we define in the paper. Its salient features are: it applies to the modularization of simple as well as complex systems; it addresses the synthesis issue by creating a hierarchy of modules; it does not rely on qualitative ordinal measures; it does not rely on non-repeatable heuristics; and it can be implemented and executed on a computer. The algorithm is applied to a group of four products: an intraoral camera, an electronic pipette, a pencil sharpener, and a fruit/veggie peeler.
Keywords: Product structuring, modularization and standardization, platforming
1 Introduction
A module is a structurally independent building block of a larger system with well-defined interfaces. A module is fairly loosely connected to the rest of the system, allowing independent development of the module as long as the interconnections at the interfaces are well thought out. [1][2]
The advantages of modularity include possible economies of scale and scope and economies in parts sourcing [1]. Modularity also provides flexibility that enables product variations and technology development without changes to the overall design [2]. The same flexibility also allows for independent development of modules, which is useful in concurrent design or overlapped product development [3], in collaborative projects, or when buying a module from a supplier [4]. Modularity also eases the management of complex product architectures [2], and therefore their development. Modularity can further be used to create product families [5][6][7]. This saves design and testing costs and can allow for greater variation, but one must be aware of possible excess-functionality costs if a low-cost, low-functionality part is replaced by a higher-cost part in order to use the same part in both products [8][9].
Modularity and product platforms have been shown to be useful [e.g. 6], but there seem to be few methods for choosing the best modules for a product family or a joint development platform. Baldwin and Clark [1] discuss how to modularize, but they do not address the problem of what exactly should be included in a module. Ericsson [2] has developed a modularization method called Modular Function Deployment (MFD), but it is intended for single products only, not product families. Design Structure Matrix clustering [10][11] is also intended for single products, but it has the advantage of having been reduced to a repeatable algorithm that can be run by a computer, which enables the modularization of complex systems as well. Stake [11] introduces a clustering algorithm for MFD to group functions according to modularity driver scores. He and Blackenfelt [12] also show how MFD and DSM can be integrated to combine the benefits of the two methods, but these approaches are still intended for single products only. Kota et al. [13] present a benchmark method for comparing one's own platform to a competitor's platform. The method takes manufacturing, component size, and material into account in addition to functionality, but it is not a platforming tool. Stone et al. [14] discuss heuristics for grouping functions in a function structure [for more about function structures see 15] into modules within a product, and Zamirowski and Otto [7] add three additional heuristics that apply across products in a product family. Dahmus et al. [5] apply the heuristics and introduce a modularity matrix to help decide which modules should and should not be shared across a product family. The weakness of the heuristics is that they are not repeatable, since the functional decomposition and the use of the heuristics depend on the user's point of view. Our goal is to overcome these weaknesses by introducing a more systematic method for grouping functions into modules.
Another weakness of the existing methods is that they use nominal or ordinal scales instead of
more rigorous ratio scales. Sosa et al. [16] use an ordinal scale (-2,-1,0,1,2) in component DSMs,
as do Ericsson [2] in MFD, and Stake [11] and Blackenfelt [12] in their combined MFD/DSM
approach. Dahmus [5] as well as Zamirowski and Otto [7] suggest the use of Pugh's concept
selection, which is also based on ordinal scales. Kmenta and Ishii [17] discuss the problem of
performing arithmetic operations on ordinal measures: stated simply, it produces inconsistent
results. Otto and Wood [18] discuss more broadly the strengths and weaknesses of these
different types of measures. Khurshid and Sahai [19] present a rigorous treatment of these
measures. Ratio scales are the most useful because the point zero has meaning, and mathematical
operations such as multiplication and division have meaning, e.g., meters/second.
In this paper, we address the weaknesses of all the above. We use a more flexible flow method
[20] for identifying possible modules in a function structure, and our algorithm can be
implemented on a computer. In addition, we develop a genuine metric space with a distance
function that is based on the flow characteristics, and we use a ratio scale.
This algorithm is designed especially for the flow method [20], but it could possibly be used
in conjunction with other modularization methods as well. The flow method is based on the
heuristics introduced by Stone et al. [14] and further developed for product families by
Zamirowski and Otto [7]. The difference is that in the flow method the focus is on the flows
instead of the functions in a function structure. Functions can even be ignored, since often the
end result (outputs) and the requirements needed to achieve it (inputs) are all that matter. The
flow method was designed to identify commonalities between different products. It is more
flexible than the function-focused heuristics and can therefore also be used for joint
development of a common module for even very different products. It is also applicable in
product family platforming.
The problem we address in this study is how to group functions in a functional
decomposition, such as a function structure, to form a module commonality hierarchy that can
be used to define common modules across products. The following section introduces the
grouping algorithm. We then show an example of the method applied to four
products. We end the article with our conclusions and suggestions for further study.
2 Algorithm
This is a five-step algorithm to calculate the "distance", which will be defined below, between any
two modules and to group modules into a hierarchical dendrogram [21] that will help decide
which function groups are similar enough to be replaced by a common module. Pedersen [22]
has also used clustering and dendrograms to create product families, but his approach is based
on existing products and is not applicable at the product architecture phase.
We start by creating function structures for each product that is considered to be either part of
the same product family platform or developed partially with a partner. We will then look for
similar outputs in each function structure, e.g. for rotary motion or gas and thermal energy.
We will then start grouping functions before, or close to, the function with the similar outputs
and draw black boxes with the outputs and all the inputs that the grouped functions bring to
the bundle. Each product should have a couple of alternative black boxes to give more to
choose from. We can repeat the procedure for other similar outputs that are found in the
function structures. Since many flows, such as torque and rotation speed, depend on one
another, they should be combined into a single input or output to avoid redundancy. We
use power as the input and output flow for all power-related flows; e.g., instead of the two flows
torque and angular velocity, we use the single flow torque × angular velocity. A similar strategy
is used in [23] with bond graphs. We are now ready to start the algorithm itself.
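As an illustration of combining dependent flows, a single power flow can replace a torque/speed pair. This is only a sketch; the helper name and units are our own:

```python
# Dependent flows such as torque and angular velocity are merged into a
# single power flow, as described above (hypothetical helper, SI units).
def rotational_power(torque_nm: float, angular_velocity_rad_s: float) -> float:
    """Single combined flow replacing the pair (torque, angular velocity)."""
    return torque_nm * angular_velocity_rad_s  # power in watts

print(rotational_power(2.0, 10.0))  # 20.0 W
```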
Step 1: Enumerate all the components, i.e. black boxes.
Enumerate all the black boxes to form a set [mi], i.e., m1, m2, m3, …, mn.
Step 2: Characterize all the black boxes.
Characterize all the black boxes by their inputs and outputs. For example for module mα:
Figure 1 Input /output characterization for module mα.
The input set consists of five inputs as follows:
xα1: electrical power, voltage × current, in watts
xα2: translational power, force × velocity, in watts
xα3: rotational power, torque × angular velocity, in watts
xα4: information, bandwidth in bits
xα5: translation, stroke distance in mm, m, etc.
yα1, yα2, yα3, yα4, and yα5 are defined in a similar manner. One should notice that the set of
inputs and outputs can be expanded or reduced as needed. We use five in this exemplary
case without loss of generality.
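In code, the Step 2 characterization can be held as two five-element vectors per module. The values below are illustrative only, following the flow indexing above:

```python
# Hypothetical characterization of a module m_alpha: five input flows
# x1..x5 and five output flows y1..y5, indexed as in Step 2
# (1: electrical power [W], 2: translational power [W],
#  3: rotational power [W], 4: information [bits], 5: stroke [mm]).
m_alpha = {
    "inputs":  [5.0, 0.0, 0.0, 1.0, 0.0],   # x_1 .. x_5
    "outputs": [0.0, 5.0, 0.0, 0.0, 15.0],  # y_1 .. y_5
}
# A zero entry simply means the module does not carry that flow.
print(len(m_alpha["inputs"]), len(m_alpha["outputs"]))  # 5 5
```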
Step 3: Screen the black boxes for potential groupings.
Our goal is to find out how similar two modules are, i.e., what their distance from one
another is. To define the distance between two modules (mα and mβ), we start from the
distances between their inputs and outputs. We say that the distance between inputs x^α_i
and x^β_i is s^{αβ}_i, where

s^{αβ}_i = (x^α_i − x^β_i) / max(x^α_i, x^β_i),  i = 1, …, 5  (1)

The distance t^{αβ}_i between outputs y^α_i and y^β_i is defined analogously:

t^{αβ}_i = (y^α_i − y^β_i) / max(y^α_i, y^β_i),  i = 1, …, 5  (2)

From equation (1) we clearly see that if x^α_i = x^β_i then s^{αβ}_i = 0. In general,
s^{αβ}_i can be less than, greater than, or equal to zero for i = 1, 2, …, 5. The same
applies to the outputs in equation (2).
Step 4: Calculate the distance metric among black boxes.
We define the pseudo-distance between mα and mβ by

m_{αβ} = √[ (s^{αβ}_1)² + (s^{αβ}_2)² + … + (s^{αβ}_5)² + (t^{αβ}_1)² + (t^{αβ}_2)² + … + (t^{αβ}_5)² ]  (3)

In addition, we define s^{αα}_i = 0. Now m_{αβ} ≥ 0 always, and the distances form a
distance matrix [M], for example:

       m1    m2    m3    m4
m1     0    1,52  2,15  2,19
m2    1,52   0    2,40  2,43
m3    2,15  2,40   0    0,78
m4    2,19  2,43  0,78   0

Note that matrix M is symmetric and that it satisfies all the conditions for a Euclidean metric:
it is non-negative and symmetric, m_{αβ} = 0 exactly when the two modules' flow values
coincide, and the triangle inequality applies.
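Equations (1)-(3) are straightforward to implement. The sketch below (function names are our own) computes the pseudo-distance between two modules, each represented as an (inputs, outputs) pair of five-element vectors; as an assumption beyond the paper's s^{αα}_i = 0 rule, a flow distance is taken as 0 when a flow is absent from both modules:

```python
import math

def flow_distances(a, b):
    """Element-wise flow distances of eq. (1)/(2):
    s_i = (a_i - b_i) / max(a_i, b_i); 0 when both flows are absent."""
    return [0.0 if max(ai, bi) == 0 else (ai - bi) / max(ai, bi)
            for ai, bi in zip(a, b)]

def module_distance(m_a, m_b):
    """Pseudo-distance of eq. (3): Euclidean norm over the input
    distances s_i and output distances t_i."""
    s = flow_distances(m_a[0], m_b[0])   # input distances s_i
    t = flow_distances(m_a[1], m_b[1])   # output distances t_i
    return math.sqrt(sum(v * v for v in s + t))

# Modules m3 and m4a of the later example: flows (x1..x5) and (y1..y5),
# electrical power in, rotational power out.
m3  = ([5, 0, 0, 0, 0],  [0, 0, 15, 0, 0])
m4a = ([10, 0, 0, 0, 0], [0, 0, 20, 0, 0])
print(round(module_distance(m3, m4a), 2))  # 0.56
```

This reproduces the m3-m4a entry of the example's distance matrix.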
Step 5: Build the dendrogram.
Build the dendrogram by starting with the two modules that have the smallest distance.
Connect these modules at their distance value (see Figure 2). Then take the next module
pair in which neither module is already in the dendrogram and connect them to each
other at their distance value. Continue by adding, one at a time, the module that has the shortest
distance to any of the module groups already in the dendrogram, connecting it at its
distance value to the module group closest to it. Continue until the module groups
are connected to one another and all modules are in the dendrogram.
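Step 5 can be read as agglomerative single-linkage clustering. The following sketch is our interpretation, not the authors' code: it repeatedly merges the two closest module groups, using the exemplary matrix [M] of Step 4:

```python
def build_dendrogram(labels, dist):
    """Greedy single-linkage sketch of Step 5. `dist` maps unordered
    module pairs to their distances from matrix [M]; returns the merges
    (cluster A, cluster B, linkage distance) in order."""
    d = {frozenset(p): v for p, v in dist.items()}
    clusters = [frozenset([name]) for name in labels]
    merges = []
    while len(clusters) > 1:
        # distance between two clusters = smallest member-to-member distance
        link = lambda a, b: min(d[frozenset([x, y])] for x in a for y in b)
        a, b = min(((p, q) for i, p in enumerate(clusters) for q in clusters[i + 1:]),
                   key=lambda pq: link(*pq))
        merges.append((sorted(a), sorted(b), link(a, b)))
        clusters = [c for c in clusters if c not in (a, b)] + [a | b]
    return merges

# Exemplary distance matrix [M] from Step 4:
M = {("m1", "m2"): 1.52, ("m1", "m3"): 2.15, ("m1", "m4"): 2.19,
     ("m2", "m3"): 2.40, ("m2", "m4"): 2.43, ("m3", "m4"): 0.78}
for a, b, h in build_dendrogram(["m1", "m2", "m3", "m4"], M):
    print(a, b, h)  # m3+m4 join at 0.78, m1+m2 at 1.52, both groups at 2.15
```

The merge order matches the exemplary dendrogram of Figure 2: m3 and m4 pair first, then m1 and m2, and finally the two groups join.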
Figure 2 Exemplary dendrogram (vertical axis: distance; leaves: m3, m4, m1, m2).
3 Example
We will now show in more detail how to use this algorithm. We apply it to four
products: an intraoral camera, an electronic pipette, a pencil sharpener, and a fruit and veggie
peeler. These products are very different and produced by different companies, but we want to
show that our method, in fact, finds reasonable commonalities between them. Our goal
is not to platform these products but simply to show, with a simple but challenging example, how our
idea works. Partial function structures around the identified common outputs are shown in
Figure 3.
Figure 3 Exemplary products; veggie peeler and pencil sharpener from [18].
Step 1. We chose to form just one black box for each of the two simplest products, the intraoral
camera and the pencil sharpener, three alternative black boxes for the electronic pipette,
and two for the fruit and veggie peeler. The black boxes and their enumeration are shown in
Figure 4.
Black box m1 is simply the function “convert electricity into translation” in the intraoral
camera’s function structure. Black box m2a entails the function chain “convert electricity to
rotation – change rotation – transmit rotation – indicate position – convert rotation to
translation” from the electronic pipette’s function structure. Black box m2b is the same chain
without the last two functions “indicate position - convert rotation to translation” and black
box m2c is the same as m2a less the last function “convert rotation to translation”. Black box m3
entails the pencil sharpener’s function chain “import electricity – actuate electricity – convert
electricity to rotation – change rotational energy”. Black box m4a represents the function chain
“import electricity – actuate electricity – regulate electricity – convert electricity to rotation”
in the fruit/veggie peeler’s function structure. Black box m4b is the same function chain plus
“– guide rotation – change rotation – convert rotation to translation”.
Figure 4 Exemplary black boxes.
Step 2. We will use the same black box characterization as shown above. The values used in
this calculation are imaginary but reasonable for the types of products in question.
Table 1 Module inputs and outputs.
Step 3-4. We calculated the distance between each pair of modules, except between alternative
modules from the same product, and placed the distances in matrix M. Note that the cells for
distances between alternative modules from the same product are meaningless and are left
blank.
module   x1   x2   x3   x4   x5   y1   y2   y3   y4   y5
m1        5              1              5             15
m2a       5              1              5         2   10
m2b       5              1                   10
m2c       5              1                   10   2
m3        5                                  15
m4a      10                                  20
m4b      10                             5             12
       m1    m2a   m2b   m2c   m3    m4a   m4b
m1     0    1,05  1,73  2,00  2,00  2,06  1,14
m2a   1,05   0                2,24  2,29  1,51
m2b   1,73         0          1,05  1,22  2,06
m2c   2,00               0    1,45  1,58  2,29
m3    2,00  2,24  1,05  1,45   0    0,56  1,80
m4a   2,06  2,29  1,22  1,58  0,56   0
m4b   1,14  1,51  2,06  2,29  1,80         0

Figure 5 Distance matrix M.
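As a check of one entry, consider m1 and m2a. Using the Table 1 values as we read them, their inputs are identical and only the outputs y4 (information) and y5 (stroke) differ, so equations (2) and (3) give

$$
m_{1,2a}=\sqrt{\left(\frac{0-2}{\max(0,2)}\right)^{2}+\left(\frac{15-10}{\max(15,10)}\right)^{2}}
=\sqrt{1+\tfrac{1}{9}}\approx 1.05,
$$

which matches the corresponding entry of matrix M.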
Step 5. We build the dendrogram.
Figure 6 Dendrogram (vertical axis: distance; leaves left to right: m2a, m1, m4b, m2b, m3, m4a, m2c).
The dendrogram grouped the black boxes according to the type of output motion. The three
modules on the left represent linear motors and the four on the right rotary motors. This is
intuitively a sensible way of categorizing the exemplary black boxes. Black box m2c clearly
stands out as a module that barely belongs to either category, linear or rotary motors. Closer
examination of the module reveals that the m2 black boxes are the only rotary motors that have
information as input and output, which explains the difference. Two modules, on the other
hand, are strikingly close to one another in the dendrogram: m4a and m3. These two black
boxes have the exact same flows; only the values differ slightly.
Redesigning a common module so that it fits the original products requires design changes.
If we decided to replace the drives in the intraoral camera, electronic pipette, and fruit/veggie
peeler with a single module, i.e., replace modules m2a, m1, and m4b with a single module that
has all the required inputs as shown in Figure 7, some design changes would be needed.
Figure 7 A common module to replace modules m2a, m1, and m4b (inputs: E, information to run the drive; outputs: translation, position).
The electronic pipette has the most complex drive and all the flows of the common module, so
only the flow values (information content and amount, voltage, translation speed, stroke
length, accuracy, etc.) need to be adjusted for both the module and the pipette's module
interface. The other two designs need to accommodate flows that they would not otherwise
need, in addition to adjusting the flow values. The intraoral camera does not require position
information, but if a common module is chosen and the position information will
be there, the intraoral camera needs to be redesigned to accommodate the new flow or even to
make use of the new information provided. The fruit/veggie peeler's original drive was simpler
than the new module. Thus there is a tradeoff between excess functionality and the benefits
of a common module. Another choice, suggested by the dendrogram, is to leave the most
complex module, that of the electronic pipette, out of the common module and standardize the
two simpler modules. The cost of excess functionality and redesign effort is proportional to the
distance in the dendrogram. A critical distance, over which commonalizing costs override the
benefits, can be calculated for each specific case to help decide which modules to replace by a
common module. Concentrating on flows and clustering possible module candidates in a
dendrogram highlights differences and commonalities and is a good tool for deciding
how much to commonalize, and what to include in and exclude from a common module.
4 Discussion
We have shown how a five-step algorithm can be used to group functions in a function
structure to form a hierarchy for platform module selection. We focused on flows instead of
functions in the function structure, which makes the method more independent of
decomposition decisions and enables the use of ratio scales. The core of our approach is
predicated on the notion of modules being “close” by virtue of a “distance” function in a
function flow system structure. For a distance function, it is necessary to have more
information than ordinal rankings or interval scales can provide, because we need to create a
metric among modules where zero has meaning. We applied the algorithm to a simple
example, but we see no problem in applying it to complex systems as well, since the flow method
and our algorithm can be programmed on a computer.
Our algorithm treats all flows equally; in real life they will exhibit much more variety. Fixson
[24] suggests that interfaces have different intensities. He also points out that different connections
have different degrees of reversibility, which should affect the complexity of the interactions
of the module with the rest of the system. Work is underway to define the intensities.
We used real products as examples, but hypothetical input and output values. The
case itself is imaginary, i.e., the products are not being considered for a real platform, and we
thus cannot test how our platform suggestions would work in real product platforms. Future
research is needed to decide what distance value is still acceptable compared to the
benefits of a common platform. For example, should one group only m1 and m4b into one
module, or rather all three: m1, m4b, and m2a?
The advantages of this algorithm are that it applies to modularization of simple as well as
complex systems; it addresses the synthesis issue with a method that creates a hierarchy of
modules; it does not rely on qualitative ordinal measures; it does not rely on non-repeatable
heuristics; and it can be implemented and executed on a computer, guaranteeing repeatability.
References
[1] Baldwin C.Y. and Clark K.B. “Design rules. Volume 1. the power of modularity”, The
MIT Press, Cambridge, Massachusetts, 2000.
[2] Ericsson A. and Erixon G., “Controlling design variants: modular product platforms”,
ASME Press, New York, 1999.
[3] Roemer T.A., Ahmadi R., and Wang R.H., “Time-cost trade-offs in overlapped product
development”, Operations Research, Vol 48, No. 6, 2000, pp.858-865.
[4] Camuffo A., “Rolling out a “World Car”: globalization, outsourcing and modularity in
the auto industry”, Working Paper, International Motor Vehicle Program, Massachusetts
Institute of Technology, 2001.
[5] Dahmus J.B., Gonzales-Zugasti J.P., and Otto K.N. “Modular product architecture”,
Proceedings of DETC 00, ASME Design Engineering Technical Conferences,
Baltimore Maryland, 2000.
[6] Meyer M.H. and Lehnerd A.P., “The power of product platforms” The Free Press, New
York, 1997.
[7] Zamirowski E.J. and Otto K.N., “Identifying product family architecture modularity
using function and variety heuristics”, 11th International Conference on Design Theory
and Methodology, ASME, Las Vegas, 1999.
[8] Gupta S. and Krishnan V., “Integrated component and supplier selection for a product
family”, Production and Operations Management, 8 (2), 1999, pp.163-181.
[9] Krishnan V. and Gupta S., “Appropriateness and impact of platform-based product
development”, Management Science, Vol. 47, No. 1, 2001. pp.52-68.
[10] Pimmler T.U. and Eppinger S.D., “Integration analysis of product decompositions”,
ASME Design Theory and Methodology Conference, Minneapolis, 1994.
[11] Stake R.B., “On conceptual development of modular products”, Doctoral Thesis.
Division of Assembly Systems, Dept. of Production Engineering, Royal Institute of
Technology, Stockholm. 2000.
[12] Blackenfelt M., “Modularisation by matrices – a method for the consideration of
strategic and functional aspects”, Proceedings of the 5th WDK Workshop on Product
Structuring, Tampere, Finland, 2000.
[13] Kota S., Sethuraman K., and Miller R., “A metric for evaluating design commonality in
product families”, Journal of Mechanical Design, Vol. 122, 2000, pp.403 – 410.
[14] Stone R.B., Wood K.L., and Crawford R.H., “A heuristic method for Identifying
Modules for Product Architecture”, Design Studies, Vol 21, Issue 1, 2000, pp.5-31.
[15] Pahl G. and Beitz W., “Engineering design”, Springer-Verlag, London, 2nd ed., 1999.
[16] Sosa M.E., Eppinger S.D., and Rowles C.M., “Understanding the effects of product
architecture on technical communication in product development organizations”, MIT
Sloan School of Management Working Paper 4130, 2000.
[17] Kmenta S. and Ishii K., “Scenario-based FMEA: a life cycle cost perspective”,
Proceeding of DETC 00. ASME Design Engineering Technical Conferences, Baltimore
Maryland, 2000.
[18] Otto K. and Wood K., “Product design: techniques in reverse engineering and new
product development”, Prentice Hall, Upper Saddle River, New Jersey, 2001.
[19] Khurshid A. and Sahai H., “Scales of measurements: an introduction and a selected
bibliography”, Quality and Quantity, 27, 1993, pp.303-324.
[20] Hölttä K., “Identifying common modules for collaborative R&D”, Presented at POM
2002 Meeting on Production and Operations Management, San Francisco, 2002.
[21] Mantegna R.N. and Stanley H.E., “An introduction to econophysics: correlations and
complexity in finance”, Cambridge University Press, Cambridge, UK. 2000.
[22] Pedersen K., “Designing platform families: an evolutionary approach to developing
engineering systems”, Doctoral Thesis, Georgia Institute of Technology, 1999.
[23] Karnopp D. and Rosenberg R., “System dynamics: a unified approach”, John Wiley &
Sons, 1975.
[24] Fixson S., “Methodology development: analyzing product architecture implications on
supply chain cost dynamics”, Presented at the 5th Conference on Technology, Policy,
and Innovation “Critical Infrastructures”, Delft, 2001.
Corresponding author:
Katja Hölttä
Visiting Scholar
Massachusetts Institute of Technology, Center for Innovation in Product Development,
Cambridge, Massachusetts 02142, U.S.A.
Helsinki University of Technology, Department of Machine Design,
P.O. Box 4100, 02015 HUT, Finland
Tel: Int +358 9 451 5072
Fax: Int +358 9 451 3549
E-mail: Katja.Holtta@hut.fi