This document discusses integrating IDEF3 process modeling with queuing network analysis to provide quantitative performance measures without simulation. IDEF3 captures process knowledge visually but provides no metrics. Queuing network analysis can estimate metrics like utilization and wait times but requires a different modeling view. The authors develop a framework to convert IDEF3 models to queuing networks by extracting resource information from activities. A database stores all information to facilitate conversion and analysis. Results from the queuing network analyzer are compared to simulation, finding reasonable accuracy at low system utilization. This integration allows domain experts to obtain performance insights without complex simulation modeling.
An Adjacent Analysis of the Parallel Programming Model Perspective: A Survey (IRJET Journal)
This document provides an overview and analysis of parallel programming models. It begins with an abstract discussing the growing demand for parallel computing and challenges with existing parallel programming frameworks. It then reviews several relevant studies on parallel programming models and architectures. The document goes on to describe several key parallel programming models in more detail, including the Parallel Random Access Machine (PRAM) model, Unrestricted Message Passing (UMP) model, and Bulk Synchronous Parallel (BSP) model. It discusses aspects of each model like architecture, communication methods, and associated cost models. The overall goal is to compare benefits and limitations of different parallel programming models.
Threshold benchmarking for feature ranking techniques (journalBEEI)
In prediction modeling, the subset of features selected from the original feature set is crucial for accuracy and model interpretability. Feature ranking techniques rank features by importance, but there is no consensus on how many features to keep, so it is important to identify a threshold value or range for removing redundant features. In this work, an empirical study is conducted to identify a threshold benchmark for feature ranking algorithms. Experiments are conducted on the Apache Click dataset with six popular ranker techniques and six machine learning techniques to deduce a relationship between the total number of input features (N) and the threshold range. The area-under-the-curve analysis shows that approximately 33-50% of the features are necessary and sufficient to yield a reasonable performance measure, with a variance of 2%, in defect prediction models. Further, we find that log2(N) as the ranker threshold represents the lower limit of the range.
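For concreteness, a minimal sketch of the reported thresholds (our illustration, not code from the paper): given N ranked features, log2(N) gives the lower cut-off and roughly 33-50% of N gives the suggested band.

```python
import math

def threshold_range(n_features: int) -> tuple[int, int, int]:
    """Return (lower_limit, band_low, band_high) for a ranked feature list.

    lower_limit follows the paper's log2(N) rule; the 33-50% band is the
    range its AUC analysis found necessary and sufficient.
    """
    lower_limit = max(1, round(math.log2(n_features)))
    band_low = max(1, round(0.33 * n_features))
    band_high = max(1, round(0.50 * n_features))
    return lower_limit, band_low, band_high

# e.g. with 64 ranked features: keep at least log2(64) = 6, ideally 21-32
print(threshold_range(64))  # (6, 21, 32)
```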
Building a new CTL model checker using Web Services (infopapers)
Florin Stoica, Laura Stoica, "Building a new CTL model checker using Web Services", Proceedings of the 21st International Conference on Software, Telecommunications and Computer Networks (SoftCOM 2013), Split-Primosten, Croatia, 18-20 September 2013, pp. 285-289.
DOI: 10.1109/SoftCOM.2013.6671858, http://dx.doi.org/10.1109/SoftCOM.2013.6671858
1. The document presents a hybrid algorithm that combines Kernelized Fuzzy C-Means (KFCM), Hybrid Ant Colony Optimization (HACO), and Fuzzy Adaptive Particle Swarm Optimization (FAPSO) to improve clustering of electrocardiogram (ECG) beat data.
2. The algorithm maps data into a higher-dimensional space using kernel functions to make clusters more linearly separable, which addresses KFCM's sensitivity to initialization and its tendency to get trapped in local minima.
3. It uses HACO to optimize cluster centers and membership degrees, and FAPSO to evaluate fitness values and optimize weight vectors, forming usable clusters for applications like ECG classification.
This document describes a new recursive Monte Carlo simulation algorithm called the Sampled Path Set Algorithm (SPSA) for modeling complex k-out-of-n reliability systems. The SPSA uses a graph representation of a reliability block diagram and recursively searches the graph to determine system response based on the system state vector at each simulation iteration, allowing modeling of systems with general component failure and repair distributions and large numbers of components. Existing methods for analyzing such systems using tie/cut sets have limitations as the number of sets grows non-linearly with increased system complexity. The SPSA provides a more efficient alternative with linear growth in processing and memory requirements.
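As a point of contrast with the paper's recursive graph search, a plain Monte Carlo estimate for a simple k-out-of-n system is easy to sketch (exponential lifetimes assumed here for illustration; the SPSA itself handles general failure/repair distributions and block-diagram structure):

```python
import random

def mc_k_out_of_n(n: int, k: int, t: float, rate: float,
                  iters: int = 100_000) -> float:
    """Estimate P(at least k of n components survive to time t) when
    components fail independently with exponential lifetimes (mean 1/rate).
    A baseline sketch only, not the paper's Sampled Path Set Algorithm."""
    hits = 0
    for _ in range(iters):
        alive = sum(random.expovariate(rate) > t for _ in range(n))
        if alive >= k:
            hits += 1
    return hits / iters

print(mc_k_out_of_n(n=5, k=3, t=1.0, rate=0.5))
```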
Static Load Balancing of Parallel Mining Efficient Algorithm with PBEC in Fre... (IRJET Journal)
This document presents a static load balancing method for parallel mining of frequent sequences. It partitions the set of all frequent sequences into disjoint prefix-based equivalence classes (PBECs) and estimates the relative execution time of each PBEC by running the sequential PrefixSpan algorithm on sampled data sets. The PBECs are then assigned to processors based on these estimated execution times to balance the computational load. The goal is an efficient parallel frequent sequence mining algorithm that scales to large datasets through static load balancing across multiple processors.
USING ONTOLOGIES TO IMPROVE DOCUMENT CLASSIFICATION WITH TRANSDUCTIVE SUPPORT... (IJDKP)
Many applications of automatic document classification require learning accurately with little training data. Semi-supervised classification uses both labeled and unlabeled data for training. This technique has been shown to be effective in some cases; however, the use of unlabeled data is not always beneficial.

On the other hand, the emergence of web technologies has given rise to the collaborative development of ontologies. In this paper, we propose the use of ontologies to improve the accuracy and efficiency of semi-supervised document classification.

We used support vector machines, one of the most effective algorithms studied for text. Our algorithm enhances the performance of transductive support vector machines through the use of ontologies. We report experimental results applying our algorithm to three different datasets. Our experiments show an average accuracy improvement of 4%, and up to 20%, compared with the traditional semi-supervised model.
SOURCE CODE RETRIEVAL USING SEQUENCE BASED SIMILARITY (IJDKP)
This document summarizes an approach to improving source code retrieval using structural information from source code. A lexical parser is developed to extract control statements and method identifiers from Java programs, and a similarity measure is proposed based on the ratio of fully matching to partially matching statements in a sequence. Experiments show the retrieval model using this measure improves retrieval performance over other models by up to 90.9%, measured relative to the number of retrieved methods.
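The summary does not give the exact measure, but the full-versus-partial matching idea can be sketched roughly as follows (statement representation and the partial-match rule are our assumptions, not the paper's):

```python
def sequence_similarity(query: list[str], candidate: list[str]) -> float:
    """Toy ratio-based similarity over aligned statement sequences.

    A position counts as a full match when statements are identical, and
    as a partial match when they share a statement type (first token
    here). The paper's actual measure may differ; this only illustrates
    the full-vs-partial distinction.
    """
    pairs = list(zip(query, candidate))
    full = sum(a == b for a, b in pairs)
    partial = sum(a != b and a.split()[0] == b.split()[0] for a, b in pairs)
    matched = full + partial
    return full / matched if matched else 0.0

print(sequence_similarity(["if x", "for i", "while y"],
                          ["if x", "for j", "return"]))  # 0.5
```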
A Study of BFLOAT16 for Deep Learning Training (Subhajit Sahu)
Highlighted notes of:
A Study of BFLOAT16 for Deep Learning Training
This paper presents the first comprehensive empirical study demonstrating the efficacy of the Brain Floating Point (BFLOAT16) half-precision format for Deep Learning training across image classification, speech recognition, language modeling, generative networks, and industrial recommendation systems. BFLOAT16 is attractive for Deep Learning training for two reasons: the range of values it can represent is the same as that of the IEEE 754 single-precision floating-point format (FP32), and conversion to/from FP32 is simple. Maintaining the same range as FP32 is important to ensure that no hyper-parameter tuning is required for convergence; e.g., IEEE 754-compliant half-precision floating point (FP16) requires hyper-parameter tuning. In this paper, we discuss the flow of tensors and various key operations in mixed-precision training and delve into details of operations, such as the rounding modes for converting FP32 tensors to BFLOAT16. We have implemented a method to emulate BFLOAT16 operations in TensorFlow, Caffe2, Intel Caffe, and Neon for our experiments. Our results show that deep learning training using BFLOAT16 tensors achieves the same state-of-the-art (SOTA) results across domains as FP32 tensors in the same number of iterations and with no changes to hyper-parameters.
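One detail mentioned above, the rounding mode for FP32-to-BFLOAT16 conversion, can be sketched in a few lines of NumPy (a minimal round-to-nearest-even emulation of our own, not the paper's framework-level implementation; NaN and infinity handling is ignored):

```python
import numpy as np

def fp32_to_bf16_rne(x: np.ndarray) -> np.ndarray:
    """Round FP32 values to the nearest BFLOAT16 value (round-to-nearest-
    even on the retained upper 16 bits) and return them as FP32."""
    bits = x.astype(np.float32).view(np.uint32)
    # rounding bias: 0x7FFF plus the least significant retained bit
    bias = 0x7FFF + ((bits >> 16) & 1)
    rounded = (bits + bias) & np.uint32(0xFFFF0000)  # drop the low 16 bits
    return rounded.view(np.float32)

x = np.array([1.0, 1.001, 3.14159], dtype=np.float32)
print(fp32_to_bf16_rne(x))
```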
Software effort estimation through clustering techniques of RBFN network (IOSR Journals)
This document discusses using a radial basis function neural network (RBFN) to estimate software development effort based on the COCOMO II model. The RBFN uses COCOMO II data for training and has three layers: an input layer with COCOMO II parameters such as size and scale factors, a hidden middle layer with Gaussian activation functions, and an output layer that calculates effort. Two clustering algorithms, K-means and APC-III, are used to determine the receptive fields of the hidden-layer neurons; K-means partitions the COCOMO II data into clusters and places the cluster centers so as to minimize the distance between data points and their assigned centers. The RBFN is trained and tested on the COCOMO II data to evaluate its ability to accurately estimate software development effort.
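A tiny sketch of the architecture described, with K-means picking the receptive fields (illustrative only; the paper's network, COCOMO II inputs, and the APC-III alternative are not reproduced):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

def rbfn_fit_predict(X, y, X_new, n_centers=5, beta=1.0):
    """Minimal RBFN: K-means centers form the hidden layer, Gaussian
    activations feed a linear output layer that predicts effort."""
    km = KMeans(n_clusters=n_centers, n_init=10).fit(X)
    def hidden(A):
        # squared distance of each sample to each center -> Gaussian unit
        d2 = ((A[:, None, :] - km.cluster_centers_[None, :, :]) ** 2).sum(-1)
        return np.exp(-beta * d2)
    out = LinearRegression().fit(hidden(X), y)
    return out.predict(hidden(X_new))

rng = np.random.default_rng(0)
X = rng.random((60, 3)); y = X.sum(1) ** 2  # placeholder data
print(rbfn_fit_predict(X, y, X[:3]))
```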
K2 Algorithm-based Text Detection with An Adaptive Classifier Threshold (CSCJournals)
In natural scene images, text detection is a challenging study area for various content-based image analysis tasks. In this paper, Bayesian network scores are used to classify candidate character regions by computing posterior probabilities, which in turn define an adaptive threshold for accurately detecting text in scene images. Candidate character regions are first extracted as maximally stable extremal regions. K2 algorithm-based Bayesian network scores are learned by evaluating dependencies among the features of a given candidate character region, and a Bayesian logistic regression classifier is trained to compute the posterior probabilities that define the adaptive classifier threshold. Candidate character regions falling below the adaptive classifier threshold are discarded as non-character regions. Finally, text regions are detected using an effective text localization scheme based on geometric features. The entire system is evaluated on the ICDAR 2013 competition database, and the experimental results show performance (precision, recall, and harmonic mean) competitive with recently published algorithms.
This document discusses GCUBE indexing, which is a method for indexing and aggregating spatial/continuous values in a data warehouse. The key challenges addressed are defining and aggregating spatial/continuous values, and efficiently representing, indexing, updating and querying data that includes both categorical and continuous dimensions. The proposed GCUBE approach maps multi-dimensional data to a linear ordering using the Hilbert curve, and then constructs an index structure on the ordered data to enable efficient query processing. Empirical results show the GCUBE indexing offers significant performance advantages over alternative approaches.
Using the black-box approach with machine learning methods in ... (butest)
The document discusses two experiments using machine learning methods to improve job scheduling in grid computing environments. In the first experiment, machine learning methods were used to assist basic resource selection algorithms. In the second experiment, machine learning methods directly selected resources for job execution. The results showed that machine learning approaches could achieve improvements or comparable results to traditional scheduling methods.
Clustering, also known as data segmentation, aims to partition a data set into groups (clusters) according to similarity. Cluster analysis has been studied extensively, and many algorithms exist for different types of clustering, but these classical algorithms cannot be applied to big data because of its distinct features; applying traditional techniques to large unstructured data is a challenge. This study proposes a hybrid model to cluster big data using the classic K-means clustering algorithm. The proposed model consists of three phases: a Mapper phase, a Clustering phase, and a Reduce phase. The first phase uses a map-reduce algorithm to split big data into small datasets; the second phase runs the traditional K-means algorithm on each of the small splits; and the last phase produces the overall clusters for the complete data set. Two combining functions, Mode and Fuzzy Gaussian, were implemented and compared in the last phase to determine the more suitable one. The experimental study used four benchmark big data sets: Covtype, Covtype-2, Poker, and Poker-2. The results demonstrate the efficiency of the proposed model in clustering big data with the traditional K-means algorithm, and the experiments show that the Fuzzy Gaussian function produces more accurate results than the Mode function.
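The three-phase split-cluster-combine structure can be sketched as follows (a hedged stand-in: the paper's Reduce phase uses a Mode or Fuzzy Gaussian combining function, whereas this sketch simply re-clusters the per-chunk centers):

```python
import numpy as np
from multiprocessing import Pool
from sklearn.cluster import KMeans

K = 3

def cluster_chunk(chunk: np.ndarray) -> np.ndarray:
    # Clustering phase: classical K-means on one small split.
    return KMeans(n_clusters=K, n_init=10).fit(chunk).cluster_centers_

def hybrid_kmeans(data: np.ndarray, n_chunks: int = 4) -> np.ndarray:
    # Mapper phase: split the big data set into small datasets.
    chunks = np.array_split(data, n_chunks)
    with Pool(n_chunks) as pool:
        centers = np.vstack(pool.map(cluster_chunk, chunks))
    # Reduce phase: combine per-chunk centers into global clusters.
    return KMeans(n_clusters=K, n_init=10).fit(centers).cluster_centers_

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = np.vstack([rng.normal(c, 0.3, (200, 2)) for c in (0, 3, 6)])
    print(hybrid_kmeans(data))
```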
The document describes a study that evaluated three automatic text summarization techniques, LSA, LexRank, and Luhn, on 100 BBC business news articles. The LSA extractive summarization technique performed best according to ROUGE recall scoring, achieving average Rouge-1, Rouge-2, and Rouge-L recall scores of 0.867, 0.617, and 0.841 respectively, outperforming LexRank and Luhn. The study also developed a web application called I AM SAM that summarizes news articles from plain text or URLs and provides instant ROUGE scores.
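The Rouge-1 recall metric used for scoring is simple to compute; a minimal sketch (the study used full reference summaries, the strings below are made up):

```python
from collections import Counter

def rouge1_recall(reference: str, summary: str) -> float:
    """ROUGE-1 recall: fraction of reference unigrams covered by the
    candidate summary, with clipped counts."""
    ref = Counter(reference.lower().split())
    cand = Counter(summary.lower().split())
    overlap = sum(min(n, cand[w]) for w, n in ref.items())
    return overlap / sum(ref.values()) if ref else 0.0

print(rouge1_recall("the profits rose sharply this quarter",
                    "profits rose this quarter"))  # 4/6 = 0.667
```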
This document compares hierarchical and non-hierarchical clustering algorithms. It summarizes four clustering algorithms: K-Means, K-Medoids, Farthest First Clustering (hierarchical algorithms), and DBSCAN (non-hierarchical algorithm). It describes the methodology of each algorithm and provides pseudocode. It also describes the datasets used to evaluate the performance of the algorithms and the evaluation metrics. The goal is to compare the performance of the clustering methods on different datasets.
This document provides an overview of stream data mining techniques. It discusses how traditional data mining cannot be directly applied to data streams due to their continuous, rapid nature. The document outlines some essential methodologies for analyzing data streams, including sampling, load shedding, sketching, and data summarization techniques like reservoirs, histograms, and wavelets. It also discusses challenges in applying these techniques to data streams and open problems in the emerging field of stream data mining.
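Of the methodologies listed, reservoir sampling is the easiest to show concretely; here is the textbook Algorithm R (a generic sketch, not code from the overview):

```python
import random

def reservoir_sample(stream, k: int) -> list:
    """Algorithm R: keep a uniform random sample of size k from a stream
    of unknown length using O(k) memory, one of the sampling techniques
    used for data streams."""
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            j = random.randint(0, i)  # inclusive upper bound
            if j < k:
                reservoir[j] = item
    return reservoir

print(reservoir_sample(range(10_000), k=5))
```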
Parallel KNN for Big Data using Adaptive Indexing (IRJET Journal)
This document presents an evaluation of different algorithms for performing parallel k-nearest neighbor (kNN) queries on big data using the MapReduce framework. It first discusses how kNN algorithms do not scale well to large datasets, then reviews existing MapReduce-based kNN algorithms such as H-BNLJ, H-zkNNJ, and RankReduce that improve performance by partitioning data and distributing computation. The document also proposes using an adaptive indexing technique with the RankReduce algorithm; an implementation of this approach on an airline on-time statistics dataset shows better precision and speed than the other algorithms.
With the rapid development of Geographic Information Systems (GISs) and their applications, more and more geographical databases have been developed by different vendors. However, data integration and access remain a major problem for the development of GIS applications, as no interoperability exists among different spatial databases. In this paper we propose a unified approach for spatial data query. The paper describes a framework for integrating information from repositories containing different vector data set formats and repositories containing raster datasets. The presented approach converts different vector data formats into a single unified format (File Geo-Database “GDB”). In addition, we employ metadata to support a wide range of user queries retrieving relevant geographic information from heterogeneous and distributed repositories, which enhances both query processing and performance.
Hybridization of Meta-heuristics for Optimizing Routing protocol in VANETs (IJERA Editor)
The goal of VANET is to establish a vehicular communication system that is reliable, fast, and caters to road safety. In VANETs, where network fragmentation is frequent and there is no central control, routing becomes a challenging task. Designing an optimal routing plan, that is, tuning the parameter configuration of the routing protocol when setting up a VANET, is crucial. This is done by defining an optimization problem over which a hybridization of meta-heuristics is applied. The paper contributes the idea of combining meta-heuristic algorithms to enhance the performance of individual search methods on this optimization problem.
Application Of Extreme Value Theory To Bursts Prediction (CSCJournals)
Bursts and extreme events in quantities such as connection durations, file sizes, and throughput may produce undesirable consequences in computer networks, chief among them deterioration in the quality of service. Predicting these extreme events and bursts is important, as it helps in reserving the right resources for a better quality of service. We applied extreme value theory (EVT) to predict bursts in network traffic, taking a deeper look at the application of EVT through EVT-based exploratory data analysis. We found that traffic divides naturally into two categories, internal and external. Internal traffic follows a generalized extreme value (GEV) model with a negative shape parameter, corresponding to the Weibull distribution, while external traffic follows a GEV with a positive shape parameter, corresponding to the Fréchet distribution. These findings are of great value for quality of service in data networks, especially when included in service level agreements as traffic descriptor parameters.
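Fitting a GEV to block maxima and reading off the tail type is straightforward with SciPy; a sketch on synthetic placeholder data (note SciPy's shape convention c = -xi, so signs flip relative to the paper's description):

```python
import numpy as np
from scipy.stats import genextreme

# Block maxima of some traffic metric (synthetic placeholder data).
rng = np.random.default_rng(0)
block_maxima = rng.gumbel(loc=10.0, scale=2.0, size=200)

# SciPy's genextreme uses shape c = -xi: a Frechet tail (xi > 0, as
# reported for external traffic) appears as c < 0, a Weibull tail
# (xi < 0, internal traffic) as c > 0.
c, loc, scale = genextreme.fit(block_maxima)
xi = -c
print(f"xi = {xi:.3f} -> {'Frechet' if xi > 0 else 'Weibull/Gumbel'} type")

# burst threshold exceeded once per 100 blocks (the 100-block return level)
print("return level:", genextreme.ppf(1 - 1 / 100, c, loc, scale))
```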
Efficient Forecasting of Exchange rates with Recurrent FLANN (IOSR Journals)
The document proposes a Functional Link Artificial Recurrent Neural Network (FLARNN) model for forecasting foreign exchange rates between currencies such as the US dollar, Indian rupee, British pound, and Japanese yen. It compares the performance of the FLARNN model with existing neural network models such as LMS and FLANN. The FLARNN uses functional expansion and recurrent connections to predict exchange rates up to 60 days ahead from historical data more accurately. Experimental results show the FLARNN model consistently outperforms the other methods in terms of error convergence and Mean Absolute Percentage Error.
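The functional expansion at the heart of FLANN-family models is easy to illustrate (a trigonometric expansion is the common choice; the recurrent feedback that distinguishes FLARNN is omitted in this sketch):

```python
import numpy as np

def flann_expand(x: np.ndarray, order: int = 2) -> np.ndarray:
    """Trigonometric functional expansion: each input x becomes
    [x, sin(pi x), cos(pi x), sin(2 pi x), cos(2 pi x), ...], so a single
    linear layer can capture nonlinear exchange-rate dynamics."""
    feats = [x]
    for n in range(1, order + 1):
        feats.append(np.sin(n * np.pi * x))
        feats.append(np.cos(n * np.pi * x))
    return np.concatenate(feats, axis=-1)

x = np.array([[0.25], [0.5]])  # normalized rates (placeholder values)
print(flann_expand(x))         # shape (2, 5)
```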
An Approach for Project Scheduling Using PERT/CPM and Petri Nets (PNs) Tools (IJMER)
The International Journal of Modern Engineering Research (IJMER) is a peer-reviewed online journal. It serves as an international archival forum for scholarly research related to engineering and science education.
The document discusses data stream mining and summarizes some key challenges and techniques. It describes how traditional data mining cannot be directly applied to data streams due to their continuous, rapid arrival. It then outlines several techniques used for summarizing and extracting knowledge from data streams, including sampling, sketching, load shedding, synopsis data structures, and algorithms modified from basic data mining to handle streams.
A COMPARATIVE STUDY IN DYNAMIC JOB SCHEDULING APPROACHES IN GRID COMPUTING EN... (ijgca)
Grid computing is one of the most interesting research areas for present and future computing strategy and methodology. The dramatic growth in the complexity of scientific applications, and of some non-scientific applications, increases the need for distributed systems in general and grid computing specifically. One of the main challenges in a grid computing environment is how jobs (tasks) are handled; job scheduling is the activity of scheduling the submitted jobs in the grid environment, and many approaches to it exist. This paper provides an experimental study of different grid computing job scheduling approaches: “4-Levels/RMFF” and our previously published approach, “X-Levels/XD-Binary Tree”. First, an introduction to grid computing and job scheduling techniques is provided, followed by a description of the existing approaches. Experiments and their results then give a practical evaluation of these approaches from different perspectives. The comparative study concludes that overall average task waiting time is improved by approximately 30% using the X-Levels/XD-Binary Tree approach over the 4-Levels/RMFF approach.
A semantic framework and software design to enable the transparent integratio... (Patricia Tavares Boralli)
This document proposes a conceptual framework to unify representations of natural systems knowledge. The framework is based on separating the ontological nature of an object of study from the context of its observation. Each object is associated with a concept defined in an ontology and an observation context describing aspects like location and time. Models and data are treated as generic knowledge sources with a semantic type and observation context. This allows flexible integration and calculation of states across heterogeneous sources by composing their observation contexts and resolving semantic compatibility. The framework aims to simplify knowledge representation by abstracting away complexity related to data format and scale.
Change is constant and affects people on many levels. It impacts individuals personally and professionally through biological, physical, emotional, mental, social, educational, economic, and cultural changes over time. Change happens due to the passage of time and various influences in home, work, education, social, and spiritual environments. While change cannot be stopped, embracing it provides opportunities for growth and avoiding stagnation. Change impacts businesses through their bottom line, customer satisfaction, employees, products, services, and processes. It causes evolution in work policies, tools, teams, and leadership over time. Rather than fearing change, which is an emotional response to the unknown, one can choose to accept it.
Visualizer for concept relations in an automatic meaning extraction system (Patricia Tavares Boralli)
This document discusses a visualizer interface that has been developed for an automatic meaning extraction (AME) system. The visualizer allows users to view concepts and their relationships extracted from text documents in a graph format. It maps concepts as nodes and relationships as edges. Users can search for concepts, view related concepts, and trace relationships back to the original text passages. The visualizer was created to help users interact with and understand the outputs of the AME system, which automatically extracts concepts and relations from documents across various domains.
The document discusses Objective Oriented Markup (OOMDP), which is a coding process that uses objective coding, independence, parallel and merge concepts. It follows a flow from a UIO Factory to building and customizing. The coding methodology uses inheritance, cascading, encapsulation, and consistency. It also discusses features like specification, scheduling, coding conventions, and quality. Charts show staff assignments and work hours over time.
This document discusses adaptive system-level scheduling under fluid traffic flow conditions in multiprocessor systems. It proposes a scheduling mechanism that accounts for traffic-centric system design. The mechanism evaluates scheduling methods based on effectiveness, robustness, and flexibility. It also introduces a processor-FPGA scheduling approach that reduces schedule length by taking advantage of FPGA reconfiguration. Simulation results show that processor-FPGA scheduling outperforms multiprocessor-only scheduling under certain traffic conditions. Future work will focus on formulating a traffic-centric scheduling method.
Smart E-Logistics for SCM Spend Analysis (IRJET Journal)
This document discusses applying predictive analytics and machine learning techniques like LSTM models to supply chain management problems. It focuses on spend analysis and extracting fields from invoices and proofs of delivery using optical character recognition. The key points are:
1. LSTM models are applied to time series spend analysis data and shown to provide more accurate predictions than ARIMA models.
2. A technique is proposed to extract fields from printed and handwritten documents using models trained on Form Recognizer and then cleaning the extracted data.
3. The technique aims to reconcile invoices and proofs of delivery by comparing extracted data fields and calculating a match confidence score.
PERFORMANCE FACTORS OF CLOUD COMPUTING DATA CENTERS USING [(M/G/1) : (∞/GDMOD... (ijgca)
This document discusses modeling cloud computing data centers as queuing systems to analyze performance factors. After background on cloud computing and queuing theory, including a review of prior work on modeling cloud systems, it models a cloud data center as an [(M/G/1) : (∞/GDMODEL)] queuing system with single task arrivals and infinite task buffer capacity; the key assumptions are that tasks follow a Poisson arrival process and that service times have a general probability distribution. The model is solved analytically to estimate the response time distribution and other metrics, such as the mean number of tasks in the system, and to determine the relationship between performance and the number of servers and buffer size.
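For an M/G/1 queue, the mean number of tasks in the system has a standard closed form, the Pollaczek-Khinchine formula; a small sketch (this is the textbook result such a model rests on, not necessarily the paper's exact derivation):

```python
def mg1_mean_in_system(lam: float, es: float, es2: float) -> float:
    """Pollaczek-Khinchine mean number of tasks in an M/G/1 system:
    L = rho + lam^2 * E[S^2] / (2 * (1 - rho)), with rho = lam * E[S].

    lam: Poisson arrival rate; es: E[S]; es2: E[S^2] of the general
    service-time distribution."""
    rho = lam * es
    assert rho < 1, "queue is unstable"
    return rho + (lam ** 2) * es2 / (2 * (1 - rho))

# exponential service, mean 0.8, arrival rate 1.0: E[S^2] = 2 * 0.8**2
print(mg1_mean_in_system(1.0, 0.8, 2 * 0.8 ** 2))  # 4.0, matching M/M/1 rho/(1-rho)
```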
This document discusses online analytical processing (OLAP) for business intelligence using a 3D architecture. It proposes the Next Generation Greedy Dynamic Mix based OLAP algorithm (NGGDM-OLAP) which uses a mix of greedy and dynamic approaches for efficient data cube modeling and multidimensional query results. The algorithm constructs execution plans in a top-down manner by identifying the most beneficial view at each step. The document also describes OLAP system architecture, multidimensional data modeling, different OLAP analysis models, and concludes that integrating OLAP and data mining tools can benefit both areas.
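The greedy half of such a plan builder typically materializes, at each step, the view with the largest total query-cost benefit; a small sketch in the classic HRU style (a stand-in for NGGDM-OLAP, whose greedy/dynamic mix is not detailed in this summary):

```python
def greedy_views(lattice: dict[str, set[str]], size: dict[str, int],
                 k: int) -> list[str]:
    """Greedy top-down view selection: lattice maps a view to the views
    answerable from it, size is the row count used as a linear cost
    model, and the largest view (base cuboid) must cover everything."""
    root = max(size, key=size.get)
    materialized = {root}
    chosen = []
    def cost(w):  # cheapest materialized view that can answer w
        return min(size[m] for m in materialized if w in lattice[m] or w == m)
    for _ in range(k):
        best = max((v for v in size if v not in materialized),
                   key=lambda v: sum(max(0, cost(w) - size[v])
                                     for w in lattice[v] | {v}))
        materialized.add(best)
        chosen.append(best)
    return chosen

lattice = {"ABC": {"AB", "A", "none"}, "AB": {"A", "none"},
           "A": {"none"}, "none": set()}
size = {"ABC": 100, "AB": 50, "A": 20, "none": 1}
print(greedy_views(lattice, size, k=2))  # ['A', 'AB']
```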
Using OPC technology to support the study of advanced process control (ISA Interchange)
This document discusses using OPC technology to support the study of advanced process control techniques. It presents a co-simulation environment integrating MATLAB, LabVIEW, and an OPC server to simulate a nonlinear boiler model in real-time over a TCP/IP network. An MPC controller is designed using the OPC client to control the boiler's drum water level, steam pressure, and NOx emissions. The setup provides a cost-effective tool for academic research on advanced process control and networked control systems.
OPC, originally object linking and embedding (OLE) for process control, brings broad communication opportunities between different kinds of control systems. This paper investigates the use of OPC technology for the study of distributed control systems (DCS) as a cost-effective and flexible research tool for developing and testing advanced process control (APC) techniques in university research centers. A co-simulation environment based on MATLAB, LabVIEW, and a TCP/IP network is presented, and several implementation issues of an OPC-based client/server control application over a TCP/IP network are addressed. A nonlinear boiler model is simulated as an OPC server, and an OPC client is used for closed-loop model identification and to design a model predictive controller (MPC). The MPC is able to control the NOx emissions in addition to the drum water level and steam pressure.
A systematic mapping study of performance analysis and modelling of cloud sys... (IJECEIAES)
Cloud computing is a paradigm that uses utility-driven models to provide dynamic services to clients at all levels. Performance analysis and modelling are essential because of service level agreement guarantees, and studies on these topics are growing steadily across the cloud landscape on issues like virtual machines and data storage. The objective of this study is to conduct a systematic mapping study of performance analysis and modelling of cloud systems and applications. A systematic mapping study is useful for visualizing and summarizing the research carried out in an area of interest, and this one provides a categorized overview of studies on the subject. The results are presented in terms of research type, such as evaluation and solution, and contribution type, such as the tools and methods utilized. The results showed more discussion of optimization in relation to tools, methods, and processes, with 6.42%, 14.29%, and 7.62% respectively. In addition, analysis based on designs in terms of models accounted for 14.29%, and publications relating to optimization amounted to 9.77% for evaluation research, 7.52% for validation, 3.01% for experience, and 10.51% for solution. Research gaps were identified that should motivate researchers to pursue further research directions.
Software size estimation at early stages of project development holds great significance in meeting the competitive demands of the software industry. Software size is one of the most interesting internal attributes and has been used in several effort/cost models as a predictor of the effort and cost needed to design and implement software. With the widespread shift to the object-oriented paradigm, it is essential to use an accurate methodology for measuring the size of object-oriented projects. The class point approach quantifies classes, the logical building blocks of the object-oriented paradigm. In this paper, we propose a class point based approach for software size estimation of On-Line Analytical Processing (OLAP) systems. OLAP is an approach for swiftly answering decision support queries based on a multidimensional view of data, and materialized views can significantly reduce the execution time of such queries. We perform a case study based on the TPC-H benchmark, which is representative of OLAP systems, using a greedy approach to determine a good set of views to materialize. After finding the number of views, the class point approach is used to estimate the size of the OLAP system. The results of our approach are validated.
This document summarizes a paper that presents a novel method for passive resource discovery in cluster grid environments. The method monitors network packet frequency from nodes' network interface cards to identify nodes with available CPU cycles (<70% utilization) by detecting latency signatures from frequent context switching. Experiments on a 50-node testbed showed the method can consistently and accurately discover available resources by analyzing existing network traffic, including traffic passed through a switch. The paper also proposes algorithms for distributed two-level resource discovery, replication and utilization to optimize resource allocation and access costs in distributed computing environments.
Machine learning in Dynamic Adaptive Streaming over HTTP (DASH) (Eswar Publications)
Recently, machine learning has been introduced into the area of adaptive video streaming. This paper explores a novel taxonomy covering six state-of-the-art machine learning techniques that have been applied to Dynamic Adaptive Streaming over HTTP (DASH): (1) Q-learning, (2) Reinforcement learning, (3) Regression, (4) Classification, (5) Decision Tree learning, and (6) Neural networks.
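For the first technique in the taxonomy, a minimal tabular Q-learning loop for bitrate selection looks roughly like this (the state encoding, bitrate ladder, and reward are our placeholders, not from any particular DASH paper):

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
BITRATES = [300, 750, 1500, 3000]  # kbps (hypothetical ladder)
Q = defaultdict(float)             # Q[(state, action)] -> value

def choose(state):
    """Epsilon-greedy action selection over the bitrate ladder."""
    if random.random() < EPS:
        return random.randrange(len(BITRATES))
    return max(range(len(BITRATES)), key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Standard Q-learning update rule."""
    best_next = max(Q[(next_state, a)] for a in range(len(BITRATES)))
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                   - Q[(state, action)])

# one illustrative step: state = (bandwidth bucket, buffer bucket)
s = (2, 1)
a = choose(s)
r = BITRATES[a] / 3000 - (1.0 if a > 2 else 0.0)  # quality minus rebuffer risk
update(s, a, r, (2, 2))
```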
The document summarizes three journal articles about grid and cloud computing.
The first journal investigates the benefits of grid computing technologies for high-performance computing. It uses a case study approach and experimental methodology. Three scenarios are modeled to test average job response times.
The second journal aims to develop a prototype integrating grid technologies with NASA's web GIS software. It determines integration models and system architecture through data gathering and analysis. Components are developed and tested within a virtual organization environment.
The third journal comprehensively compares grid and cloud computing concepts from different perspectives. It collects data through observation and content analysis of definitions to assess whether cloud computing is merely a renaming of grid computing.
PREDICTION OF AVERAGE TOTAL PROJECT DURATION USING ARTIFICIAL NEURAL NETWORKS... (IAEME Publication)
Predicting a project's expected lifetime is an important issue for entrepreneurs, since it helps them anticipate when projects will end. To address this issue, neural network, fuzzy logic, and regression approaches are used to predict the time needed to complete the targeted project. Before applying the three approaches, the modeling and simulation of the activity network are introduced for calculating the total average duration of the project. The three approaches are then compared in terms of prediction accuracy, with several types of error calculated for each method. The input variables are the probability of success (PS), the coefficient of improvement (Coef_PS), and the coefficient of learning (CofA), while the output variable is the average total duration of the project (DTTm). The predicted mean square error (MSE) values are used to compare the three models. Interestingly, the results show that the fuzzy logic model is the most accurate prediction model. The approach presented in this paper can also be applied to a real case study.
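The "total average duration" step rests on classic activity-network machinery; a generic sketch of PERT expected times and a Monte Carlo makespan average (illustrative only, and not the paper's model with its PS / Coef_PS / CofA inputs):

```python
import random

def pert_duration(a: float, m: float, b: float) -> float:
    """Classic PERT expected activity time: (a + 4m + b) / 6."""
    return (a + 4 * m + b) / 6

def avg_project_duration(paths: list[list[tuple[float, float, float]]],
                         iters: int = 20_000) -> float:
    """Monte Carlo average makespan: sample each activity from a
    triangular(a, m, b) distribution and take the longest path."""
    total = 0.0
    for _ in range(iters):
        total += max(sum(random.triangular(a, b, m) for a, m, b in acts)
                     for acts in paths)
    return total / iters

# two paths through a toy network; tuples are (optimistic, mode, pessimistic)
paths = [[(2, 4, 8), (1, 2, 3)], [(3, 5, 7)]]
print(pert_duration(2, 4, 8), avg_project_duration(paths))
```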
Service Management: Forecasting Hydrogen Demandirrosennen
The document discusses various data science methodologies that can be used for forecasting hydrogen demand in the industrial sector. It covers time series forecasting methods like exponential smoothing, ARIMA, and Prophet. Machine learning regression techniques including linear, logistic, and support vector regression are presented. Deep learning neural networks such as RNNs and LSTMs are also discussed. The document advocates for hybrid and ensemble methods. Additional topics include forecasting with external factors, demand segmentation, real-time data integration, cross-validation, and continuous monitoring and adjustment. RNNs have shown effectiveness for hydrogen demand forecasting. Ensemble models can outperform single methods when applied to complex phenomena. Real-time data is critical for accurate forecasts.
EMPIRICAL APPLICATION OF SIMULATED ANNEALING USING OBJECT-ORIENTED METRICS TO...ijcsa
The work is about using a Simulated Annealing algorithm to optimize the parameters of an effort estimation model, which can reduce the difference between actual and estimated effort in model development. The model has been tested using an OOP dataset obtained from NASA for research purposes. The dataset-based model equation parameters consist of two independent variables, viz. Lines of Code (LOC) along with one more attribute, with software development effort (DE) as the dependent variable. The results have been compared with the author's earlier work on Artificial Neural Networks (ANN) and the Adaptive Neuro Fuzzy Inference System (ANFIS), and it has been observed that the developed SA-based model provides better estimates of software development effort than ANN and ANFIS.
Multi-threaded approach in generating frequent itemset of Apriori algorithm b...TELKOMNIKA JOURNAL
This research is about the application of multi-threaded and trie data structures to the support calculation problem in the Apriori algorithm.
The support calculation results can be used to find association rules for market basket analysis. The support calculation process is a bottleneck and can delay subsequent processing. This work observed five multi-threaded models based on Flynn's taxonomy: single process, multiple data (SPMD); multiple process, single data (MPSD); multiple process, multiple data (MPMD); and two double-SPMD variants, used to shorten the processing time of the support calculation. In addition to processing time, this work also considers the time difference between the multi-threaded models as the number of item variants increases. The experimental results show that the double-SPMD multi-threaded model can perform almost three times faster than the models that apply the SPMD structure, the MPMD structure, and the combination of MPMD and SPMD, based on the time difference between the 5-itemset and 10-itemset experiments.
This document proposes an integrated framework for IDEF method-based simulation model design and development. The framework uses IDEF0 for functional modeling, IDEF3 for process modeling, and IDEF1X for data modeling. A common data model is constructed from these IDEF models and then multiple simulation models are automatically generated from the data model using a database-driven approach. The framework aims to improve knowledge reuse, communication, and model maintainability. It is evaluated using a semiconductor fabrication case study. The case study shows the framework can help improve simulation project processes by leveraging descriptive IDEF models and a relational database.
Artificial intelligence-based pattern recognition is one of the most important tools in process control for identifying process problems. The objective of this study was to evaluate the relative performance of a feature-based recognizer compared with a raw data-based recognizer. The study focused on recognition of seven commonly researched patterns plotted on the quality chart. The artificial intelligence-based pattern recognizer trained using the three selected statistical features performed significantly better than the raw data-based recognizer.
Integration of queuing network and IDEF3 for business process analysis
Ki-Young Jeong
Department of Industrial Engineering Technology,
South Carolina State University, Orangeburg, South Carolina, USA
Hyunbo Cho
Division of Mechanical and Industrial Engineering,
Pohang University of Science and Technology, Pohang, Republic of Korea, and
Don T. Phillips
Department of Industrial and Systems Engineering, Texas A&M University,
College Station, Texas, USA
Abstract
Purpose – The purpose of this paper is to provide a framework and prototype software to use
IDEF3 descriptions as a knowledge base from which a queuing network (QN) analysis is
performed to compute system performance measures as part of quick response manufacturing.
This intends to help domain experts obtain informative quantitative performance measures such
as resource utilization, waiting time, and cycle time without relying on a time consuming
simulation approach.
Design/methodology/approach – A general open queuing network is used to extract the related
resource information from the process knowledge captured by the IDEF3 method. The relational database
is used to integrate the open QN and IDEF3, which also improves the knowledge reusability.
In addition, the performance of the open queuing network analyzer (QNA) is compared to the
simulation through case studies.
Findings – The domain experts usually do not have much technical modeling knowledge. However,
through this integration, it is found that they could obtain several meaningful system performance
measures without simulation. They could also perform diverse “what if” scenario analyses with
this prototype without difficulty. It is another finding that the system performance measures
generated by the open QNA are reasonably close to the values obtained from simulation, particularly
when the system utilization is low.
Research limitations/implications – The open QN analysis used in this integration is not
as generic as the simulation approach in terms of the modeling scope and capability.
Hence, this integration supports only the exclusive OR (XOR) junction out of the three
junction types in the IDEF3 grammar.
Practical implications – Some system analysis problems do not require a complex simulation
modeling approach. Domain experts need a modeling tool to quickly obtain some system dynamics
and insights. This integration framework satisfies those requirements.
Originality/value – This paper describes the first attempt to generate informative system
performance measures from the IDEF3 model using the open QN. It also offers practical help to the
domain experts working in the system analysis area.
Keywords Simulation, Knowledge management, Business process re-engineering,
Process management, Queuing theory, Modelling
Paper type Research paper
Business Process Management Journal, Vol. 14 No. 4, 2008, pp. 471-482.
© Emerald Group Publishing Limited, 1463-7154. DOI 10.1108/14637150810888028
1. Introduction
IDEF3 is a descriptive process modeling method, which graphically represents the process
knowledge of a given system in order to improve the communication between project
members.However,sinceitdoesnotprovideanyquantitativeanalysisforthesystem,some
researchers tried to integrate the simulation with IDEF3 by generating a simulation model
from an IDEF3 model to numerically explain the behavior of the systems. For example,
KBSI (1995) and Resenburg and Zwemstra (1995) developed the mechanism to generate a
WITNESS and a SIMAN simulation model from an IDEF3 model, respectively. Although
these approaches have been widely used, the extensive computer running time for
simulation has been considered as a main disadvantage of any simulation-based approach,
which motivated this research. If an analytical method can be used as a substitute for
simulation in some situations whose modeling objectives do not require simulation-level
modeling efforts, why not integrate that analytical method with IDEF3?
In this study, we selected the general open queuing network (GOQN) as a substitute
for the simulation since it can measure the resource contention and its effect on the
overall system through the resource utilization and the system cycle time, etc. In fact,
the GOQN is applicable to many real-life cases such as flow shops, job shops and
business process improvement, as addressed in Bitran and Morabito (1996)
and Shanthikumar and Buzacott (1984). The GOQN is robust in terms of data
requirement since it is defined by:
• the mean and variance of an entity's external interarrival time;
• the mean and variance of an entity's processing time at each resource; and
• the entity's routing between resources.
That is, unlike simulation, it does not require any specific form of distribution.
However, we recognize that there are some analytical limitations and difficulties in the
GOQN approach. For example, if a finite buffer or a non-First-In First-Out
queue discipline issue is critical in a given domain, it may not properly work as a
substitute for simulation. However, as Law and Kelton (1991) pointed out, the model
needs to capture the essence of the system for which it is intended without excessive
details. Hence, if neither the finite buffer nor the queue discipline is a critical constraint
or if it can be appropriately abstracted to reduce the complexity without violating the
modeling requirements and objectives, the GOQN can still work as a solution method.
Hence, if a GOQN analysis is available within the IDEF3 environment, analysts can have
an opportunity to deploy it as a substitute for simulation, and obtain rapid results
without a detailed and time-consuming simulation model.
2. Modeling views
It is important to recognize for further study that IDEF3 uses a process-centered
modeling view (PCMV) to capture the process knowledge in a given system, and
GOQN uses a resource-centered modeling view (RCMV) for a quantitative analysis.
If we define a process as a set of sequenced time-related activities performed by
resource(s) to provide a service to an entity flowing through a system, i.e. products or
customers, the PCMV defines the sequence of activities from the entity’s perspective
regardless of the resource, and then it provides the resource information to each
activity. Hence, the same resource may be represented multiple times in a process with
the same or different names. For example, a process plan in a manufacturing system is
BPMJ
14,4
472
3. a good example of the PCMV where the process is defined first to determine a product
manufacturing sequence, and then specific resource information is assigned to it.
However, in the RCMV, each resource is first recognized and uniquely defined, then the
input and output flows of the entities to and from the resource are defined. Hence, each
resource can be represented only once in the RCMV. Many analytical modeling
methods such as the Petri net and the GOQN use the RCMV.
Figure 1 shows the difference between two modeling views. The flow of the two
entities – e1 and e2 – is represented as a solid arrow. The first diagram (a) shows a type
of the PCMV representation where a sequence of the named circles represents a process
defined for each entity. Note that the name of each resource providing a service to each
entity in an activity is denoted under the activity. The same resource R1 is repeatedly
used in the different activities at each process. The second diagram (b) describes the
same system from the RCMV perspective where resource name is uniquely represented
in a circle, and all flows are represented between R1 and R2. In general, the PCMV is
more intuitive and commonly used in a descriptive model while the RCMV is more
popular in a quantitative model since it is easy to quantify the amount of flow between
resources. Hence, to integrate the IDEF3 with the GOQN, we need to convert the
PCMV-based knowledge in IDEF3 to the RCMV-based knowledge in GOQN.
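To make the conversion concrete, the following Python sketch (our own illustration of the idea, not the MVC implementation described later) encodes the Figure 1 example: the PCMV as per-entity activity sequences with a resource assigned to each activity, and a function that derives the RCMV, i.e. the set of unique resources together with the directed flow counts between them.

```python
from collections import Counter

# PCMV: each entity's process is an ordered list of (activity, resource) pairs,
# mirroring Figure 1(a); note that resource names repeat across activities.
pcmv = {
    "e1": [("activity 1", "R1"), ("activity 2", "R2"), ("activity 3", "R1")],
    "e2": [("activity 4", "R1"), ("activity 5", "R2"), ("activity 6", "R1")],
}

def to_rcmv(pcmv):
    """Derive the resource-centered view: the unique resources and the
    directed flow counts between consecutive resources."""
    resources = set()
    flows = Counter()  # (from_resource, to_resource) -> number of entity moves
    for steps in pcmv.values():
        route = [resource for _, resource in steps]
        resources.update(route)
        for src, dst in zip(route, route[1:]):
            flows[(src, dst)] += 1
    return resources, flows

resources, flows = to_rcmv(pcmv)
print(sorted(resources))               # ['R1', 'R2']: each resource appears once
for (src, dst), count in sorted(flows.items()):
    print(f"{src} -> {dst}: {count}")  # R1 -> R2: 2, R2 -> R1: 2
```

Counting flows per resource pair is exactly what makes the RCMV convenient for quantification: such counts become the basis of the routing probabilities used later.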
3. Overview and concept of a proposed framework
The conceptual architecture of the integration is shown in Figure 2, which comprises
four modules: knowledge acquisition and representation (KAR), modeling view
converter (MVC), database, and queuing network analyzer (QNA). The KAR module
is composed of the IDEF3 graphics module and the queuing network (QN) graphics module.
A typical operation among the modules has the following sequence:
(1) the IDEF3 graphics module captures the process knowledge using the PCMV;
(2) the MVC translates this PCMV-based knowledge into RCMV-based knowledge;
(3) the QN graphics module refines the RCMV-based knowledge; and
(4) the QNA performs the GOQN-based QN analysis in order to numerically explain
the IDEF3 process model.
Keeping all knowledge in the database facilitates the knowledge reusability since all
captured and processed knowledge is separated from the IDEF3 and the QNA, which
can be easily accessed via Structured Query Language (SQL) statements. Hence, if analysts want to deploy
another analytical tool, i.e. simulation later, this knowledge can be reused easily.
Figure 1. Examples of two different modeling views: (a) a representation from the PCMV,
in which each entity (e1, e2) follows its own sequence of activities (activities 1-3 for e1,
activities 4-6 for e2) and the serving resource (R1 or R2) is noted under each activity, so
R1 appears repeatedly; (b) a representation from the RCMV, in which each resource
appears exactly once and the flows of e1 and e2 run between R1 and R2.
Now, we are ready to explain each module. All italicized words refer to the objects or
tables in the database module.
3.1 IDEF3 graphics module
Since IDEF3 plays an important role in our approach as a main information capturing
method, it is useful to briefly review the concepts of the IDEF3. The IDEF3 is one of the
integrated definition methods developed by the Information Integration for Concurrent
Engineering program sponsored by the USA Air Force’s Armstrong Laboratory
(Mayer et al., 1995). The primary goal of IDEF3 is to present a structured method by
which a domain expert can capture the processes of a particular system. The process
schematics of IDEF3 have been widely accepted as a medium for process description in
industry (Mo and Menzel, 1998). The process schematics consist of three main
components:
(1) unit of behavior (UOB);
(2) junction; and
(3) link.
A UOB captures information on what is going on in a system to represent a process
or an activity. It is depicted by a rectangle with a unique label. Junctions in IDEF3
provide a mechanism for specifying a logical branching of UOBs and introduce the timing
and sequencing of multiple processes. Junction types include a conjunctive AND
junction denoted by “&” and two disjunctive junctions: an inclusive OR denoted by “O”
and an exclusive OR denoted by “X”. However, in this paper, it is assumed that the
process model does not include any inclusive OR or conjunctive AND junctions due to their
analytical complexity, and we believe that those junctions can be appropriately
handled in the simulation environment. A link connects UOBs or “Junctions”.
Hence, any IDEF3 process schematic can be represented by G_IDEF3 = (U, J, L), where
U, J and L are the sets of UOBs, junctions and links, respectively. Table I summarizes
the process schematics of IDEF3 within this paper’s boundary.
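As a minimal sketch (ours, with hypothetical names and structure, not a published encoding), G_IDEF3 = (U, J, L) can be represented directly, with the XOR-only restriction noted above carried in the junction type:

```python
from dataclasses import dataclass, field

@dataclass
class UOB:
    uob_id: str
    name: str

@dataclass
class Junction:
    junction_id: str
    fan: str  # "out": exactly one following path; "in": exactly one preceding path

@dataclass
class IDEF3Schematic:
    """G_IDEF3 = (U, J, L), restricted to the XOR-only subset of this paper."""
    uobs: dict = field(default_factory=dict)       # U: id -> UOB
    junctions: dict = field(default_factory=dict)  # J: id -> Junction (XOR only)
    links: list = field(default_factory=list)      # L: (from_id, to_id) pairs

    def link(self, src: str, dst: str) -> None:
        # A link connects UOBs or junctions; reject unknown node IDs.
        for node_id in (src, dst):
            if node_id not in self.uobs and node_id not in self.junctions:
                raise ValueError(f"unknown node: {node_id}")
        self.links.append((src, dst))

g = IDEF3Schematic()
g.uobs["u1"] = UOB("u1", "receive order")
g.uobs["u2"] = UOB("u2", "machine part")
g.junctions["x1"] = Junction("x1", fan="out")  # fan-out XOR junction
g.link("u1", "x1")
g.link("x1", "u2")
```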
Figure 2. Conceptual architecture of the proposed framework. The knowledge acquisition
and representation (KAR) layer contains the IDEF3 graphics module, which captures,
stores and displays process knowledge in the PCMV-related tables through IDEF3
semantics, and the QN graphics module, which displays the QN using the data in the
RCMV tables and captures additional data. The MVC fills the RCMV-related tables using
the PCMV tables, and the QNA reads the RCMV tables for analysis, then stores and
displays the results.
As the major user interface, the IDEF3 graphics module captures a sequence of activities
from the entity perspective through IDEF3 schematics, and this captured knowledge is
stored in the appropriate IDEF3-related tables in the database. If analysts want a
quantitative analysis for the captured process, they also have to capture the resource
information for each activity associated with a UOB to facilitate the data manipulation
in the QN graphics module. Through this information processing, the IDEF3-related
tables (called PCMV tables in Figure 3) such as UOB, Process, Activity and Junction
are populated, and the QN-related tables (called RCMV tables) such as Routing and
Operation are structured. Note that we classify Equipment, Operator and Product
as common data objects or tables related to both modeling views.
3.2 Modeling view converter and database module
The role of the MVC is to populate the QN related tables using the data in the IDEF3
tables to facilitate the performance of the QN graphics module and the QNA. That is,
the PCMV-based knowledge stored at IDEF3 related tables is transformed into
RCMV-based knowledge in the QN related tables. For example, the Activity
information is directly reused to construct the Operation due to the similarity between
the two objects. The Routing is constructed based on Operation, Process and Activity to
provide the probabilistic sequence of each operation for each product.

Figure 3. Partial IDEF1X data model for facilitating the proposed framework. Independent
objects (rectangles): Equipment (Equipment_ID; Name, Capacity, MTBF, MTTR,
SetupTime, RunTime, WorkingShift, OverTime), Product (Product_ID; Name, Demand,
LotSize, Quantity in Parent, Parent ID), Operator (Operator_ID; Desc, Capacity, SkillCode,
WorkingShift, OverTime), UOB (UOB_ID; Name), Routing (Routing_ID; Product_ID (FK),
Operation_Code_From, Operation_Code_To, Percentage), Process (Process_ID; origin,
destination) and Junction (Junction_ID; Name). Dependent objects (rounded rectangles):
Activity (Product_ID, UOB_ID, Equipment_ID, Operator_ID, all FKs) and Operation
(Operation_Code (AK); Desc, EquipSetupTime, EquipRunTime, LaborSetupTime,
LaborRunTime, Operator_ID (FK), Product_ID (FK), Equipment_ID (FK)).

Table I. IDEF3 process schematics (within this paper's boundary)
Name                    Description
Unit of behavior (UOB)  Captures information on what is going on in the system,
                        representing a process or an activity; drawn as a rectangle
                        with a unique ID and name
Link                    Represents temporal, logical, causal, natural or relational
                        constructs between UOBs
Fan-out XOR junction    Exactly one of the following paths will be activated;
                        drawn as a box labeled "X"
Fan-in XOR junction     Exactly one of the preceding paths has completed at a time;
                        drawn as a box labeled "X"

The Process
contains the sequence of UOBs describing the interactions between product, resource
and operator within Activity. In fact, the MVC is a macro-module working on the
database to produce a QN analysis-friendly data format.
The database design should be robust enough to support both PCMV and RCMV.
One way to consider the robustness in a system design is to study its ontology, and
reflect it in the relational database design since ontology provides the definition of the
terminologies, objects and relationships between them in the system. The IDEF1X data
modeling method can be used for this purpose since it captures objects with attributes
and the relationships among the objects in a given system. Figure 3 shows a partial
IDEF1X data model for this study. The Equipment, Product, Operator, Routing, UOB,
Process and Junction are defined as independent objects (rectangles) with attributes,
while the Activity and Operation are represented as dependent objects (rounded
rectangles). An independent object can be identified by itself, while a dependent object is
identified by the foreign key(s) migrated from the independent objects. The dotted
line represents a non-identifying connection relation, meaning that the object can be
uniquely identified without knowing the associated objects while the solid line
represents the identifying connection relation, implying that an object is identified with
its association. The dot without “P” represents a zero-, one- or more relationship between
objects, while the dot with “P” denotes a one-or-more relationship.
All IDEF3-related information from G_IDEF3 = (U, J, L) is stored in the IDEF3-related
tables. The Activity describes the interaction between resource and product within
UOB, and the Process shows the sequence information conveyed by “Link” in IDEF3.
Once this information is captured with data objects such as Equipment, Operator and
Product, the QN information is computed by the MVC. The Operation provides the
detailed specification for each operation performed at each resource, containing all
operation information that describes “who (operator) handles what product(s) with
what machines for what time.” The Routing shows the probabilistic sequence of flow of
products among resources.
The Product, Equipment and Routing provide:
• the external inter-arrival time (demand data);
• the processing time (service data); and
• the routing information between resources (routing data),
respectively, which together define a QN.
Note that although the variance information regarding the inter-arrival time and the
processing time is not shown in Figure 3, those values are captured in the QN
graphics module.
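Because the integration rests on a shared relational database, any downstream tool can reach the knowledge through ordinary SQL. The sketch below (ours, with an illustrative schema loosely following the Figure 3 names rather than the actual DDL of the prototype) populates a small Routing table and queries it the way the QNA, or a later simulation generator, might:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Routing (
    Routing_ID          INTEGER PRIMARY KEY,
    Product_ID          TEXT,
    Operation_Code_From TEXT,
    Operation_Code_To   TEXT,
    Percentage          REAL   -- routing probability between operations
);
""")
conn.executemany(
    "INSERT INTO Routing (Product_ID, Operation_Code_From, Operation_Code_To, "
    "Percentage) VALUES (?, ?, ?, ?)",
    [("P1", "OP_CUT", "OP_DRILL", 0.8),
     ("P1", "OP_CUT", "OP_PRESS", 0.2)],
)

# The same knowledge is reusable by any later analytical tool via plain SQL.
for row in conn.execute(
        "SELECT Operation_Code_From, Operation_Code_To, Percentage "
        "FROM Routing WHERE Product_ID = ? ORDER BY Percentage DESC", ("P1",)):
    print(row)  # ('OP_CUT', 'OP_DRILL', 0.8) then ('OP_CUT', 'OP_PRESS', 0.2)
```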
3.3 Open queuing network analyzer
A QN can be described as G_q = (N, A), where N is a set of nodes representing resources
and A is a set of arcs representing the direction of flow among nodes. The RCMV
tables generated by the MVC provide all information for the frame of a QN. For
example, the Equipment and Routing store a set of nodes and arc information,
respectively. The major role of the QN graphics module is to complete both RCMV
tables and common data tables such as Equipment, Product and Operator for the QNA.
It captures:
• the mean and variance of external inter-arrival times for each product (demand data)
in Product;
• the processing time (service data) for each resource in Equipment and Operator; and
• the routing probability between nodes (resources) for each product in Routing.
The routing probability is stored in the “Percentage” field of the Routing table.
3.4 Queuing network model and analysis
The GOQN, G_q = (N, A), can be solved given:
• the mean and variance of each entity's external inter-arrival time;
• the mean and variance of each entity's processing time at each resource; and
• each entity's routing between resources.
This information is stored in the Equipment, Product, Operator, Operation and Routing tables.
Hence, the QNA solves the problem using this information based on the GOQN theory.
For example, the inter-arrival time is estimated from “Demand” and “LotSize” in Product;
“SetupTime,” “RunTime” and other time information in Equipment or Operation can be
used for the processing time; and the “Percentage” in Routing represents the
routing probabilities.
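For instance, a plausible mapping from “Demand” and “LotSize” to the mean lot inter-arrival time could look like the following sketch (an assumption of ours; the paper does not state the exact formula, and the available hours per year are hypothetical):

```python
# Lots of a product arrive Demand/LotSize times per year; assuming arrivals are
# spread evenly over the available hours gives the mean inter-arrival time.
def mean_interarrival_hours(annual_demand_units: float, lot_size: int,
                            hours_per_year: float = 14 * 250) -> float:
    lots_per_year = annual_demand_units / lot_size
    return hours_per_year / lots_per_year

print(mean_interarrival_hours(annual_demand_units=450, lot_size=14))  # ~108.9
```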
Although it may not be easy to observe a direct information mapping from an
IDEF3 model to a QN, the information captured by IDEF3 was used to construct the QN
frame. For example, the sequence of products among resources represented in the
sequence of UOBs in PCMV tables provides the structure of routing in the QN, G_q. It
also helped to develop the structure of the common data objects by capturing the lists
of resources and products. Based on this frame, the QNA can solve the problem with
mathematical formulae after the QN graphics module has finalized the QN.
We briefly show some formulae used in this module. Each node in G_q is considered
a GI/G/c queue, where the notation GI, G and c refers to a general inter-arrival time
distribution, a general service time distribution and the number of servers, respectively.
Since it does not require any specific distribution for inter-arrival and service times, the
data collection effort can be reduced compared to that of simulation. The QN analysis
consists of two steps:
(1) decomposition; and
(2) aggregation.
The decomposition step computes node-level performance measures such as resource
utilization and sojourn time, and the aggregation step computes system-level
performance measures such as system cycle time and total WIP in the system. The
information in Product, Equipment and Operator is used for all formulae in this section.
The utilization of each resource j, u(j), is given in equation (1):

$$u(j) = \frac{\sum_k \lambda_k(j)}{TA(j)} \qquad (1)$$

where λ_k(j) is the workload caused by product k at resource j, computed using the
routing, demand and processing time information stored in Routing, Product, Equipment
and Operation, and TA(j) is the total available time at resource j. If u(j) ≥ 1, the system
is infeasible, meaning that a steady-state analysis is not possible.
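A direct transcription of equation (1), with made-up numbers, shows how cheaply feasibility can be checked (a sketch of ours, not the QNA code):

```python
# Equation (1): utilization of resource j as total workload over available time.
def utilization(workloads: dict, total_available_time: float) -> float:
    """u(j) = sum_k lambda_k(j) / TA(j); raises if the node is infeasible."""
    u = sum(workloads.values()) / total_available_time
    if u >= 1.0:
        raise ValueError("u(j) >= 1: infeasible, no steady state exists")
    return u

# Workload of product k at j = arrival rate of k times its processing time at j
# (hypothetical values: hours of work against 100 available hours).
print(utilization({"P1": 40.0, "P2": 25.0}, total_available_time=100.0))  # 0.65
```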
The prompt computation of equation (1) is another advantage of QN over simulation. For
example, it is not easy to detect the model’s feasibility in a large-scale simulation model
development due to the data and modeling complexity. Hence, the model is likely to
be developed without full consideration of feasibility, which often results in considerable
WIP accumulation in a queue. The major performance measures in the GOQN are
approximated using results for the M/M/c queue, for which exact solution forms
are known. For example, the expected waiting time for any product at resource j,
E(W_q | GI/G/c), is approximated using the waiting time of the M/M/c queue as in equation (2):

$$E(W_q \mid GI/G/c) \approx \frac{C_{aj} + C_{sj}}{2}\, E(W_q \mid M/M/c) \qquad (2)$$

where C_aj and C_sj are the squared coefficients of variation of the interarrival time and service
time, respectively. Again, the Product and Equipment provide values for this
computation. Other node-level performance measures can be computed using Little's
(1961) formula once E(W_q | GI/G/c) is known. The key aggregation procedure is to
compute the system cycle time, given by:
$$E(CT^k) = \sum_j E(N_j^k)\,\bigl( E(W_q \mid GI/G/c) + s^k(j) \bigr) \qquad (3)$$
where E(N_j^k) and s^k(j) represent the expected number of visits to resource j and the
processing time at j for product k, respectively. If the lot size is considered, the processing
time at resource j can be represented as:

$$s^k(j) = st(j) + Q^k\, t^k(j) \qquad (4)$$
where st(j), Q^k and t^k(j) denote the lot-size-independent setup time, product k's lot size
and the processing time of an individual piece in the lot at resource j, respectively. That is,
by using equations (3) and (4), the impact of setup time and lot size on the cycle time can be
estimated. Readers are encouraged to refer to Bitran and Morabito (1996) for a detailed
computation procedure regarding all terms in equations (1) and (3).
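To show how equations (2)-(4) fit together, here is a compact sketch (ours, not the paper's software; the exact E(W_q | M/M/c) term comes from the standard Erlang-C formula, and all data are hypothetical):

```python
def mmc_wait(arrival_rate: float, service_rate: float, c: int) -> float:
    """Exact E(Wq | M/M/c) via the Erlang-C formula."""
    rho = arrival_rate / (c * service_rate)
    assert rho < 1.0, "unstable node"
    a = arrival_rate / service_rate
    b = 1.0                               # Erlang-B recursion, B(0) = 1
    for k in range(1, c + 1):
        b = a * b / (k + a * b)
    erlang_c = b / (1.0 - rho + rho * b)  # probability an arrival must wait
    return erlang_c / (c * service_rate - arrival_rate)

def gi_g_c_wait(ca2: float, cs2: float, mmc_wq: float) -> float:
    """Equation (2): E(Wq | GI/G/c) ~ ((C_aj + C_sj) / 2) * E(Wq | M/M/c)."""
    return (ca2 + cs2) / 2.0 * mmc_wq

def lot_processing_time(setup: float, lot_size: int, piece_time: float) -> float:
    """Equation (4): s_k(j) = st(j) + Q_k * t_k(j)."""
    return setup + lot_size * piece_time

def system_cycle_time(nodes) -> float:
    """Equation (3): sum over j of E(N_j^k) * (E(Wq_j) + s_k(j))."""
    return sum(visits * (wq + s) for visits, wq, s in nodes)

# One product visiting a single one-server node once (times in hours):
wq = gi_g_c_wait(0.3, 0.3, mmc_wait(arrival_rate=0.01, service_rate=0.02, c=1))
s = lot_processing_time(setup=15.0, lot_size=14, piece_time=5.0)
print(system_cycle_time([(1.0, wq, s)]))  # waiting time plus lot processing time
```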
When G_IDEF3 has the same structure as G_q, i.e. when each resource is used only once
across G_IDEF3, each UOB is considered a single node in G_q and the sojourn time
computed by equation (3) equals the activity cycle time of the UOB; in this case, no
modeling view conversion is needed.
4. Prototype software development
Software “SmartQueue” was developed to implement the proposed concept and
framework. It provides a user-friendly graphic interface for the IDEF3 process
descriptions and the QN analysis.
The QN diagram is built from an IDEF3 schematic diagram, and all IDEF3 syntax
is reused for the QN diagram. For example, an XOR junction and UOB rectangle are
reused to represent the probabilistic routing and a resource node, respectively. All
artifacts constructed in “SmartQueue” are stored in a pool for reuse. It also allows users
to build their own sub-QN (template) in a library. Users can retrieve this template
when they want to expand the QN model or make it more detailed. Once this template
is used, the users are supposed to connect the template with the existing QN.
Figure 4 shows a screenshot of a template with its entity queue information window,
which is part of the QN graphics module. Note that a resource name is represented
inside each rectangle.
input data for each product at each node in a template, which includes:
• the number of servers;
• the mean service time;
• the variance of the service time;
• the external arrival rate; and
• the variance of the external interarrival time.
In addition to this input data, the node type information should also be specified for QN
analysis. For example, any arriving product enters the template only through input
node(s) and leaves it through output node(s). The input node and all other nodes except
the output node(s) are considered transient nodes, and any output node is considered
an absorbing node. The template is connected to the existing QN through input and
output nodes. Once all information is provided, “SmartQueue” performs QN
analysis based on the GOQN theory.
5. Case studies
A machining shop was used as a case study to implement the proposed concept with
“SmartQueue” software. Once we obtained the results with “SmartQueue,” we developed
the simulation model from the same database used in “SmartQueue” to compare the
performance of the QNA with the performance measures in the simulation. The system
cycle time – the time each product spends in the system – is used as a major criterion.
The machining shop produces gearboxes used in automobiles. The shop handles
about 50 different part types of gearboxes manufactured through various operations,
including metal cleaning, cutting, lining, drilling, grinding, welding, pressing, heat and
chemical treatment, and inspection.

Figure 4. Template and node information in “SmartQueue”

Among these, the shop itself performs only metal cutting,
drilling, grinding, milling and pressing-related operations, while vendors and other shops
perform the remaining operations such as cleaning, welding, and heat and chemical
treatment. The shop manager wants to reduce the system cycle time via setup time
reduction and re-layout of resources such as equipment and operators. New equipment
may be purchased if required for setup time reduction. However, before making any
decision, they decided to analyze the current shop performance as a first step.
Since the IDEF3 syntax in “SmartQueue” can provide a visualized process model, it
was used to improve communications between shop managers and the project team.
Through the IDEF3 model, we captured 35 different equipment types performing all
the operations for 50 part types across all the facilities, each of which will eventually
correspond to an individual node in the GOQN. Inside the machining shop,
11 equipment types out of 35 were used to provide diverse operations. It was also
observed that the routing between these equipment types was not continuous. For example,
the products may leave and revisit the machining shop in the middle of the whole
manufacturing process, since most of the cleaning and chemical-related operations are
performed at other shops and vendors. Eventually, all the finished products go through
the non-destructive test (NDT) operation, and are delivered to the final assembly shop
located in another area if the NDT result is successful. Otherwise, products need
additional steps or they are destroyed. Since we focused on the machining shop
analysis, all other operations beyond this shop were considered as a time-holding block
with an infinite capacity.
Table II shows the major equipment information in the shop: the average setup time
per lot and the run time per individual piece in a lot. It should be noted that some
information was masked to protect proprietary company data. This shop operates two
8-hour shifts per day, in which 1 hour is used for lunch or break. It also deploys five
different operator teams responsible for operating the equipment as seen in the last
column. The deburr and press teams have two members while all others have one
member per shift. Figure 5 shows the annual demand distribution for all parts. The
minimum, average and maximum values are 20, 450.52 and 5,406 units, respectively.
The lot size for each part varies from 3 to 98, with an average of 14 units. To accommodate
the variations existing in the demand and process time, the squared coefficient of
variation for interarrival time and process time is assumed to be 0.3.
Once all the data were collected, the QNA in “SmartQueue” was executed, and
the corresponding simulation model was created using the Enterprise-Dynamics
simulation library (Enterprise Dynamics, 2005) from the relational database. It was
observed that a single simulation run took about 15 minutes for a two-year run length.

Table II. Equipment information
Equipment       No. of equipment  Setup time/lot (h)  Run time/piece (h)  Operators
Auto drill      1                 15.10               8.20                Auto drill
Press           1                 23.75               4.75                Press
Booth           2                 3.38                10.07               Deburr
Drill press     1                 4.33                11.67               Press
Lathe           1                 22.37               3.77                Press
NDT             1                 14.48               3.07                Inspector
Semiauto drill  1                 42.83               5.39                Press
Laser cutter    1                 12.80               5.32                Laser
Manual shear    1                 18.33               3.33                Manual shear
Milling         1                 43.00               7.00                Press
Figure 6 shows the comparison result of system cycle time for each part type in which
the cycle time from simulation is the average of five runs to filter out variation.
The average cycle time discrepancy between the two methods was 6.22 percent using
the following formula:
$$\text{Discrepancy (percent)} = \frac{\lvert \text{SmartQueue} - \text{Simulation} \rvert}{\text{Simulation}} \times 100 \qquad (5)$$
The shop managers appreciated the fact that they could directly access the
relational database to develop a simulation model if future modeling objectives required
more detailed analysis than that provided by the QNA. In practice, this knowledge reusability was
considered to provide flexibility in performing an analytical modeling project. Additional
tests were performed to show the effect of resource utilization on the performance of the
QNA. Low-utilization and high-utilization cases were created based on this case study
data. The same comparisons were performed for each case, and the results were
summarized in Table III. Note that the original case study is denoted as the medium
utilization case.

Figure 5. Demand information (annual demand in units by part number)

Figure 6. Cycle time comparison between simulation and SmartQueue (flow time in days
by part number)

Table III. Result comparison
Case    Utilization (percent)  Simulation  SmartQueue  Discrepancy (percent)
Low     28                     5.91        5.84        1.18
Medium  45                     7.56        7.09        6.22
High    68                     8.21        7.45        9.26

According to Table III, the QNA underestimated the system cycle time,
and the discrepancy between the two methods increased as the resource utilization increased.
These results are consistent with Desruelle and Steudel (1996) and other research.
Therefore, users need to consider the requirements and objectives of the modeling project
before deciding on an approach. Alternatively, users can use the QNA first, and then build
the simulation model if additional analytical requirements arise.
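As a quick sanity check (our own snippet, using only numbers reported above), applying equation (5) to the Table III cycle times reproduces the reported discrepancies:

```python
# (Simulation, SmartQueue) system cycle times from Table III.
cases = {"Low": (5.91, 5.84), "Medium": (7.56, 7.09), "High": (8.21, 7.45)}
for name, (simulation, smartqueue) in cases.items():
    discrepancy = abs(smartqueue - simulation) / simulation * 100.0
    print(f"{name}: {discrepancy:.2f}%")  # Low: 1.18%, Medium: 6.22%, High: 9.26%
```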
6. Conclusions and further studies
The concept of transforming an IDEF3 model into a QN model was presented and
implemented through “SmartQueue” in order to show the feasibility of improving
knowledge reusability and adding quantitative analysis capability to the domain knowledge
descriptions captured in IDEF3. Within the scope of the case study, the accuracy of
the QNA compared to simulation was reasonable in the case of moderate resource
utilization. The integration and knowledge reusability through an independent relational
database were considered to improve flexibility in choosing an appropriate analytical
approach, since the time and effort of developing and executing simulation models
can be avoided if the corresponding QN satisfies the objectives of the modeling work. More
research may be required to improve the capability of the QNA, e.g. computing an optimal
lot size for each part to minimize cycle time. Another possible extension of this research is
to develop a hybrid approach that integrates the advantages of both QN and simulation.
References
Bitran, G. and Morabito, R. (1996), “Open QN: optimization and performance evaluation models
for discrete manufacturing systems”, Production and Operations Management, Vol. 5 No. 2,
pp. 163-93.
Desruelle, P. and Steudel, H.J. (1996), “A queuing network model of a single-operator
manufacturing work cell with machine/operator interference”, Management Science,
Vol. 42 No. 4, pp. 576-90.
Enterprise Dynamics (2005), Reference Guide 4Dscript, Enterprise Dynamics, Maarssen.
KBSI (1995), ProSime Automatic Process Modelling for Windows, User’s Manual and Reference
Guide Ver. 2.1, Knowledge Based Systems Inc., College Station, TX.
Law, A.M. and Kelton, W.D. (1991), Simulation Modeling and Analysis, McGraw-Hill, New York, NY.
Little, J. (1961), “A proof for the queuing formula: L = λW”, Operations Research, Vol. 9, pp. 383-9.
Mayer, R., Menzel, C.P., Painter, M., Dewitte, S., Blinn, T. and Perakath, B. (1995), “Information
integration for concurrent engineering (IICE) IDEF3 process description capture method
report”, Interim Technical Report, Knowledge Based Systems Inc., College Station, TX.
Mo, J.O.T. and Menzel, C.P. (1998), “An integrated process model driven knowledge based system
for remote customer support”, Computers in Industry, Vol. 37, pp. 171-83.
Resenburg, A.V. and Zwemstra, N. (1995), “Implementing IDEF techniques as simulation
modelling specifications”, Computers & Industrial Engineering, Vol. 29, pp. 467-71.
Shanthikumar, J.G. and Buzacott, J.A. (1984), “The time spent in a dynamic job shop”, European
Journal of Operational Research, Vol. 17, pp. 215-26.
Corresponding author
Ki-Young Jeong can be contacted at: kjeong@scsu.edu