Integration of cost-risk within an intelligent maintenance system
L. Carlander et al. / Procedia CIRP 47 (2016) 66-71
1. Introduction
The UK rail industry is under intense pressure in terms of network capacity, maintenance budgets and asset reliability. The UK faces a particularly difficult challenge in modernising the rail network, as much of the infrastructure is of significant age. Anticipated rises in passenger numbers and operating services will raise the pressure on the capacity of the network [1]. Against this background of rising asset usage, Network Rail is hoping to reduce maintenance costs [2]. To achieve both targets, asset down-time incidents will have to be reduced through the very best practice in asset management. It is expected that increasing usage of autonomous systems will make a significant contribution towards reducing denial of service. While many view autonomous systems in terms of UAV drones or robotic systems, much of the impact from their widespread application will be in the area of software-based decision support or decision making. The AUTONOM project is hoping to deliver much of the framework for such a system.

AUTONOM is funded by EPSRC [3] under the autonomous and intelligent systems programme (AISP) and is also supported by key UK industrialists, including Network Rail. AUTONOM seeks to enable effective asset and maintenance decision-making autonomously in data-rich scenarios.
Uncertainty and risk are an integral part of cost engineering. Uncertainty and risk assessment is used in industry to show more clearly the possible ranges of values. Single point estimates (where a single value is presented as the estimate) can be misleading and give decision makers a false sense of certainty about the estimate. A three-point estimate is a popular method for presenting the least-costly, most-costly and most-likely estimates; however, while it gives information on the range of an estimate, it gives limited information on the shape of the probability distribution function.
If an organization has sufficiently detailed information, then variables can be assigned ranges and, through a process of curve fitting, assigned probability distribution functions. The organization can then perform a Monte Carlo simulation to produce a more accurate estimate. Monte Carlo simulation is considered the industry best practice for dealing with uncertainty in data.
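As a concrete contrast with a single point estimate, the following minimal sketch builds a Monte Carlo cost estimate in Python. It is not taken from the AUTONOM demonstrator: the choice of distributions and every figure in it are invented purely for illustration.

```python
# Minimal Monte Carlo cost-estimate sketch. All input distributions and
# figures are assumptions for illustration, not project data.
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # number of Monte Carlo trials

# Hypothetical input ranges for one maintenance task.
labour = rng.triangular(left=800, mode=1_200, right=2_500, size=N)
materials = rng.triangular(left=300, mode=450, right=900, size=N)
# Denial-of-service costs are long-tailed, so a log-normal is assumed here.
denial_of_service = rng.lognormal(mean=7.0, sigma=1.2, size=N)

total = labour + materials + denial_of_service

# Instead of a single point estimate, report the shape of the outcome.
p20, p50, p80 = np.percentile(total, [20, 50, 80])
print(f"median cost: {p50:,.0f}")
print(f"P20-P80 range: {p20:,.0f} - {p80:,.0f}")
```

Unlike a three-point estimate, the full set of trial outcomes also carries the shape of the distribution, not just its range.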
Cost estimation is not performed in isolation within the AUTONOM project; there is a focus on three technical areas: Data Fusion, Planning and Scheduling, and Cost Analysis. All are part of an integrated strategy that could lead to better decision support for maintenance activities.
Figure 1: AUTONOM integration strategy [4]
The Data Fusion approach consists of gathering suitable data from multiple sources so that the data can be merged to supply inputs to an automated planning and scheduling model. The scheduling model uses a genetic algorithm approach to generate many different solutions and search towards a more optimal solution. This process is made more complicated by the need to schedule multiple maintenance tasks into an ordered list. The schedule is then used by the cost analysis, to which suitable cost engineering best practices are applied. These combined approaches will enable decision making within an integrated framework.
A challenge of this work has been to formulate cost models that can work with the limited information. The data-flows within the demonstrator limit the information available to use in the model calculation, as shown in figure 2.
Figure 2: Cost analysis module I/O analysis
In the reported state of the demonstrator [5], the cost estimates generated were broken down into material costs, labour costs and denial of service costs. However, each of these cost estimates was a single point estimate.
While the data transitions and the initial state of the prototype have been described elsewhere [5], the planned future developments are worth further consideration:
- The planned architecture calls for feedback of results internally; specifically, cost estimates will need to be fed back to the planning/scheduling and used in that process.
- Initial versions of the demonstrator calculated single point estimates; for accuracy's sake this should be expanded into a full Monte Carlo simulation.
This work outlines the contributions towards improving the estimate for denial of service through Monte Carlo simulation. Network Rail is keen to improve its current practices with regard to maintenance activities of its high-value assets throughout its entire rail network. In order to build frameworks and models, case studies are being built that highlight challenges that Network Rail faces and provide outline solutions. When these can be validated, it is anticipated that the continuation work will be to apply the lessons learnt to other project partners' maintenance challenges in the oil and nuclear sectors.
2. Uncertainty analysis
This case study is based on specific chosen maintenance activities that are related to points failures, track circuit failures, track defects, condition of track and many other infrastructure causes. A major gap in this research is to find the potential dependencies between these activities, their costs and the uncertainties in cost.

One aim of this work is to develop a framework to quantify uncertainty and integrate the risk of maintenance activities within a cost model. The cost model on which the study is based generates the cost of labour, materials and denial of service related to maintenance activities. Uncertainty measurements should be applied to all these input parameters. A methodology is proposed for both uncertainty quantification and integration of the uncertainty model into the general cost model.
Figure 3: Methodology of the uncertainty integration project
Figure 3 shows the methodology chosen for this paper. The paper relates relevant points from the literature to the research subject and the identified research gap. The identified challenges were explored using a mind-map approach.
2.1 Uncertainty and Risk
Uncertainty and risk are two related terms which are often confused, and it is worthwhile to clarify their meaning. According to ISO 31000, “risk is often expressed in terms of the consequences of an event (including changes in circumstances) and the associated likelihood of occurrence”, while “uncertainty is the state, even partial, of deficiency of information related to, understanding or knowledge of an event, its consequence, or likelihood” [6].
2.2 Uncertainty classification and quantification
According to Milliken's work [7], the first step in analysing uncertainty is to identify its nature. It can be:
- state uncertainty, which is related to the impossibility of assigning probabilities to the likelihood of future events;
- response uncertainty, which is related to the outcome of a decision;
- effect uncertainty, which reflects a lack of understanding of the impact of events or changes on the studied environment [8].
The work performed by Erkoyuncu [9] categorises the different kinds of uncertainty measurement techniques as deterministic, qualitative or quantitative.
Figure 4: Uncertainty assessment methods [9]
It is stated that all these techniques are very efficient, with a particular focus on quantitative techniques such as probability theory, mathematical and statistical techniques, and Monte Carlo simulation, as well as deterministic methods like sensitivity analysis. According to the literature [9], quantitative approaches are known as the best means to provide suitable information to facilitate decision making. Variables can be expressed using probability distributions. The Monte Carlo method is based on probability theory and is used to explore complex probabilistic situations; it results in a suitable approximation of the studied system's probability distribution.
The data used in this study was taken from the TRUST system, which logs delays on the network. Each delay is attributed to a particular fault and to an owner. Typically this results in fines being passed between Network Rail and the train operating companies (TOCs). The data available notes to whom the fault was attributed, the scale of the delay and resulting fine, and the cause-code of the delay. The event of interest is therefore a delayed train, which can be further broken down into different events for each code: a cracked-rail event can be differentiated from a flooding type of event. Analysis of some of the codes that are used in the integrated demonstrator, and quantification of the uncertainty, is achievable. This work is a rudimentary analysis of risk, as different failure events can be quantified in terms of consequences and impact [6], but likelihood cannot be estimated from the provided data.
3. Methodology
A methodology for quantification of uncertainty and integration within the demonstrator's cost models is presented in Figure 5.

Figure 5: Methodology for uncertainty quantification and integration
Generally the data forming the histograms had high numbers of low-value faults, but a long "tail" of low-probability but high-cost events. This makes deciding bin size a more complex proposition. Generally, bin size started small and was increased until the number of unpopulated bins situated between occupied bins was minimised. This desire to minimise gaps could result in bins of too great a scale, and therefore this process currently relies on human judgement. However, a process based on mathematical rules would be simple to code when the framework is ready for full integration within the AUTONOM project framework; one possible codification is sketched below.
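This sketch assumes the stopping rule is "no unpopulated bin between occupied bins"; the growth factor and iteration cap are arbitrary choices for illustration, not the project's validated procedure.

```python
# Bin-widening heuristic: start with a small bin width and grow it until
# no empty bin lies between the first and last occupied bins.
import numpy as np

def choose_bin_width(data, start_width, growth=1.25, max_iter=50):
    """Return the first bin width with no gaps between occupied bins."""
    width = start_width
    for _ in range(max_iter):
        edges = np.arange(data.min(), data.max() + width, width)
        counts, _ = np.histogram(data, bins=edges)
        occupied = np.nonzero(counts)[0]
        # Gap test: every bin between the first and last occupied bin
        # must itself be occupied.
        if occupied.size and np.all(counts[occupied[0]:occupied[-1] + 1] > 0):
            return width
        width *= growth
    return width
```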
Once histograms of the delay-minutes data are created, the methodology directs users to examine the data first for symmetrical distributions and, secondly, if asymmetry is found, for correlation with known (and often used) distributions. If both questions are insufficient, the process indicates that more than one distribution may be needed to explain the data. This would correspond to a situation where a hidden factor is at work. In this situation, knowing the data contains a hidden factor could inspire further analysis, and using two curves to describe the data gives closer correlation between the Monte Carlo simulation and the available data.
The coefficient of determination (R²) is a number between 0 and 1 that measures the degree of correlation between two curves. It is used as the metric of choice for determining which curve form fits the data.
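The sketch below illustrates this fit-and-rank step. The candidate set (log-normal, Weibull, gamma), the automatic binning and the zero-fixed location parameter are assumptions made for illustration, not the demonstrator's actual configuration.

```python
# Fit candidate distributions to delay data and rank them by R^2 against
# the empirical histogram, mirroring the fit procedure described above.
import numpy as np
from scipy import stats

def r_squared(observed, predicted):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    ss_res = np.sum((observed - predicted) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def best_fit(delays, candidates=(stats.lognorm, stats.weibull_min, stats.gamma)):
    density, edges = np.histogram(delays, bins="auto", density=True)
    centres = 0.5 * (edges[:-1] + edges[1:])
    scores = {}
    for dist in candidates:
        params = dist.fit(delays, floc=0)  # delay data starts at zero
        scores[dist.name] = r_squared(density, dist.pdf(centres, *params))
    return max(scores, key=scores.get), scores
```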
The input data is the denial of service (DoS) delays, which are correlated with the cost. The fault types examined are those related to the maintenance activities studied throughout the demonstrator [5] created for the AUTONOM project.
4. Results
Distributions of costs and delay minutes are analysed using the described methodology and modelled using the probability distribution(s) found to fit best. The work consists of finding the best fit between statistical histograms and known probability distributions: the closer the coefficient of determination is to 1, the more the curves are correlated. Figure 6 shows the results of such a method, with a blue curve and a red curve representing a frequency distribution based on the data and the chosen best-fit probability distribution, respectively.
Figure 6: Correlation between distribution of delay data and best fit probability distribution
The coefficient of determination between these two curves is used as the metric for curve suitability. The resulting best-fitting probability distributions can be applied as inputs to a Monte Carlo simulation.
Figure 7: Histogram of data and calculated Probability Distribution Function
While some of the analysed faults were suitably approximated by a single probability distribution function (see figure 7), the framework was open to the possibility of using two probability distribution functions to fit the available data more accurately.
Figure 8: Combined Probability Distribution Functions
The use of Weibull and PERT distributions to more accurately fit the behaviour of the available data is shown in figure 8.
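As an illustration of how such a two-component fit can be used generatively, the sketch below draws each sample from either a Weibull body or a PERT tail. The mixture weight and all parameters are invented, and the PERT component is implemented as a rescaled Beta distribution (a standard construction, assumed here rather than taken from the paper's fitted models).

```python
# Two-component mixture: most samples come from a Weibull "body", the
# rest from a PERT "tail" of long, rare delays. All figures are invented.
import numpy as np

rng = np.random.default_rng(0)

def sample_pert(a, m, b, size, rng):
    """PERT(min=a, mode=m, max=b) as a Beta distribution rescaled to [a, b]."""
    alpha = 1 + 4 * (m - a) / (b - a)
    beta = 1 + 4 * (b - m) / (b - a)
    return a + (b - a) * rng.beta(alpha, beta, size)

def sample_mixture(n, w=0.7):
    """With probability w draw from the Weibull body, else the PERT tail."""
    from_weibull = rng.random(n) < w
    body = 30 * rng.weibull(1.5, n)           # shape k=1.5, scale 30 minutes
    tail = sample_pert(60, 180, 600, n, rng)  # long, rare delays
    return np.where(from_weibull, body, tail)

delays = sample_mixture(100_000)
```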
Figure 9: Cumulative probability curves
Using the same data as in figure 8, the resulting cumulative probability curve in figure 9 reveals that the data has a significant range when compared to the cumulative probability curves generated from a log-normal probability distribution (see figure 10).
Figure 10: Cumulative probability of log-normal function
Figure 10 shows the P20-P80 range (the range bounded by cumulative probabilities of 20% and 80% respectively). Using P20 to P80 as indicative of the range of values reduces the influence of long tails from some of the probability distributions and makes range values more sensible as a metric of interest. We see that the cumulative probability from multiple probability distribution functions can result in a wider P20-P80 range.
5. Integration
Integration of the cost-risk information within the broader AUTONOM framework will be challenging. Prior to this work, the existing cost model generated single point estimates. The integration architecture demanded a feedback of cost information to the schedule modeller. Using a single-figure cost estimate suits the genetic algorithm approach, but as the real cost information is often more complex, deciding what information to pass back towards the schedule model becomes challenging.
Integrated genetic algorithm and Monte Carlo methodologies are mentioned within the available literature and applied to a range of problems. The work of Marseguerra and Zio [10] provides an example of an integrated Monte Carlo and genetic algorithm approach to maintenance. This very relevant example provides illustrative case studies of a chemical process plant and the optimisation of maintenance costs. The claim that a very modest number of simulation runs (at most 1000, but fewer for many cases) is adequate is not clearly demonstrated to be true. Subsequent work, Marseguerra et al. (2002) [11], used only 100 repetitions for the Monte Carlo model. Such small sampling is said to be justified by the fact that the GA provides more repetition; the individual solutions might not carry much confidence, but the number of times a solution is repeatedly found helps add confidence. This team released similar results where the three best solutions found were re-examined using a repeat of the Monte Carlo simulation with 1×10^6 trials, and were able to satisfy themselves that the solution “seems to actually deserve the title of being ‘optimal’ or ‘near optimal’” [12].
While much of this type of work was limited by the computational power available in 1999 and 2002 [10-12], this constraint will not be as strong now. We could reasonably run each GA solution using an MC simulation that uses many more trials. As this trial number is going to be strongly linked with the accuracy of the overall GA-MC optimisation, we will have to select the number of trials used carefully to maximise both trial accuracy and speed of result delivery.
Our work will have one significant additional complication not faced by previous GA-MC attempts: our denial of service costs (or "downtime penalty", as the previous GA-MC literature describes it [11]) cannot be easily fixed at a single value. Additionally, a Monte Carlo cost model will generate lots of information, such as most-likely cost, cost-range and highest-possible cost. Dependent upon the application, the user may wish to take the option that is more expensive but has a lower cost-range: a lower cost-risk option.
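The sketch below shows one possible shape for that coupling. It is a toy model, not the AUTONOM architecture: the inner cost model, its distributions and the risk weighting are all invented. Each candidate schedule is scored by an inner Monte Carlo loop, and the scalar fed back to the GA blends the median cost with the P20-P80 spread so that cost-risk information is not discarded.

```python
# Toy GA-MC coupling: score each candidate maintenance schedule with an
# inner Monte Carlo loop and return a single risk-adjusted fitness value.
import numpy as np

rng = np.random.default_rng(7)

def mc_cost(schedule, trials=1_000):
    """Monte Carlo total cost of a candidate schedule (invented cost model)."""
    costs = np.zeros(trials)
    # Each task's cost is long-tailed and grows with its position in the
    # schedule, standing in for escalating denial-of-service penalties.
    for position, task_scale in enumerate(schedule, start=1):
        costs += rng.lognormal(mean=np.log(task_scale * position),
                               sigma=0.5, size=trials)
    return costs

def fitness(schedule, risk_weight=0.5):
    """Lower is better: median cost plus a penalty on the P20-P80 spread."""
    costs = mc_cost(schedule)
    p20, p50, p80 = np.percentile(costs, [20, 50, 80])
    return p50 + risk_weight * (p80 - p20)

# A GA would mutate and recombine schedules against this fitness; here we
# simply compare two hand-made candidate orderings.
print(fitness([3.0, 1.0, 2.0]), fitness([1.0, 2.0, 3.0]))
```

Whether the GA should see the median, the spread, or a blend of the two is exactly the metric-selection question raised in the conclusions below.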
6. Conclusions
This paper has provided details on the methodology to apply in order to find suitable cost uncertainty quantification, and the steps towards integration into an existing model. There are many cost estimating techniques that can be applied, such as probability theory, regression analysis or Monte Carlo simulation. Many of them are useful when designing a framework for uncertainty quantification and integration, and when applying it to maintenance cost models.
Due to the integration aspect of this project, the difficulty of integrating Monte Carlo simulations with genetic algorithms has been discussed. There remain several issues to resolve, both in the architecture of the GA/MC integration and in the technical difficulties of the integration. Selection of the metric of choice (either range or mean value of the cost estimate) will have to be completed after validation trials.
Issues of computational efficiency of the integrated genetic algorithm and Monte Carlo method are also of interest, as both are considered computationally expensive. We might find that the historical reliance on very small trial numbers for MC simulations is no longer needed. If combinatorial explosion remains a challenge, future work will be directed towards examining what processes can be used to make the Monte Carlo method more efficient, thereby reducing the computational effort required by the AUTONOM system as it deals with the many maintenance tasks required to keep the rail network performing.
Acknowledgements
This research is performed as part of the "AUTONOM: Integrated through-life support for high-value systems" project. This project is funded by EPSRC and supported by Network Rail, BAE Systems, Schlumberger, DSTL, NNL, Sellafield and UKSA.
References
[1] The Future Railway. Technical Strategy Leadership Group. 2012. http://www.futurerailway.org/RTS/About/Documents/RTS%202012%20The%20Future%20Railway.pdf
[2] Network Rail. Network Rail's activity and expenditure plans. 2014. http://www.networkrail.co.uk/browse%20documents/strategicbusinessplan/cp5/supporting%20documents/our%20activity%20and%20expenditure%20plans/maintenance%20expenditure%20summary.pdf
[3] http://gow.epsrc.ac.uk/NGBOViewGrant.aspx?GrantRef=EP/J011630/1
[4] Kirkwood, L., Shehab, E., Baguley, P., Amorim-Melo, P., Durazo-Cardenas, I. Challenges in cost analysis of innovative maintenance of distributed high-value assets. Procedia CIRP 2014; 22: 148-151.
[5] Durazo-Cardenas, I., Starr, A., Turner, C., Kirkwood, L., Bevilacqua, M., Baguley, P., Shehab, E. An integrated condition-based, planning and cost-weighted rail intelligent maintenance system. Railway Engineering 13th International Conference. 2015.
[6] ISO 31000:2009. Risk management - Principles and guidelines.
[7] Milliken, F.J. Three Types of Perceived Uncertainty about the Environment: State, Effect, and Response. The Academy of Management Review. 1987; 12: 133-143.
[8] Ashill, N., Jobber, D. The effects of the external environment on marketing decision-maker uncertainty. Journal of Marketing Management. 2013; 30: 268-294.
[9] Erkoyuncu, J.A. Cost Uncertainty Management and Modelling for Industrial Product-Service Systems. Cranfield University. PhD Thesis. 2011.
[10] Marseguerra, M., Zio, E. Optimizing maintenance and repair policies via a combination of genetic algorithms and Monte Carlo simulation. Reliability Engineering and System Safety 2000; 68: 69-83.
[11] Marseguerra, M., Zio, E., Podofillini, L. Condition-based maintenance optimization by means of genetic algorithms and Monte Carlo simulation. Reliability Engineering and System Safety 2002; 77: 151-166.
[12] Cantoni, M., Marseguerra, M., Zio, E. Genetic algorithms and Monte Carlo simulation for optimal plant design. Reliability Engineering and System Safety 2000; 68: 29-38.