Grey-Simulated Annealing Approach for Solving the Multiresponse Problem in Taguchi Method

Nishanth G., Rajmohan M.
Department of Industrial Engineering, Anna University, Chennai, 600025, India
e-mail: nishmech@gmail.com
Abstract—The Taguchi method aims at reducing response variation from the target so as to increase yields and product lifetime, reduce defects, and improve performance. However, the Taguchi method is only efficient for optimizing a single quality response. This research, therefore, proposes a Grey relational analysis – Simulated Annealing (G-SA) approach for solving the multiresponse problem in the Taguchi method. The proposed approach includes two phases. The first phase utilizes grey relational analysis to calculate grey relational coefficients for the normalized S/N ratio values of all combinations of factor levels. The second phase employs simulated annealing to decide the optimal combination of factor levels for the multiresponse problem. Here, the multiple responses are converted into a single response as the sum of the responses, each multiplied by its optimal weight (the optimal weights are determined using Simulated Annealing), and the resulting single-response problem is then solved by the Taguchi method. A case study is provided for illustration. In conclusion, the G-SA approach may provide great assistance to practitioners in solving the multiresponse problem in real-life applications of the Taguchi method.

Keywords— Grey Relational Analysis, Simulated Annealing, Taguchi method, multiresponse problem.
I. INTRODUCTION

In practice, a great deal of engineering time is spent generating information about how different design parameters affect performance under different usage conditions. The Taguchi [1] method is a widely accepted approach for robust design. The overall goal of robust design is to find settings of the controllable factors so that the response is least sensitive to variations in the noise variables, while still yielding an acceptable mean level of the response. Generally, a process's or a product's quality response can be divided into three main types: smaller-the-better (STB), nominal-the-best (NTB), and larger-the-better (LTB). To optimize a quality response by the Taguchi method, an orthogonal array (OA) is utilized to reduce the number of experiments while retaining adequate reliability. The signal-to-noise (S/N) ratio is then employed as a quality measure to decide the optimal combination of factor levels.
In today's highly competitive markets, however, customers are concerned about more than one quality response. The Taguchi method has been extensively employed in manufacturing to robustly design a product or process with only one quality response [2-3]. Recently, the multiresponse problem in the Taguchi method has received increasing research attention. For example, Phadke [4] employed pure engineering judgment to optimize three quality responses concurrently in a very-large-scale integrated circuit manufacturing process. However, human judgment increases uncertainty in the decision-making process. Pignatello [5] discussed a manufacturing process with five responses, using data-driven transformations for each response variable in a multiple-univariate, or one-at-a-time, manner; regression-based techniques were then utilized to determine tentative optimal factor levels. Also, Reddy et al. [6] employed regression-based approaches together with goal programming to optimize several responses simultaneously. Regression approaches, however, increase the complexity of the computational process. Thereafter, Antony [7] utilized principal component analysis (PCA) to transform the multiple responses into a few uncorrelated ones, which were then employed for solving the multiresponse problem. However, PCA rests on some rigid assumptions, such as the error terms being multivariate normally distributed random variables, which may limit its use in practical applications. Jeyapaul et al. [8] utilized a genetic algorithm to determine a weight for the S/N ratio of each response; the weighted sum of S/N ratios was then used to decide the optimal factor levels. However, the genetic algorithm is a search heuristic that provides near-optimal solutions for complex search spaces, such as scheduling and transportation problems.
Grey relational analysis and Simulated Annealing (SA) have been broadly used for optimizing manufacturing processes and products. This research, therefore, provides a combined approach of Grey relational analysis and Simulated Annealing for solving the multiresponse problem in the Taguchi method. Relevant background on Grey relational analysis and Simulated Annealing is introduced in Section II. The proposed approach is outlined in Section III. A case study is provided for illustration in Section IV. Finally, conclusions are drawn in Section V.
II. RELEVANT BACKGROUND
A- Grey Relational Analysis
Grey relational analysis, proposed by Deng [9], is a method of measuring the degree of approximation among sequences according to the grey relational grade. Grey relational analysis is part of grey system theory, which is suitable for resolving the complicated interrelationships between multiple factors and variables. The major advantage of grey theory is that it can handle both incomplete information and unclear problems very precisely. Grey relational analysis has been widely employed for solving the multiresponse problem in many manufacturing applications [10-13].
Grey analysis uses a specific concept of information. It defines situations with no information as black, and those with perfect information as white. However, neither of these idealized situations ever occurs in real-world problems; situations between these extremes are described as being grey, hazy, or fuzzy. A grey system is therefore one in which part of the information is known and part is unknown. With this definition, information quantity and quality form a continuum from a total lack of information to complete information, from black through grey to white. Since uncertainty always exists, one is always somewhere in the middle, somewhere between the extremes, somewhere in the grey area.
Grey analysis then comes to a clear set of statements about system solutions. At one extreme, no solution can be defined for a system with no information. At the other extreme, a system with perfect information has a unique solution. In the middle, grey systems give a variety of available solutions. Grey analysis does not attempt to find the best solution, but it does provide techniques for determining a good solution, an appropriate solution for real-world problems.
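To make the mechanics concrete before the formal methodology, the following minimal sketch (the standard grey relational computation, not code from the paper) normalizes an n-runs by k-responses matrix, forms deviations from the ideal sequence, and returns the grey relational grade of each run. The distinguishing coefficient $\zeta = 0.5$, the equal response weights, and the example values are conventional assumptions, not values from this study.

```python
import numpy as np

def grey_relational_grade(y, kind="LTB", zeta=0.5, weights=None):
    """Grey relational grade for an n-runs x k-responses matrix y.
    kind: "LTB" (larger-the-better) or "STB" (smaller-the-better)
    normalization, applied column-wise; assumes each column varies."""
    y = np.asarray(y, dtype=float)
    y_min, y_max = y.min(axis=0), y.max(axis=0)
    if kind == "LTB":
        z = (y - y_min) / (y_max - y_min)    # best (largest) value -> 1
    else:
        z = (y_max - y) / (y_max - y_min)    # best (smallest) value -> 1
    delta = np.abs(1.0 - z)                  # deviation from the ideal sequence
    coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    k = y.shape[1]
    w = np.full(k, 1.0 / k) if weights is None else np.asarray(weights)
    return coeff @ w                         # one grade per experimental run

# Example: 4 runs, 2 responses already expressed as S/N ratios
# (larger-the-better); the numbers are illustrative only.
sn = np.array([[40.1, -2.3], [42.5, -1.8], [39.0, -2.9], [41.7, -2.0]])
print(grey_relational_grade(sn, kind="LTB"))
```

Under this convention, the run with the highest grade is the strongest candidate factor-level combination.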
Based on the above introduction, this research proposes a combined approach using grey relational analysis and Simulated Annealing for solving the multiresponse problem in the Taguchi method.
B- Simulated Annealing

Simulated annealing was developed in the early 1980s by Scott Kirkpatrick, along with a few other researchers, and was originally applied to improve the design of integrated circuit (IC) chips. Simulated annealing simulates the physical process of annealing. Annealing is the metallurgical process of heating a solid and then cooling it slowly until it crystallizes. The atoms of the material have high energies at very high temperatures, which gives them a great deal of freedom to restructure themselves. As the temperature is reduced, the energy of the atoms decreases. If the cooling is carried out too quickly, many irregularities and defects appear in the crystal structure; such overly rapid cooling is known as quenching. Ideally, the temperature should be decreased at a slower rate. A slower descent to the lower energy states allows a more consistent crystal structure to form, and this more stable crystal form makes the metal much more durable.
Simulated annealing seeks to emulate this process. It
begins at a very high temperature, where the input values
are allowed to assume a great range of random values. As
the search progresses, the temperature is allowed to fall,
restricting the degree to which the inputs are allowed to
vary. This often leads the simulated annealing algorithm to
a better solution, just as a metal achieves a better crystal
structure through the actual annealing process.
In physical annealing, a solid, usually metal, is first
heated and then allowed to cool slowly. As the solid cools, a
change of state takes place in which individual atoms
arrange themselves into a regular array corresponding to a
minimum-energy arrangement. Such an arrangement cannot
easily propagate throughout the solid if the cooling occurs
quickly, and boundaries between different ``domains of
regularity'' occur. Such boundaries introduce potential fault
lines along which a fracture is most likely to occur when the
material is stressed. To avoid such potential failures, metal
is often cooled slowly, in a process known as annealing, to
permit re-arrangements at these boundaries so that the same
local minimum-energy arrangement occurs throughout the
material.
This process is imitated in numerical optimization. The
idea originated with Metropolis et al. in 1953, when they
were trying to simulate such thermodynamic systems. Given
a potential state change from one with energy E1 to one
with energy E2, they chose to accept it with probability

$$\mathrm{prob(accept)} = \min\left\{1,\ \exp\left(-\frac{E_{2} - E_{1}}{kT}\right)\right\}$$
where T is the ``temperature'' and k is a constant (in the
physical application, Boltzmann's constant). In words, this
rule always accepts a change if it moves to a state of lower
energy, but sometimes accepts the change even though the
system moves to a state with higher energy. Note that for
small T there is a very small probability of accepting an
unfavorable move, while for large T the probability of
acceptance can be quite high. For example, with E2 − E1 = 2
and k = 1, the acceptance probability is exp(−2/10) ≈ 0.82
at T = 10 but only exp(−2/0.5) ≈ 0.02 at T = 0.5. With this
in mind, we now describe the requirements for applying the
same ideas to a more general minimization problem.
We need
1. A coding of the possible system states;
2. An objective function which we are trying to
minimize;
3. A mechanism for proposing random changes to the
state of the system; and
4. A control parameter T, analogous to the
temperature above, which governs the probability of
acceptance of the proposed change, together with an
annealing schedule specifying how the temperature is to be
lowered.
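The four requirements above translate almost directly into code. The following is a minimal sketch of a generic simulated annealing loop in Python, assuming a real-valued state, a user-supplied objective function, Gaussian perturbations as the change mechanism and a geometric cooling schedule; all names and parameter values here are illustrative, not part of the original method.

```python
import math
import random

def simulated_annealing(objective, initial_state, neighbor,
                        t_start=1.0, t_end=1e-4, alpha=0.95, moves_per_temp=50):
    """Minimize `objective` using the Metropolis acceptance rule."""
    state = initial_state
    energy = objective(state)
    t = t_start
    while t > t_end:                      # annealing schedule (requirement 4)
        for _ in range(moves_per_temp):
            candidate = neighbor(state)   # random change mechanism (requirement 3)
            delta = objective(candidate) - energy
            # Always accept downhill moves; accept uphill moves with prob exp(-delta/t)
            if delta <= 0 or random.random() < math.exp(-delta / t):
                state, energy = candidate, energy + delta
        t *= alpha                        # geometric cooling
    return state, energy

# Illustrative usage: minimize a 1-D quadratic with Gaussian perturbations
best, value = simulated_annealing(
    objective=lambda x: (x[0] - 3.0) ** 2,
    initial_state=[0.0],
    neighbor=lambda s: [s[0] + random.gauss(0.0, 0.5)],
)
```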
III. METHODOLOGY
The procedure for the application of Grey relational
analysis and Simulated Annealing (SA) to solve multi-
response problems is given below.
Step 1: Choose an appropriate orthogonal array (OA) for the
problem and design the experiment layout. The selection of
the OA depends on the number of factors (f) and the number
of interactions (if any). The number of degrees of freedom
associated with the experiment must be greater than or
equal to the number of degrees of freedom required for
studying the main and interaction effects.
Step 2: Conduct the experiment by setting levels as per the
selected orthogonal array and obtain the responses ( yij ).
Step 3: Calculate the SN ratio for a given response using
one of the formulae depending upon the type of quality
characteristic.
i. Larger-the-better

$$\eta = -10 \log_{10}\left[\frac{1}{r}\sum_{i=1}^{r}\frac{1}{y_{ij}^{2}}\right] \qquad (1)$$

where r = number of replications and yij = observed response
value, i = 1, 2, ..., n; j = 1, 2, ..., k.
This is applied to problems where maximization of the
quality characteristic of interest is sought, referred to as
larger-the-better type problems.
ii. Smaller-the-better

$$\eta = -10 \log_{10}\left[\frac{1}{r}\sum_{i=1}^{r} y_{ij}^{2}\right] \qquad (2)$$
This is termed a smaller-the-better type problem where
minimization of the characteristic is intended.
iii. Nominal-the-best

$$\eta = 10 \log_{10}\left(\frac{\mu^{2}}{\sigma^{2}}\right) \qquad (3)$$

where

$$\mu = \frac{y_{1} + y_{2} + \cdots + y_{r}}{r}, \qquad \sigma^{2} = \frac{\sum_{i=1}^{r}\left(y_{i} - \bar{y}\right)^{2}}{r - 1}$$
This is called a nominal-the-best type of problem, where
one tries to minimize the mean squared error around a
specific target value. Adjusting the mean to the target by
any method renders the problem a constrained optimization
problem.
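As an aside, Eqs. (1)-(3) are straightforward to compute; the following is a minimal Python sketch, assuming `y` holds the r replicated observations of one trial:

```python
import math

def sn_larger_the_better(y):
    # Eq. (1): eta = -10 log10[(1/r) * sum(1 / y_i^2)]
    r = len(y)
    return -10.0 * math.log10(sum(1.0 / v ** 2 for v in y) / r)

def sn_smaller_the_better(y):
    # Eq. (2): eta = -10 log10[(1/r) * sum(y_i^2)]
    r = len(y)
    return -10.0 * math.log10(sum(v ** 2 for v in y) / r)

def sn_nominal_the_best(y):
    # Eq. (3): eta = 10 log10(mu^2 / sigma^2)
    r = len(y)
    mu = sum(y) / r
    sigma2 = sum((v - mu) ** 2 for v in y) / (r - 1)
    return 10.0 * math.log10(mu ** 2 / sigma2)
```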
Step 4: The S/N ratio values yij are normalized as Zij
(0 ≤ Zij ≤ 1) by one of the following formulae, to remove
the effect of adopting different units:

$$Z_{ij} = \frac{y_{ij} - \min(y_{ij},\ i = 1, 2, \ldots, n)}{\max(y_{ij},\ i = 1, 2, \ldots, n) - \min(y_{ij},\ i = 1, 2, \ldots, n)} \qquad (4)$$

(to be used for S/N ratios of the larger-the-better type)

$$Z_{ij} = \frac{\max(y_{ij},\ i = 1, 2, \ldots, n) - y_{ij}}{\max(y_{ij},\ i = 1, 2, \ldots, n) - \min(y_{ij},\ i = 1, 2, \ldots, n)} \qquad (5)$$

(to be used for S/N ratios of the smaller-the-better type)

$$Z_{ij} = \frac{|y_{ij} - \mathrm{Target}| - DV_{\min}}{DV_{\max} - DV_{\min}} \qquad (6)$$

with $DV_{\max} = \max(|y_{ij} - \mathrm{Target}|,\ i = 1, 2, \ldots, n)$ and
$DV_{\min} = \min(|y_{ij} - \mathrm{Target}|,\ i = 1, 2, \ldots, n)$

(to be used for S/N ratios of the nominal-the-best type)

where j = 1, 2, ..., k.
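The three normalizations can be sketched in the same style; here `col` is the list of S/N ratios of all n trials for one response, and `target` (nominal-the-best case only) is an assumed target value:

```python
def normalize_larger(col):
    # Eq. (4): Z = (y - min) / (max - min)
    lo, hi = min(col), max(col)
    return [(v - lo) / (hi - lo) for v in col]

def normalize_smaller(col):
    # Eq. (5): Z = (max - y) / (max - min)
    lo, hi = min(col), max(col)
    return [(hi - v) / (hi - lo) for v in col]

def normalize_nominal(col, target):
    # Eq. (6): Z = (|y - Target| - DV_min) / (DV_max - DV_min)
    dev = [abs(v - target) for v in col]
    lo, hi = min(dev), max(dev)
    return [(d - lo) / (hi - lo) for d in dev]
```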
Step 5: Grey Relational Analysis: Calculate the grey
relational coefficient for the normalized S/N ratio values:

$$\gamma\left(y_{o}(k),\ y_{j}(k)\right) = \frac{\Delta_{\min} + \xi\,\Delta_{\max}}{\Delta_{oj}(k) + \xi\,\Delta_{\max}} \qquad (7)$$
Where
1. j=1,2...n; k=1,2...m, n is the number of
experimental data items and m is the number of
responses.
2. yo(k) is the reference sequence (yo(k)=1,
k=1,2...m); yj(k) is the specific comparison
sequence.
3. $\Delta_{oj}(k) = \|y_{o}(k) - y_{j}(k)\|$ is the absolute value of
the difference between yo(k) and yj(k).
4. $\Delta_{\min} = \min_{\forall j}\min_{\forall k}\|y_{o}(k) - y_{j}(k)\|$ is the
smallest value of $\Delta_{oj}(k)$.
5. $\Delta_{\max} = \max_{\forall j}\max_{\forall k}\|y_{o}(k) - y_{j}(k)\|$ is the
largest value of $\Delta_{oj}(k)$.
6. ξ is the distinguishing coefficient, defined in the range
0 ≤ ξ ≤ 1 (the value may be adjusted based on the
practical needs of the system).
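Eq. (7) then applies entry-wise. A minimal sketch, assuming `z` is the n-by-m matrix of normalized S/N ratios and the reference sequence is yo(k) = 1 for every response:

```python
def grey_relational_coefficients(z, xi=0.5):
    """z: n-by-m matrix of normalized S/N ratios in [0, 1]."""
    # Deviations from the reference sequence y_o(k) = 1
    delta = [[abs(1.0 - v) for v in row] for row in z]
    d_min = min(min(row) for row in delta)   # smallest deviation over all j, k
    d_max = max(max(row) for row in delta)   # largest deviation over all j, k
    # Eq. (7): gamma = (d_min + xi * d_max) / (delta_oj(k) + xi * d_max)
    return [[(d_min + xi * d_max) / (d + xi * d_max) for d in row]
            for row in delta]
```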
Step 6: Simulated Annealing: The procedure for the
application of SA is presented below
Step 6.1: Generation of Initial Seed: In this algorithm,
initialization is carried out randomly: the initial seed (a
weight vector) is generated at random, subject to the
constraint that the weights of all the responses sum to one.
The objective of the algorithm is to find the optimal weights
so as to maximize the weighted grey grade.
Step 6.2: Evaluation: Calculate the value of the objective
function for the initial seed. The objective function
quantifies the performance level of the seed and is given by

$$f(x) = \sum_{j=1}^{k}\sum_{i=1}^{n} W_{j} Z_{ij} \qquad (8)$$

where f(x) is the total weighted grey grade (WGG) to be
maximized,
Wj= Weights for each response,
Zij= Grey coefficient values,
n = Number of Experiments under each response, and
k = Number of responses.
Step 6.3: Neighborhood generation: Generate several
neighborhoods of the initial seed and evaluate them with the
objective function. The neighborhood weights are generated
by transferring a small percentage of value pairwise from
one weight to another within the seed; the percentage to be
transferred is determined randomly. The best neighborhood,
i.e. the one with the maximum WGG among the
neighborhoods, is taken and compared with the objective
function value of the initial seed to check whether it is
acceptable.
Step 6.4: Probability of acceptance: When the best
neighborhood seed gives an inferior solution, it may still be
accepted with probability

$$P_{acc} = \exp\left(\frac{f_{x_{1}} - f_{x}}{T}\right) \qquad (9)$$

where $f_{x_1}$ is the objective value of the neighborhood seed
and $f_x$ that of the current seed. If a generated uniform
random number is less than the probability of acceptance,
the inferior seed is accepted.
Step 6.5: Termination condition: The program stops when it
reaches the final temperature or when the number of
consecutive unaccepted inferior neighborhood generations
(the freezer count) equals a pre-specified value.
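Steps 6.1-6.5 combine into the following minimal sketch of the weight-optimization loop. The pairwise-transfer neighborhood follows Step 6.3; the cooling parameters, neighborhood size and freezer count are illustrative assumptions, and the initial seed is taken here as equal weights rather than random, for reproducibility:

```python
import math
import random

def optimize_weights(gc, t=1.0, t_end=1e-4, alpha=0.9,
                     n_neighbors=10, freeze_limit=20):
    """gc: n-by-k matrix of grey relational coefficients.
    Returns weights (summing to one) that maximize the WGG objective."""
    k = len(gc[0])

    def wgg(w):          # Eq. (8): total weighted grey grade
        return sum(w[j] * row[j] for row in gc for j in range(k))

    def neighbor(w):     # Step 6.3: transfer a small random fraction between weights
        w = list(w)
        a, b = random.sample(range(k), 2)
        amount = w[a] * random.uniform(0.0, 0.1)
        w[a] -= amount
        w[b] += amount
        return w

    seed = [1.0 / k] * k            # Step 6.1 (equal weights; the paper uses random)
    f_seed, frozen = wgg(seed), 0
    while t > t_end and frozen < freeze_limit:
        cands = [neighbor(seed) for _ in range(n_neighbors)]
        best = max(cands, key=wgg)  # best neighborhood by WGG
        f_best = wgg(best)
        # Step 6.4 / Eq. (9): accept improvements, or inferior seeds with prob exp(df/T)
        if f_best >= f_seed or random.random() < math.exp((f_best - f_seed) / t):
            seed, f_seed, frozen = best, f_best, 0
        else:
            frozen += 1             # Step 6.5: count consecutive rejected inferior moves
        t *= alpha
    return seed
```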
Step 7: Take the final best seed, which maximizes the
objective function, from the simulated annealing algorithm
and, using these weights (W1, W2, ..., Wk), calculate the
weighted grey grade for each trial i:

$$WGG_{i} = W_{1}Z_{i1} + W_{2}Z_{i2} + \cdots + W_{k}Z_{ik} \qquad (10)$$
Step 8: Determine the optimal level combination for the
factors. Maximization of the weighted grey grade leads to
better product quality; therefore, on the basis of the
weighted grey grade, the main effects of the control factors
are calculated and the optimal level for each controllable
factor is determined. For example, to calculate the main
effect of factor i on the weighted grey grade, we calculate
the average of the weighted grey grade values (WGG) for
each level j, denoted WGGij; the difference in the main
effect, εi, is then defined as

$$\varepsilon_{i} = \max_{j}\left(WGG_{ij}\right) - \min_{j}\left(WGG_{ij}\right) \qquad (11)$$

The best level j* of controllable factor i is selected by

$$j^{*} = \arg\max_{j}\left(WGG_{ij}\right) \qquad (12)$$
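A sketch of Step 8, assuming `levels` holds the 1-based level index of each factor in each trial (the columns of the orthogonal array) and `wgg` the per-trial weighted grey grades:

```python
def main_effects(levels, wgg, n_levels=3):
    """Returns, per factor, the level means, the effect (Eq. 11) and j* (Eq. 12)."""
    n, f = len(levels), len(levels[0])
    results = []
    for i in range(f):
        # Mean WGG at each level j of factor i
        means = [sum(wgg[t] for t in range(n) if levels[t][i] == j) /
                 sum(1 for t in range(n) if levels[t][i] == j)
                 for j in range(1, n_levels + 1)]
        effect = max(means) - min(means)          # Eq. (11): epsilon_i
        best_level = means.index(max(means)) + 1  # Eq. (12): j*
        results.append((means, effect, best_level))
    return results
```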
Step 9: Perform ANOVA to identify the significant factors
and percentage of contribution of the factors.
Step 10: Calculate the predicted S/N ratio for the selected
optimal levels, the improvement in S/N ratio, and the
overall improvement percentage as the ratio between the
sum of the improvement values of all responses and the sum
of the S/N ratios at the initial conditions of all responses.
The predicted S/N ratio using the selected optimal levels
can be calculated as

$$\hat{\eta} = \eta_{m} + \sum_{i=1}^{f}\left(\eta_{i} - \eta_{m}\right) \qquad (13)$$

where
$\eta_{m}$ = overall mean S/N ratio,
$\eta_{i}$ = mean S/N ratio corresponding to the i-th factor at its
selected optimal level, and
f = number of factors.
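Eq. (13) simply adds the gains of the chosen factor levels to the overall mean; a one-function sketch, where `eta_m` is the overall mean S/N ratio and `eta_opt` the mean S/N ratios at the selected optimal levels:

```python
def predicted_sn(eta_m, eta_opt):
    # Eq. (13): eta_hat = eta_m + sum_i (eta_i - eta_m)
    return eta_m + sum(e - eta_m for e in eta_opt)
```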
IV. IMPLEMENTATION OF THE SOLUTION METHODOLOGY -
CASE STUDY
In this work, an LM 25-based aluminium alloy (Cu: 7.15%,
Mg: 0.49%, Mn: 0.11%, Fe: 0.47%, Ni: 0.002%, Ti:
0.064%, Zn: 0.017%, Pb: 0.003%, Sn: 0.005%) reinforced
with green-bonded silicon carbide particles of size 25 µm at
a 10% volume fraction, manufactured through the stir
casting route, is used for experimentation. The drilling tests
are carried out on a radial drilling machine under dry
conditions. To conduct the experiments, the work materials
are cut into plates of 150 × 50 × 20 mm and faced in a lathe
to obtain a flat surface. Each plate is then fastened to a rigid
fixture attached to the strain gauge dynamometer mounted
on the machine table, and equal spacing is maintained
between successive drilled holes. Standard TiN-coated HSS
twist drills of 10 mm diameter with various cutting point
angles (90, 115 and 140 degrees) are used throughout the
experimental work. The average surface roughness (Ra),
cutting force (Fc) and torque (T) are considered as the
responses for this study. The surface roughness is measured
at three positions spaced at 120° intervals around the hole
circumference, and the surface roughness of each hole is
taken as the mean of the three circumferential readings. The
cutting force and torque for each trial are measured using
the strain gauge dynamometer.
Plan of investigation
The factors and their levels considered in this study are
shown in Table 1. Experiments are conducted with three
factors each at three levels and hence a three level
orthogonal array (OA) is chosen. The degrees of freedom
(Dof) required for the design are six. The OA that satisfies
the required Dof is L9. The experiments are conducted
using the L9 OA and the response values obtained are given
in Table 2.
Step 1: Calculate the S/N ratios for a given response and
predicted S/N ratios of the starting conditions using one of
the Eqs. (1), (2) and (3) depending upon the type of quality
characteristics. The computed S/N ratios for each quality
characteristic are shown in Table 3.
Table 1. Factors and levels

Parameters          Unit     Level 1   Level 2   Level 3
Cutting speed (V)   m/min    35.18     56.54     87.96
Feed (F)            mm/rev   0.050     0.125     0.20
Point angle (PA)    degree   90        115       140
Table 2. L9 orthogonal array with factors and responses

Trial No.   V   F   PA   Ra (µm)   Fc (N)    T (Nm)
1           1   1   1    7.83      107.87    0.88
2           1   2   2    4.01      254.96    2.06
3           1   3   3    2.22      470.67    2.26
4           2   1   2    6.70      186.31    1.96
5           2   2   3    5.80      539.331   1.28
6           2   3   1    6.09      1186.53   3.24
7           3   1   3    6.01      274.57    0.69
8           3   2   1    8.27      1078.66   2.55
9           3   3   2    6.20      1274.78   2.16
Step 2: Normalize the S/N ratio values by Eqs. (4), (5) and
(6). The results are given in Table 3.
Step 3: Perform the grey relational analysis. From the data
in Table 3, calculate the grey relational coefficients for the
normalized S/N ratio values using Eq. (7). The value of ξ is
taken as 0.5 in Eq. (7), since all the process parameters are
given equal weighting. The results are given in Table 4.
Table 4. Grey relational coefficients

Trial No.   Ra      Fc      T
1 0.923 0.333 0.372
2 0.476 0.434 0.631
3 0.333 0.553 0.682
4 0.757 0.391 0.606
5 0.650 0.589 0.454
6 0.682 0.945 1.000
7 0.673 0.446 0.333
8 1.000 0.881 0.764
9 0.695 1.000 0.656
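As a check on Eq. (7), take trial 1 for Ra: from Table 3, Z = 0.958, so the deviation from the reference is Δ = |1 − 0.958| = 0.042; with Δmin = 0, Δmax = 1 and ξ = 0.5,

$$\gamma = \frac{0 + 0.5 \times 1}{0.042 + 0.5 \times 1} = \frac{0.5}{0.542} \approx 0.923,$$

which matches the first entry of Table 4.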
Table 3. S/N ratio values and normalized S/N ratio values

            S/N ratios                    Normalized values Zij
Trial No.   Ra        Fc        T         Ra      Fc      T
1           -17.875   -40.658   1.110     0.958   0.000   0.157
2           -12.063   -48.129   -6.277    0.450   0.348   0.707
3           -6.927    -53.454   -7.082    0.000   0.597   0.767
4           -16.521   -45.405   -5.845    0.840   0.221   0.675
5           -15.269   -54.637   -2.144    0.730   0.652   0.400
6           -15.692   -61.486   -10.211   0.767   0.971   1.000
7           -15.577   -48.773   3.223     0.757   0.378   0.000
8           -18.350   -60.658   -8.131    1.000   0.932   0.845
9           -15.848   -62.109   -6.689    0.781   1.000   0.738
Step 4: By applying SA, the optimal weights corresponding
to each response are obtained as
[0.997172, 0.000932, 0.001896]. The WGG equation is

WGGi = 0.994582 Zi1 + 0.00352 Zi2 + 0.001898 Zi3

where Zi1, Zi2 and Zi3 represent the grey relational
coefficients of the responses Ra, Fc and T at the i-th trial,
respectively. The WGG values are computed and listed in
Table 5.
Table 5. Grey relational coefficients and weighted grey grade

Trial No.   Ra      Fc      T       WGG
1 0.923 0.333 0.372 0.9531075
2 0.476 0.434 0.631 0.4511233
3 0.333 0.553 0.682 0.0035572
4 0.757 0.391 0.606 0.8385025
5 0.650 0.589 0.454 0.7300956
6 0.682 0.945 1.000 0.7681603
7 0.673 0.446 0.333 0.7542291
8 1.000 0.881 0.764 0.9994665
9 0.695 1.000 0.656 0.7816893
Step 5: Table 6 gives the main effects on WGG, and Fig. 1
plots the corresponding factor effects. Based on the
Max−Min values in Table 6, the controllable factors in
order of their effect on the WGG value are PA, V and F. A
larger WGG value implies better quality; consequently, the
optimal condition is set as V3 F1 PA1.
Table 6. Main effects on weighted grey grade

Factors             Level 1   Level 2   Level 3   Max−Min
Cutting speed (V)   0.4693    0.7789    0.8451    0.375866
Feed (F)            0.8486    0.7269    0.5178    0.330811
Point angle (PA)    0.9069    0.6904    0.4960    0.410951
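These entries follow directly from Table 5; for example, the mean WGG at level 1 of the cutting speed (trials 1-3) is

$$\frac{0.9531 + 0.4511 + 0.0036}{3} \approx 0.4693,$$

in agreement with Table 6.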
Fig. 1 Factor effects on WGG values (plot of the mean
WGG, on a scale of 0 to 1, at each level V1-V3, F1-F3 and
PA1-PA3).
V. ANALYSIS OF VARIANCE
The purpose of the ANOVA is to examine the process
parameters which significantly affect the performance
characteristics. This is accomplished by separating the total
variability of the multi-response weighted grey grade, which is
measured by the sum of the squared deviations from the total
mean of the WGG, into contributions by each of the process
parameters and the error. The results of the pooled ANOVA
are shown in Table 7.
Table 7. Results of pooled ANOVA

Factor   SS           Dof   MS          F            % Contribution
V        0.24155164   2     0.1207758   3.35811141   32.8645264
F        0.16792635   2     0.0839632   2.33455417   22.847371
PA       0.25358314   2     0.1267916   3.52537628   34.5014825
Error    0.07193087   2     0.0359654                9.78662013
Total    0.734992     8                              100

VI. CONCLUSION
The Taguchi method can optimize a single-response
problem, but it cannot directly optimize multi-response
problems, which have so far received little attention. In
many cases, pure engineering judgment is used to optimize
multiple responses, which often brings a certain degree of
uncertainty to the decision-making process. This paper has
presented the use of the Grey relational analysis –
Simulated Annealing algorithm (G-SA) and ANOVA within
the Taguchi method for the optimization of a drilling
process with multiple performance characteristics.
VII. REFERENCES
[1] G. Taguchi, Taguchi Methods: Research and
Development, Vol. 1. Dearborn, MI: American Supplier
Institute Press, 1991.
[2] C.C. Tsao and H. Hocheng, “Comparison of the tool
life of tungsten carbides coated by multi-layer TiCN and
TiAlCN for end mills using the Taguchi method,” Journal
of Materials Processing Technology, vol. 123, 2002, pp.
1–4.
[3] M.H. Li, A. Al-Refaie, and C.Y. Yang, “DMAIC
Approach to Improve the Capability of SMT Solder
Printing Process,” IEEE Transactions on Electronics
Packaging Manufacturing, to be published.
[4] M.S. Phadke, Quality Engineering Using Robust
Design. NJ, Englewood Cliffs: Prentice-Hall, 1989.
[5] J.J. Pignatello, “Strategies for robust multiresponse
quality engineering,” IIE Transactions, vol. 25, 1993, pp.
5–15.
[6] P.B.S. Reddy, K. Nishina, and A. Subash Babu,
“Unification of robust design and goal programming for
multiresponse optimization- a case study,” Quality and
Reliability Engineering International, vol. 13, 1997, pp.
371–383.
[7] J. Antony, “Multi-response optimization in industrial
experiments using Taguchi’s quality loss function and
principal component analysis,” Quality and Reliability
Engineering International, vol. 16, 2000, pp. 3–8.
[8] R. Jeyapaul, P. Shahabudeen, and K. Krishnaiah,
“Simultaneous optimization of multi-response problems in
the Taguchi method using genetic algorithm,” International
Journal of Advanced Manufacturing Technology, vol. 30,
2006, pp. 870–878.
[9] J.L. Deng, “Introduction to grey system,” Journal of
Grey systems, vol. 1(1), 1989, pp.1–24.
[10] C.L. Lin, J.L. Lin, and T.C. Ko, “Optimisation of the
EDM Process Based on the Orthogonal Array with Fuzzy
Logic and Grey Relational Analysis Method,” International
Journal of Advanced Manufacturing Technology, vol. 19,
2002, pp. 271–277.
[11] L.I. Tong and C.H. Wang, “Multi-response
optimization using principal component analysis and grey
relational analysis,” International Journal of Industrial
Engineering-Theory Applications and Practice, Vol. 9(4),
2002, pp. 343–350.
[12] C.-H. Wang and L.-I. Tong, “Quality Improvement
for Dynamic Ordered Categorical Response Using Grey
Relational Analysis,” International Journal of Advanced
Manufacturing Technology, vol. 21, 2003, pp. 377–383.
[13] A. Al-Refaie, M.H.C. Li, and K.C. Tai, “Optimizing
SUS 304 wire drawing process by grey analysis utilizing
Taguchi method,” Journal of University of Science and
Technology Beijing, to be published.