This document summarizes the results of a designed experiment investigating factors that influence the properties of titanium diboride used as a cutting tool material. A fractional factorial design with 7 factors and 3 responses was used to screen for significant factors. Analysis of normal probability plots identified carbon content, sintering temperature, oxygen impurity content, and TiB2 particle size as the most influential factors for the hardness, fracture toughness, and Weibull modulus responses. Interactions between these factors also showed significant effects.
Optimization of the Superfinishing Process Using Different Types of Stones (IDES Editor)
Superfinishing is a micro-finishing process that produces a controlled surface condition on circular parts. It is not primarily a sizing operation; its major purpose is to produce a surface on a workpiece capable of sustaining an uneven load distribution by improving geometrical accuracy. The wear life of parts micro-finished to maximum smoothness is extended considerably. Superfinishing is a slow-speed, low-temperature, high-precision abrasive machining operation for removing minute amounts of surface material. In this paper, the critical parameters that affect surface roughness are determined. Following the design of experiments, mathematical models are proposed for four different types of abrasive stones. To obtain minimum surface roughness, the mathematical models are optimized and the optimal values of the examined factors are determined. According to the experiment plan, the obtained results are valid for tests on material MS12.
FORTRAN is used as a numerical and scientific computing language. The main objective of the lab work is to understand the FORTRAN language by using it to solve simple numerical problems and compare different methodologies. In this project we make use of various FORTRAN features to solve a simple projectile problem, and we use the LAPACK library to solve a tridiagonal matrix problem, calling the DGESV and DGTSV routines. The problems are built and compiled using Code::Blocks [1], a free, open-source integrated development environment for FORTRAN and C.
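A tridiagonal system of the kind DGTSV handles can be sketched in Python with the Thomas algorithm; the coefficients below are made-up illustration data, not the lab's actual problem:

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, main diagonal b,
    super-diagonal c, and right-hand side d (Thomas algorithm)."""
    n = len(b)
    cp = np.zeros(n)
    dp = np.zeros(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i - 1] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example 4x4 system with diagonals (1, 4, 1):
a = np.array([1.0, 1.0, 1.0])        # sub-diagonal
b = np.array([4.0, 4.0, 4.0, 4.0])   # main diagonal
c = np.array([1.0, 1.0, 1.0])        # super-diagonal
d = np.array([5.0, 6.0, 6.0, 5.0])   # right-hand side
x = thomas_solve(a, b, c, d)         # solution is all ones
```

DGTSV performs the same elimination (with partial pivoting) in O(n) time, which is why it is preferred over the general solver DGESV for tridiagonal matrices.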
A full experimental and numerical modelling of the practicability of thin foa... (Mehran Naghizadeh)
This paper presents the performance of geofoam-filled trenches in mitigating ground vibration transmission by means of a full experimental study. The results are interpreted in the frequency domain. Fully automated 2D and 3D numerical models are applied to evaluate the screening effectiveness of geofoam-filled trenches in the active and passive schemes. Experimental results are in good agreement with the predictions of the numerical modelling. The validated model is used to investigate the influence of the trench's geometrical and dimensional features. In addition, three different systems (single, double, and triangle wall obstacles) are selected for analysis, and the results are compared for various situations. The parametric study is based on complete automation of the model, coupling finite element analysis software (Plaxis) with the Python programming language to control input, change parameters, produce output, and calculate the efficiency of the barrier. The results show that a depth of approximately 1λr and a width of approximately 0.2λr are sufficient to reach an acceptable efficiency for active isolation with all three systems. For the passive scheme, the role of depth can be ignored for the single and double wall barriers, while depth plays a significant role for the triangle wall system.
This document discusses efficient reliability demonstration tests that can reduce sample sizes and test times compared to conventional methods. It presents principles for test time reduction using degradation measurements during testing. Methods are provided for calculating optimal test plans that minimize costs while meeting reliability requirements and risk constraints. Decision rules are given for terminating tests early based on degradation measurements and risk estimates. An example application demonstrates how the approach can significantly reduce testing costs.
A landing gear assembly consists of various components, viz. lower side stay, upper side stay, locking actuators, extension actuators, tyres, and locking pins, to name a few, each unit having a specific function. In this project the main unit studied is the lower brace. The primary objective is to analyse stresses in the lower brace element using both the strength-of-materials (RDM) method and the Finite Element Method (FEM), and to compare the two. Using the obtained data, a suitable material is proposed for the component. The approach is to study the overall behaviour of the element by taking up each aspect and finally summing the total effect of all aspects on the functioning of the element.
Coherence enhancement diffusion using robust orientation estimation (csandit)
In this paper, a new robust orientation estimation for Coherence Enhancement Diffusion (CED) is proposed. In CED, proper scale selection is very important, as the gradient vector at that scale reflects the orientation of the local ridge. For this purpose, a new scheme is proposed in which a pre-calculated orientation, obtained by orientation diffusion, is used to find the true local scale. Experiments show that the proposed scheme performs much better in noisy environments than traditional Coherence Enhancement Diffusion.
This document discusses the development of a sintering center and knowledge exchange for non-equilibrium sintering methods of advanced ceramic composite materials. It describes representing the complete processing route for producing technical ceramics, including initial powders, shape forming, sintering, and finishing technologies. It also discusses specific sintering technologies like FAST, SPS, and high pressure - high temperature sintering. Finally, it includes an XRD pattern of a nanopowder titanium diboride sample.
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Designing DSP (0, 1) acceptance sampling plans based on truncated life tests ... (eSAT Journals)
Abstract: In this paper, DSP (0, 1) sampling plans for truncated life tests are developed using the minimum angle method, when the lifetime of the items follows selected distributions. The design parameters of the sampling plan are determined for a pre-determined acceptance number by simultaneously satisfying two risks at the specified quality levels. Tables of design parameters are provided for various test termination times and mean ratios for the selected distributions. The operating characteristic values are also provided in the tables. Comparisons are made among the selected distributions, and the results are explained with examples. Keywords: Probability of acceptance, Rayleigh distribution, generalized exponential distribution, Weibull distribution, Gamma distribution, Producer's risk, Consumer's risk, Minimum angle method.
A Method for the Reduction of Linear High Order MIMO Systems Using Interlacin... (IJMTST Journal)
This document presents a new method for reducing the order of linear multi-input multi-output (MIMO) systems. The method obtains the denominator polynomial of the reduced order model using an interlacing property of the roots of the even and odd parts of the original system's denominator polynomial. The numerator polynomial is obtained using a factor division technique. The method is illustrated through an example of reducing a 4th order MIMO system to 2nd order. Response characteristics of the original and reduced systems are compared, showing the reduced model matches the time response of the original system well.
The document discusses response surface methodology (RSM), which uses statistical and mathematical techniques to model and analyze problems with responses influenced by several variables. RSM is used to optimize responses by exploring the relationships between variables and responses through designed experiments and polynomial mathematical models. Key aspects covered include first and second-order polynomial models, experimental designs like factorial and central composite designs, and techniques like steepest ascent to navigate response surfaces. Examples demonstrate how RSM can be applied to optimize process variables and responses.
Dynamic shear stress evaluation on micro turning tool using photoelasticity (Soumen Mandal)
The document presents an experimental method for evaluating shear stresses on a micro-turning tool using photoelasticity. A micro-turning tool was coated with a birefringent material and subjected to micro-turning of brass while high-speed images were captured. A custom-designed grey-field polariscope was used to obtain images under four analyzer orientations, which were processed to generate shear stress maps of the tool dynamically. The method allows monitoring of tool stresses during operation to prevent breakage and ensure the desired performance.
This document is a presentation submitted by a group of 6 mechanical engineering students to their professor. It contains an introduction, definitions of derivatives, a brief history of derivatives attributed to Newton and Leibniz, and applications of derivatives in various fields such as automobiles, radar guns, business, physics, biology, chemistry, and mathematics. It also provides rules and examples of calculating derivatives using power, multiplication by constant, sum, difference, product, quotient and chain rules.
Concurrent Ternary Galois-based Computation using Nano-apex Multiplexing Nibs... (VLSICS Design)
Novel realizations of concurrent computations utilizing three-dimensional lattice networks and their corresponding carbon-based field emission controlled switching are introduced in this article. The formalistic ternary nano-based implementation utilizes recent findings in field emission and nano applications, which include carbon-based nanotubes and nanotips for three-valued lattice computing via field-emission methods. The presented work implements multi-valued Galois functions by utilizing concurrent nano-based lattice systems, which use two-to-one controlled switching via carbon-based field emission devices, employing the nano-apex carbon fibers and carbon nanotubes presented in the first part of the article. The introduced computational extension utilizing many-to-one carbon field-emission devices will be further used to implement congestion-free architectures in the third part of the article. The emerging nano-based technologies form important directions in low-power, compact-size, regular lattice realizations, in which carbon-based devices switch at lower cost and more reliably, using much less power than silicon-based devices. Applications include low-power design of VLSI circuits for signal processing and control of autonomous robots.
Determination of Optimal Product Mix for Profit Maximization using Linear Pro... (IJERA Editor)
This paper demonstrates the use of linear programming methods to determine the optimal product mix for profit maximization. Several papers have been written demonstrating the use of linear programming to find the optimal product mix in various organizations. This paper aims to show a generic approach for finding the optimal product mix.
Determination of Optimal Product Mix for Profit Maximization using Linear Pro... (IJERA Editor)
This document demonstrates using linear programming to determine the optimal product mix for a manufacturing firm to maximize profit. The firm produces n products using m raw materials. The problem is formulated as a linear program to maximize total profit subject to raw material constraints. The optimal solution is found using the simplex method and provides the quantities of each product (v1, v2, etc.) that maximize total profit (z0). The solution may show some product quantities as zero, indicating those products should not be produced to maximize profit under the given constraints.
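The formulation above can be sketched with a small made-up instance (two products, three raw-material constraints; the profits and limits are hypothetical, not the paper's data), solved with SciPy's `linprog`:

```python
from scipy.optimize import linprog

# Hypothetical firm: maximize z = 3*v1 + 5*v2.
# linprog minimizes, so negate the profit coefficients.
profit = [-3.0, -5.0]
A = [[1.0, 0.0],    # raw material 1:  v1          <= 4
     [0.0, 2.0],    # raw material 2:  2*v2        <= 12
     [3.0, 2.0]]    # raw material 3:  3*v1 + 2*v2 <= 18
b = [4.0, 12.0, 18.0]

res = linprog(profit, A_ub=A, b_ub=b,
              bounds=[(0, None), (0, None)])  # v1, v2 >= 0
v1, v2 = res.x
z0 = -res.fun       # maximum total profit
```

For these numbers the optimum is v1 = 2, v2 = 6 with z0 = 36; note that a product can also come out at zero quantity, exactly as the abstract describes.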
Reduction of Response Surface Design Model: Nested Approach (inventionjournals)
International Journal of Mathematics and Statistics Invention (IJMSI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJMSI publishes research articles and reviews within the whole field of Mathematics and Statistics, new teaching methods, assessment, validation and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
Sustainable Manufacturing: Optimization of single pass Turning machining oper... (sajal dixit)
The main aim is to optimize a manufacturing process using different meta-heuristic algorithms; the turning process was selected here. First, the most influential parameters in the turning process were found by introducing a "Local-centrality Method". Optimizing these most influential parameters leads to optimization of the whole process using a genetic algorithm and the Taguchi method. The genetic algorithm has been used to optimize production rate and production cost, and the Taguchi method has been used to optimize cutting quality, as described in the presentation.
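The Taguchi quality criterion mentioned above is usually scored with a signal-to-noise ratio; a minimal sketch for the "smaller-the-better" case (the roughness readings are invented for illustration, not from the presentation):

```python
import numpy as np

def sn_smaller_the_better(y):
    """Taguchi S/N ratio for a smaller-the-better response
    (e.g. surface roughness): S/N = -10 * log10(mean(y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Hypothetical roughness readings (um) for two parameter settings:
setting_a = [1.2, 1.4, 1.3]
setting_b = [0.9, 1.0, 0.8]
sn_a = sn_smaller_the_better(setting_a)
sn_b = sn_smaller_the_better(setting_b)
# A higher S/N is better, so setting_b would be preferred here.
```

Each row of a Taguchi orthogonal array gets such a score, and factor levels are chosen to maximize the mean S/N.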
This document provides instructions for a chemical engineering assignment on particle size analysis. It includes 3 questions to be answered relating to sieve analysis data, calculating cumulative distribution from surface distribution, and estimating terminal velocity of a falling particle. Candidates must type their assignment, show all calculations, and include no more than 7 pages plus any appendices. It is due on September 1st at 4pm and is worth 50 total marks.
Quantitative Analysis for Empirical Research (Amit Kamble)
An overview of approach methods for quantitative analysis, which includes:
1) Planning of experiments
2) Data generation
3) Presentation of the report
It also covers some numerical approach methods, data modelling, and hypothesis methods.
- Response surface methodology (RSM) is a statistical technique used to optimize processes and develop new products. It was developed in the 1950s to improve chemical processes.
- RSM uses experimental designs and mathematical/statistical techniques to model and analyze the relationship between inputs and outputs or responses. The goal is to optimize the response by selecting the best setting of each input variable.
- Common RSM methods include steepest ascent/descent, central composite design, and Box-Behnken design. They are used to estimate coefficients in a polynomial regression model and determine optimal settings for the inputs.
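The second-order polynomial fit at the heart of RSM can be sketched in a few lines; this example fits one input variable with invented, noiseless data and locates the stationary point of the fitted model:

```python
import numpy as np

# Hypothetical one-factor experiment: response measured at coded levels.
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0, 3.0])
y = 2.0 + 3.0 * x - 1.0 * x ** 2     # illustration data (no noise)

# Fit the second-order model y = b0 + b1*x + b2*x^2 by least squares.
b2, b1, b0 = np.polyfit(x, y, 2)     # highest-degree coefficient first

# Stationary point of the fitted surface: dy/dx = b1 + 2*b2*x = 0.
x_star = -b1 / (2.0 * b2)
y_star = b0 + b1 * x_star + b2 * x_star ** 2
```

With several factors the same idea applies; the stationary point comes from solving the linear system given by the gradient of the quadratic model, and designs like central composite supply the runs needed to estimate the squared terms.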
This document discusses optimization techniques and provides examples to illustrate key concepts in optimization problems. It defines optimization as finding extreme states such as a minimum or maximum and discusses how it is applied in various fields. It then covers basic definitions: design variables, objective functions, constraints, convexity, and local vs. global optima. Examples are given to contrast unconstrained and constrained problems and to illustrate active, inactive, and violated constraints. Optimization techniques largely depend on calculus concepts such as derivatives and the Hessian matrix.
The document discusses applications of calculus, specifically derivatives, in the field of electronics and automation. It provides theoretical background on concepts like monotonicity, curvature, inflection points, maxima and minima. It then presents 3 problems involving optimization of electrical circuits and components using derivatives to find maximum power output or minimum resistance. The solutions demonstrate how derivatives can be applied in engineering contexts.
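One circuit problem of the kind described (maximum power output) has a classic derivative solution: for a source with EMF E and internal resistance r, the load power P(R) = E^2 R / (R + r)^2 satisfies dP/dR = 0 at R = r. A numerical check with made-up component values:

```python
import numpy as np

E = 12.0   # source EMF in volts (illustrative value)
r = 3.0    # internal resistance in ohms (illustrative value)

R = np.linspace(0.1, 20.0, 2000)   # candidate load resistances
P = E ** 2 * R / (R + r) ** 2      # power delivered to the load

R_best = R[np.argmax(P)]           # numerically found optimum
P_max = E ** 2 / (4.0 * r)         # analytic maximum, attained at R = r
```

The grid search lands on R close to r = 3 ohms, confirming the result obtained by setting the derivative to zero.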
This document summarizes the analysis and design of the Tareq Al-Alool building located in Nablus, Palestine. It includes background information, analysis inputs such as material properties and loads, conceptual design of structural elements like slabs, beams, columns, and shear walls. ETABS modeling was used to analyze the building and check designs. Reinforced concrete elements were designed to satisfy strength, serviceability, and code requirements.
An Improved Adaptive Multi-Objective Particle Swarm Optimization for Disassem...IJRESJOURNAL
With the development of productivity and the fast growth of the economy, environmental pollution, resource over-utilization and low product recovery rates have emerged, so more and more attention has been paid to the recycling and reuse of products. However, since the complexity of the disassembly line balancing problem (DLBP) increases with the number of parts in the product, finding the optimal balance is computationally intensive. To improve the computational ability of the particle swarm optimization (PSO) algorithm in solving the DLBP, this paper proposes an improved adaptive multi-objective particle swarm optimization (IAMOPSO) algorithm. First, an evolution factor parameter is introduced to judge the state of evolution using the idea of fuzzy classification, and the feedback from the evolutionary environment is used to adjust the inertia weight and acceleration coefficients dynamically. Finally, a dimensional learning strategy based on information entropy is used, in which each learning object is uncertain. Results from tests on a series of instances of different sizes verify the effectiveness of the proposed algorithm.
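As a baseline for the adaptive variant described above, plain single-objective PSO with fixed inertia weight and acceleration coefficients can be sketched as follows (the DLBP encoding and the paper's adaptive parameter control are not reproduced; this just minimizes a toy sphere function):

```python
import numpy as np

def pso_minimize(f, dim, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain PSO: v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()     # global best position
    g_val = pbest_val.min()
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val          # update personal bests
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        if pbest_val.min() < g_val:          # update global best
            g_val = pbest_val.min()
            g = pbest[pbest_val.argmin()].copy()
    return g, g_val

best_x, best_val = pso_minimize(lambda p: float(np.sum(p ** 2)), dim=2)
```

The IAMOPSO contribution is precisely in replacing the fixed `w`, `c1`, `c2` with values adapted from an evolution-state estimate, plus multi-objective bookkeeping of a Pareto archive.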
This document discusses applications of calculus in biotechnology and electronics/automation careers. It provides examples of how derivatives are used in areas like optimizing production costs, modeling chemical reaction rates, and analyzing resonant circuits. Three practice problems are developed applying derivatives to optimization problems in biotechnology involving tube volume, fertilizer production, and circuit voltage. The document concludes the derivative is important across fields for optimizing factors like money, materials, labor, and time.
This document discusses numerical methods for solving differential and partial differential equations. It begins by providing some historical context on the development of numerical analysis. It then discusses several common numerical methods including Lagrangian interpolation, finite difference methods, finite element methods, spectral methods, and finite volume methods. For each method, it provides a brief overview of the approach and discusses aspects like discretization, accuracy, computational cost, and common applications. Overall, the document serves as an introduction to various numerical techniques for approximating solutions to differential equations.
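As a concrete instance of the finite difference method mentioned above, a 1D boundary-value problem u'' = f on [0, 1] reduces to a linear system; this sketch uses u'' = -pi^2 sin(pi x) with u(0) = u(1) = 0, whose exact solution is sin(pi x):

```python
import numpy as np

n = 50                                   # interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)           # interior nodes

# Central second-difference matrix for u'' with u(0) = u(1) = 0.
A = (np.diag(-2.0 * np.ones(n)) +
     np.diag(np.ones(n - 1), 1) +
     np.diag(np.ones(n - 1), -1)) / h ** 2

f = -np.pi ** 2 * np.sin(np.pi * x)      # right-hand side of u'' = f
u = np.linalg.solve(A, f)                # discrete solution

err = np.max(np.abs(u - np.sin(np.pi * x)))  # O(h^2) discretization error
```

The matrix is tridiagonal, so in practice a banded solver would replace the dense `np.linalg.solve`; halving h should cut the error by roughly a factor of four, reflecting second-order accuracy.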
1) The document discusses the leaching of a PbS concentrate with a distribution of particle sizes over time in multiple tanks. Equations are provided to model reaction rate constants, residence time distributions, and fractional leaching.
2) Graphs and calculations show that larger particle sizes require longer times for over 97.5% lead extraction due to reaction-rate effects.
3) Using a Rosin-Rammler particle size distribution, the model predicts that over 98% extraction requires around 35 minutes of residence time.
4) For a given feed rate and 96% conversion, 10 mixed-flow tanks are shown to require less time than a single tank because their combined residence time distribution approaches plug flow.
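Point 4 can be checked with a small model: for shrinking particles under reaction-rate control (fully consumed at time t_c, with conversion X(t) = 1 - (1 - t/t_c)^3), the mean outlet conversion is X(t) weighted by the residence time distribution of N equal stirred tanks in series. The numbers below are illustrative, not the document's data:

```python
import math
import numpy as np

def mean_conversion(tau, n_tanks, t_c):
    """Average conversion of shrinking particles (reaction-rate control)
    leaving a cascade of n_tanks CSTRs with total mean residence time tau."""
    t = np.linspace(1e-9, 12.0 * tau, 20000)
    dt = t[1] - t[0]
    # RTD of n tanks in series:
    # E(t) = (n/tau)^n * t^(n-1) * exp(-n*t/tau) / (n-1)!
    E = ((n_tanks / tau) ** n_tanks * t ** (n_tanks - 1)
         * np.exp(-n_tanks * t / tau) / math.factorial(n_tanks - 1))
    tt = np.minimum(t, t_c)
    X = 1.0 - (1.0 - tt / t_c) ** 3      # X = 1 once t >= t_c
    return float(np.sum(E * X) * dt)     # rectangle-rule integral

t_c = 30.0   # minutes to fully leach the largest particle (made up)
tau = 30.0   # total mean residence time, chosen equal to t_c
x1 = mean_conversion(tau, 1, t_c)
x10 = mean_conversion(tau, 10, t_c)
# The 10-tank cascade approaches plug flow and gives higher conversion
# at the same total residence time, matching point 4.
```

Equivalently, to hit a fixed conversion target such as 96%, the cascade needs a smaller total residence time than the single tank.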
The document summarizes the results of an experiment on leaching lead sulfide (PbS) particles using ferric chloride (FeCl3) as the lixiviant. The experiment involved leaching PbS at different times and temperatures, collecting leachate samples, and analyzing them to determine the amount of lead dissolved using atomic absorption spectroscopy. Graphs and tables show the results of lead concentration, mass of lead leached, and fractional lead dissolution over time for the different experimental conditions.
This document provides instructions for a chemical engineering assignment on particle size analysis. It includes 3 questions to be answered relating to sieve analysis data, calculating cumulative distribution from surface distribution, and estimating terminal velocity of a falling particle. Candidates must type their assignment, show all calculations, and include no more than 7 pages plus any appendices. It is due on September 1st at 4pm and is worth 50 total marks.
Quantitative Analysis for Emperical ResearchAmit Kamble
Overview for Approach Methods for quantitative analysis; which includes
1) Planning of Experiments
2) Data Generation
3) presentation of report
some numerical approach methods; data modeling; hypothesis methods
- Response surface methodology (RSM) is a statistical technique used to optimize processes and develop new products. It was developed in the 1950s to improve chemical processes.
- RSM uses experimental designs and mathematical/statistical techniques to model and analyze the relationship between inputs and outputs or responses. The goal is to optimize the response by selecting the best setting of each input variable.
- Common RSM methods include steepest ascent/descent, central composite design, and Box-Behnken design. They are used to estimate coefficients in a polynomial regression model and determine optimal settings for the inputs.
This document discusses optimization techniques and provides examples to illustrate key concepts in optimization problems. It defines optimization as finding extreme states like minimum/maximum and discusses how it is applied in various fields. It then covers basic definitions like design variables, objective functions, constraints, convexity, local vs global optima. Examples are given to show unconstrained vs constrained problems and illustrate active, inactive and violated constraints. Optimization techniques largely depend on calculus concepts like derivatives and hessian matrix.
The document discusses applications of calculus, specifically derivatives, in the field of electronics and automation. It provides theoretical background on concepts like monotonicity, curvature, inflection points, maxima and minima. It then presents 3 problems involving optimization of electrical circuits and components using derivatives to find maximum power output or minimum resistance. The solutions demonstrate how derivatives can be applied in engineering contexts.
This document summarizes the analysis and design of the Tareq Al-Alool building located in Nablus, Palestine. It includes background information, analysis inputs such as material properties and loads, conceptual design of structural elements like slabs, beams, columns, and shear walls. ETABS modeling was used to analyze the building and check designs. Reinforced concrete elements were designed to satisfy strength, serviceability, and code requirements.
An Improved Adaptive Multi-Objective Particle Swarm Optimization for Disassem...IJRESJOURNAL
With the development of productivity and the fast growth of the economy, environmental pollution, resource utilization and low product recovery rate have emerged subsequently, so more and more attention has been paid to the recycling and reuse of products. However, since the complexity of disassembly line balancing problem (DLBP) increases with the number of parts in the product, finding the optimal balance is computationally intensive. In order to improve the computational ability of particle swarm optimization (PSO) algorithm in solving DLBP, this paper proposed an improved adaptive multi-objective particle swarm optimization (IAMOPSO) algorithm. Firstly, the evolution factor parameter is introduced to judge the state of evolution using the idea of fuzzy classification and then the feedback information from evolutionary environment is served in adjusting inertia weight, acceleration coefficients dynamically. Finally, a dimensional learning strategy based on information entropy is used in which each learning object is uncertain. The results from testing in using series of instances with different size verify the effect of proposed algorithm.
This document discusses applications of calculus in biotechnology and electronics/automation careers. It provides examples of how derivatives are used in areas like optimizing production costs, modeling chemical reaction rates, and analyzing resonant circuits. Three practice problems are developed applying derivatives to optimization problems in biotechnology involving tube volume, fertilizer production, and circuit voltage. The document concludes the derivative is important across fields for optimizing factors like money, materials, labor, and time.
This document discusses numerical methods for solving differential and partial differential equations. It begins by providing some historical context on the development of numerical analysis. It then discusses several common numerical methods including Lagrangian interpolation, finite difference methods, finite element methods, spectral methods, and finite volume methods. For each method, it provides a brief overview of the approach and discusses aspects like discretization, accuracy, computational cost, and common applications. Overall, the document serves as an introduction to various numerical techniques for approximating solutions to differential equations.
1) The document discusses the leaching of PbS concentrate from a particle size distribution over time in multiple tanks. Equations are provided to model reaction rate constants, residence time distributions, and fractional leaching.
2) Graphs and calculations show that larger particle sizes require longer times for over 97.5% lead extraction due to reaction-rate effects.
3) Using a Rosin-Rammler particle size distribution, the model predicts over 98% extraction requires around 35 minutes of residence time.
4) For a given feed rate and 96% conversion, 10 mixed flow tanks are shown to require less time than a single tank due to their more uniform residence time distributions approaching plug flow.
The document summarizes the results of an experiment on leaching lead sulfide (PbS) particles using ferric chloride (FeCl3) as the lixiviant. The experiment involved leaching PbS at different times and temperatures, collecting leachate samples, and analyzing them to determine the amount of lead dissolved using atomic absorption spectroscopy. Graphs and tables show the results of lead concentration, mass of lead leached, and fractional lead dissolution over time for the different experimental conditions.
Supercapacitors can store electric charge through a process called double layer capacitance. They have a higher power density than batteries but a lower energy density. A supercapacitor increases its capacitance and energy storage capacity by increasing the surface area of its electrodes and decreasing the distance between them. While supercapacitors have limitations like lower energy density and higher cost than batteries, they charge and discharge much faster than batteries and can be cycled millions of times, making them useful for applications that require bursts of energy or regeneration of energy. Recent research is focused on improving supercapacitors' energy density to make them a viable alternative to batteries for more applications.
This document compares various materials for use as smartphone shells based on their fracture toughness, thermal conductivity, stiffness, embodied energy, and other properties. It presents data on aluminum, polycarbonate, carbon fiber reinforced polymer (CFRP), epoxies, and polymethyl methacrylate (PMMA) in tables and charts. The objective is to optimize fracture toughness while minimizing mass. Polycarbonate has the best material index values for thermal conductivity and embodied energy, while aluminum ranks highest for stiffness and CFRP for fracture toughness. The conclusion is that polycarbonate may be the best material for smartphone shells based on the design criteria and objectives.
1. The document describes experiments on the continuous cooling transformation of austenite into steel. Dilatometry tests were conducted where the steel was heated to 1100°C and air cooled, measuring its dilation at various temperatures.
2. Graphs of dilation vs. temperature show regions corresponding to ferrite, transformation, and austenite. Fraction transformed vs. temperature data was also collected.
3. Kinetic parameters for the transformation were determined by analyzing the fraction transformed data and constructing an Avrami plot of ln(-ln(1-X)) vs ln(t).
The document analyzes the failure of a bike pedal spindle. It determines the location of fatigue crack origin and calculates various forces, moments, and stresses acting on the spindle. The largest stress was found to be torque stress. This agrees with observations that the spindle failed in a perpendicular direction to the torque stress. Based on the images and fracture mechanics analysis, the critical crack length and size of the plastic zone were estimated. The maximum pedal force required to cause final fracture was also calculated based on the material's fracture toughness.
The document discusses corrosion prevention techniques for fuse pins on aircraft based on a 1992 crash of an El Al Boeing 747. Three solutions are proposed: 1) applying a zinc coating to provide cathodic protection to the steel pins, 2) electropolishing the pins to reduce stress and increase corrosion resistance, and 3) using stainless steel instead of regular steel for the pins to improve corrosion resistance due to stainless steel's chromium content. Each solution is evaluated in terms of technological feasibility, economics, and resources required. While all solutions could work, switching to stainless steel may be most effective while also being feasibly implemented.
The document proposes developing an alumina nanofiber membrane (ANM-100) for removing fluoride from municipal water supplies. It will combine nanofiltration membrane technology with alumina filter material. The goals are to 1) determine the best method for synthesizing alumina nanofibers to optimize fluoride removal through experiments, 2) establish efficient manufacturing methods, and 3) commercialize the product. Alumina nanofibers will provide a high surface area and adsorption properties to filter out fluoride that nanofiltration cannot. Electrospinning will produce the fibers, which are then calcined and formed into a membrane. Testing will optimize fluoride removal before moving to manufacturing and marketing.
The document discusses selecting a material for the body shell of a smartphone. It considers factors like durability, performance, weight and design. The summary is:
1) The material must withstand bending forces without compromising the phone's functionality over time.
2) Many materials could optimize characteristics like weight, stiffness and strength, but thermal conductivity and cost are also constraints.
3) A material index analysis is used to screen candidates based on yield stress, stiffness and thermal conductivity to find a material that minimizes weight while maintaining enough toughness for everyday use.
1. MTRL 460
Monitoring and Optimization of
Materials Processing
Tutorial 3:
Introduction to Designed Experiments
Group 9:
Muhammad Arshad Hassni
Vishal Sharma
Bennet Lim
Igor Vranjes
Muhammad Harith Mohd Fauzi
Daniel Shim
Date: December 8, 2014
2. INTRODUCTION
Design of Experiments (DOE) is a class of statistically based techniques to organize
experimentation to obtain the maximum amount of information at the minimum cost
and time expenditure. Among the many different DOE techniques, two-level factorial
experiments (2FD) are among the most effective in engineering applications. They can
easily be used to (1) identify the effects and interactions of relatively few (k = 2 to 4)
variables, or to (2) screen many (k > 5) variables in order to identify the few (typically 4
or fewer) significant ones.
2FD are however NOT the best choice to empirically model the process in terms of the
few important variables; for the modeling we will be using Response Surface Methods
(RSM). These two procedures (i.e. 2FD, RSM) will be aided by use of commercial
software Design Expert (DX8). The best outcome of well-organized experiments is an
empirical (or semi-empirical) model of the process that allows a prediction of the
response Y as a function of the independent variables (factors) Xi at a specified
confidence level.
Some of the features of efficient experiments include:
- The experiments are carefully planned, with a significant portion of the time and
resources (~25%) spent at the planning stage. The problem should be well
understood and the right questions asked, as even "an approximate answer to the
right question is worth a great deal more than a precise answer to the wrong
question".
- The objectives of the experimentation are clearly defined and communicated to
everybody involved; the task should focus on the roots of real problems, with
possible numerical targets to be reached as a result of the experimentation.
- The process being experimented on is stable, well defined and understood by the
experimenters. In an industrial environment focused on process (or product)
optimization, SPC should be introduced, run and utilised to stabilise the process
before attempting an experimental program on process optimization.
3. TUTORIAL 3A(i): BLIND BALANCE
Task (i): Two-level Factorial Design with 2 Variables
OBJECTIVE
To determine the effects of the support and body rotations on body “blind balance”
abilities.
EXPERIMENTAL PLAN
Variables:
X1  Surface type   low level {-}: floor             high level {+}: foam
X2  Rotations      low level {-}: 0 rotations (0R)  high level {+}: 1 rotation (1R)
Response:
Y   in-balance time (seconds)
DATA COLLECTION
Exp #  X1 [surface]  X2 [rotations]  X1*X2  Y[sec] A  Y[sec] B  Y[sec] C  Y[sec] average
1      floor {-}     0 rot {-}       {+}    51.63     45.38     50.92     49.31
2      foam  {+}     0 rot {-}       {-}    33.85     41.33     29.18     34.79
3      floor {-}     1 rot {+}       {-}    37.08     42.76     16.42     32.09
4      foam  {+}     1 rot {+}       {+}    73.20     48.75     83.27     68.41
Overall average: 46.15
DATA ANALYSIS
Square plot
[Square plot with the average responses at the corners: floor/0R = 49.31 s,
foam/0R = 34.79 s, floor/1R = 32.09 s, foam/1R = 68.41 s]
Total "effect" L
4. The total “effect” L of any given variable on the experiment outcome is calculated as the
difference between all responses when the variable is at high and low levels.
The effect of X1 surface, L1:
LX1 = 34.79 + 68.41 – 49.31 – 32.09 = 21.80 s
47.23% of average: weak positive effect of surface
The effect of X2 rotation, L2:
LX2 = 32.09 + 68.41 – 49.31 – 34.79 = 16.40 s
35.53% of average: weak positive effect of rotations
Effect of interaction
The interaction effect is calculated as the difference between (1) the sum of the
responses when both variables are at the same level (i.e. both high or both low), and
(2) the sum of the responses when the variables are at mixed levels (i.e. one at the high
level and the other at the low level).
The effect of interaction of X1 surface and X2 rotation, L12:
LX1X2 = 49.31 + 68.41 – 32.09 – 34.79 = 50.84 s
110% of average: strong interaction effect
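The two main effects and the interaction above can be computed in a few lines of Python from the averaged responses in the data table (a minimal sketch; the variable names are illustrative):

```python
# 2^2 factorial: averaged responses at the four factor-level combinations
# (surface, rotations) -> average in-balance time [s], from the data table
y = {(-1, -1): 49.31,   # floor, 0 rotations
     (+1, -1): 34.79,   # foam,  0 rotations
     (-1, +1): 32.09,   # floor, 1 rotation
     (+1, +1): 68.41}   # foam,  1 rotation

average = sum(y.values()) / 4

# Total effect L of a factor: sum of responses at its high level
# minus sum of responses at its low level (signs do the bookkeeping)
L_x1   = sum(v * x1      for (x1, x2), v in y.items())  # surface
L_x2   = sum(v * x2      for (x1, x2), v in y.items())  # rotations
L_x1x2 = sum(v * x1 * x2 for (x1, x2), v in y.items())  # interaction

print(f"average = {average:.2f} s")
print(f"L_X1   = {L_x1:.2f} s ({100 * L_x1 / average:.1f}% of average)")
print(f"L_X2   = {L_x2:.2f} s ({100 * L_x2 / average:.1f}% of average)")
print(f"L_X1X2 = {L_x1x2:.2f} s ({100 * L_x1x2 / average:.1f}% of average)")
```

This reproduces LX1 = 21.80 s, LX2 = 16.40 s and LX1X2 = 50.84 s from the calculations above.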
CONCLUSIONS
Both variables have weak positive effects. The interaction between the two variables,
on the other hand, is strong.
5. TUTORIAL 3A(ii): BLIND COORDINATION
Task (ii): Two-level Factorial Design with 3 Variables
OBJECTIVE
To determine the effects of hand-to-target distance, the hand used, and body rotations
on the body's "blind coordination" abilities.
EXPERIMENTAL PLAN
2FD3 with the following variables and response:
Variables:
X1  Hand-to-target distance  low level {-}: close       high level {+}: far
X2  Hand                     low level {-}: right       high level {+}: left
X3  Rotations                low level {-}: 0 rot (0R)  high level {+}: 1 rot (1R)
Response:
Y   shots at target (%)
DATA COLLECTION
        Main factors                  Interactions                  Group data
Exp #   X1         X2     X3         X1X2  X1X3  X2X3  X1X2X3      Y (%)
        [distance] [hand] [rotation]
1       -1         -1     -1          1     1     1    -1           20
2        1         -1     -1         -1    -1     1     1           90
3       -1          1     -1         -1     1    -1     1           90
4        1          1     -1          1    -1    -1    -1           10
5       -1         -1      1          1    -1    -1     1           10
6        1         -1      1         -1     1    -1    -1           70
7       -1          1      1         -1    -1     1    -1           50
8        1          1      1          1     1     1     1           10
Effects 10        -30    -70       -250    30   -10    50          Ave = 44
6. DATA ANALYSIS
Cube plot
The results of such a 2FD3 factorial design, with 3 variables at 2 levels (2^3 = 8
experiments), can be conveniently plotted on a "cube plot", with the response values
(i.e. average % of successful hits) at the cube corners.
Total “effect” L
The effect of X1 distance, L1:
L1 = -20 + 90 – 90 + 10 – 10 + 70 – 50 + 10 = 10
22.7% of average: weak positive effect of distance
The effect of X2 hand, L2:
L2 = -20 - 90 + 90 + 10 – 10 - 70 + 50 + 10 = -30
68.2% of average: weak negative effect of hand
The effect of X3 rotation, L3:
L3 = -20 - 90 - 90 - 10 + 10 + 70 + 50 + 10 = -70
159% of average: strong negative effect of rotation
[Cube plot with axes X1 (distance), X2 (hand), X3 (rotations); corner responses (%):
20, 90, 90, 10 on the 0-rotation face and 10, 70, 50, 10 on the 1-rotation face]
7. Effect of interaction
The effect of the interaction of X1 distance and X2 hand, L12:
L12 = 20 - 90 - 90 + 10 + 10 - 70 - 50 + 10 = -250
568% of average: strong negative distance-hand interaction: Strong Interaction Effect
The effect of the interaction of X1 distance and X3 rotation, L13:
L13 = 20 - 90 + 90 - 10 - 10 + 70 - 50 + 10 = 30
68.2% of average: weak positive distance-rotation interaction: Weak Interaction Effect
The effect of the interaction of X2 hand and X3 rotation, L23:
L23 = 20 + 90 - 90 - 10 - 10 - 70 + 50 + 10 = -10
22.7% of average: weak negative hand-rotation interaction: Weak Interaction Effect
The effect of the interaction of X1 distance, X2 hand and X3 rotation, L123:
L123 = -20 + 90 + 90 - 10 + 10 - 70 - 50 + 10 = 50
114% of average: strong positive distance-hand-rotation interaction:
Strong Interaction Effect
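All seven contrasts above follow the same recipe; a short Python sketch over the design matrix (run order and responses taken from the data collection table) reproduces the effects row:

```python
# 2^3 factorial design matrix (X1, X2, X3) in the run order of the table,
# with the measured response Y = shots at target (%)
runs = [((-1, -1, -1), 20), (( 1, -1, -1), 90),
        ((-1,  1, -1), 90), (( 1,  1, -1), 10),
        ((-1, -1,  1), 10), (( 1, -1,  1), 70),
        ((-1,  1,  1), 50), (( 1,  1,  1), 10)]

def effect(*factors):
    """Total effect L: responses weighted by the product of the
    chosen factors' +/-1 levels, summed over all runs."""
    total = 0
    for levels, y in runs:
        sign = 1
        for f in factors:
            sign *= levels[f]
        total += sign * y
    return total

L1, L2, L3 = effect(0), effect(1), effect(2)
L12, L13, L23 = effect(0, 1), effect(0, 2), effect(1, 2)
L123 = effect(0, 1, 2)
print(L1, L2, L3, L12, L13, L23, L123)  # -> 10 -30 -70 -250 30 -10 50
```

The printed values match the "Effects" row of the data table exactly.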
CONCLUSION
From the results above, we can see that L12 and L123 have strong interaction effects
on our experiment, whilst L13 and L23 have weak interaction effects.
8. TUTORIAL 3B: FRACTIONAL FACTORIALS FOR VARIABLES SCREENING
Task #9: Titanium Di-Boride for Composite Aluminum Machining Applications
OBJECTIVE
To design the screening experiments that identify the four significant variables out of
the pool of seven tentative variables, using the DX program.
Tut3B includes the following 4 steps:
1. The Scenario
Advanced Titanium Di-boride (TiB2) ceramic is one of the primary candidates for
applications in high-wear environments, at low and intermediate temperatures and in
contact with corrosive liquid or solid aluminum. The primary advantages of TiB2 are
extremely high hardness, and stability against solid and liquid aluminum. It is presently
being considered as an alternative (to diamond) cutting tool for the highly abrasive
composites based on aluminum, such as SiC-Al and Al2O3-Al. For the metal cutting
applications, the following three properties of TiB2 are required:
R1 Maximum hardness (Target R1T = 27 GPa)
R2 Maximum fracture toughness (Target R2T = 7 MPa√m)
R3 Maximum Weibull modulus (Target R3T = 13)
The following seven process variables X1 to X7 were tentatively proposed as those
controlling the required properties R1, R2 and R3:
X1 Carbon additive content (2 to 6 wt%)
X2 Heating rate (10 to 30 °C/min)
X3 Powder milling time (5 to 10 hr)
X4 Oxygen impurity content in TiB2 (1 to 5 wt%)
X5 Sintering temperature (1,600 to 1,900 °C)
X6 Sintering time (1 to 3 hr)
X7 TiB2 powder grain diameter (0.5 to 2 μm)
Using the DX program, design the screening experiments (2^(7-3) fractional factorial) to
identify the 4 significant variables out of the pool of the tentative 7 variables.
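The 2^(7-3) layout can be sketched by crossing a full 2^4 factorial in four of the factors with generator columns for the remaining three. The generators below (X5 = X1*X2*X3, X6 = X2*X3*X4, X7 = X1*X3*X4) are a common resolution-IV choice and are an assumption here; DX8 selects its own generators internally:

```python
# Sketch of a 2^(7-3) fractional-factorial screening design: 16 runs
# instead of the 2^7 = 128 runs a full factorial would need.
from itertools import product

design = []
for x1, x2, x3, x4 in product((-1, 1), repeat=4):   # base 2^4 factorial
    x5 = x1 * x2 * x3    # generator E = ABC (assumed)
    x6 = x2 * x3 * x4    # generator F = BCD (assumed)
    x7 = x1 * x3 * x4    # generator G = ACD (assumed)
    design.append((x1, x2, x3, x4, x5, x6, x7))

# Map coded -1/+1 levels to the actual factor ranges from the task
ranges = [(2, 6), (10, 30), (5, 10), (1, 5), (1600, 1900), (1, 3), (0.5, 2)]
def actual(run):
    return [lo if x < 0 else hi for x, (lo, hi) in zip(run, ranges)]

print(len(design), "runs; first run:", actual(design[0]))
```

Each of the 16 runs assigns every one of the 7 factors to its low or high level, which is exactly the table DX8 produces for the screening experiment.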
9. 2. Screening Experiment using DX8
The tentative 7 variables and their range of variation:
The table of responses (R1, R2, R3):
10. 3. Run the Design Experiment Using MTRL460LABSIM
The data below shows one of the simulations conducted using the LAB program.
Lab 9 Simulation completed on: 16-11-14 at 00:03:30:
Variables                     Value          S.D.
X1: Carbon Content            3.00 wt%       0.1 %
X2: Heating Rate              25.00 °C/min   0.1 %
X3: Milling Time              7.00 hr        0.1 %
X4: Oxygen Content in TiB2    4.00 wt%       0.1 %
X5: Sintering Temperature     1700.00 °C     0.1 %
X6: Sintering Time            2.50 hr        0.1 %
X7: TiB2 Grain Diameter       0.75 μm        0.1 %
Err. of Measurement                          0.2 %
Run      Hardness [GPa]  Toughness [MPa·√m]  Weibull modulus
Average  23.938          5.901               11.754
S.D.     0.395           0.023               0.057
1        23.086          5.857               11.657
2        23.471          5.859               11.663
3        23.503          5.864               11.672
4        23.523          5.884               11.688
5        23.670          5.888               11.698
6        23.740          5.893               11.724
7        23.774          5.895               11.725
8        23.794          5.896               11.755
9        23.817          5.903               11.758
10       23.853          5.904               11.758
11       24.009          5.905               11.773
12       24.025          5.907               11.774
13       24.069          5.909               11.778
14       24.114          5.909               11.779
15       24.195          5.911               11.781
16       24.251          5.912               11.785
17       24.268          5.922               11.788
18       24.375          5.923               11.819
19       24.453          5.926               11.848
20       24.768          5.947               11.860
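As a quick consistency check, the reported average and (sample) standard deviation of the 20 simulated hardness values can be reproduced with Python's standard library:

```python
from statistics import mean, stdev

# The 20 simulated hardness values [GPa] from the LABSIM run above
hardness = [23.086, 23.471, 23.503, 23.523, 23.670, 23.740, 23.774,
            23.794, 23.817, 23.853, 24.009, 24.025, 24.069, 24.114,
            24.195, 24.251, 24.268, 24.375, 24.453, 24.768]

print(f"Average = {mean(hardness):.3f} GPa")   # 23.938, as reported
print(f"S.D.    = {stdev(hardness):.3f} GPa")  # 0.395 (sample S.D.)
```

The match with the reported S.D. confirms that LABSIM quotes the sample (n - 1) standard deviation.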
11. The average values of hardness, fracture toughness, and Weibull modulus obtained
from each simulation using the LAB program are then transferred into the DX8 Data
Entry Table as shown below.
DX8 Data Entry Table:
12. Design summary:
4. Analysis and the 4 significant variables
Half-normal probability plots:
Using DX8, we generated a half-normal probability plot for each of the three response
variables. The outliers on these plots identify the significant variables in the process.
[Design-Expert (DX8) half-normal plot of |standardized effect| for maximum hardness.
Shapiro-Wilk test: W = 0.825, p = 0.001. Effects standing off the line: A (carbon
additive content), D (oxygen impurity content in TiB2), E (sintering temperature),
G (TiB2 powder grain diameter), and the interactions AD, AG, DE, DG.]
Figure 4: Maximum hardness half-normal plot.
13. [Design-Expert (DX8) half-normal plot of |standardized effect| for maximum
fracture toughness. Shapiro-Wilk test: W = 0.781, p = 0.000. Effects standing off the
line: A, D, E, G and the interactions AD, AE, AG, DE, DF.]
Figure 5: Maximum fracture toughness half-normal plot.
[Design-Expert (DX8) half-normal plot of |standardized effect| for maximum Weibull
modulus. Shapiro-Wilk test: W = 0.960, p = 0.508. Effects standing off the line: A, D, E,
G and the interactions AD, AE, AG, DE, DF, DG.]
Figure 6: Maximum Weibull modulus half-normal plot.
From Figures 4-6, we can see that the outliers on the half-normal plots are carbon
additive content (A), oxygen impurity content in TiB2 (D), sintering temperature (E), and
TiB2 powder grain diameter (G). These 4 variables are therefore significant; they are the
main effects of the process. The other outliers on the half-normal plots are pairwise
combinations of these main effects, meaning the interaction effects between the 4
variables are also significant.
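Picking outliers off a half-normal plot can also be done numerically. A common approach for unreplicated screening designs is Lenth's pseudo-standard-error (PSE); the sketch below applies it to illustrative effect values (hypothetical numbers, not the actual DX8 estimates, which appear only graphically above), with a simplified 2.5 × PSE cut-off in place of Lenth's t-based margin:

```python
from statistics import median

def lenth_pse(effects):
    """Lenth's pseudo standard error for unreplicated factorial effects."""
    abs_e = [abs(e) for e in effects]
    s0 = 1.5 * median(abs_e)
    # Re-estimate from effects that look like pure noise (|e| < 2.5 * s0)
    trimmed = [e for e in abs_e if e < 2.5 * s0]
    return 1.5 * median(trimmed)

# Illustrative standardized effects for 15 contrast columns (hypothetical)
effects = [6.05, -4.8, 3.9, -3.2, 2.9, 2.4, 2.1,
           0.5, -0.4, 0.35, -0.3, 0.25, 0.2, -0.15, 0.1]

pse = lenth_pse(effects)
# Rough cut: flag effects clearly larger than the noise estimate
active = [e for e in effects if abs(e) > 2.5 * pse]
print(f"PSE = {pse:.3f}, active effects: {active}")
```

With these numbers the seven large contrasts are flagged and the small ones are treated as noise, mirroring what the eye does on the half-normal plots.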
14. Figure 4 shows that the positive effects on maximum hardness are oxygen impurity
content in TiB2 (D), sintering temperature (E), the interaction between oxygen impurity
content in TiB2 and TiB2 powder grain diameter (DG), and the interaction between
carbon additive content and oxygen impurity content in TiB2 (AD). The negative effects
on maximum hardness are carbon additive content (A), TiB2 powder grain diameter (G),
the interaction between oxygen impurity content in TiB2 and sintering temperature (DE),
and the interaction between carbon additive content and TiB2 powder grain diameter
(AG).
Figure 5 shows that the positive effects on maximum fracture toughness are carbon
additive content (A) and interactions AE, AD, DE, and DF. The negative effects on
maximum fracture toughness are sintering temperature (E), oxygen impurity content in
TiB2 (D), TiB2 powder grain diameter (G), and interaction AG.
Figure 6 shows that the positive effects on maximum Weibull modulus are the
interactions AE, AD, DE, DF, and DG. The negative effects on maximum Weibull modulus
are all 4 of the main effects and the interaction AG.
[Design-Expert (DX8) cube plot of maximum hardness (GPa) over A: carbon additive
content (3-5 wt%), D: oxygen impurity content in TiB2 (2-4 wt%) and E: sintering
temperature (1700-1800 °C), with the other factors held at B = 20 °C/min, C = 8 hr,
F = 2 hr, G = 1.125 μm. Corner values (GPa): 23.35, 28.58, 22.39, 24.92, 16.24, 21.48,
20.03, 22.56.]
Figure 7: Cube plot for A, D, E and their interactions affecting maximum hardness.
15. [Design-Expert (DX8) cube plot of maximum fracture toughness (MPa√m) over A, D
and E, with the other factors held at B = 20 °C/min, C = 8 hr, F = 2 hr, G = 1.125 μm.
Corner values (MPa√m): 9.13, 7.51, 5.14, 4.84, 8.72, 7.42, 5.32, 5.34.]
Figure 8: Cube plot for A, D, E and their interactions affecting maximum fracture
toughness.
[Design-Expert (DX8) cube plot of maximum Weibull modulus over A, D and E, with the
other factors held at B = 20 °C/min, C = 8 hr, F = 2 hr, G = 1.125 μm. Corner values:
13.47, 10.48, 9.61, 8.49, 12.06, 9.68, 9.58, 9.08.]
Figure 9: Cube plot for A, D, E and their interactions affecting Weibull modulus.
16. Figures 7-9 show the cube plots for the significant variables A, D and E, their
interactions, and their effects on the three response variables. All three cube plots
confirm the half-normal plot findings on which main effects and interactions are
positive and which are negative. However, our process has 4 significant variables, and a
cube plot can only show the effects of 3 of them at a time. Therefore, several more
cube plots with different combinations of 3 significant variables would have to be
made; the results would agree with the half-normal plots.
TUTORIAL 3C: RESPONSE SURFACE METHOD FOR PROCESS MODELLING AND
OPTIMIZATION
OBJECTIVE
- Using Response Surface Methodology (RSM) principles, the Central Composite
Design (CCD) module of the DX8 software, and the LAB simulation program,
empirically model, plot and examine the response surfaces for R1, R2 and R3 as a
function of the four significant variables.
- Examine and discuss the statistical significance tests for the models.
- Optimize the process using the models: identify the optimum process conditions
that would result in a combination of the responses R1, R2 and R3 closest to the
target values R1T, R2T and R3T; verify the optimum processing and/or use
conditions by running the LAB simulation program.
In tutorial 3B, the 4 significant variables that affect the 3 responses were determined to
be carbon additive, oxygen impurity, sintering temperature and titanium diboride (TiB2)
17. powder grain diameter. Using these variables and the DX9 program, we are able to plan
a Central Composite Design (CCD) to model the process. The CCD will run 30
experiments providing different sets of data for the 4 variables.
The new series of CCD experiments is then run using the LAB simulation program. Each
simulation is run at a different combination of values of the 4 significant variables, while
the 3 minor variables are held constant at the midpoints of their respective ranges. The
LABSIM program generates averaged values of the 3 responses (hardness, fracture
toughness and Weibull modulus) for each individual experiment.
The following is the design summary of the CCD experiments
18. The CCD table for the 30 experiments is shown below.
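The 30-run structure of the CCD (16 factorial points, 8 axial points and 6 centre replicates for k = 4 factors) can be sketched as follows. The axial distance α = 2 corresponds to a rotatable design with 2^4 factorial points and the 6 centre points are an assumption consistent with the 30-run total; DX9's defaults may differ:

```python
from itertools import product

k = 4                      # number of significant variables
alpha = (2 ** k) ** 0.25   # rotatable axial distance: 16^(1/4) = 2
n_center = 6               # centre-point replicates (assumed)

# Factorial portion: full 2^4 at coded levels -1/+1
factorial = [list(p) for p in product((-1.0, 1.0), repeat=k)]

# Axial (star) portion: +/- alpha on one axis at a time
axial = []
for i in range(k):
    for s in (-alpha, alpha):
        point = [0.0] * k
        point[i] = s
        axial.append(point)

center = [[0.0] * k for _ in range(n_center)]

ccd = factorial + axial + center
print(len(ccd))   # 16 + 8 + 6 = 30 experiments, as in the CCD table
```

The axial and centre points are what let the CCD estimate the quadratic (curvature) terms that a two-level factorial cannot.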
19. Now that we have our data we can obtain empirical models of the process with the CCD
module in DX9. We will be producing 3 models, one for each response that we want to
maximize.
Response 1: Hardness
The following is the model for maximizing hardness.
21. With this ANOVA table we can assess the quality of the model. As indicated, the
F-ratio is 229.93 and the coefficient of determination r² is 0.9954. The standard
requirements for a qualified model are an F-ratio larger than 10 and an r² larger than
approximately 0.9. Since this model qualifies, we do not need to perform further
re-evaluation, such as applying transformations or changing the ranges of the
insignificant variables.
We can now proceed to observe some of the plots of this model.
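The qualification rule stated above reduces to two threshold checks, which can be sketched directly (thresholds taken from the text; the function name is ours):

```python
def model_qualifies(f_ratio, r_squared, f_min=10.0, r2_min=0.9):
    """Rule of thumb used in this report: an F-ratio above ~10 and an
    r^2 above ~0.9 indicate an adequate empirical model."""
    return f_ratio > f_min and r_squared > r2_min

# ANOVA values reported for the hardness model:
print(model_qualifies(229.93, 0.9954))  # True: no transformation needed
```

The same check applies unchanged to the fracture toughness and Weibull modulus models that follow.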
22. [Plots of the hardness model]
23. Response 2: Fracture Toughness
The following is the model for maximizing fracture toughness.
25. With this ANOVA table we can assess the quality of the model. As indicated, the F-ratio is 1173.81 and the correlation coefficient r² is 0.9991. Both values are well above the requirements and, compared with model 1, the quality of this model can be considered even higher. Therefore, no further action needs to be taken.
The following are graphical representations of the model.
26. Response 3: Weibull Modulus
The following is the generated model for our third response: Weibull modulus.
29. As indicated in the table, the F-ratio is 674.46 and the correlation coefficient r² is 0.9984. We can confirm the quality of the model, as both values exceed the standard requirements.
Finally, we represent our model graphically for the third response.
30. [Plots of the Weibull modulus model]
31. Optimization of CCD
We let the 4 significant variables vary within their ranges, with the goal of maximizing the 3 responses. This is done in the “Criteria” tab under optimization in DX9.
32. We order the solutions by desirability, so the underlined set of factor settings is the one that best maximizes our target responses. The ideal solution is shown below:
The red dots indicate the settings of the 4 significant variables, whereas the blue dots indicate the predicted values of our maximized responses. The overall desirability is 86.7%, which means the combined responses come within 86.7% of their ideal targets. Because the 3 responses cannot all reach their individual optima at the same factor settings, and because of residual statistical error, a perfect desirability of 100% is practically unattainable. We can therefore be satisfied with these results, as they are relatively close to a desirability of 100%. An alternative representation is shown below as a bar graph.
33. From the ideal solution we can also obtain contour and cube plots of the results. Below is the contour plot over the 4 significant variables, showing regions of different desirability.
34. We can also express our results as a cube plot, again in terms of the 4 variables.
35. A summary of our optimization results is shown below; 3 confirmation runs were performed to ensure consistent results.
Finally, we use LABSIM again at the predicted optimum level of the 4 variables, to verify
the model predictions.
Confirmation runs (LABSIM at the predicted optimum):

Run       Hardness [GPa]   Toughness [MPa*m^(1/2)]   Weibull Modulus
1         27.803           9.653                     17.172
2         27.968           9.673                     17.184
3         28.015           9.680                     17.216
4         28.037           9.685                     17.236
5         28.074           9.700                     17.255
Average   28.190           9.724                     17.302
S.D.      0.177            0.033                     0.069

Variable settings at the predicted optimum:

Variables                     Values         S.D.
X1: Carbon Content            3.49 wt%       0.1 %
X2: Heating Rate              25.00 °C/min   0.1 %
X3: Milling Time              7.00 hr        0.1 %
X4: Oxygen Content in TiB2    2.00 wt%       0.1 %
X5: Sintering Temperature     1725.82 °C     0.1 %
X6: Sintering Time            2.50 hr        0.1 %
X7: TiB2 Grain Diameter       0.75 um        0.1 %
Err. of Measurement                          0.2 %
36. Our model predicts the following averages for hardness, toughness and modulus:
1) Hardness = 28.2818 GPa
The model average is within about 0.32% of the lab simulation result for hardness. According to the graph above, the lab data is within the specification range, so its average is reliable for verifying our model.
2) Fracture toughness = 9.6739 MPa*m^0.5
37. The model average is within about 0.52% of the lab simulation result for fracture toughness. According to the graph above, the lab data is within the specification range, so its average is reliable for verifying our model.
3) Weibull modulus = 17.1969
The model average is within about 0.61% of the lab simulation result for the Weibull modulus. According to the graph above, the lab data is within the specification range, so its average is reliable for verifying our model.
The lab data averages are approximately equal to the model averages, as all differences are below 1%. The remaining discrepancies are likely due to systematic and random errors occurring during measurement. In a physical experiment, they could be reduced by using a well-calibrated, high-quality measuring tool.
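The sub-1% agreement quoted above can be reproduced directly from the model predictions and the LABSIM averages (percent difference taken relative to the model value; the helper name is ours):

```python
def pct_diff(model_value, lab_average):
    """Percent difference of the lab average relative to the model prediction."""
    return abs(model_value - lab_average) / model_value * 100.0

# (model prediction, LABSIM confirmation average) for each response:
checks = {
    "hardness (GPa)":        (28.2818, 28.190),
    "toughness (MPa*m^0.5)": (9.6739, 9.724),
    "Weibull modulus":       (17.1969, 17.302),
}
for name, (model, lab) in checks.items():
    print(f"{name}: {pct_diff(model, lab):.2f}%")  # 0.32%, 0.52%, 0.61%
```

All three differences are under 1%, consistent with the verification claim in the report.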