Six Sigma Methods and Formulas for Successful Quality Management

Suresh V. Menon
Grey Campus Edutech Private Limited, Hyderabad, Telangana, India

4th International Conference on Multidisciplinary Research & Practice (4ICMRP-2017), ISBN: 978-93-5288-448-3
Abstract: This paper is about the five phases of Six Sigma, which are Define, Measure, Analyze, Improve, and Control. The methods used in each phase are discussed in detail and the various tests used in the Analyze phase of Six Sigma are given; Six Sigma can be implemented in an organization by using the methods and formulas used in each phase, combined with the help of the statistical software Minitab 18.
Keywords: DMAIC, 3.4 Defects, USL, LSL, Standard Deviation, Mean, ANOVA, Hypothesis Tests
Six Sigma is basically the application of statistical formulas and methods to eliminate defects and variation in a product or a process. For example, if you want to find the average height of the male population in India, you cannot bring the entire population of more than a billion people into one room and measure their heights; for a scenario like this we take samples, that is, we pick people from each state and use statistical formulas to draw an inference about the average height of the male population as a whole. As another example, say a company manufactures pistons used in motorcycles and the customer demands that the piston diameter be no more than 9 cm and no less than 5 cm. Anything manufactured outside these limits is a variation, and the Six Sigma consultant should confirm that the pistons are manufactured within the stated limits; if the diameters vary beyond this range, the company is not operating at the six sigma level but at a much lower level.

A company operating at the six sigma level implies that there are only 3.4 defects per million opportunities; for example, an airline company operating at the six sigma level loses only 3.4 bags per million passengers handled.
Below is shown the Six Sigma table explaining the meaning of the various levels of Six Sigma.

Sigma Level   Defect Rate                                        Yield Percentage
2 σ           308,770 DPMO (defects per million opportunities)   69.10000 %
3 σ           66,811 DPMO                                        93.33000 %
4 σ           6,210 DPMO                                         99.38000 %
5 σ           233 DPMO                                           99.97700 %
6 σ           3.4 DPMO                                           99.99966 %
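The sigma levels in the table correspond to tail areas of the normal distribution under the 1.5-sigma long-term shift conventionally assumed in Six Sigma references. For readers who prefer code to tables, a minimal Python sketch of this conversion is given below; the 1.5-sigma shift and the use of the standard normal distribution are assumptions of the sketch, not results of this paper.

```python
# Minimal sketch: convert between DPMO and sigma level, assuming the
# conventional 1.5-sigma long-term shift (an assumption, not from this paper).
from scipy.stats import norm

def sigma_level_from_dpmo(dpmo: float) -> float:
    """Sigma level corresponding to a given defects-per-million-opportunities."""
    yield_fraction = 1.0 - dpmo / 1_000_000.0
    return norm.ppf(yield_fraction) + 1.5  # add back the 1.5-sigma shift

def dpmo_from_sigma_level(sigma_level: float) -> float:
    """DPMO corresponding to a given sigma level."""
    return (1.0 - norm.cdf(sigma_level - 1.5)) * 1_000_000.0

if __name__ == "__main__":
    for level in (2, 3, 4, 5, 6):
        print(f"{level} sigma -> {dpmo_from_sigma_level(level):,.0f} DPMO")
    # 6 sigma gives roughly 3.4 DPMO; the other rows approximately match the table above.
```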
Six Sigma is denoted by the Greek letter σ, shown in the table above, which is called the standard deviation. The father of Six Sigma is Bill Smith, who coined the term Six Sigma and implemented it at Motorola in the 1980s.

Six Sigma is implemented in five phases, which are Define, Measure, Analyze, Improve, and Control; we will discuss each phase in brief along with the various methods used in Six Sigma.
DEFINE
The objectives within the Define phase, which is the first phase in the DMAIC framework of Six Sigma, are:
- Define the project charter
- Define scope, objectives, and schedule
- Define the process (top level) and its stakeholders
- Select team members
- Obtain authorization from the sponsor
- Assemble and train the team

Project charter: the charter documents the why, how, who, and when of a project and includes the following elements:
- Problem statement
- Project objective or purpose, including the business need addressed
- Scope
- Deliverables
- Sponsor and stakeholder groups
- Team members
- Project schedule (using a GANTT or PERT chart as an attachment)
- Other resources required
Work Breakdown Structure
A work breakdown structure is a process for defining the final and intermediate products of a project and their relationships. Defining project tasks is typically complex and is accomplished by a series of decompositions followed by a series of aggregations; this is also called a top-down approach and can be used in the Define phase of the Six Sigma framework.
Now we will get into the formulas of Six Sigma, which are shown in the table below.

Central tendency is defined as the tendency for the values of a random variable to cluster around the mean, mode, or median.

Central tendency (population / sample):
- Mean/Average: µ = ∑Xi / N (population); X̄ = ∑Xi / n (sample)
- Median: the middle value of the data when arranged in sorted order
- Mode: the most frequently occurring value in the data

Dispersion (population / sample):
- Variance: σ² = ∑(Xi − µ)² / N (population); s² = ∑(Xi − X̄)² / (n − 1) (sample)
- Standard deviation: σ = √[∑(Xi − µ)² / N] (population); s = √[∑(Xi − X̄)² / (n − 1)] (sample)
- Range: Max − Min
Here the mean is the average: for example, if you have taken 10 sample pistons randomly from the factory and measured their diameters, the average is the sum of the 10 diameters divided by 10, where 10 is the number of observations; summation in statistics is denoted by ∑. In the table above, Xi denotes the measured diameters and µ (or X̄) is the average. The mode is the most frequently observed measurement of the diameter: if 2 pistons out of the 10 samples collected both have a diameter of 6.3, then 6.3 is the mode of the sample. The median is the midpoint of the observed diameters when they are arranged in sorted order.
From the piston example we find that the mean, median, and mode do not by themselves depict the variation in the diameters of the pistons manufactured by the factory, but the standard deviation quantifies how much the manufactured diameters vary relative to the customer's upper and lower specification limits.
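For illustration, the same sample statistics can be computed in a few lines of Python; the ten piston-diameter values below are invented for the example (the paper itself performs such calculations in Minitab).

```python
# Minimal sketch: sample mean, median, mode, variance, and standard deviation
# for ten hypothetical piston-diameter measurements (values invented).
import statistics

diameters = [6.3, 6.1, 6.3, 5.9, 6.0, 6.2, 6.4, 6.3, 6.1, 6.0]  # cm

mean = statistics.mean(diameters)          # sum(Xi) / n
median = statistics.median(diameters)      # middle value of the sorted data
mode = statistics.mode(diameters)          # most frequently observed value
variance = statistics.variance(diameters)  # sum((Xi - Xbar)^2) / (n - 1)
std_dev = statistics.stdev(diameters)      # square root of the sample variance
data_range = max(diameters) - min(diameters)

print(f"mean={mean:.3f} median={median} mode={mode} "
      f"variance={variance:.4f} stdev={std_dev:.4f} range={data_range:.2f}")
```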
The most important equation of Six Sigma is Y = f(x), where Y is the effect and the x's are the causes; if you remove the causes, you remove the effect of the defect. For example, a headache is the effect and the causes are stress, eye strain, and fever; if you remove these causes, the headache is automatically removed. In Six Sigma this is implemented using the fishbone (Ishikawa) diagram invented by Dr. Kaoru Ishikawa.
MEASURE PHASE
In the Measure phase we collect all the data according to its relationship to the voice of the customer and analyze it using the statistical formulas given in the table above. Capability analysis is done in the Measure phase. The process capability is calculated using the formula Cp = (USL − LSL) / (6σ), where Cp is the process capability index, USL is the upper specification limit, LSL is the lower specification limit, and σ is the process standard deviation.
The process capability index indicates one of the following:
1. The process is fully capable.
2. The process could fail at any time.
3. The process is not capable.
When the process spread lies well within the customer specification, the process is considered fully capable, which means Cp is more than 2. In this case the process standard deviation is so small that six times the standard deviation, with reference to the mean, lies within the customer specification.
Example: the specified limits for the diameter of car tires are 15.6 for the upper limit and 15.0 for the lower limit, with a process mean of 15.3 and a standard deviation of 0.09. Find Cp and Cr. What can we say about the process capability?

Cp = (USL − LSL) / (6 × standard deviation) = (15.6 − 15) / (6 × 0.09) = 0.6 / 0.54 = 1.111
Cr = 1 / 1.111 = 0.9

Since Cp is greater than 1, and therefore Cr is less than 1, we can conclude that the process is potentially capable.
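A minimal Python sketch of the same Cp and Cr calculation, reproducing the tire example above, could look like this:

```python
# Minimal sketch: process capability index Cp and capability ratio Cr
# for the tire example above (USL = 15.6, LSL = 15.0, sigma = 0.09).
def process_capability(usl: float, lsl: float, sigma: float) -> tuple[float, float]:
    cp = (usl - lsl) / (6.0 * sigma)  # Cp = (USL - LSL) / (6 * sigma)
    cr = 1.0 / cp                     # Cr is the reciprocal of Cp
    return cp, cr

cp, cr = process_capability(usl=15.6, lsl=15.0, sigma=0.09)
print(f"Cp = {cp:.3f}, Cr = {cr:.3f}")  # Cp ≈ 1.111, Cr ≈ 0.9
```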
ANALYZE PHASE
In this phase we analyze all the data collected in the Measure phase and find the causes of variation. The Analyze phase uses various tests: parametric tests, where the mean and standard deviation of the sample are known, and nonparametric tests, where the data are categorical (for example, rated as excellent, good, or bad).
Parametric hypothesis test - A hypothesis is a value judgment made about a circumstance, a statement made about a population. Based on experience, an engineer can, for instance, assume that the amount of carbon monoxide emitted by a certain engine is twice the maximum allowed legally. However, his assertion can only be ascertained by conducting a test to compare the carbon monoxide generated by the engine with the legal requirement.

If the data used to make the comparison are parametric data, that is, data from which the mean and the standard deviation can be derived, and the populations from which the data are taken are normally distributed with equal variances, then a standard-error-based hypothesis test using the t-test can be used to test the validity of the hypothesis made about the population. There are at least three steps to follow when conducting a hypothesis test.
1. Null hypothesis: the first step consists of stating the null hypothesis, which is the hypothesis being tested. In the case of the engineer making a statement about the level of carbon monoxide generated by the engine, the null hypothesis is H0: the level of carbon monoxide generated by the engine is twice as great as the legally allowed amount. The null hypothesis is denoted by H0.

2. Alternate hypothesis: the alternate (or alternative) hypothesis is the opposite of the null hypothesis. It is assumed valid when the null hypothesis is rejected after testing. In the case of the engineer testing the carbon monoxide, the alternative hypothesis would be H1: the level of carbon monoxide generated by the engine is not twice as great as the legally allowed amount.
3. Testing the hypothesis: the objective of the test is to generate a sample test statistic that can be used to reject, or fail to reject, the null hypothesis. The test statistic is derived from the Z formula if the sample size is greater than 30:

Z = (X̄ − µ) / (σ / √n)

If the sample size is less than 30, the t-test is used:

t = (X̄ − µ) / (s / √n)

where X̄ is the sample mean, µ is the population mean, σ and s are the population and sample standard deviations, and n is the sample size.
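For illustration, a one-sample t-test of this kind can be run with the scipy library in Python; the sample data and the target value of 6.0 cm below are invented for the example.

```python
# Minimal sketch: one-sample t-test comparing a sample mean against a target,
# as an alternative to computing t = (Xbar - mu) / (s / sqrt(n)) by hand.
# The sample data and the target of 6.0 cm are invented for illustration.
import numpy as np
from scipy import stats

diameters = np.array([6.3, 6.1, 6.3, 5.9, 6.0, 6.2, 6.4, 6.3, 6.1, 6.0])
target_mean = 6.0

t_stat, p_value = stats.ttest_1samp(diameters, popmean=target_mean)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
# If p is below the chosen significance level (commonly 0.05),
# reject the null hypothesis that the process mean equals the target.
```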
1-Sample t-test (mean vs. target): this test is used to compare the mean of a process with a target value, such as an ideal goal, to determine whether they differ; it is often used to determine whether a process is off-centre.

1-Sample standard deviation test: this test is used to compare the standard deviation of a process with a target value, such as a benchmark, to determine whether they differ; it is often used to evaluate how consistent a process is.

2-Sample t-test (comparing two means): two sets of different items are measured, each under a different condition; the measurements of one sample are independent of the measurements of the other sample.

Paired t-test: the same set of items is measured under two different conditions, so the two measurements of the same item are dependent on, or related to, each other.

2-Sample standard deviation test: this test is used when comparing two standard deviations.

Standard deviations test: this test is used when comparing more than two standard deviations.
Nonparametric hypothesis tests are conducted when the data are categorical, that is, when the mean and standard deviation are not known; examples are the chi-square test, the Mann-Whitney U test, the Kruskal-Wallis test, and Mood's median test.
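For illustration, two of these nonparametric tests can be run with scipy in Python; the rating data below (coded 1 for bad up to 3 for excellent) are invented for the example.

```python
# Minimal sketch: Mann-Whitney U and Kruskal-Wallis tests on ordinal data,
# e.g. ratings coded 1 (bad) to 3 (excellent). Data values are invented.
from scipy import stats

line_a = [3, 2, 3, 3, 2, 1, 3, 2]
line_b = [2, 1, 2, 2, 1, 2, 1, 2]
line_c = [3, 3, 2, 3, 3, 2, 3, 3]

u_stat, p_u = stats.mannwhitneyu(line_a, line_b)      # compares two samples
h_stat, p_kw = stats.kruskal(line_a, line_b, line_c)  # compares 3 or more samples
print(f"Mann-Whitney U p = {p_u:.4f}, Kruskal-Wallis p = {p_kw:.4f}")
```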
ANOVA
If, for instance, three sample means A, B, and C are being compared, using the t-test is cumbersome; instead, analysis of variance (ANOVA) can be used in place of multiple t-tests. ANOVA is a hypothesis test used when more than two means are being compared.
If k samples are being tested, the null hypothesis takes the form

H0: µ1 = µ2 = … = µk

and the alternate hypothesis is

H1: at least one sample mean is different from the others.
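For illustration, a one-way ANOVA of this kind can be run with scipy in Python; the three samples below are invented for the example.

```python
# Minimal sketch: one-way ANOVA comparing the means of three samples.
# H0: all sample means are equal; H1: at least one mean differs.
# Data values are invented for illustration.
from scipy import stats

sample_a = [10.1, 10.3, 9.9, 10.2, 10.0]
sample_b = [10.4, 10.6, 10.5, 10.7, 10.3]
sample_c = [9.8, 9.9, 10.0, 9.7, 10.1]

f_stat, p_value = stats.f_oneway(sample_a, sample_b, sample_c)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")
# A small p-value (e.g. below 0.05) means at least one mean differs.
```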
If the data you are analyzing are not normal, they can be transformed toward normality using the Box-Cox transformation before these tests are applied, and outliers (data points far out of line with the rest of the collected data) should be examined. The Box-Cox transformation can be performed in the statistical software Minitab.
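As an alternative illustration to Minitab, a Box-Cox transformation can also be performed with scipy in Python; the skewed data below are generated for the example, and scipy chooses the transformation parameter lambda by maximum likelihood.

```python
# Minimal sketch: Box-Cox transformation of a positively valued, skewed sample.
# scipy estimates the lambda parameter by maximum likelihood; data are invented.
import numpy as np
from scipy import stats

skewed_data = np.random.default_rng(seed=1).exponential(scale=2.0, size=100)

transformed, fitted_lambda = stats.boxcox(skewed_data)
print(f"fitted lambda = {fitted_lambda:.3f}")
# The transformed data should be closer to normal; this can be checked
# with a normality test such as stats.shapiro(transformed).
```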
IMPROVE PHASE
In the Improve phase we focus on optimization of the process. After the causes are found in the Analyze phase, we use design of experiments to remove the junk factors that do not contribute to the smooth working of the process; that is, in the equation Y = f(x) we select only the x's that contribute to the optimal working of the process.
Let us consider the example of an experimenter who is trying to optimize the production of organic foods. After screening to determine the factors that are significant for his experiment, he narrows the main factors that affect the production of fruit down to light and water. He wants to optimize the time that it takes to produce the fruit, and he defines the optimum as the minimum time necessary to yield comestible fruit.

To conduct his experiment he runs several tests combining the two factors (water and light) at different levels. To minimize the cost of the experiments he decides to use only two levels of each factor: high and low. In this case there are two factors at two levels, so the number of runs is 2^2 = 4. After conducting his observations he obtains the results tabulated below.
Factors                     Response
Water high – Light high     10 days
Water high – Light low      20 days
Water low – Light high      15 days
Water low – Light low       25 days
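In a 2^2 design of this kind, the main effect of each factor is the average response at its high level minus the average response at its low level. A minimal Python sketch of that calculation, using the four responses in the table, is:

```python
# Minimal sketch: main effects for the 2^2 design above.
# Responses (days to comestible fruit) for each water/light combination.
runs = {
    ("high", "high"): 10,
    ("high", "low"): 20,
    ("low", "high"): 15,
    ("low", "low"): 25,
}

def main_effect(factor_index: int) -> float:
    """Average response at the high level minus average at the low level."""
    high = [y for levels, y in runs.items() if levels[factor_index] == "high"]
    low = [y for levels, y in runs.items() if levels[factor_index] == "low"]
    return sum(high) / len(high) - sum(low) / len(low)

print(f"water effect: {main_effect(0):+.1f} days")  # (10+20)/2 - (15+25)/2 = -5.0
print(f"light effect: {main_effect(1):+.1f} days")  # (10+15)/2 - (20+25)/2 = -10.0
# Both effects are negative: raising water and light shortens the time to fruit,
# with light having the larger influence in this illustrative data set.
```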
CONTROL PHASE
In the Control phase we document all the activities done in the previous phases and, using control charts, we monitor the process to check that it does not go out of control. Control charts are tools, available in the Minitab software, used to keep a check on variation. All the documentation is kept and archived in a safe place for future reference.
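For illustration, the centre line and 3-sigma limits of an individuals control chart can be computed in a few lines of Python; the measurement values below are invented, and the moving-range estimate of sigma (with the constant d2 = 1.128 for subgroups of two) is a standard convention rather than something taken from this paper.

```python
# Minimal sketch: centre line and 3-sigma limits for an individuals (I) chart,
# estimating sigma from the average moving range (d2 = 1.128 for subgroups of 2).
# Measurement values are invented for illustration.
measurements = [6.3, 6.1, 6.3, 5.9, 6.0, 6.2, 6.4, 6.3, 6.1, 6.0]

centre_line = sum(measurements) / len(measurements)
moving_ranges = [abs(b - a) for a, b in zip(measurements, measurements[1:])]
sigma_estimate = (sum(moving_ranges) / len(moving_ranges)) / 1.128

ucl = centre_line + 3 * sigma_estimate  # upper control limit
lcl = centre_line - 3 * sigma_estimate  # lower control limit
print(f"LCL = {lcl:.3f}, CL = {centre_line:.3f}, UCL = {ucl:.3f}")
# Points outside the limits, or non-random patterns, signal that the
# process may have gone out of control and should be investigated.
```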
CONCLUSION
From this paper we come to understand that the selection of a Six Sigma project is critical, because we have to know the long-term gains of executing these projects and the activities done in each phase. The basic building block is the Define phase, where the problem statement is captured; then, in the Measure phase, data are collected systematically against this problem statement; the data are further analyzed in the Analyze phase by performing various hypothesis tests; and the process is optimized in the Improve phase by removing the junk factors, that is, in the equation y = f(x1, x2, x3, …) we remove the causes x1, x2, etc. by the method of design of experiments and factorial methods. Finally, we can sustain and maintain the process at its optimum by using control charts in the Control phase.
REFERENCES
The Certified Six Sigma Master Black Belt by Dr. Kubiak
ABOUT THE AUTHOR
Suresh has over 24 years of overall experience in IT, including 14 years in quality assurance, of which 5 years were in project management and 2 years as a Six Sigma Black Belt consultant and corporate trainer; 7 years in ERP gap-fit analysis, implementation, consulting, and testing for manufacturing companies; and 3 years as a corporate trainer.
He has been pursuing a Ph.D. in Computer Science since November 2015.
He received an award certificate for his article "Quality Management Using Six Sigma" at the Multidisciplinary Conference held at AMA, Ahmedabad, and also a certificate from the board of the International Journal of Research & Scientific Innovation.