International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nanotechnology & Science, Power Electronics, Electronics & Communication Engineering, Computational Mathematics, Image Processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design, etc.
Randomization tests provide an alternative to t-tests and F-tests that does not rely on assumptions of normality or random sampling. However, randomization tests can be too liberal or conservative depending on differences in sample sizes. Bootstrapping and Gill's algorithm can help address these issues. Bootstrapping resamples the larger sample to match the size of the smaller sample, controlling for liberal bias. Gill's algorithm uses Fourier expansion to efficiently calculate p-values from all permutations, reducing computational cost compared to full enumeration. However, conservative bias remains a challenge without a known solution.
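As a minimal illustration of the randomization idea described above, the following sketch (with hypothetical data, not taken from the document) runs an exact two-sided permutation test by full enumeration, which is feasible only for small samples — the computational cost that Gill's algorithm is designed to avoid:

```python
from itertools import combinations
from statistics import mean

def permutation_test(x, y):
    """Exact two-sided permutation test for a difference in means.

    Enumerates every way of splitting the pooled data into groups of
    the original sizes (full enumeration, feasible for small samples).
    """
    pooled = x + y
    n, k = len(pooled), len(x)
    observed = abs(mean(x) - mean(y))
    extreme = total = 0
    for idx in combinations(range(n), k):
        chosen = set(idx)
        group_a = [pooled[i] for i in idx]
        group_b = [pooled[i] for i in range(n) if i not in chosen]
        diff = abs(mean(group_a) - mean(group_b))
        total += 1
        if diff >= observed - 1e-12:  # tolerate float ties
            extreme += 1
    return extreme / total

# Two clearly separated hypothetical samples of equal size.
p = permutation_test([10, 11, 12, 13], [1, 2, 3, 4])  # 2 of 70 splits are as extreme
```

With C(8, 4) = 70 possible splits and only the two most extreme ones matching the observed separation, the exact p-value is 2/70.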
Experimental design and statistical power in swine experimentation: A present...Kareem Damilola
This document discusses experimental design and statistical power considerations for swine experimentation. It begins by explaining the importance of valid and precise animal experiments for developing new feeds and feeding standards. It then describes common experimental designs like completely randomized design and randomized complete block design. The document emphasizes that sufficient replication is needed within experiments to maximize statistical power. It also stresses the importance of performing a power analysis to determine adequate sample sizes and avoid type II errors. The document concludes by noting that power analyses are vital for efficient experimental planning and limiting unnecessary replications in swine research.
We provide an overview of some recent developments in machine learning tools for dynamic treatment regime discovery in precision medicine. The first development is a new off-policy reinforcement learning tool for continual learning in mobile health that enables patients with type 1 diabetes to exercise safely. The second development is a new inverse reinforcement learning tool that enables the use of observational data to learn how clinicians balance competing priorities when treating depression and mania in patients with bipolar disorder. Both practical and technical challenges are discussed.
We present recent advances and statistical developments for evaluating Dynamic Treatment Regimes (DTR), which allow the treatment to be dynamically tailored according to evolving subject-level data. Identification of an optimal DTR is a key component for precision medicine and personalized health care. Specific topics covered in this talk include several recent projects with robust and flexible methods developed for the above research area. We will first introduce a dynamic statistical learning method, adaptive contrast weighted learning (ACWL), which combines doubly robust semiparametric regression estimators with flexible machine learning methods. We will further develop a tree-based reinforcement learning (T-RL) method, which builds an unsupervised decision tree that maintains the nature of batch-mode reinforcement learning. Unlike ACWL, T-RL handles the optimization problem with multiple treatment comparisons directly through a purity measure constructed with augmented inverse probability weighted estimators. T-RL is robust, efficient and easy to interpret for the identification of optimal DTRs. However, ACWL seems more robust against tree-type misspecification than T-RL when the true optimal DTR is non-tree-type. At the end of this talk, we will also present a new Stochastic-Tree Search method called ST-RL for evaluating optimal DTRs.
This document discusses hypothesis testing and its four steps:
1) State the null and alternative hypotheses
2) Set the criteria for decision making, typically a 5% level of significance
3) Compute the test statistic, usually a z-score, to determine how likely the sample results are if the null is true
4) Make a decision to either reject or fail to reject the null hypothesis based on comparing the test statistic to the significance level
It then provides two example problems demonstrating how to apply these four steps to test hypotheses about population parameters.
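The four steps above can be sketched in code. This is a minimal illustration with hypothetical numbers (H0: mu = 100, sigma = 15, a sample of 36 with mean 106), not one of the document's own examples:

```python
from math import sqrt
from statistics import NormalDist

def z_test(sample_mean, mu0, sigma, n, alpha=0.05):
    """One-sample two-sided z-test following the four steps above."""
    # Step 3: compute the test statistic.
    z = (sample_mean - mu0) / (sigma / sqrt(n))
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    # Step 4: reject H0 if p falls below the significance level (step 2).
    return z, p, p < alpha

z, p, reject = z_test(sample_mean=106, mu0=100, sigma=15, n=36)
```

Here z = 6 / (15 / 6) = 2.4, the p-value is about 0.016, and H0 is rejected at the 5% level.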
Computational Pool-Testing with Retesting StrategyWaqas Tariq
Pool testing is a cost-effective procedure for identifying defective items in a large population. It also improves the efficiency of the testing procedure when imperfect tests are employed. This study develops a computational pool-testing strategy based on a proposed pool testing with re-testing scheme. Statistical moments based on this applied design have been generated. With the advent of computers in the 1980s, the pool-testing with re-testing strategy under discussion is handled in the context of computational statistics. The study establishes that re-testing reduces misclassifications significantly compared with the Dorfman procedure, although re-testing comes with a cost, i.e., an increase in the number of tests. The re-testing considered improves the sensitivity and specificity of the testing scheme.
Fisher's exact test is a statistical significance test used in the analysis of contingency tables with small sample sizes. It can be used to examine independence in a 2x2 contingency table and is most useful when sample sizes are small and expected cell counts are less than 5. The test computes the exact hypergeometric probability of obtaining a table equal or more extreme than what was observed, given the fixed marginal totals. The output provides left, right, and two-tailed p-values for assessing independence between the variables.
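The hypergeometric computation behind Fisher's exact test can be written directly from the description above. This sketch computes only the two-tailed p-value (the common convention of summing the probabilities of all tables no more probable than the observed one), for a small hypothetical 2x2 table:

```python
from math import comb

def fisher_exact(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probability of every table with the same
    margins that is no more probable than the observed table.
    """
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def prob(k):
        # P(first cell = k) under fixed margins (hypergeometric).
        return comb(row1, k) * comb(row2, col1 - k) / comb(n, col1)

    p_obs = prob(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(prob(k) for k in range(lo, hi + 1)
               if prob(k) <= p_obs * (1 + 1e-9))

p = fisher_exact(3, 1, 1, 3)  # hypothetical table [[3, 1], [1, 3]]
```

For this table the achievable probabilities are (1, 16, 36, 16, 1)/70, so the two-tailed p-value is 34/70, roughly 0.486.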
Please Subscribe to this Channel for more solutions and lectures
http://www.youtube.com/onlineteaching
Chapter 8: Hypothesis Testing
8.3: Testing a Claim About a Mean
Chapter 4: Probability
4.2: Addition Rule and Multiplication Rule
Building a model for expected cost function to obtain doubleIAEME Publication
This document presents a model for determining the parameters of a double Bayesian sampling inspection plan to minimize total expected quality control costs. The model considers costs associated with inspecting samples, accepting or rejecting lots, and continuing to a second sample. It defines variables such as sample sizes, acceptance/rejection numbers, and costs. Equations are provided for the average inspection cost of the first and second samples considering the sampling distribution and conditional probability of defective items. The model aims to obtain optimal parameters (n1, a1, r1, n2, a2, r2) of the double sampling plan by minimizing the total expected regret function according to producer and consumer risk levels.
A Moment Inequality for Overall Decreasing Life Class of Life Distributions w...inventionjournals
A moment inequality is derived for systems whose life distribution is in the overall decreasing life (ODL) class of life distributions. A new nonparametric test statistic for testing exponentiality against ODL is investigated based on this inequality. The asymptotic normality of the proposed statistic is presented. Pitman's asymptotic efficiency, power, and critical values of the test are calculated to assess its performance. Real examples are given to elucidate the use of the proposed test statistic in reliability analysis. We also propose a test for testing exponentiality versus ODL for right-censored data, and the power estimates of this test are simulated for censored data from some distributions commonly used in reliability. Finally, real data are used as an example of practical problems.
This document discusses sample size calculations for clinical trials. It covers the key requirements for determining sample size such as power, significance level, variability, and smallest effect size. Methods for calculating sample size are presented, including formulas, software, tables, and nomograms. Considerations for adjustments to the sample size due to factors like dropout rates and unequal group sizes are also described. Examples of sample size calculations for comparing means and proportions in independent groups using t-tests and chi-squared tests are provided.
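One of the standard formulas alluded to above, for comparing two independent proportions with the normal approximation, can be sketched as follows (the proportions 0.4 and 0.6 are hypothetical, not from the document):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Sample size per group for comparing two independent proportions
    (normal-approximation formula)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided significance level
    z_beta = z.inv_cdf(power)            # controls the type II error
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

n = n_per_group(0.4, 0.6)  # about 95 subjects per group
```

In practice this raw n would then be inflated for expected dropout, as the document describes.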
Chapter 7 – Confidence Intervals And Sample Sizeguest3720ca
This document discusses confidence intervals for means and proportions. It defines key terms like point estimates, interval estimates, confidence levels, and confidence intervals. It provides formulas for calculating confidence intervals for means when the population standard deviation is known or unknown, and when the sample size is greater than or less than 30. Formulas are also given for calculating confidence intervals for proportions, and for determining the minimum sample size needed for estimating means and proportions within a desired level of accuracy. Examples of applying these concepts to sample data are also included.
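The known-sigma case described above reduces to the familiar z-interval; a minimal sketch with hypothetical numbers (mean 50, sigma 5, n = 25):

```python
from math import sqrt
from statistics import NormalDist

def mean_ci(xbar, sigma, n, level=0.95):
    """Confidence interval for a mean when sigma is known (z-interval).
    When sigma is unknown, the same form is used with the sample s and
    a t critical value instead of z."""
    z = NormalDist().inv_cdf((1 + level) / 2)
    margin = z * sigma / sqrt(n)
    return xbar - margin, xbar + margin

lo, hi = mean_ci(xbar=50, sigma=5, n=25)  # roughly (48.04, 51.96)
```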
Special Double Sampling Plan for truncated life tests based on the Marshall-O...ijceronline
On Extension of Weibull Distribution with Bayesian Analysis using S-Plus Soft...Dr. Amarjeet Singh
This document discusses Bayesian analysis of the Weibull distribution using different priors in S-Plus software. It proposes a new lifetime distribution that is an extension of the Weibull distribution and derives the Weibull distribution as a special case of the proposed model. It describes the likelihood and posterior distributions for the Weibull parameters under different prior distributions and discusses Bayesian estimation methods and a simulation study to compare the mean square error of the estimators.
P G STAT 531 Lecture 7 t test and Paired t testAashish Patel
This document discusses the t-test, which is used to test hypotheses about population means using small sample sizes. It provides background on how the t-test was developed by William Gosset under the pseudonym "Student" for Guinness brewery research. The main applications of the t-test are explained: comparing a sample to a population mean, comparing means of two independent samples, and comparing means of paired samples. Examples are provided to demonstrate how to perform t-tests to compare sample means to hypothesized values and determine if differences are statistically significant. The document also notes that t-tests can be conducted using statistical software like SPSS.
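The first application mentioned above, comparing a small sample to a hypothesized population mean, can be sketched as follows (the data and H0: mu = 5 are hypothetical). The p-value is omitted because the t distribution is not in the Python standard library; instead the statistic is compared against a tabulated critical value:

```python
from math import sqrt
from statistics import mean, stdev

def one_sample_t(data, mu0):
    """t statistic for testing H0: mu = mu0 with a small sample."""
    n = len(data)
    return (mean(data) - mu0) / (stdev(data) / sqrt(n))

t = one_sample_t([5, 6, 7, 8, 9], mu0=5)
# Compare |t| with the two-sided 5% critical value for n - 1 = 4 df,
# which is 2.776 from a t table; here t is about 2.83, so H0 is rejected.
```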
Estimating sample size through simulationsArthur8898
Determining the sample size is a critical step in designing an experiment. The sample size for most standard statistical models can be calculated easily with the POWER procedure. However, PROC POWER cannot be used for more complicated statistical models. This paper reviews a more general method for estimating the sample size through a simulation approach using SAS® software. The simulation approach applies not only to simple designs but also to more complex statistical designs.
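The simulation idea is not tied to SAS; the same approach can be sketched in a few lines of Python (hypothetical scenario: a two-sample z-test with known sigma = 1, effect size 0.5, estimating power at a candidate group size by Monte Carlo):

```python
import random
from math import sqrt
from statistics import mean

def simulated_power(n, delta, sigma=1.0, reps=2000, seed=1):
    """Monte Carlo power of a two-sided two-sample z-test (known sigma)
    at group size n: simulate the experiment repeatedly and count how
    often the null hypothesis is rejected."""
    rng = random.Random(seed)
    z_crit = 1.959964  # two-sided 5% normal critical value
    rejections = 0
    for _ in range(reps):
        a = [rng.gauss(0.0, sigma) for _ in range(n)]
        b = [rng.gauss(delta, sigma) for _ in range(n)]
        z = (mean(b) - mean(a)) / (sigma * sqrt(2.0 / n))
        if abs(z) > z_crit:
            rejections += 1
    return rejections / reps

power = simulated_power(n=63, delta=0.5)  # analytic power here is about 0.80
```

Scanning candidate values of n and picking the smallest with simulated power at or above the target reproduces a PROC POWER calculation, but generalizes to designs with no closed-form formula.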
Chapter 7: Estimating Parameters and Determining Sample Sizes
7.1: Estimating a Population Proportion
About CORE:
The Culture of Research and Education (C.O.R.E.) webinar series is spearheaded by Dr. Bernice B. Rumala, CORE Chair & Program Director of the Ph.D. in Health Sciences program in collaboration with leaders and faculty across all academic programs.
This innovative and wide-ranging series is designed to provide continuing education, skills-building techniques, and tools for academic and professional development. These sessions will provide a unique chance to build your professional development toolkit through presentations, discussions, and workshops with Trident’s world-class faculty.
For further information about CORE or to present, you may contact Dr. Bernice B. Rumala at Bernice.rumala@trident.edu
The standard error is used in place of the standard deviation; it shows how the variation among samples relates to sampling error. The document lists the formulas used for the standard error of different statistics, together with applications of tests of significance in the biological sciences.
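The most common of these formulas, the standard error of the sample mean, is simply s / sqrt(n); a minimal sketch with hypothetical data:

```python
from math import sqrt
from statistics import mean, stdev

def standard_error(data):
    """Standard error of the sample mean: s / sqrt(n)."""
    return stdev(data) / sqrt(len(data))

se = standard_error([2, 4, 4, 4, 5, 5, 7, 9])  # about 0.756
```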
Chapter 9: Inferences from Two Samples
9.3 Two Means, Two Dependent Samples, Matched Pairs
Chapter 6: Normal Probability Distribution
6.6: Normal as Approximation to Binomial
The document discusses factors that determine the appropriate sample size for various study types, including exploratory studies, cross-sectional studies, and analytical studies. It provides formulas to calculate the sample size needed based on quantities such as the standard deviation, confidence interval, and expected difference between groups. The appropriate sample size is a balance between what is desirable and what is feasible given time, resources, and study objectives; determining it involves using formulas or tables to calculate the size needed to achieve the necessary precision or to detect a significant difference.
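The precision-based formula for estimating a mean, n = (z * sigma / d)^2, can be sketched directly (sigma = 15 and allowable error d = 5 are hypothetical values, not from the document):

```python
from math import ceil
from statistics import NormalDist

def sample_size_for_mean(sigma, d, level=0.95):
    """Sample size needed to estimate a mean to within +/- d:
    n = (z * sigma / d) ** 2, rounded up."""
    z = NormalDist().inv_cdf((1 + level) / 2)
    return ceil((z * sigma / d) ** 2)

n = sample_size_for_mean(sigma=15, d=5)  # 34.57 rounded up to 35
```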
Many decision problems in business and social systems can be modeled using mathematical optimization, which seeks to maximize or minimize an objective that is a function of the decisions.
Stochastic optimization problems are mathematical programs in which some of the data in the objective or constraints are uncertain, whereas deterministic optimization problems are formulated with known parameters.
Common evaluation measures in NLP and IRRushdi Shams
This document discusses various evaluation measures used in information retrieval and natural language processing. It describes precision, recall, and the F1 score as fundamental measures for unranked retrieval sets. It also covers averaged precision and recall, accuracy, novelty and coverage ratios. For ranked retrieval sets, it discusses recall-precision graphs, interpolated recall-precision, precision at k, R-precision, ROC curves, and normalized discounted cumulative gain (NDCG). The document also discusses agreement measures like Kappa statistics and parses evaluation measures like Parseval and attachment scores.
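The fundamental unranked measures named above follow directly from the raw counts; a minimal sketch with hypothetical counts (8 true positives, 2 false positives, 4 false negatives):

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall, and F1 (their harmonic mean) from raw counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

p, r, f1 = precision_recall_f1(tp=8, fp=2, fn=4)
# precision 0.8, recall 2/3, F1 = 8/11 (about 0.727)
```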
Generalized Additive and Generalized Linear Modeling for Children DiseasesQUESTJOURNAL
ABSTRACT: This paper is necessarily restricted to applications of Generalised Linear Models (GLM) and Generalised Additive Models (GAM), and is intended to give readers some measure of the power of these mathematical tools for modeling health/illness data systems. Illness in general, and childhood illness in particular, is among the most serious socio-economic and demographic problems in developing countries, with great impact on future development. In this paper we focus on some frequently occurring diseases among children under fourteen years of age, using data collected from various hospitals of Jammu district from 2011 to 2016. The success of any policy or health care intervention depends on a correct understanding of the socio-economic, environmental, and cultural factors that determine the occurrence of diseases and deaths. Until recently, any available morbidity information was derived from clinics and hospitals. Information on the incidence of diseases obtained from hospitals represents only a small proportion of illness, because many cases do not seek medical attention; thus, hospital records may not be appropriate for estimating disease incidence for programme development. The use of DHS data in understanding childhood morbidity has expanded rapidly in recent years. However, few attempts have been made to address explicitly the problem of nonlinear effects of metric covariates in the interpretation of results. This study shows how the GAM model can be adapted to extend the GLM analysis and provide an explanation of the nonlinear relationship of the covariates. Incorporating nonlinear terms in the model improves the estimates in terms of goodness of fit.
The GLM is specified explicitly by giving a symbolic description of the linear predictor and a description of the error distribution, while the GAM is fit using the local scoring algorithm, which iteratively fits weighted additive models by backfitting. The backfitting algorithm is a Gauss-Seidel method for fitting additive models by iteratively smoothing partial residuals. The algorithm separates the parametric from the nonparametric parts of the fit, and fits the parametric part using weighted linear least squares within the backfitting loop.
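The Gauss-Seidel structure of backfitting can be seen in a stripped-down sketch. This toy version (not the paper's implementation) uses simple linear fits as the "smoothers" for two components, so it converges to the ordinary least-squares slopes; real GAM fitting replaces the slope step with a nonparametric smoother of the partial residuals:

```python
from statistics import mean

def backfit_linear(x1, x2, y, iterations=200):
    """Backfitting (Gauss-Seidel) for an additive model with two linear
    components, y ~ alpha + b1*x1 + b2*x2: each pass regresses the
    partial residuals on one predictor while holding the other fixed."""
    alpha = mean(y)
    b1 = b2 = 0.0

    def slope(x, r):
        # Least-squares slope of residuals r on centered x; this is the
        # "smoother" step, here restricted to a linear fit.
        xm = mean(x)
        num = sum((xi - xm) * ri for xi, ri in zip(x, r))
        den = sum((xi - xm) ** 2 for xi in x)
        return num / den

    for _ in range(iterations):
        r1 = [yi - alpha - b2 * x2i for yi, x2i in zip(y, x2)]
        b1 = slope(x1, r1)
        r2 = [yi - alpha - b1 * x1i for yi, x1i in zip(y, x1)]
        b2 = slope(x2, r2)
    return b1, b2

# Noiseless toy data: y = 2*x1 + 3*x2 exactly.
x1 = [1, 2, 3, 4, 5]
x2 = [2, 1, 4, 3, 5]
y = [2 * a + 3 * b for a, b in zip(x1, x2)]
b1, b2 = backfit_linear(x1, x2, y)  # converges to b1 = 2, b2 = 3
```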
The document discusses hypothesis testing and statistical methods. It defines key terms such as the null hypothesis, alternative hypothesis, and dependent and independent variables. It then outlines the steps in hypothesis testing, which include stating the hypotheses, selecting a test statistic, setting the confidence level, and computing the test statistic to draw conclusions. It discusses the use of p-values and provides an example comparing blood pressure before and after exercise. It also explains how to compare means for one and two independent groups, as well as two dependent groups, using t-tests.
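A before/after comparison like the blood-pressure example is a paired (dependent-samples) t-test on the within-subject differences. A minimal sketch with hypothetical readings (not the document's data):

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(before, after):
    """Paired t statistic: mean difference over its standard error."""
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / sqrt(n))

# Hypothetical systolic readings before and after exercise.
before = [130, 128, 134, 140, 126]
after = [125, 125, 130, 134, 124]
t = paired_t(before, after)
# Compare |t| with the two-sided 5% t critical value for 4 df (2.776
# from a t table); here t is about 5.66, so the change is significant.
```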
Chapter 9: Inferences from Two Samples
9.4: Two Variances or Standard Deviations
Experimental Studies on Performance and Emission Characteristics of Fish Oil ...IJERA Editor
Biodiesel is one of the most versatile alternative fuel options for direct-injection CI engine applications. Recent biodiesel research in India has turned its attention toward fish-oil-based biodiesel. The present work aimed at producing biodiesel, via transesterification, from fish oil extracted from marine fish species, and using it as fuel in a direct-injection CI engine to evaluate its performance and emission characteristics at injection opening pressures of 190 bar, 200 bar, and 210 bar. Blends of fish oil biodiesel with diesel (B10, B20, B30, B40, B50, and B100) were used in the experiments, and the results indicate that brake thermal efficiency was higher with the B30 blend than with diesel at 210 bar, compared with 190 bar and 200 bar. The brake specific energy consumption for the B30 blend at 210 bar was also better than that of diesel. Considering these two performance parameters, the B30 blend at 210 bar injection opening pressure is taken as the optimum. At full load, the B30 fuel at 210 bar showed increased NOx and CO2 emissions but reductions in CO and HC emissions of 20% and 15.55%, respectively, relative to diesel fuel.
Please Subscribe to this Channel for more solutions and lectures
http://www.youtube.com/onlineteaching
Chapter 4: Probability
4.2: Addition Rule and Multiplication Rule
Building a model for expected cost function to obtain doubleIAEME Publication
This document presents a model for determining the parameters of a double Bayesian sampling inspection plan to minimize total expected quality control costs. The model considers costs associated with inspecting samples, accepting or rejecting lots, and continuing to a second sample. It defines variables such as sample sizes, acceptance/rejection numbers, and costs. Equations are provided for the average inspection cost of the first and second samples considering the sampling distribution and conditional probability of defective items. The model aims to obtain optimal parameters (n1, a1, r1, n2, a2, r2) of the double sampling plan by minimizing the total expected regret function according to producer and consumer risk levels.
A Moment Inequality for Overall Decreasing Life Class of Life Distributions w...inventionjournals
:A moment inequality is derived for the system whose life distribution is in an overall decreasing life (ODL) class of life distributions. A new nonparametric test statistic for testing exponentiality against ODL is investigated based on this inequality. The asymptotic normality of the proposed statistic is presented. Pitman's asymptotic efficiency, power and critical values of this test are calculated to assess the performance of the test. Real examples are given to elucidate the use of the proposed test statistic in the reliability analysis. Wealso proposed a test for testing exponentiality versus ODL for right censored data and the power estimates of this test are also simulated for censored data for some commonly used distributions in reliability. Finally, real data are used as an example for practical problems.
This document discusses sample size calculations for clinical trials. It covers the key requirements for determining sample size such as power, significance level, variability, and smallest effect size. Methods for calculating sample size are presented, including formulas, software, tables, and nomograms. Considerations for adjustments to the sample size due to factors like dropout rates and unequal group sizes are also described. Examples of sample size calculations for comparing means and proportions in independent groups using t-tests and chi-squared tests are provided.
Chapter 7 – Confidence Intervals And Sample Sizeguest3720ca
This document discusses confidence intervals for means and proportions. It defines key terms like point estimates, interval estimates, confidence levels, and confidence intervals. It provides formulas for calculating confidence intervals for means when the population standard deviation is known or unknown, and when the sample size is greater than or less than 30. Formulas are also given for calculating confidence intervals for proportions, and for determining the minimum sample size needed for estimating means and proportions within a desired level of accuracy. Examples of applying these concepts to sample data are also included.
Special Double Sampling Plan for truncated life tests based on the Marshall-O...ijceronline
International Journal of Computational Engineering Research (IJCER) is dedicated to protecting personal information and will make every reasonable effort to handle collected information appropriately. All information collected, as well as related requests, will be handled as carefully and efficiently as possible in accordance with IJCER standards for integrity and objectivity.
On Extension of Weibull Distribution with Bayesian Analysis using S-Plus Soft...Dr. Amarjeet Singh
This document discusses Bayesian analysis of the Weibull distribution using different priors in S-Plus software. It proposes a new lifetime distribution that is an extension of the Weibull distribution and derives the Weibull distribution as a special case of the proposed model. It describes the likelihood and posterior distributions for the Weibull parameters under different prior distributions and discusses Bayesian estimation methods and a simulation study to compare the mean square error of the estimators.
P G STAT 531 Lecture 7 t test and Paired t testAashish Patel
This document discusses the t-test, which is used to test hypotheses about population means using small sample sizes. It provides background on how the t-test was developed by William Gosset under the pseudonym "Student" for Guinness brewery research. The main applications of the t-test are explained: comparing a sample to a population mean, comparing means of two independent samples, and comparing means of paired samples. Examples are provided to demonstrate how to perform t-tests to compare sample means to hypothesized values and determine if differences are statistically significant. The document also notes that t-tests can be conducted using statistical software like SPSS.
Estimating sample size through simulationsArthur8898
Determining sample size is one critical and important procedure for designing an experiment. The sample size for most statistical models can be easily calculated by using the POWER procedure. However, the PROC POWER cannot be used for a complicated statistical model. This paper reviews a more generalized method to estimate the sample size through a simulation approach by using SAS® software. The simulation approach not only applies to the simple but also to a more complex statistical design.
📺Please Subscribe to this Channel for more solutions and lectures
http://www.youtube.com/onlineteaching
Chapter 7: Estimating Parameters and Determining Sample Sizes
7.1: Estimating a Population Proportion
About CORE:
The Culture of Research and Education (C.O.R.E.) webinar series is spearheaded by Dr. Bernice B. Rumala, CORE Chair & Program Director of the Ph.D. in Health Sciences program in collaboration with leaders and faculty across all academic programs.
This innovative and wide-ranging series is designed to provide continuing education, skills-building techniques, and tools for academic and professional development. These sessions will provide a unique chance to build your professional development toolkit through presentations, discussions, and workshops with Trident’s world-class faculty.
For further information about CORE or to present, you may contact Dr. Bernice B. Rumala at Bernice.rumala@trident.edu
Standard error is used in the place of deviation. it shows the variations among sample is correlate to sampling error. list of formula used for standard error for different statistics and applications of tests of significance in biological sciences
Chapter 9: Inferences from Two Samples
9.3 Two Means, Two Dependent Samples, Matched Pairs
Chapter 6: Normal Probability Distribution
6.6: Normal as Approximation to Binomial
The document discusses factors that determine appropriate sample size for various study types, including exploratory, cross-sectional, and analytical studies. It provides formulas to calculate the sample size needed based on quantities like the standard deviation, confidence interval, and expected difference between groups. The appropriate sample size is a balance between what is desirable and what is feasible given time, resources, and study objectives. Sample size determination uses formulas or tables to calculate the size needed to achieve the necessary precision or detect a significant difference.
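A concrete instance of such a formula: the sample size for estimating a mean to within a margin of error d at roughly 95% confidence is n = (z·σ/d)². A small sketch (the σ and margin values here are invented for illustration):

```python
import math

def n_for_mean(sigma, margin, z=1.96):
    """Sample size to estimate a population mean to within +/- margin
    at roughly 95% confidence: n = (z * sigma / margin)^2, rounded up."""
    return math.ceil((z * sigma / margin) ** 2)

print(n_for_mean(sigma=15, margin=3))  # 97
```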
Many decision problems in business and social systems can be modeled using mathematical optimization, which seeks to maximize or minimize some objective that is a function of the decisions.
Stochastic optimization problems are mathematical programs in which some of the data in the objective or constraints are uncertain, whereas deterministic optimization problems are formulated with known parameters.
Common evaluation measures in NLP and IRRushdi Shams
This document discusses various evaluation measures used in information retrieval and natural language processing. It describes precision, recall, and the F1 score as fundamental measures for unranked retrieval sets. It also covers averaged precision and recall, accuracy, novelty and coverage ratios. For ranked retrieval sets, it discusses recall-precision graphs, interpolated recall-precision, precision at k, R-precision, ROC curves, and normalized discounted cumulative gain (NDCG). The document also discusses agreement measures like Kappa statistics and parses evaluation measures like Parseval and attachment scores.
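The three fundamental unranked measures can be computed directly from the retrieved and relevant sets. A minimal sketch (the document IDs are invented):

```python
def prf1(retrieved, relevant):
    """Precision, recall and F1 for an unranked retrieval set."""
    retrieved, relevant = set(retrieved), set(relevant)
    tp = len(retrieved & relevant)                      # true positives
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

p, r, f = prf1(retrieved=["d1", "d2", "d3", "d4"], relevant=["d1", "d3", "d5"])
# p = 0.5, r = 2/3, f1 = 4/7
```

F1 is the harmonic mean of precision and recall, so it is high only when both are high.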
Generalized Additive and Generalized Linear Modeling for Children DiseasesQUESTJOURNAL
ABSTRACT: This paper is necessarily restricted to the application of Generalised Linear Models (GLM) and Generalised Additive Models (GAM), and is intended to provide readers with some measure of the power of these mathematical tools for modeling health/illness data systems. We are all aware that illness in general, and childhood illness in particular, is amongst the most serious socio-economic and demographic problems in developing countries, with great impact on future development. In this paper we focus on some frequently occurring diseases among children under fourteen years of age, using data collected from various hospitals of Jammu district from 2011 to 2016. The success of any policy or health care intervention depends on a correct understanding of the socio-economic, environmental, and cultural factors that determine the occurrence of diseases and deaths. Until recently, any morbidity information available was derived from clinics and hospitals. Information on the incidence of diseases obtained from hospitals represents only a small proportion of the illness, because many cases do not seek medical attention; thus, hospital records may not be appropriate for estimating the incidence of diseases for programme development. The use of DHS data in understanding childhood morbidity has expanded rapidly in recent years. However, few attempts have been made to address explicitly the problems of non-linear effects of metric covariates in the interpretation of results. This study shows how the GAM model can be adapted to extend the analysis of GLM to provide an explanation of the non-linear relationship of the covariates. Incorporation of non-linear terms in the model improves the estimates in terms of goodness of fit.
The GLM is explicitly specified by giving a symbolic description of the linear predictor and a description of the error distribution, and the GAM is fit using the local scoring algorithm, which iteratively fits weighted additive models by backfitting. The backfitting algorithm is a Gauss–Seidel method for fitting additive models by iteratively smoothing partial residuals. The algorithm separates the parametric from the non-parametric parts of the fit, and fits the parametric part using weighted linear least squares within the backfitting algorithm.
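The backfitting loop can be sketched directly: each smooth component is refit to the partial residuals of the others until convergence. The sketch below uses a crude running-mean smoother and synthetic additive data; both are illustrative stand-ins, not the weighted smoothers or data from the paper:

```python
import random

def running_mean_smoother(x, r, window=7):
    """Crude scatterplot smoother: average the residuals r over the
    `window` nearest ranks of x. A stand-in for a real GAM smoother."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    fitted = [0.0] * len(x)
    for rank, i in enumerate(order):
        lo = max(0, rank - window // 2)
        hi = min(len(x), rank + window // 2 + 1)
        vals = [r[order[j]] for j in range(lo, hi)]
        fitted[i] = sum(vals) / len(vals)
    return fitted

def backfit(y, x1, x2, iters=10):
    """Gauss-Seidel backfitting for y ~ alpha + f1(x1) + f2(x2):
    each component is refit to the partial residuals of the other."""
    n = len(y)
    alpha = sum(y) / n
    f1, f2 = [0.0] * n, [0.0] * n
    for _ in range(iters):
        f1 = running_mean_smoother(x1, [y[i] - alpha - f2[i] for i in range(n)])
        m1 = sum(f1) / n
        f1 = [v - m1 for v in f1]   # centre f1 so alpha stays identifiable
        f2 = running_mean_smoother(x2, [y[i] - alpha - f1[i] for i in range(n)])
        m2 = sum(f2) / n
        f2 = [v - m2 for v in f2]
    return alpha, f1, f2

# Synthetic additive data: y = x1^2 + 2*x2 + small noise
rng = random.Random(0)
x1 = [rng.uniform(-1, 1) for _ in range(200)]
x2 = [rng.uniform(-1, 1) for _ in range(200)]
y = [x1[i] ** 2 + 2 * x2[i] + rng.gauss(0, 0.05) for i in range(200)]
alpha, f1, f2 = backfit(y, x1, x2)
mse = sum((y[i] - alpha - f1[i] - f2[i]) ** 2 for i in range(200)) / 200
```

After a few sweeps the residual mean squared error drops far below the raw variance of y, which is the Gauss–Seidel behaviour the abstract describes.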
The document discusses hypothesis testing and statistical methods. It defines key terms like the null hypothesis, alternative hypothesis, dependent and independent variables. It then outlines the steps in hypothesis testing which include stating the hypotheses, selecting a test statistic, determining confidence levels, computing and plotting the test statistic to make conclusions. It discusses using p-values and provides an example comparing blood pressure before and after exercise. It also explains how to compare means for one and two independent groups as well as two dependent groups using t-tests.
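For the matched-pairs case mentioned (e.g. blood pressure before and after exercise), the t statistic is the mean of the within-pair differences divided by its standard error. A sketch with invented readings:

```python
import math
import statistics

def paired_t(before, after):
    """Matched-pairs t statistic: mean within-pair difference
    over its standard error."""
    diffs = [b - a for b, a in zip(before, after)]
    return statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(len(diffs)))

# Invented systolic readings for five subjects, before and after exercise
before = [142, 150, 138, 160, 155]
after  = [135, 144, 140, 152, 146]
t = paired_t(before, after)  # about 2.85 on 4 degrees of freedom
```

The statistic is then compared against the t distribution with n − 1 degrees of freedom to obtain the p-value.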
Chapter 9: Inferences from Two Samples
9.4: Two Variances or Standard Deviations
Experimental Studies on Performance and Emission Characteristics of Fish Oil ...IJERA Editor
Biodiesel is one of the most versatile alternative fuel options for direct injection CI engine applications. Recent biodiesel research in India has turned its attention to fish oil based biodiesel. The present work aimed at producing biodiesel from fish oil extracted from marine fish species by transesterification, for use as fuel in a direct injection CI engine to evaluate its performance and emission characteristics at injection opening pressures of 190 bar, 200 bar, and 210 bar. Blends of fish oil biodiesel with diesel (B10, B20, B30, B40, B50, and B100) were used in the experiments, and the results indicate that brake thermal efficiency was higher with the B30 blend fuel than with diesel at 210 bar, as compared with 190 bar and 200 bar. The brake specific energy consumption for the B30 blend at 210 bar also showed better results than diesel. Considering these two performance parameters, the B30 blend at 210 bar injection opening pressure is taken as optimum. At full load with B30 fuel at 210 bar injection opening pressure, the emission results show an increase in NOx and CO2 emissions but reductions in CO and HC emissions of 20% and 15.55% respectively, relative to diesel fuel.
Sustainable Energy Resource Buildings: Some Relevant Feautures for Built Envi...IJERA Editor
Energy has become a critical issue in national and global economic development. Its crucial importance to nation building makes the development of energy resources one of the leading agenda items of the present democratic government of Nigeria, towards lifting the nation into the comity of the twenty (20) nations with the fastest growing economies by 2020. In achieving this, the building industry, and in particular the architectural profession, has a leading role to play in adopting education, designs, materials, and technology capable of reducing energy consumption in buildings within the tropical region. This paper therefore appraises the important features of energy-performance buildings through the use of sustainable innovative materials and technology that respond to climatic conditions while being environmentally friendly.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
This document describes the design and analysis of a surface acoustic wave (SAW) based microelectromechanical systems (MEMS) gas sensor for detecting volatile organic gases. The sensor uses interdigitated transducers on a lithium niobate piezoelectric substrate coated with a polyisobutylene sensing film. Finite element modeling was used to simulate the sensor's response to various gases. The simulations showed shifts in resonant frequency when exposed to different gases, allowing for gas detection via mass loading effects. Sensitivity analysis found the sensor responded most strongly to tetrachloroethene exposure due to its high absorbed partial density in the sensing film. The SAW sensor design could enable applications in the chemical industry, environmental monitoring, and other areas.
Ron Mueck is an Australian sculptor known for his hyperrealistic sculptures. Some of his most notable works include "Seated Woman" from 1996 that stands 1.76 meters tall, and "Pregnant Woman" from 1997 that is 2.52 meters tall. Mueck is renowned for incredibly lifelike details in his sculptures that often appear indistinguishable from humans.
This document describes an automatic power factor correction (APFC) system using a capacitive bank. The system measures the power factor of an electrical load using a microcontroller. If the power factor is low, indicating poor efficiency, the microcontroller switches capacitors into the system from the capacitive bank to compensate and improve the power factor towards unity. The document provides details on the components, operation, and simulation of the APFC system to automatically correct the power factor without manual intervention. This improves efficiency and reduces costs for both utilities and customers.
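The size of the compensating capacitor bank follows from the standard power-triangle relation Qc = P(tan φ1 − tan φ2). A sketch of that calculation (the load figures are invented; the document does not give its component ratings):

```python
import math

def capacitor_kvar(p_kw, pf_actual, pf_target):
    """Reactive power (kvar) a capacitor bank must inject to raise a load's
    power factor from pf_actual to pf_target: Qc = P*(tan(phi1) - tan(phi2))."""
    phi1 = math.acos(pf_actual)
    phi2 = math.acos(pf_target)
    return p_kw * (math.tan(phi1) - math.tan(phi2))

q = capacitor_kvar(p_kw=100, pf_actual=0.70, pf_target=0.95)  # ~69.15 kvar
```

A microcontroller-based APFC applies the same relation, switching in discrete capacitor steps until the measured power factor approaches the target.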
M. S. Hrushevsky and His Contribution to the Development of Historical Science in Ukraine
Malyshev Alexander – student of class 411
(academic supervisor: I. V. Rybak, Candidate of Historical Sciences, Associate Professor of the GD Department)
A Real Time Video Colorization Method with Gabor Feature SpaceIJERA Editor
Video colorization has become one of the most challenging tasks for multimedia technologists. Present video colorization methods require high processing costs and induce temporal artifacts in the video. This work presents a new approach to video colorization in a spatio-temporal manner, with optimization in the Gabor feature space. The work is implemented on a parallel graphics hardware platform to achieve real-time performance of color optimization. Real-time color propagation is achieved by clustering video frames and then extending Gabor filtering to optical flow computation. Temporal coherence is further refined through user scribbles on video frames. The approach was demonstrated with appropriate video examples and produced well-colorized videos.
The document provides information only about source images from the internet and the website www.planetbossi.ch; it contains no other text or details that could be summarized.
A Mathematical Model for the Genetic Variation of Prolactin and Prolactin Rec...IJERA Editor
The Weibull distribution is a widely used model for studying fatigue and endurance life in engineering devices and materials. Recent advances in Weibull theory have also created numerous specialized Weibull applications, and modern computing technology has made many of these techniques accessible across the engineering spectrum. Despite its popularity and wide applicability, the traditional 2-parameter and 3-parameter Weibull distribution is unable to capture all lifetime phenomena, for instance data sets with a non-monotonic failure rate function. Recently, several generalizations of the Weibull distribution have been studied. One approach to constructing flexible parametric models is to embed appropriate competing models into a larger model by adding a shape parameter. Some recent generalizations of the Weibull distribution, including the exponentiated Weibull, extended Weibull, and modified Weibull, are discussed, with references [5] therein, along with their reliability functions. In this paper a new generalization of the Weibull distribution, called the transmuted Weibull distribution, is utilized for our medical application. For example, prolactin and prolactin receptors are present in normal breast tissue, benign breast disease, breast cancer cell lines, and breast tumour tissue, leading to speculation that the proliferative and antiapoptotic effects of prolactin in breast epithelial cells could be a factor in breast carcinogenesis. In this paper, the distribution of prolactin concentrations in controls, by menopausal status, and the relationship between serum prolactin levels and breast cancer risk were investigated in the application part using the transmuted Weibull distribution. As a result, mathematical curves for the probability density function, reliability function, and hazard rate function are obtained for the corresponding medical curve given in the application part.
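Transmuted families of this kind are usually built with the quadratic rank transmutation map, G(x) = (1 + λ)F(x) − λF(x)² with |λ| ≤ 1, where F is the base Weibull CDF; at λ = 0 the ordinary Weibull is recovered. A sketch under that assumption (the parameter names are mine, not the paper's):

```python
import math

def weibull_cdf(x, shape, scale):
    """CDF of the 2-parameter Weibull distribution."""
    return 1.0 - math.exp(-((x / scale) ** shape))

def transmuted_weibull_cdf(x, shape, scale, lam):
    """Quadratic rank transmutation of the Weibull CDF:
    G(x) = (1 + lam)*F(x) - lam*F(x)^2, with |lam| <= 1."""
    f = weibull_cdf(x, shape, scale)
    return (1.0 + lam) * f - lam * f * f

def transmuted_weibull_reliability(x, shape, scale, lam):
    """Reliability (survival) function R(x) = 1 - G(x)."""
    return 1.0 - transmuted_weibull_cdf(x, shape, scale, lam)
```

The extra parameter λ bends the distribution away from the base Weibull, which is what allows non-monotonic hazard shapes.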
The document summarizes key findings from analyzing patient data from Komfo Anokye Teaching Hospital in Ghana. It finds that:
1) The average age of patients who visited the surgical ward was estimated to be 29 years with a standard error of 0.3770333 years.
2) Patients were stratified by complications, and the average age for intestinal perforation cases was estimated to be 29.352 years with a standard error of 1.133 years.
3) Statistical hypothesis testing did not reject the null hypothesis that the mean age is 29 years, as the p-value was greater than the significance level of 0.05.
Stochastic Model To Find The Effect Of Glucose-Dependent Insulinotropic Hormo...IJERA Editor
This document summarizes a study that used a stochastic model based on the Cauchy distribution to analyze the effect of glucose-dependent insulinotropic hormone (GIP) based on the length of the small intestine. The study infused glucose into either a 60cm segment or the entire small intestine of 8 males over 60 minutes on different days. GIP levels increased after 10 minutes and plateaued at 30 minutes with no difference between infusion methods. The document presents equations to model the largest discontinuity in GIP levels based on the Cauchy distribution scale and derives the probability that the kth largest discontinuity is less than a given value.
1) The document presents a mathematical model for finding the maximum value of growth hormone deficiency in patients with fibromyalgia using Gaussian process.
2) It summarizes evidence that growth hormone deficiency, as defined by low insulin-like growth factor levels, occurs in approximately 30% of fibromyalgia patients and is likely responsible for some morbidity.
3) The model fits growth hormone deficiency data over time using a Gaussian process, allowing determination of the maximum deficiency value, which can help medical professionals in treatment.
The document discusses parametric and non-parametric tests. It provides examples of commonly used non-parametric tests including the Mann-Whitney U test, Kruskal-Wallis test, and Wilcoxon signed-rank test. For each test, it gives the steps to perform the test and interpret the results. Non-parametric tests make fewer assumptions than parametric tests and can be used when the data is ordinal or does not meet the assumptions of parametric tests. They provide a distribution-free alternative for analyzing data.
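The Mann-Whitney U statistic can be computed directly from its definition: the number of pairs in which a value from the first sample exceeds one from the second, with ties counting one half. A minimal sketch with made-up data:

```python
def mann_whitney_u(x, y):
    """U statistic: number of (x_i, y_j) pairs with x_i > y_j,
    ties counting one half."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

u = mann_whitney_u([3, 4, 2, 6], [1, 2, 5])  # 8.5
```

Note the complementary property U(x, y) + U(y, x) = n1 · n2, which is often used as a sanity check.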
Assessment of the Fixation Principle of Goat Milking Installations by the Hyp...IRJET Journal
This study assessed the impact of different goat fixation principles (random vs arranged) on milking installation operations. The arranged principle aims to fix goats in a defined, adjacent order on the milking platform. Three qualitative indicators were evaluated: operator working conditions, goat welfare, and goat productivity. Experts rated the impact of the arranged principle compared to random fixation on these indicators. Statistical analysis using a t-test found that the arranged principle significantly improved all three indicators compared to random fixation, with the greatest impact on operator working conditions. The study concludes the arranged fixation principle positively impacts milking installation process quality.
This document discusses sampling distributions and related statistical concepts. It defines descriptive and inferential statistics, and explains that inferential statistics uses samples to draw conclusions about populations. Key concepts covered include sampling, probability distributions, sampling distributions, and the central limit theorem. The sampling distribution of the sample mean is examined in depth. For a sample mean, the expected value is equal to the population mean, while the standard error depends on factors like the population standard deviation and sample size. Examples are provided to illustrate these statistical properties.
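The stated properties of the sampling distribution of the mean are easy to verify by simulation: the mean of many sample means approaches μ, and their standard deviation approaches σ/√n. A sketch with arbitrary parameters:

```python
import math
import random
import statistics

rng = random.Random(42)
mu, sigma, n, reps = 10.0, 2.0, 25, 4000

# Draw many samples of size n and record each sample mean
means = [statistics.mean(rng.gauss(mu, sigma) for _ in range(n))
         for _ in range(reps)]

grand_mean = statistics.mean(means)     # should be close to mu
observed_se = statistics.stdev(means)   # should be close to sigma/sqrt(n)
theoretical_se = sigma / math.sqrt(n)   # = 0.4 here
```

Doubling n shrinks the standard error by a factor of √2, which is the practical content of the central limit theorem for planning sample sizes.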
The document discusses sample size determination for clinical and epidemiological research. It explains that proper sample size is important for validity, accuracy, and reliability of research findings. Key factors to consider in sample size calculations include the study objective, details of the intervention, outcomes, covariates, research design, and study subjects. Precision analysis and power analysis are two common approaches, with power analysis being most suitable for studies aiming to detect an effect. The document provides formulas and examples for calculating sample sizes for comparative and descriptive studies with both continuous and dichotomous outcomes. It also discusses the concepts of type I and II errors and their relationship to statistical power.
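One of the standard power-analysis formulas alluded to, for comparing two means with a two-sided test, is n per group = 2(z₁₋α/2 + z₁₋β)²σ²/Δ². A sketch with illustrative numbers (defaults give 95% confidence and 80% power):

```python
import math

def n_per_group(sigma, delta, z_alpha=1.96, z_beta=0.8416):
    """Per-group sample size for detecting a difference `delta` between two
    means with a two-sided test (defaults: 95% confidence, 80% power)."""
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

print(n_per_group(sigma=10, delta=5))  # 63 per group
```

Halving the detectable difference Δ quadruples the required n, which is why underpowered studies are so common.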
This quiz consists of 20 questions most appear to be similar but now.docxamit657720
This quiz consists of 20 questions; most appear to be similar but are not really. I need someone who is familiar with biostatistics and math. The due date is tomorrow at 4 pm PST (16:00). Please, if you accept the handshake you must do the work yourself, not take it from previous papers or tell me you had an emergency an hour before it is due. This is important to me.
Attached is the file in case you need it in Word format. Thank you in advance.
1.
The standard deviation of the diameter at breast height, or DBH, of the slash pine tree is less than one inch. Identify the Type I error. (Points : 1)
Fail to support the claim σ < 1 when σ < 1 is true.
Support the claim μ < 1 when μ = 1 is true.
Support the claim σ < 1 when σ = 1 is true.
Fail to support the claim μ < 1 when μ < 1 is true.
1a. The EPA claims that fluoride in children's drinking water should be at a mean level of less than 1.2 ppm, or parts per million, to reduce the number of dental cavities. Identify the Type I error. (Points : 1)
Fail to support the claim σ < 1.2 when σ < 1.2 is true.
Support the claim μ < 1.2 when μ = 1.2 is true.
Support the claim σ < 1.2 when σ = 1.2 is true.
Fail to support the claim μ < 1.2 when μ < 1.2 is true.
2.
Biologists are investigating if their efforts to prevent erosion on the bank of a stream have been statistically significant. For this stream, a narrow channel width is a good indicator that erosion is not occurring. Test the claim that the mean width of ten locations within the stream is greater than 3.7 meters. Assume that a simple random sample has been taken, the population standard deviation is not known, and the population is normally distributed. Use the following sample data:
3.3 3.3 3.5 4.9 3.5 4.1 4.1 5 7.3 6.2
What is the P-value associated with your test statistic? Report your answer with three decimals, e.g., .987 (Points : 1)
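One way to obtain the test statistic is directly from the sample: t = (x̄ − μ0)/(s/√n). A sketch using the ten widths above; the p-value itself requires the t distribution with 9 degrees of freedom, so only the statistic is computed here:

```python
import math
import statistics

widths = [3.3, 3.3, 3.5, 4.9, 3.5, 4.1, 4.1, 5, 7.3, 6.2]
mu0 = 3.7

xbar = statistics.mean(widths)
se = statistics.stdev(widths) / math.sqrt(len(widths))
t = (xbar - mu0) / se
# t is about 1.92 on 9 degrees of freedom; the one-sided p-value then comes
# from the t distribution, e.g. scipy.stats.t.sf(t, df=9)
```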
2a. Medical researchers studying two therapies for treating patients infected with Hepatitis C found the following data. Assume a .05 significance level for testing the claim that the proportions are not equal. Also, assume the two simple random samples are independent and that the conditions np ≥ 5 and nq ≥ 5 are satisfied.
                                   Therapy 1    Therapy 2
Number of patients                     39           47
Eliminated Hepatitis C infection       20           13
Construct a 95% confidence interval estimate of the odds ratio of the odds for having Hepatitis C after Therapy 1 to the odds for having Hepatitis C after Therapy 2. Give your answer with two decimals, e.g., (12.34,56.78) (Points : 0.5)
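One standard construction (which may or may not be the one the quiz intends) is the Wald interval on the log odds ratio: exp(ln OR ± 1.96·SE), with SE = √(1/a + 1/b + 1/c + 1/d). A sketch using the counts in the table:

```python
import math

# Still infected / cleared after each therapy (from the table)
a, b = 39 - 20, 20   # Therapy 1: 19 still infected, 20 cleared
c, d = 47 - 13, 13   # Therapy 2: 34 still infected, 13 cleared

odds_ratio = (a / b) / (c / d)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # Wald SE on the log scale
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
# interval is roughly (0.15, 0.89)
```

An interval entirely below 1 indicates lower odds of remaining infected after Therapy 1 than after Therapy 2 at this confidence level.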
3. Researchers studying sleep loss followed the length of sleep, in hours, of 10 individuals with insomnia before and after cognitive behavioral therapy (CBT). Assume a .05 significance level to test the claim that there is a difference between the length of sleep of individuals before and after CBT. Also, assume the data consist of matched pair ...
Ties Adjusted Nonparametric Statististical Method For The Analysis Of Ordered...inventionjournals
This paper proposes a nonparametric statistical method for the analysis of repeated measures that adjusts for the possibility of tied observation in the data which may be observations in time, space or condition. The proposed method develops a test statistic for testing whether subjects are progressively improving (or getting worse) in their experience over time, space or condition. The method is illustrated with some data and shown to be at least as powerful as the Friedman’s two way analysis of variance test by ranks even in the absence of any in-built ordering in the variable under study.
Mathematical Model for Hemodynamicand Hormonal Effects of Human Ghrelin in He...IJERA Editor
Hemodynamic and hormonal effects of human ghrelin in healthy volunteers. To investigate the hemodynamic and hormonal effects of ghrelin, a novel growth hormone (GH)-releasing peptide, we gave six healthy men an intravenous bolus of human ghrelin or placebo and vice versa 1–2 wk apart in a randomized fashion. Ghrelin elicited a marked increase in circulating GH. The elevation of GH lasted longer than 60 min after the bolus injection. Injection of ghrelin significantly decreased mean arterial pressure without a significant change in heart rate. In summary, human ghrelin elicited a potent, long-lasting GH release and had beneficial hemodynamic effects via reducing cardiac afterload and increasing cardiac output without an increase in heart rate. Thus, the purpose of this study was to investigate the hemodynamic and hormonal effects of intravenous ghrelin in healthy volunteers. This paper discussed the constant stress level of healthy volunteers, with times to damage of the stress effect and recoveries.
This document proposes a double acceptance sampling plan for truncated life tests where the lifetime of a product follows a Kumaraswamy-log-logistic distribution. The plan uses a zero-one failure scheme where the first sample size is n1, the second is n2, and the acceptance numbers are c1=0 and c2=1. The minimum sample sizes n1 and n2 are determined to ensure the median life is greater than or equal to the specified lifetime m0 at a given consumer confidence level P*. The operating characteristics and minimum median life ratios are analyzed to minimize producer and consumer risks at specified levels. Numerical examples are provided to illustrate the application of the sampling plan.
This document discusses sample size calculations for various study designs. It explains that determining an adequate sample size is important to ensure reliable results from epidemiological studies. Different study designs require different sample size calculation methods. Sample size should be large enough for accuracy but not too large to waste resources. Factors like confidence level, margin of error, population parameters, and variability must be considered in sample size calculations. Formulas are provided for estimating sample sizes needed for proportions, means, case-control studies, cohort studies, and more.
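A representative formula of this kind, for estimating a proportion to within a margin d at roughly 95% confidence, is n = z²p(1−p)/d². A sketch (p = 0.5 is the conservative worst case when no prior estimate exists):

```python
import math

def n_for_proportion(p, margin, z=1.96):
    """Sample size to estimate a proportion to within +/- margin at
    roughly 95% confidence: n = z^2 * p * (1 - p) / margin^2, rounded up."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

print(n_for_proportion(p=0.5, margin=0.05))  # 385
```

This is the source of the familiar "about 400 respondents for a ±5% margin" rule of thumb in survey design.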
Inferential statistics takes data from a sample and makes inferences about the larger population from which the sample was drawn.
Make use of the PPT to have a better understanding of Inferential statistics.
Extending A Trial’s Design Case Studies Of Dealing With Study Design IssuesnQuery
This document discusses several case studies of dealing with complex study design issues in clinical trials, including non-proportional hazards, cluster randomization, and three-armed trials. The agenda outlines topics on non-proportional hazards modeling and sample size considerations, cluster randomized and stepped-wedge designs, and methods for analyzing data from three-armed trials that include experimental, reference, and placebo groups. Worked examples are provided to illustrate sample size calculations and statistical approaches for each of these complex trial design scenarios.
1. The document discusses key concepts in statistics including population, sampling, random sampling, standard error, and standard error of the mean.
2. A population is the total set of observations, while a sample is a subset selected from the population. Random sampling selects subjects entirely by chance so each member has an equal chance of being selected.
3. The standard error is the standard deviation of a statistic's sampling distribution and indicates how much a statistic may vary between samples. It decreases with larger sample sizes. The standard error of the mean specifically measures how much the sample mean may differ from the population mean.
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
Fueling AI with Great Data with Airbyte WebinarZilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Digital Marketing Trends in 2024 | Guide for Staying AheadWask
https://www.wask.co/ebooks/digital-marketing-trends-in-2024
Feeling lost in the digital marketing whirlwind of 2024? Technology is changing, consumer habits are evolving, and staying ahead of the curve feels like a never-ending pursuit. This e-book is your compass. Dive into actionable insights to handle the complexities of modern marketing. From hyper-personalization to the power of user-generated content, learn how to build long-term relationships with your audience and unlock the secrets to success in the ever-shifting digital landscape.
Salesforce Integration for Bonterra Impact Management (fka Social Solutions A...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on integration of Salesforce with Bonterra Impact Management.
Interested in deploying an integration with Salesforce for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Nunit vs XUnit vs MSTest Differences Between These Unit Testing Frameworks.pdfflufftailshop
When it comes to unit testing in the .NET ecosystem, developers have a wide range of options available. Among the most popular choices are NUnit, XUnit, and MSTest. These unit testing frameworks provide essential tools and features to help ensure the quality and reliability of code. However, understanding the differences between these frameworks is crucial for selecting the most suitable one for your projects.
Ocean lotus Threat actors project by John Sitima 2024 (1).pptxSitimaJohn
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
Skybuffer SAM4U tool for SAP license adoptionTatiana Kojar
Manage and optimize your license adoption and consumption with SAM4U, an SAP free customer software asset management tool.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
Generating privacy-protected synthetic data using Secludy and MilvusZilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdf
Ci4201564569
A. Venkatesh et al. Int. Journal of Engineering Research and Applications
ISSN: 2248-9622, Vol. 4, Issue 2 (Version 1), February 2014, pp. 564-569
www.ijera.com

RESEARCH ARTICLE                                          OPEN ACCESS

Acceptance sampling for the secretion of Gastrin using crisp and fuzzy Weibull distribution

A. Venkatesh* and G. Subramani**
*Assistant Professor of Mathematics, A. V. V. M. Sri Pushpam College, Poondi, Thanjavur (Dt).
**Assistant Professor of Mathematics, Jayaram College of Engineering and Technology, Tiruchirappalli.
Abstract
The acceptance sampling plan problem is an important tool in statistical quality control. The theory of probability distributions and of fuzzy probability distributions may be used to solve it. In this paper, we compare the significant elevation of plasma concentrations of the stimulatory hormone gastrin in pancreatic secretion after a meal, using acceptance sampling under the Weibull distribution with crisp and fuzzy parameters.

Key words: Acceptance sampling, Fuzzy Weibull distribution, gastrin, Weibull distribution
2010 Mathematics Subject Classification: 94D05, 60A86, 62A86, 62E86
I. Introduction
Statistical quality control is one of the important topics in statistics and is used by many practitioners in the manufacturing industry. Acceptance sampling is an inspection procedure applied in statistical quality control; it is used in the decision-making process for the purpose of quality management. Acceptance sampling plans have many applications in industry and in the bio-medical sciences. The main concern in an acceptance sampling plan is to minimize the cost and time required for quality control.
A single acceptance sampling plan for the acceptance or rejection of a lot is characterized by the sample size and the acceptance number. An acceptance sampling plan involves quality contracting on product orders between producers and consumers. In a time-truncated sampling plan, a random sample is selected from a lot and put on test, and the number of failures is recorded up to a specified time. The lot is accepted if the number of failures observed is less than or equal to the acceptance number.
Usual sampling plans have crisp parameters; however, the probability of a defective item is often not known precisely in decision-making problems, and some uncertainty exists in the value of p. The theory of fuzzy sets is used to solve such problems: the uncertainty in the problem is resolved by assuming that the parameter is a fuzzy number. In humans, gastrin is one of the best studied gut hormones. It occurs in many forms in human serum. Gastrin and the vagal nerves are the main regulators of gastric acid secretion. Gastrin is a peptide hormone that stimulates the secretion of gastric acid by the parietal cells of the stomach and aids in gastric motility.
Acceptance single sampling plans for truncated life tests have been discussed by many authors in the literature using different probability distributions. The acceptance sampling plan was discussed by Epstein [1] for the exponential distribution. Gupta and Groll [3] obtained acceptance sampling plans using the gamma distribution. Life test sampling plans for the normal and lognormal distributions were developed by Gupta [4]. Kantam and Rosaiah [5] used the half logistic distribution in acceptance sampling. Goode and Kao [2] developed acceptance sampling plans using the Weibull probability distribution. Aslam et al. [7, 8] developed a group acceptance sampling plan and a two-stage group acceptance sampling plan using the Weibull distribution. Baloui et al. [11] discussed an acceptance single sampling plan in a fuzzy environment. Zdenek et al. [9] described the use of the fuzzy Weibull distribution for the reliability of concrete structures. Konturek [10] discussed the elevation of plasma concentrations of gastrin postprandially.
II. Notation
n      – sample size
c      – acceptance number
d      – observed defective items
λ      – scale parameter
β      – shape parameter
t      – test termination time
λ[α]   – alpha cut of the scale parameter
β[α]   – alpha cut of the shape parameter
p      – probability of failure
PA     – probability of acceptance in the crisp case
P[α]   – probability of acceptance in the fuzzy case
III. Preliminaries and Definitions
In fuzzy set theory, the concept of a membership function is most important and is used to represent various fuzzy sets. Many membership functions, such as the triangular, trapezoidal, normal, gamma, etc., have been used to represent fuzzy numbers. However, triangular and trapezoidal fuzzy numbers are the most widely used, as they can easily represent imprecise information.
A triangular fuzzy number is denoted by the triplet A = (a1, a2, a3) with the membership function

\mu_A(x) = \begin{cases} \dfrac{x - a_1}{a_2 - a_1}, & a_1 \le x \le a_2 \\ 1, & x = a_2 \\ \dfrac{a_3 - x}{a_3 - a_2}, & a_2 \le x \le a_3 \end{cases}
3.1 Definition
A fuzzy subset Ñ of the real line R with membership function μ_Ñ : R → [0,1] is a fuzzy number iff (i) Ñ is normal, (ii) Ñ is fuzzy convex, (iii) μ_Ñ is upper semicontinuous, and (iv) the support of Ñ is bounded.
3.2 Definition
The α-cut of a fuzzy number Ã is the non-fuzzy set defined as M[α] = {x ∈ R : μ_Ã(x) ≥ α}; hence we have M[α] = [M_1(α), M_2(α)]. For a triangular fuzzy number (a1, a2, a3), the interval of confidence defined by the alpha cuts can be written as

M[\alpha] = [\,(a_2 - a_1)\alpha + a_1,\; (a_2 - a_3)\alpha + a_3\,]
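As a quick numerical illustration (our sketch, not part of the paper), the α-cut interval of a triangular fuzzy number follows directly from the formula above; the triplet (36, 36.54, 37) is the fuzzy scale parameter used later in Section 6.2:

```python
def alpha_cut(a1, a2, a3, alpha):
    """Alpha-cut interval of the triangular fuzzy number (a1, a2, a3)."""
    # M[alpha] = [(a2 - a1)*alpha + a1, (a2 - a3)*alpha + a3]
    return ((a2 - a1) * alpha + a1, (a2 - a3) * alpha + a3)

# The alpha = 0 cut recovers the support [a1, a3];
# the alpha = 1 cut collapses to the modal value a2.
print(alpha_cut(36, 36.54, 37, 0.0))  # → (36.0, 37.0)
print(alpha_cut(36, 36.54, 37, 1.0))
```

The α = 1 call returns the degenerate interval [36.54, 36.54], consistent with the membership function peaking at a2.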
IV. Crisp and Fuzzy Weibull Distribution
The Weibull distribution is a widely used statistical model for life data. Among all statistical techniques it may be used for engineering analysis with smaller sample sizes than any other method.
A continuous random variable T with probability density function

f(t; \lambda, \beta) = \beta \lambda^{-\beta} t^{\beta - 1} e^{-(t/\lambda)^{\beta}}, \quad t \ge 0, \; \lambda > 0, \; \beta > 0,

where β > 0 is the shape parameter and λ > 0 is the scale parameter, is said to follow the Weibull distribution, denoted W(β, λ).

The cumulative distribution function of the Weibull distribution is

F(t) = 1 - e^{-(t/\lambda)^{\beta}}
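As a sanity check (our illustration, not from the paper), the closed-form CDF can be verified by numerically integrating the density; the parameter values 36.54 and 3.43 are the ones fitted later in Section VI:

```python
import math

def weibull_pdf(t, lam, beta):
    """Weibull density f(t; lambda, beta) = beta * lam**(-beta) * t**(beta-1) * exp(-(t/lam)**beta)."""
    return beta * lam ** (-beta) * t ** (beta - 1) * math.exp(-((t / lam) ** beta))

def weibull_cdf(t, lam, beta):
    """Closed-form CDF F(t) = 1 - exp(-(t/lam)**beta)."""
    return 1.0 - math.exp(-((t / lam) ** beta))

lam, beta, t_max = 36.54, 3.43, 14.0
# Midpoint-rule integration of the density over [0, t_max]
n_steps = 20000
dt = t_max / n_steps
integral = sum(weibull_pdf((i + 0.5) * dt, lam, beta) * dt for i in range(n_steps))
print(round(integral, 4), round(weibull_cdf(t_max, lam, beta), 4))  # both ≈ 0.0365
```

The integrated density and the closed form agree, and both give the failure probability p = 0.0365 used in Section 6.1.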
The shape parameter gives the Weibull distribution its flexibility: changing its value changes the form of the distribution. However, we sometimes face situations in which the parameters are imprecise. We therefore consider the Weibull distribution with fuzzy parameters, replacing the scale parameter λ by the fuzzy number λ̃ and the shape parameter β by β̃.

If a random variable T has a crisp Weibull distribution W(β, λ), then the corresponding fuzzy random variable T̃ with fuzzy Weibull distribution W(β̃, λ̃) has cumulative distribution function F̃(t) = 1 - e^{-(t/λ̃)^{β̃}}, so that for α ∈ [0,1] the alpha cuts of the fuzzy Weibull distribution function are P[α] = [P_1(α), P_2(α)], where

P_1(\alpha) = \inf \{\, 1 - e^{-(t/\lambda)^{\beta}} : \lambda \in \lambda[\alpha], \; \beta \in \beta[\alpha] \,\}
P_2(\alpha) = \sup \{\, 1 - e^{-(t/\lambda)^{\beta}} : \lambda \in \lambda[\alpha], \; \beta \in \beta[\alpha] \,\}
V. Acceptance Sampling
All sampling plans are formulated to provide specified producer's and consumer's risks. However, it is in the consumer's best interest to keep the average number of items examined to a minimum, because that keeps the cost of inspection low. Sampling plans differ with respect to the average number of items examined.

The single acceptance sampling plan is a decision rule for accepting or rejecting a lot based on the results of one random sample from the lot. The procedure is to take a random sample of size n and inspect each item. If the number of defective items does not exceed a specified number c, the consumer accepts the entire lot; any defects found in the sample are either repaired or returned to the producer. If the number of defects in the sample is greater than c, the consumer rejects the entire lot and returns it to the producer. The single sampling plan is easy to use but usually results in a larger average number of items examined than other plans.
In this section we use the single sampling plan for classical attributes. Suppose we wish to check a lot of size N. First we select and inspect a random sample of size n and observe the number of defective items. If the number of observed defective items d is less than or equal to the acceptance number c, the lot is accepted; otherwise the lot is rejected. If the size of the lot is very large, the probability of acceptance of the lot is

P_A = P(d \le c) = \sum_{d=0}^{c} \binom{n}{d} p^{d} (1-p)^{n-d},

where p is the probability of a defective item.
The probability p for the Weibull distribution is given by p = 1 - e^{-(t/\lambda)^{\beta}}. If the probability of a defective item is not known precisely, we represent this parameter by a fuzzy number p̃ = (a1, a2, a3).
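The two formulas above combine into a short computation. The sketch below (our illustration, not the authors' code) uses the crisp parameters of Section VI, where λ = 36.54, β = 3.43 and t = 14 give p ≈ 0.0365:

```python
import math

def weibull_failure_prob(t, lam, beta):
    """p = 1 - exp(-(t/lam)**beta): probability of failure by time t."""
    return 1.0 - math.exp(-((t / lam) ** beta))

def acceptance_prob(n, c, p):
    """P_A = sum_{d=0}^{c} C(n, d) p^d (1-p)^(n-d): binomial probability of accepting the lot."""
    return sum(math.comb(n, d) * p ** d * (1 - p) ** (n - d) for d in range(c + 1))

p = weibull_failure_prob(14, 36.54, 3.43)
print(round(p, 4))                        # 0.0365
print(acceptance_prob(10, 1, p))
```

For n = 10, c = 1 this evaluates to about 0.9505-0.9506, versus the 0.9507 in Table 1; the small gap comes from rounding, since using the rounded p = 0.0365 directly reproduces the tabulated value.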
VI. Application
Let us consider as an example the plasma concentration of the stimulatory hormone gastrin in pancreatic secretion after a meal. Fig. 1 shows the plasma concentrations of gastrin in pancreatic secretion after the meal.
[Fig. 1 Stimulating hormone gastrin in the control of pancreatic secretion: plasma gastrin (pM/L) plotted against time (min.)]
6.1 Solution by crisp Weibull distribution
From Fig. 1, the scale and shape parameters of the Weibull distribution are λ = 36.54 and β = 3.43. Taking t = 14, the probability of failure under the Weibull distribution is p = 0.0365. The probabilities of acceptance for various sample sizes and acceptance numbers 1, 2 and 3 are shown in Table 1.
Table 1 Probability of acceptance with crisp parameters

sample size (n) | c = 1  | c = 2  | c = 3
5   | 0.9876 | 0.9995 | 0.999991
10  | 0.9507 | 0.9952 | 0.999688
15  | 0.8978 | 0.9841 | 0.998248
20  | 0.8355 | 0.9652 | 0.994623
25  | 0.7686 | 0.9385 | 0.98785
30  | 0.7002 | 0.9049 | 0.977198
35  | 0.633  | 0.8654 | 0.962218
40  | 0.5684 | 0.8214 | 0.942745
45  | 0.5075 | 0.7741 | 0.918862
50  | 0.4509 | 0.7248 | 0.890855
55  | 0.3989 | 0.6746 | 0.859162
60  | 0.3516 | 0.6245 | 0.824319
65  | 0.3088 | 0.5751 | 0.786923
70  | 0.2705 | 0.5272 | 0.74759
75  | 0.2362 | 0.4812 | 0.706926
80  | 0.2058 | 0.4374 | 0.665511
85  | 0.1789 | 0.3962 | 0.623876
90  | 0.1553 | 0.3576 | 0.582495
95  | 0.1345 | 0.3218 | 0.541784
100 | 0.1162 | 0.2887 | 0.502093
105 | 0.1003 | 0.2583 | 0.46371
110 | 0.0865 | 0.2305 | 0.426867
115 | 0.0744 | 0.2052 | 0.391735
120 | 0.064  | 0.1822 | 0.358439
125 | 0.055  | 0.1615 | 0.327059
6.2 Solution by fuzzy Weibull distribution
In some situations the values of the scale and shape parameters of the Weibull distribution are not known precisely. We therefore take triangular fuzzy numbers for the scale and shape parameters, respectively λ̃ = [36, 36.54, 37] and β̃ = [3.3, 3.43, 3.6]. The alpha cuts of the scale and shape parameters are, respectively,

λ[α] = [36 + 0.54α, 37 − 0.46α]  and  β[α] = [3.3 + 0.13α, 3.6 − 0.17α].

At the alpha cut α = 0, the fuzzy probabilities of acceptance for various sample sizes and acceptance numbers 1, 2 and 3 are shown in Table 2.
Table 2 Probability of acceptance with fuzzy parameters

sample size (n) | c=1: P1(α)  P2(α) | c=2: P1(α)  P2(α) | c=3: P1(α)  P2(α)
5   | 0.9828  0.9917 | 0.9992  0.9997 | 0.999983  0.999996
10  | 0.933   0.9661 | 0.9923  0.9973 | 0.999402  0.999859
15  | 0.8643  0.9283 | 0.975   0.9909 | 0.996734  0.999184
20  | 0.7861  0.8821 | 0.9466  0.9795 | 0.990246  0.997426
25  | 0.7048  0.8307 | 0.908   0.963  | 0.978539  0.994028
30  | 0.6249  0.7764 | 0.861   0.9414 | 0.960759  0.988493
35  | 0.5489  0.721  | 0.8077  0.9151 | 0.936619  0.980433
40  | 0.4784  0.666  | 0.7504  0.8848 | 0.906323  0.969583
45  | 0.4143  0.6122 | 0.691   0.851  | 0.870455  0.955801
50  | 0.3568  0.5604 | 0.6312  0.8146 | 0.829853  0.939064
55  | 0.3058  0.5111 | 0.5724  0.7761 | 0.785501  0.919446
60  | 0.261   0.4647 | 0.5156  0.7363 | 0.738434  0.89711
65  | 0.2219  0.4212 | 0.4617  0.6958 | 0.689673  0.872279
70  | 0.188   0.3808 | 0.4112  0.655  | 0.640168  0.845229
75  | 0.1589  0.3435 | 0.3644  0.6144 | 0.590767  0.816266
80  | 0.1339  0.3091 | 0.3215  0.5745 | 0.542202  0.785712
85  | 0.1126  0.2777 | 0.2824  0.5355 | 0.495075  0.753897
90  | 0.0944  0.249  | 0.2471  0.4978 | 0.449865  0.721146
95  | 0.0791  0.2228 | 0.2155  0.4614 | 0.406933  0.687773
100 | 0.0661  0.1992 | 0.1873  0.4266 | 0.366532  0.654072
105 | 0.0551  0.1778 | 0.1623  0.3935 | 0.328822  0.620317
110 | 0.0459  0.1584 | 0.1402  0.3622 | 0.293882  0.586756
115 | 0.0382  0.141  | 0.1208  0.3327 | 0.261724  0.55361
120 | 0.0317  0.1254 | 0.1039  0.3049 | 0.232307  0.521072
125 | 0.0263  0.1114 | 0.0891  0.279  | 0.205546  0.48931
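The α = 0 bounds reported in Table 2 are straightforward to reproduce numerically. In the sketch below (our illustration, not the authors' code), note that for t < λ the failure probability p = 1 − e^{−(t/λ)^β} is decreasing in both λ and β, so its extremes over the α-cut rectangle occur at the corners; since the acceptance probability is decreasing in p, P1 uses the largest p and P2 the smallest:

```python
import math

def failure_prob(t, lam, beta):
    """p = 1 - exp(-(t/lam)**beta)."""
    return 1.0 - math.exp(-((t / lam) ** beta))

def acceptance_prob(n, c, p):
    """Binomial acceptance probability P(d <= c)."""
    return sum(math.comb(n, d) * p ** d * (1 - p) ** (n - d) for d in range(c + 1))

def fuzzy_acceptance_bounds(n, c, t, lam_cut, beta_cut):
    """[P1, P2]: inf/sup of the acceptance probability over an alpha-cut rectangle.
    Evaluates all four corners; for t < lam the extremes occur there."""
    ps = [failure_prob(t, lam, beta) for lam in lam_cut for beta in beta_cut]
    # Acceptance probability decreases as p grows, so P1 <- max(p), P2 <- min(p).
    return acceptance_prob(n, c, max(ps)), acceptance_prob(n, c, min(ps))

# Alpha = 0 cuts from Section 6.2: lam in [36, 37], beta in [3.3, 3.6], t = 14
p1, p2 = fuzzy_acceptance_bounds(5, 1, 14, (36.0, 37.0), (3.3, 3.6))
print(p1, p2)
```

These values reproduce the first row of Table 2 (0.9828 and 0.9917) to three decimal places.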
[Fig. 2 Crisp and fuzzy Weibull acceptance probability for c = 1]
[Fig. 3 Crisp and fuzzy Weibull acceptance probability for c = 2]
[Fig. 4 Crisp and fuzzy Weibull acceptance probability for c = 3]
(Each figure plots the probability of acceptance against sample size for the crisp value and the two fuzzy bounds, labelled fuzzy value-1 and fuzzy value-2.)
VII. Conclusion
In this paper we have reported the results of a comparative study to predict the sample size for determining the plasma concentration of gastrin by acceptance single sampling, using a Weibull distribution in crisp and fuzzy environments. The curve obtained with the Weibull distribution under the crisp parameter lies between the fuzzy-parameter curves. We hope this work may be used to predict the sample size to be selected for testing plasma concentrations of gastrin in real life.
REFERENCES
[1] B. Epstein, Truncated life tests in the exponential case, Annals of Mathematical Statistics, vol. 25, 1954, 555-564.
[2] H. P. Goode and J. H. K. Kao, Sampling plans based on the Weibull distribution, in Proceedings of the 7th National Symposium on Reliability and Quality Control, Philadelphia, Pa, USA, 1961, 24-40.
[3] S. S. Gupta and P. A. Groll, Gamma distribution in acceptance sampling based on life tests, Journal of the American Statistical Association, vol. 56, 1961, 942-970.
[4] S. S. Gupta, Life test sampling plans for normal and lognormal distributions, Technometrics, vol. 4, 1962, 151-175.
[5] R. R. L. Kantam and K. Rosaiah, Half logistic distribution in acceptance sampling based on life tests, IAPQR Transactions, vol. 23, no. 2, 1998, 117-125.
[6] A. Baklizi, Acceptance sampling based on truncated life tests in the Pareto distribution of the second kind, Advances and Applications in Statistics, vol. 3, 2003, 33-48.
[7] M. Aslam and C.-H. Jun, A group acceptance sampling plan for truncated life tests having Weibull distribution, Journal of Applied Statistics, 39, 2009, 1021-1027.
[8] M. Aslam, C.-H. Jun, M. Rasool and M. Ahmad, A time truncated two-stage group sampling plan for Weibull distribution, Communications of the Korean Statistical Society, 17, 2010, 89-98.
[9] Zdenek Karpisek, Petr Stepanek and Petr Junrak, Weibull fuzzy probability distribution for reliability of concrete structures, Engineering Mechanics, vol. 17, 2010, 363-372.
[10] S. J. Konturek, Physiology of pancreatic secretion, Journal of Physiology and Pharmacology, vol. 46, 1993, 5-24.
[11] E. Baloui, B. Sadeghpur Gildeh and G. Yari, Acceptance single sampling plan with fuzzy parameter, Iranian Journal of Fuzzy Systems, vol. 8, no. 2, 2011, 47-55.